Content Empire
Automated Pipeline

Fully autonomous content production pipeline running on a configurable schedule. Monitors RSS feeds for new topics in target domains, evaluates each for novelty and relevance against existing knowledge, researches selected topics via web scraping for supporting facts and context, generates full articles using local LLM inference, and queues completed drafts for editorial review.
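The novelty gate described above could be as simple as a token-overlap check against already-covered titles. This is a minimal stdlib sketch with illustrative names (`is_novel`, the 0.5 threshold); the production check presumably also scores relevance against the knowledge graph.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation ignored."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def is_novel(title: str, seen_titles: list[str], threshold: float = 0.5) -> bool:
    """A candidate topic is novel if no previously covered title
    overlaps with it beyond the Jaccard-similarity threshold."""
    t = tokens(title)
    for seen in seen_titles:
        s = tokens(seen)
        union = t | s
        if union and len(t & s) / len(union) >= threshold:
            return False
    return True

seen = ["Local LLM inference on consumer GPUs"]
print(is_novel("Running local LLM inference on a consumer GPU", seen))  # False: near-duplicate
print(is_novel("Scheduling jobs with launchd on macOS", seen))          # True: new topic
```

Jaccard overlap is crude but cheap, which matters when every feed poll can surface dozens of candidate items.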

Integrates with the content intelligence engine for bidirectional knowledge graph updates — new articles feed facts back into the graph, and the graph supplies existing context to article generation, so content builds on prior knowledge rather than starting from scratch each time. Also integrates with the image generation engine for article illustrations: hero images are generated to match article content, unattended.
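The bidirectional loop can be sketched with SQLite, which the stack already uses. The schema and function names here (`facts`, `feed_back`, `context_for`) are illustrative stand-ins for the content intelligence engine's real interface:

```python
import sqlite3

# In-memory stand-in for the knowledge graph store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts (entity TEXT, fact TEXT, source_article TEXT)")

def feed_back(article_id: str, extracted: list[tuple[str, str]]) -> None:
    """After an article is drafted, write its (entity, fact) pairs back
    into the graph so later articles can build on them."""
    db.executemany(
        "INSERT INTO facts VALUES (?, ?, ?)",
        [(entity, fact, article_id) for entity, fact in extracted],
    )

def context_for(entity: str) -> list[str]:
    """Before generating, pull existing facts about an entity
    to seed the article prompt with prior knowledge."""
    rows = db.execute("SELECT fact FROM facts WHERE entity = ?", (entity,))
    return [fact for (fact,) in rows]

feed_back("article-001", [("launchd", "macOS service manager used for scheduling")])
print(context_for("launchd"))
```

The same two calls, one on the way in and one on the way out, are what make the loop bidirectional.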

Full chain from RSS detection to draft article with hero image runs without human intervention. The pipeline handles the time-intensive parts of content production: monitoring sources, identifying what's worth covering, gathering supporting information, writing a structured first draft, and generating accompanying visuals. Editorial review is the only manual step.
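The chain above reduces to a short orchestration loop. Every stage function below is a hypothetical stub standing in for the real component (RSS monitor, novelty gate, scraper, LLM backend, image engine):

```python
# Hypothetical stand-ins for each pipeline stage.
def detect_topics(feed_urls):            # RSS monitoring
    return [f"topic from {url}" for url in feed_urls]

def worth_covering(topic):               # novelty/relevance gate
    return "skip" not in topic

def research(topic):                     # web scraping for supporting facts
    return [f"fact about {topic}"]

def generate_article(topic, facts):      # LLM draft generation
    return {"topic": topic, "body": " ".join(facts), "status": "draft"}

def illustrate(article):                 # hero image generation
    return f"hero-{article['topic']}.png"

def run_pipeline(feed_urls):
    """RSS detection -> gate -> research -> draft -> hero image -> queue."""
    drafts = []
    for topic in detect_topics(feed_urls):
        if not worth_covering(topic):
            continue
        article = generate_article(topic, research(topic))
        article["hero_image"] = illustrate(article)
        drafts.append(article)  # queued; editorial review happens downstream
    return drafts

print(len(run_pipeline(["https://example.com/feed.xml"])))  # 1
```

The loop ends at the queue deliberately: approval is the one step the pipeline never automates.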

Articles queue as drafts with metadata: source feeds, research URLs, generation parameters, knowledge graph entities referenced, and confidence scores. The editor can approve, request revision with specific feedback, or reject. Scheduled execution via system service with configurable intervals, feed lists, and domain focus. Rate-limited LLM calls managed through the inference gateway.
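The draft record and its three editor decisions can be sketched with a dataclass and an enum. Field names here are illustrative; the production model is presumably Pydantic-based, given the stack:

```python
from dataclasses import dataclass
from enum import Enum

class ReviewState(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REVISION = "revision_requested"
    REJECTED = "rejected"

@dataclass
class Draft:
    # Metadata captured at generation time (names illustrative).
    title: str
    source_feeds: list
    research_urls: list
    entities: list          # knowledge graph entities referenced
    confidence: float
    state: ReviewState = ReviewState.PENDING
    feedback: str = ""

    def review(self, decision: ReviewState, feedback: str = "") -> None:
        """Editor decision; revision requests carry specific feedback."""
        self.state = decision
        self.feedback = feedback

d = Draft("Scheduling with launchd", ["feed-a"], ["https://example.com"], ["launchd"], 0.82)
d.review(ReviewState.REVISION, "Expand the scheduling-interval section.")
print(d.state.value)  # revision_requested
```

Keeping source feeds, research URLs, and referenced entities on the draft gives the editor provenance for every claim without leaving the review queue.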

// Tech stack

Python · FastAPI · httpx · Pydantic · feedparser · SQLite · Gemini API · launchd
Live in production