How I Built a Blog Post Pipeline with Parallel Critics
I turned article writing into a 5-phase pipeline. Instead of sequential editing, 4 agents review the text simultaneously.
The problem
My blog posts were taking too long and still coming out soft. I’d write a draft, edit it once, call it done. The output was fine — but “fine” is the editorial equivalent of 2% conversion. Seven hedging words per paragraph. Lists where every bullet is the same length. Claims I couldn’t actually back up.
I wanted a system, not a habit. And I’d just read an article by Sereja on building a content pipeline — a “documenting conversations” format where the author shows the process, not just the polished result. That framing clicked. I didn’t want to copy it; I wanted to build something modular that I could reuse across three different sites I write for.
First attempt: structure
I started by laying out a directory:
~/.claude/skills/blogpost-pipeline/
├── phases/
│   ├── 01-questions.md
│   ├── 02-research.md
│   ├── 03-draft.md
│   ├── 04-critics.md
│   └── 05-deploy.md
├── formats/
│   ├── standard.md
│   └── conversation.md
└── config/
    └── sites.yaml
Each phase is a separate file with specific instructions for that step. The formats describe article structure. The config tells the skill where to save the file and how to notify when done.
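For a sense of what that config carries, here is a minimal sites.yaml sketch. The field names and site entries are my own illustration, not the actual schema:

```yaml
# config/sites.yaml — illustrative schema, not the real file
sites:
  personal-blog:
    output_dir: ~/projects/personal-blog/content/posts
    default_format: conversation
    notify: telegram
  tech-site:
    output_dir: ~/projects/tech-site/articles
    default_format: standard
    notify: telegram
```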
Five phases total:
- Questions — before anything gets written, ask about topic, angle, audience, format, target site
- Research — pull material via Exa AI semantic search plus regular web search
- Draft — write the full article in the chosen format
- Critics — 4 specialized agents run simultaneously, each looking for something different
- Deploy — save to the correct directory, send a Telegram preview
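The phase files are just markdown instructions read in a fixed order. A loader for that layout could be as small as this (a sketch against the directory tree above; `load_phases` is my name, not part of the skill):

```python
from pathlib import Path

# Phase files in pipeline order, matching the directory layout above
PHASES = ["01-questions", "02-research", "03-draft", "04-critics", "05-deploy"]

def load_phases(skill_dir: str) -> list[tuple[str, str]]:
    """Return (phase name, instruction text) pairs in pipeline order."""
    base = Path(skill_dir) / "phases"
    return [(name, (base / f"{name}.md").read_text()) for name in PHASES]
```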
The critics phase: fan-out/fan-in for text
Phase 4 is the actual point of the whole thing. Instead of one editing pass, I run four agents at once:
AI Slop Detector    ─┐
Rhythm Analyzer     ─┼─► Merge ─► Final edits
Specificity Checker ─┤
Fact Checker        ─┘
AI Slop Detector hunts clichés: “dive into,” “it’s worth noting,” “game-changer,” “seamlessly.” Any phrase that appears in 10,000 other blog posts gets flagged.
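The detection itself is LLM judgment, but the core of it reduces to a phrase blacklist. A minimal sketch, seeded only with the phrases named above (the real list is presumably much longer):

```python
# Starter blacklist — the four phrases named in the post; a real list grows over time
SLOP_PHRASES = ["dive into", "it's worth noting", "game-changer", "seamlessly"]

def find_slop(text: str) -> list[str]:
    """Return every blacklisted phrase that appears in the text, case-insensitively."""
    lower = text.lower()
    return [phrase for phrase in SLOP_PHRASES if phrase in lower]
```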
Rhythm Analyzer checks sentence length variation. Three short sentences in a row reads like a children’s book. Six long sentences in a row puts people to sleep. It flags monotonous stretches.
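Sentence-length monotony is easy to approximate mechanically. A sketch under my own thresholds (3-sentence window, flagged when word counts stay within 4 of each other — the actual critic's heuristics are unknown to me):

```python
import re

def flag_monotony(text: str, window: int = 3, spread: int = 4) -> list[int]:
    """Return start indices of runs of `window` consecutive sentences whose
    word counts differ by at most `spread` — i.e. monotonous stretches."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    flags = []
    for i in range(len(lengths) - window + 1):
        chunk = lengths[i:i + window]
        if max(chunk) - min(chunk) <= spread:
            flags.append(i)
    return flags
```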
Specificity Checker targets vague claims. “This is very useful” → what does useful mean? For whom? In what situation? It marks every sentence that doesn’t answer those questions.
Fact Checker demands proof. Any claim that implies a number, a comparison, or a cause-and-effect relationship needs a source or gets flagged as unverified.
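The "implies a number, comparison, or cause-and-effect" trigger can be approximated with a keyword heuristic before the LLM does the real verification. My approximation of that brief, not the critic's actual prompt:

```python
import re

# Heuristic triggers: digits, comparatives, causal connectives.
# Any hit means the sentence makes a checkable claim.
CLAIM_PATTERN = re.compile(
    r"\d|faster|slower|better|worse|more than|less than|because|leads to|causes",
    re.IGNORECASE,
)

def needs_source(sentence: str) -> bool:
    """True if the sentence implies a number, comparison, or causal claim."""
    return bool(CLAIM_PATTERN.search(sentence))
```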
All four run against the same draft in parallel. Then I collect their output and make edits in a single pass. When two critics disagree — one flags a sentence as too long, the other wants more explanation — I resolve it manually.
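The fan-out/fan-in shape itself is generic. A sketch with `ThreadPoolExecutor`, where each critic is just a function from draft text to a list of flags (stand-ins for the actual agent calls):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def run_critics(draft: str,
                critics: dict[str, Callable[[str], list[str]]]) -> dict[str, list[str]]:
    """Fan out: run every critic on the same raw draft concurrently.
    Fan in: collect {critic name: flags} for one consolidated editing pass."""
    with ThreadPoolExecutor(max_workers=len(critics)) as pool:
        futures = {name: pool.submit(fn, draft) for name, fn in critics.items()}
        return {name: f.result() for name, f in futures.items()}
```

Because every critic gets the identical draft, none of them can anchor on another's output — which is the whole point of the phase.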
What broke
Problem 1: duplicate skills. When I activated the new skill, the command list showed two /blogpost entries — the old four-pass skill I’d built months ago, and the new pipeline. I deleted the old one and a related content-factory skill that had grown into essentially the same thing. That cleaned up the command list.
Problem 2: the skill skipped the questions. First run, I typed /blogpost and it immediately started writing. No questions. The result was 2,000 words of context-free text — “AI is a powerful tool for modern content creators.” The kind of article that exists only to fill space.
I added this to the top of SKILL.md:
## CRITICAL: Start with Questions
**DO NOT start writing immediately.**
Before ANY writing, you MUST use AskUserQuestion to gather context:
1. Topic & angle
2. Audience
3. Format (standard/conversation)
4. Target site
Only after receiving answers → proceed to research and writing.
That fixed it. The skill now opens with 2-3 minutes of questions. I answer, and the first draft comes out with actual context.
What the critics found on this article
When I ran the four critics on the draft of this post, they flagged:
- 7 spots with hedging words (“maybe,” “kind of,” “potentially”)
- 3 monotonous list sections where every bullet was roughly the same length
- 5 abstract claims that needed specific examples or numbers
- 2 technical inaccuracies in how I described the fan-out pattern
Four sequential editing passes on that would have taken me 30-40 minutes. The parallel agents finished in 2 minutes and produced a consolidated list I worked through in one final edit.
Takeaway
Sequential editing is a bottleneck by design. Each pass waits for the previous one, and each editor second-guesses what the last one “already fixed.” Parallel critics have no idea what the others flagged. They get the same raw draft and bring independent judgment.
The structure — four specialized roles, each looking for one thing and ignoring everything else — is what makes the output useful. A general “edit this” prompt gets you general feedback. Splitting the concerns gets you surgical feedback.
One skill in ~/.claude/skills/blogpost-pipeline/, symlinked to each project. I launch /blogpost, answer 3-4 questions, get a publishable article in the right format for the right site.
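The per-project wiring is one symlink. The project path here is a hypothetical placeholder — substitute each of your own sites:

```shell
# PROJECT is a hypothetical example path — set it per site
PROJECT=~/projects/my-site
mkdir -p "$PROJECT/.claude/skills"
ln -sfn ~/.claude/skills/blogpost-pipeline "$PROJECT/.claude/skills/blogpost-pipeline"
```

`-sfn` makes the link idempotent: rerunning it after an update replaces the old link instead of failing.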
This article was written by the skill it describes.