You Just Published a Blog Post. Now Do It Four More Times.
It is 9 PM. You just hit "publish" on a 2,000-word technical post about git worktrees. Good writing. Clear examples. You feel done.
You are not done.
Your Twitter audience wants a punchy thread -- six tweets, each under 280 characters, structured like a mini-story. Your LinkedIn network wants a polished take -- professional tone, hooks in the first two lines, hashtags at the bottom. Your newsletter subscribers want the personal version -- the "why I actually care about this" angle with a behind-the-scenes detail. And dev.to wants a cross-post with a canonical URL, a TL;DR up top, and an intro rewritten for their community vibe.
Four platforms. Four formats. Four voices. Same ideas, different containers.
Each rewrite takes 15 to 30 minutes. You open four browser tabs, copy the original, and start reshaping. By the third rewrite, you are cutting corners. The LinkedIn post becomes a lazy summary. The newsletter loses the technical depth that made the original worth reading.
Here is the thing: this is not a writing problem. It is a pipeline problem wearing a writing costume. The core content already exists. The transformations are mechanical -- adjust tone, enforce character limits, reformat structure. Think of it like compiling the same source code for different architectures. The logic stays the same. Only the output format changes.
Mechanical transformations are exactly what AI agents do well.
This article shows you how to build a content pipeline that takes one Markdown file and produces platform-ready output for Twitter/X, LinkedIn, email newsletters, and dev.to -- in a single command.
The Architecture: A Funnel That Fans Out
Picture a factory assembly line. Raw material enters at one end. First station: quality inspection -- what is this piece made of, what are its strongest properties? That analysis report follows the material to every subsequent station. Each station shapes the material for a different customer, but they all work from the same inspection report.
That is exactly how this pipeline works. Two stages.
Stage 1: Base analysis. Read the Markdown source. Extract the core message, key arguments, supporting evidence, and quotable lines. This creates a shared understanding of what the content actually says.
Stage 2: Fan-out. Send that extracted structure to multiple platform adapters running in parallel. Each adapter knows its platform's rules and reshapes the content accordingly.
Why not skip Stage 1 and send the raw Markdown directly to four platform prompts? Because each prompt would independently decide what the "main point" is. They would diverge. One picks up on the code examples. Another fixates on the productivity angle. A third invents a thesis that was never in the original. The base analysis stage is a single source of truth. Every adapter works from the same foundation.
This is a standard prompt chaining pattern -- break a complex task into focused stages, where each stage's output feeds the next. The fan-out at Stage 2 means all four platform versions generate simultaneously rather than sequentially. On a typical 2,000-word post, the entire pipeline completes in 15 to 30 seconds of wall-clock time.
Concrete Example: Git Worktrees Blog Post
To make this real, consider transforming a post titled "Git Worktrees: Parallel Development Without the Branch-Switching Tax" into four platform formats. The original is 1,800 words covering what git worktrees are, why they eliminate context-switching overhead, and how to set them up with AI CLI agents.
Step 1: Define Platform Output Schemas
Each platform has different constraints. Think of schemas as molds in a factory -- they force the output into exactly the shape each platform expects. Define them as structured output so the agent produces machine-parseable JSON, not freeform text.
{
"twitter_thread": {
"tweets": [
{ "text": "string (max 280 chars)", "position": "number" }
],
"thread_length": "number (5-12 tweets)"
},
"linkedin_post": {
"hook": "string (max 150 chars, first visible line)",
"body": "string (max 3000 chars)",
"hashtags": "string[] (3-5 tags)"
},
"newsletter": {
"subject_line": "string (max 60 chars)",
"preview_text": "string (max 90 chars)",
"body_html": "string (600-800 words)",
"cta_text": "string",
"cta_url": "string"
},
"devto": {
"title": "string",
"tags": "string[] (max 4)",
"canonical_url": "string",
"body_markdown": "string (adjusted from original)"
}
}
Two purposes here. First, the schemas constrain the agent so you get exactly what each platform expects -- no 400-character tweets, no LinkedIn posts missing the hook. Second, the output is programmatically consumable. You can pipe the JSON straight into a posting script or CMS API without manual copy-paste.
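The schemas also make the limits checkable. Here is a minimal validator sketch -- `jq` is assumed to be installed, and `validate_thread` is a hypothetical helper name, not part of any tool -- that verifies the 280-character rule mechanically instead of trusting the model:

```shell
# Hypothetical helper: check a generated twitter.json against the
# twitter_thread schema's 280-character limit.
validate_thread() {
  local file="$1"
  local over
  # Count tweets whose text exceeds 280 characters.
  over=$(jq '[.tweets[] | select((.text | length) > 280)] | length' "$file")
  if [ "$over" -gt 0 ]; then
    echo "FAIL: $over tweet(s) exceed 280 chars"
    return 1
  fi
  echo "OK: all tweets within limit"
}

# Usage: validate_thread "$OUTPUT_DIR/twitter.json"
```

The same pattern extends to the other schemas -- swap the jq filter for `.hook | length > 150` or `.subject_line | length > 60`.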
Step 2: Configure CLAUDE.md for Platform Tone
Your CLAUDE.md file is like a style guide pinned to the factory wall. Every worker (every prompt) sees it. Add a content pipeline section that encodes tone and style rules for each platform.
## Content Pipeline -- Platform Adaptation Rules
### Twitter/X Thread
- Open with a bold claim or surprising stat. No "I just wrote about..."
- Each tweet must stand alone but flow as part of a narrative
- Use short sentences. One idea per tweet.
- End with a concrete takeaway or call-to-read
- No hashtags in thread body. One relevant hashtag in the final tweet only.
- Technical terms are fine. Jargon is not.
### LinkedIn Post
- First two lines must hook. LinkedIn truncates after ~210 chars.
- Professional but not corporate. Write like a senior engineer, not a marketing team.
- Use line breaks aggressively. Dense paragraphs die on LinkedIn.
- End with a question to drive comments.
- 3-5 hashtags at the bottom, separated from the body.
### Newsletter
- Write as if speaking to a colleague over coffee.
- Assume the reader has 2 minutes. Front-load value.
- Include one thing the blog post does not: a personal observation or behind-the-scenes detail.
- CTA links to the full blog post.
### dev.to Cross-Post
- Add a canonical URL pointing to the original.
- Rewrite the introduction to fit dev.to culture: practical, slightly informal, community-oriented.
- Keep all code blocks and technical depth intact.
- Add a "TL;DR" section at the top that the original may not have.
Step 3: The Pipeline Script
Here is the complete shell script. It uses Claude Code in headless mode (the -p flag) with structured output. Think of each claude -p call as a worker at a station -- they all receive the same analysis report, but each produces a different output.
#!/bin/bash
set -euo pipefail
# ── Configuration ──────────────────────────────────────────────
SOURCE_FILE="${1:?Usage: ./content-pipeline.sh <markdown-file>}"
OUTPUT_DIR="./dist/content-$(date +%Y%m%d-%H%M%S)"
BLOG_BASE_URL="https://yourdomain.com/blog"
SLUG=$(basename "$SOURCE_FILE" .md)
mkdir -p "$OUTPUT_DIR"
# ── Stage 1: Base Analysis ─────────────────────────────────────
echo "[1/3] Analyzing source content..."
claude -p "
Read the file $SOURCE_FILE and extract:
1. The single core message in one sentence.
2. The 3-5 key arguments, each in one sentence.
3. Up to 5 quotable lines (punchy, self-contained phrases).
4. The target audience (who benefits from reading this).
5. One surprising or counterintuitive claim from the article.
Output as JSON with keys: core_message, key_arguments, quotable_lines,
target_audience, surprising_claim.
" --output-format json > "$OUTPUT_DIR/analysis.json"
echo " Analysis saved to $OUTPUT_DIR/analysis.json"
# ── Stage 2: Fan-Out to All Platforms (Parallel) ───────────────
echo "[2/3] Generating platform versions..."
# Twitter/X Thread
claude -p "
You are a content adapter. Using the source file $SOURCE_FILE and the
analysis in $OUTPUT_DIR/analysis.json, create a Twitter/X thread.
Rules from CLAUDE.md apply. Output as JSON matching the twitter_thread
schema: an array of tweet objects with text (max 280 chars) and position.
Thread length: 5-12 tweets.
" --output-format json > "$OUTPUT_DIR/twitter.json" &
PID_TWITTER=$!
# LinkedIn Post
claude -p "
You are a content adapter. Using the source file $SOURCE_FILE and the
analysis in $OUTPUT_DIR/analysis.json, create a LinkedIn post.
Rules from CLAUDE.md apply. Output as JSON matching the linkedin_post
schema: hook (max 150 chars), body (max 3000 chars), hashtags (3-5).
" --output-format json > "$OUTPUT_DIR/linkedin.json" &
PID_LINKEDIN=$!
# Newsletter
claude -p "
You are a content adapter. Using the source file $SOURCE_FILE and the
analysis in $OUTPUT_DIR/analysis.json, create a newsletter edition.
Rules from CLAUDE.md apply. Output as JSON matching the newsletter schema:
subject_line (max 60 chars), preview_text (max 90 chars), body_html
(600-800 words), cta_text, cta_url (use $BLOG_BASE_URL/$SLUG).
" --output-format json > "$OUTPUT_DIR/newsletter.json" &
PID_NEWSLETTER=$!
# dev.to Cross-Post
claude -p "
You are a content adapter. Using the source file $SOURCE_FILE and the
analysis in $OUTPUT_DIR/analysis.json, create a dev.to cross-post.
Rules from CLAUDE.md apply. Output as JSON matching the devto schema:
title, tags (max 4), canonical_url ($BLOG_BASE_URL/$SLUG),
body_markdown (full article adapted for dev.to).
" --output-format json > "$OUTPUT_DIR/devto.json" &
PID_DEVTO=$!
# Wait for all platform adaptations
wait $PID_TWITTER $PID_LINKEDIN $PID_NEWSLETTER $PID_DEVTO
echo " All platform versions generated."
# ── Stage 3: Summary ──────────────────────────────────────────
echo "[3/3] Pipeline complete. Output:"
echo " $OUTPUT_DIR/analysis.json -- base content analysis"
echo " $OUTPUT_DIR/twitter.json -- Twitter/X thread"
echo " $OUTPUT_DIR/linkedin.json -- LinkedIn post"
echo " $OUTPUT_DIR/newsletter.json -- newsletter edition"
echo " $OUTPUT_DIR/devto.json -- dev.to cross-post"
echo ""
echo "Approximate output volume across all calls:"
cat "$OUTPUT_DIR"/*.json | wc -c | awk '{printf " ~%.0f output chars\n", $1}'
The critical detail is the & after each platform command and the wait call at the end. Those four platform adaptations run as parallel background processes -- four simultaneous API calls. The total wall-clock time equals the slowest single adaptation (typically the newsletter at 8-12 seconds), not the sum of all four. Like four cooks working four burners at once instead of one cook working them sequentially.
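One caveat worth knowing about that final step: `wait PID1 PID2 ...` returns only the exit status of the last PID listed, so a failed adapter earlier in the list can slip through silently. A sketch of per-job waiting, with trivial background jobs standing in for the real `claude -p` calls:

```shell
#!/bin/bash
# Sketch: wait on each background job individually. A single
# `wait PID1 PID2 ...` only reports the exit status of the LAST
# process listed, so earlier failures go unnoticed.
pids=()
(sleep 0.1; exit 0) & pids+=($!)   # simulated successful adapter
(sleep 0.1; exit 1) & pids+=($!)   # simulated failed adapter
fail=0
for pid in "${pids[@]}"; do
  wait "$pid" || { echo "adapter (PID $pid) failed" >&2; fail=1; }
done
echo "fail=$fail"
```

Dropping this loop into the pipeline script in place of the single `wait` line lets you abort (or retry one platform) instead of shipping three good files and one empty one.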
Step 4: Run It
chmod +x content-pipeline.sh
./content-pipeline.sh posts/git-worktrees-parallel-dev.md
Output:
[1/3] Analyzing source content...
Analysis saved to ./dist/content-20260322-143022/analysis.json
[2/3] Generating platform versions...
All platform versions generated.
[3/3] Pipeline complete. Output:
./dist/content-20260322-143022/analysis.json -- base content analysis
./dist/content-20260322-143022/twitter.json -- Twitter/X thread
./dist/content-20260322-143022/linkedin.json -- LinkedIn post
./dist/content-20260322-143022/newsletter.json -- newsletter edition
./dist/content-20260322-143022/devto.json -- dev.to cross-post
Sample Output: Twitter/X Thread
Here is what the pipeline produces for the git worktrees post:
Tweet 1:
Git worktrees let you work on three features simultaneously
without stashing, switching branches, or losing your terminal state.
Most developers have never heard of them.
Tweet 2:
The problem: you are deep in a feature branch. A hotfix comes in.
You stash, switch branches, fix, commit, switch back, pop stash.
Context destroyed. Flow state gone. This happens 5-10 times a week.
Tweet 3:
git worktree add ../hotfix main
That command creates a second working directory linked to the same repo.
Your feature branch stays untouched. Open the hotfix dir in a new pane.
Tweet 4:
Each worktree has its own working directory, its own HEAD,
its own index. But they share the same .git object store.
No cloning. No duplicate history. Instant creation.
Tweet 5:
The real power: pair worktrees with AI CLI agents.
One agent refactors in worktree A. Another writes tests in worktree B.
They run in parallel. No merge conflicts because they touch different files.
Tweet 6:
Three worktrees. Three terminal panes. Three agents working simultaneously.
That is not a future workflow. That is Tuesday.
Full guide: [link]
Sample Output: LinkedIn Post
Most developers context-switch 5-10 times per day between branches.
Each switch means stashing work, losing terminal state, and rebuilding
mental context. It takes 15-25 minutes to recover deep focus after
an interruption. The math is brutal: 5 switches x 20 minutes = nearly
2 hours lost daily.
Git worktrees eliminate this entirely. One command creates a parallel
working directory on a different branch. Your current work stays
untouched. No stashing. No switching. No lost state.
I have been running three worktrees simultaneously with AI CLI agents --
one per feature, each in its own terminal pane. The agents work in
parallel because they operate on isolated directory trees.
The setup takes 5 minutes. The time savings compound every single day.
What is the most time you have lost to a single context switch?
#git #developerproductivity #terminalworkflow #aitools
Sample Output: Newsletter
Subject: The git command that mass-produces flow state
Preview: You probably haven't used git worktree. You should.
Hi --
I spent last week rewriting how I handle parallel development,
and the results were dramatic enough to share immediately.
The core idea: git worktree lets you have multiple working directories
from the same repository, each on a different branch, all active
simultaneously. When a hotfix interrupts your feature work, you do not
stash and switch. You open a new directory and fix it there. Your
feature branch never notices.
What the blog post does not mention: I resisted worktrees for two years
because the name sounded like a data structure lecture. The actual UX
is one command: git worktree add ../hotfix main. That is it.
Pair this with terminal panes (one per worktree) and an AI CLI agent
in each, and you get genuine parallel development. Three features
progressing simultaneously. No merge conflicts because each worktree
operates on isolated files.
The full technical breakdown, including the multi-agent setup, is here:
[Read the full post]
-- Danny
Sample Output: dev.to Cross-Post
---
title: "Git Worktrees: Parallel Development Without the Branch-Switching Tax"
tags: [git, productivity, terminal, ai]
canonical_url: https://yourdomain.com/blog/git-worktrees-parallel-dev
---
## TL;DR
Git worktrees let you check out multiple branches simultaneously in
separate directories from the same repo. Combined with AI CLI agents
in parallel terminal panes, you get true concurrent development with
zero context-switching overhead.
---
Hey dev.to -- if you have ever lost 20 minutes of focus because
a hotfix forced you to stash and switch branches, this one is for you.
[Full article body with code blocks preserved, introduction adapted
for dev.to audience, canonical URL pointing to original...]
Cost Comparison: Manual vs. Pipeline
Let us talk numbers.
| | Manual Rewriting | AI Pipeline |
|---|---|---|
| Time per post | 60-90 min (4 platforms) | ~30 seconds (wall clock) |
| Per-article cost | Your hourly rate x 1-1.5 hrs | ~$0.15-0.40 in API tokens |
| Quality consistency | Degrades with fatigue | Consistent (governed by CLAUDE.md rules) |
| Monthly cost (8 posts) | 8-12 hours of your time | ~$1.50-3.00 in API tokens |
| Scaling to 10 platforms | Linear time increase | Add one more background process |
Here is where it gets interesting. The token cost breakdown for one run on a 2,000-word source article: the base analysis stage uses roughly 3,000 input tokens and 500 output tokens. Each platform adapter uses roughly 4,000 input tokens (source + analysis) and 800-2,000 output tokens depending on format length. Total across all five calls: roughly 19,000 input tokens before per-call overhead like the system prompt and CLAUDE.md -- call it 25,000 -- and around 6,000 output tokens. At Claude Sonnet rates, that is on the order of $0.15 per run. At Opus rates for higher quality, approximately $0.40.
If you batch all platforms into a single batch request instead of individual calls, costs drop by roughly 50%. Batch processing trades speed for savings -- results arrive within hours rather than seconds. For content that does not need to publish immediately, batching eight articles at once brings the per-article cost below $0.10. That is less than the price of a gumball.
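A sketch of assembling such a batch file with `jq`, one request per platform. The request shape (`custom_id` plus `params`) and the model name are assumptions here -- confirm both against the current batch API reference before wiring this in:

```shell
#!/bin/bash
# Sketch: build one batch request per platform instead of four live calls.
# The request envelope and "claude-sonnet-placeholder" model name are
# placeholders, not verified against any provider's current batch API.
BATCH_FILE=$(mktemp)
jq -n --args '{requests: [$ARGS.positional[] | {
  custom_id: .,
  params: {
    model: "claude-sonnet-placeholder",
    max_tokens: 4096,
    messages: [{role: "user", content: ("Adapt the analyzed article for " + .)}]
  }
}]}' twitter linkedin newsletter devto > "$BATCH_FILE"

# Sanity check: four requests assembled.
jq '.requests | length' "$BATCH_FILE"
```

One upload, four results hours later, roughly half the cost.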
Extending the Pipeline
The script handles four platforms. Adding more is like adding another station to the assembly line -- copy one platform block, adjust the prompt and schema, add another background process. Common extensions:
- Hacker News submission: title + 2-3 sentence comment for the submission thread
- Reddit post: subreddit-specific tone, self-post format, mandatory flair selection
- Podcast show notes: bullet-point summary, timestamp anchors, guest quotes extracted
- Slide deck outline: one key point per slide, speaker notes, visual suggestions
Each extension is one more claude -p call in the fan-out stage.
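For example, a hypothetical Hacker News adapter -- wrapped in a function here so it reads as one unit; in content-pipeline.sh you would inline it next to the other four blocks and background it with `&` like the rest. The prompt wording and 80-character title cap are assumptions for illustration:

```shell
# Hypothetical fifth adapter: Hacker News submission. Expects the same
# SOURCE_FILE and OUTPUT_DIR variables the pipeline script already defines.
generate_hackernews() {
  claude -p "
  You are a content adapter. Using the source file $SOURCE_FILE and the
  analysis in $OUTPUT_DIR/analysis.json, create a Hacker News submission.
  Rules from CLAUDE.md apply. Output as JSON with keys: title (max 80 chars)
  and comment (2-3 sentences for the submission thread).
  " --output-format json > "$OUTPUT_DIR/hackernews.json"
}

# In the script: generate_hackernews & PID_HN=$!
```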
You can also chain a review stage after the fan-out: a final prompt reads all generated versions and checks for factual inconsistencies, tone drift, or content that diverges from the base analysis. Think of it as the quality inspector at the end of the assembly line. It adds 3-5 seconds and catches the occasional hallucination where a platform version invents a statistic not present in the original.
The Multi-Pane Advantage
You run the pipeline. Thirty seconds later, four JSON files sit in your output directory. The natural question: "Is any of this actually ready to post?"
You need to review four outputs, each with different formatting rules. The productive way is to see them all at once. One terminal pane shows the Twitter thread. Another shows the LinkedIn post. A third renders the newsletter HTML. A fourth displays the dev.to Markdown.
You scan all four simultaneously. You spot that one tweet exceeds 280 characters. You notice the LinkedIn hook is too generic. You catch that the newsletter accidentally dropped a key argument the Twitter thread preserved. You adjust the CLAUDE.md rules and re-run.
Now compare that to checking each file sequentially in a single terminal. By the time you read the fourth output, you have forgotten whether the first one captured the same key argument. Side-by-side comparison is not a nice-to-have -- it is how you maintain consistency across platform versions. Same reason code review tools show diffs side by side instead of sequentially.
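A mechanical pre-check helps before you open the panes at all. Here is a sketch -- assuming `jq` and the Step 1 schemas, with `summarize_outputs` as a hypothetical helper name -- that prints each version's lead line side by side, so a diverging core message jumps out immediately:

```shell
# Hypothetical helper: show the "headline" field of each platform output
# so all four versions can be compared at a glance.
summarize_outputs() {
  local dir="$1"
  printf 'TWITTER    %s\n' "$(jq -r '.tweets[0].text' "$dir/twitter.json")"
  printf 'LINKEDIN   %s\n' "$(jq -r '.hook' "$dir/linkedin.json")"
  printf 'NEWSLETTER %s\n' "$(jq -r '.subject_line' "$dir/newsletter.json")"
  printf 'DEVTO      %s\n' "$(jq -r '.title' "$dir/devto.json")"
}

# Usage: summarize_outputs ./dist/content-20260322-143022
```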
Recap
The content pipeline pattern is simple: analyze once, adapt in parallel. One Markdown file becomes five platform-ready versions in under 30 seconds, at a cost of $0.15-0.40 per run. The structured output schemas act as molds, ensuring each version respects platform constraints. The CLAUDE.md configuration encodes your voice and style rules so the agent does not drift into generic marketing copy.
The biggest leverage comes from the fan-out architecture. Four platform adaptations running simultaneously means the pipeline's wall-clock time barely grows as you add more targets. Write once, publish everywhere -- not as a slogan, but as a shell script you can run today.
Ready to streamline your terminal workflow?
Multi-terminal drag-and-drop layout, workspace Git sync, built-in AI integration, AST code analysis -- all in one app.