Tuesday Morning, Three Browser Tabs Deep
It is 9 AM. Your product lead needs an updated competitor pricing comparison by noon. You open three browser tabs — one for each competitor's pricing page. You scroll. You copy numbers into a spreadsheet. You notice one competitor changed their plans since last month. You go back, re-check, realize you missed the enterprise tier hiding behind a "Contact Sales" button. You start over.
Two hours later you have a comparison table. It will be outdated within a week. Next Tuesday, you do it all again.
This is how most teams handle competitive intelligence. It is manual, slow, and the results have the shelf life of fresh fish. A product manager tracking five competitors across pricing, features, and changelog updates spends three to five hours per week on work that is fundamentally mechanical: open a URL, find the right numbers, copy them into a table, repeat.
There is a better way. An AI agent with a headless browser can visit every competitor page, extract exactly the data you need into a structured format, and do it for all competitors in parallel. Three competitors analyzed in two minutes, not two hours. The whole thing runs from your terminal, and you can schedule it to repeat every Monday morning without touching a browser.
This article walks through the exact setup. The concrete example compares the pricing pages of three AI CLI tools — Claude Code, Gemini CLI, and Codex CLI — but the pattern works for any competitive analysis scenario.
What You Need
Tools:
- Claude Code — the AI agent that orchestrates the analysis. Requires a Claude subscription ($20/month Pro minimum). A full comparison of AI CLI tool options is in the 2026 AI CLI Tools Guide.
- Playwright MCP server — gives Claude Code the ability to browse the web via a headless Chromium browser. MCP stands for Model Context Protocol — it is the standard way to give AI agents access to external tools. No separate Playwright install needed; the MCP server handles it.
- Termdock (recommended) — for running multiple agent sessions side by side. Any terminal works for single-agent use.
Time: 15 minutes for initial setup. Under 10 minutes per analysis run after that.
Step 1: Configure the MCP Server for Web Browsing
Claude Code does not browse the web by itself. It needs an MCP server that exposes browser actions as callable tools — think of it as giving the agent hands to click and eyes to read web pages.
The Playwright MCP server launches a headless Chromium instance and exposes tools such as `browser_navigate`, `browser_snapshot`, `browser_click`, and `browser_evaluate` (for running JavaScript on the page).
Add this to your project's `.mcp.json` file:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless"]
    }
  }
}
```
Or configure it globally in ~/.claude/mcp.json if you want web browsing available across all projects.
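If you prefer not to edit JSON by hand, Claude Code can also register the server from the command line. The exact flags below are an assumption to check against `claude mcp add --help` for your installed version:

```shell
# Register the Playwright MCP server for all projects
# (--scope user makes it global; verify flags with `claude mcp add --help`)
claude mcp add playwright --scope user -- npx @playwright/mcp@latest
```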
Verify the server is reachable by starting Claude Code and asking it to navigate to any URL:

```
claude
> Navigate to https://example.com and tell me the page title
```

If Claude Code responds with the page title, you are in business. If it says no browser tools are available, check that npx can find the `@playwright/mcp` package and that no firewall is blocking Chromium's outbound connections.
Step 2: Define the Extraction Schema
The difference between useful competitive intelligence and a wall of unstructured text is one thing: you decided what to extract before you started extracting. That is the schema.
For a pricing comparison, it looks like this:
```markdown
## Extraction Schema: Pricing Page

For each pricing tier found on the page, extract:

- **Tier name** (e.g., "Pro", "Enterprise")
- **Monthly price** (USD, monthly billing)
- **Annual price** (USD, annual billing, per-month equivalent)
- **Usage limits** (requests, tokens, seats — whatever the unit is)
- **Key features included** (list the top 5 differentiating features)
- **Key features excluded** (notable omissions compared to higher tiers)
- **Free tier details** (if one exists)
- **Last verified date** (today's date)

Output as a markdown table with one row per tier.
```
Save this as schemas/pricing-comparison.md in your project. You will reference it in agent prompts. Keeping schemas as separate files means you can swap them without editing the prompt — use schemas/feature-comparison.md for feature analysis, schemas/changelog-tracker.md for release monitoring, and so on. Same engine, different fuel.
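Loading the right schema can be a one-line function in your scripts. A minimal sketch, assuming the `schemas/` filenames above:

```shell
# Sketch: load a schema file by analysis type so the surrounding
# prompt stays generic. Filenames are assumptions — match them to
# whatever actually lives in schemas/.
schema_for() {
  local file="schemas/$1.md"
  if [ -f "$file" ]; then
    cat "$file"
  else
    echo "missing schema: $file" >&2
    return 1
  fi
}

# Usage: SCHEMA=$(schema_for pricing-comparison)
```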
Step 3: Write the Agent Prompt
The prompt tells the agent what to do with the browser tools and the schema. Here is the prompt for a single competitor:
```
You are a competitive intelligence analyst. Your task:

1. Navigate to the provided URL using the playwright browser tools
2. Take a snapshot of the page to understand its structure
3. Extract pricing information according to the schema below
4. If the pricing page links to sub-pages (e.g., "See full feature comparison"),
   navigate to those pages and extract additional data
5. Output the result as a structured markdown table

## Extraction Schema

[contents of schemas/pricing-comparison.md]

## Target

- Company: {COMPANY_NAME}
- URL: {PRICING_URL}

## Rules

- Extract only what is visible on the page. Do not infer or guess.
- If a data point is not available, write "Not listed" instead of guessing.
- Include the exact URL you extracted each data point from.
```
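Filling the `{COMPANY_NAME}` and `{PRICING_URL}` placeholders is a one-liner with `sed`. A hypothetical helper (the template filename is an assumption):

```shell
# Hypothetical helper: fill the {COMPANY_NAME} and {PRICING_URL}
# placeholders in a prompt template before handing it to the agent.
# Uses | as the sed delimiter so URLs with slashes pass through cleanly.
render_prompt() {
  local template="$1" name="$2" url="$3"
  sed -e "s|{COMPANY_NAME}|$name|g" -e "s|{PRICING_URL}|$url|g" "$template"
}

# Usage: render_prompt prompt-template.md "Gemini CLI" "https://ai.google.dev/gemini-api/docs/pricing"
```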
Step 4: Run a Single Competitor Analysis
Test with one competitor before going parallel. Prototype first, scale second.
```bash
claude -p "$(cat <<'EOF'
You are a competitive intelligence analyst. Use the playwright browser tools
to navigate to https://claude.ai/pricing and extract all pricing tier
information.

For each tier, extract:
- Tier name
- Monthly price (USD)
- Usage limits (messages per window, features)
- Key included features (top 5)

Output as a clean markdown table. Only include data visible on the page.
EOF
)"
```
Claude Code will:
- Launch the headless browser via the Playwright MCP server
- Navigate to the pricing URL
- Take a page snapshot to understand the DOM structure
- Extract the pricing data
- Format and return a markdown table
A typical single-page extraction takes 30-60 seconds. Pages with dynamic content (JavaScript-rendered pricing calculators, interactive sliders) take longer because the agent may need to interact with UI elements to reveal all tiers.
Handling Dynamic Content
Some pricing pages hide information behind tabs, toggles, or "Show more" buttons. The real prices live two clicks deep. The Playwright MCP server's click tool handles this:
```
Navigate to the page, then:

1. Snapshot the page
2. If there are tabs for "Monthly" and "Annual" pricing, click each tab
   and extract both sets of prices
3. If there is a "See all features" or "Compare plans" link, click it
   and extract the full feature matrix
```
You describe what to look for. The agent figures out which elements to click. It is like telling a research assistant "get me the annual pricing too" instead of writing a CSS selector.
Step 5: Scale to Parallel Execution
One competitor at a time is already faster than manual research. Three competitors running simultaneously is where the math changes completely.
The Parallel Pattern
Open three terminal panes. In Termdock, split your window into three vertical panels. In each pane, launch a Claude Code session targeting a different competitor:
Pane 1 — Claude Code pricing:

```bash
claude -p "Analyze pricing at https://claude.ai/pricing. Extract all tiers, \
prices, usage limits, and key features. Output as markdown table." \
  > output/claude-code-pricing.md
```

Pane 2 — Gemini CLI pricing:

```bash
claude -p "Analyze pricing at https://ai.google.dev/gemini-api/docs/pricing. \
Extract all tiers, prices, rate limits, and key features. Output as markdown table." \
  > output/gemini-cli-pricing.md
```

Pane 3 — Codex CLI pricing:

```bash
claude -p "Analyze pricing at https://openai.com/chatgpt/pricing/. \
Extract all tiers, prices, usage limits, and key features. Output as markdown table." \
  > output/codex-cli-pricing.md
```
All three agents run simultaneously. Each one launches its own headless browser instance, navigates to the target URL, and extracts data independently. Wall-clock time: 1-2 minutes for all three, compared to 6-10 minutes sequentially.
Why Parallel Agents Need Separate Panes
Each Claude Code session maintains its own MCP server connection and browser instance. They do not share state. This is a feature, not a limitation. Browser sessions that share cookies or cache can leak state between competitors — imagine one session's login token persisting into another. Isolated sessions guarantee clean extraction.
The practical requirement: one terminal per agent. A terminal multiplexer like Termdock makes this manageable. You see all three agents working in a single window, catch errors in real time, and re-run a single agent if one site is temporarily down — without disrupting the others.
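If you want to see the fan-out pattern without opening panes at all, the same thing works from a single shell with background jobs. A sketch with a stand-in for the real agent call:

```shell
# extract is a stand-in for the real claude invocation, so the
# fan-out pattern is visible in isolation; swap the echo for
# something like: claude -p "..." > "output/$1.md"
extract() {
  echo "extracting $1"
}

extract claude-code &
extract gemini-cli &
extract codex-cli &
wait   # block until all background jobs finish
echo "all extractions done"
```

The tradeoff versus panes: backgrounded agents share one scrollback, so you lose the at-a-glance progress view that makes a multiplexer worthwhile.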
Step 6: Merge and Compare
Once all three agents finish, you have individual markdown files. The last mile is feeding them back to an agent for cross-competitor comparison:
```bash
claude -p "$(cat <<EOF
I have three competitor pricing analyses. Compare them and generate:

1. A unified comparison table with all competitors as columns
2. A "best value" recommendation for three developer profiles:
   - Solo developer, budget-conscious
   - Professional developer, moderate usage
   - Team lead, heavy usage across multiple projects
3. Key differentiators: what does each tool offer that the others do not?

## Competitor Data

### Claude Code
$(cat output/claude-code-pricing.md)

### Gemini CLI
$(cat output/gemini-cli-pricing.md)

### Codex CLI
$(cat output/codex-cli-pricing.md)
EOF
)" > output/competitive-comparison.md
```

Note the unquoted heredoc delimiter (`<<EOF`, not `<<'EOF'`): quoting it would stop the `$(cat ...)` substitutions from expanding, and the agent would receive the literal command text instead of your data.
The output is a single document with:
- A normalized comparison table (all prices in the same format, same tier categories)
- Qualitative analysis (differentiators, gaps, positioning)
- Actionable recommendations per user profile
This is the deliverable you send to a product team, drop into a strategy doc, or use to inform your own pricing decisions. It took five minutes, not five hours.
Making It a Repeatable Weekly Task
A competitive analysis that runs once is useful. One that runs every Monday morning while you pour coffee is a strategic advantage.
The Automation Script
```bash
#!/bin/bash
# weekly-competitive-analysis.sh
# Run with: bash weekly-competitive-analysis.sh
set -euo pipefail

DATE=$(date +%Y-%m-%d)
OUTPUT_DIR="competitive-intel/$DATE"
mkdir -p "$OUTPUT_DIR"

COMPETITORS=(
  "Claude Code|https://claude.ai/pricing"
  "Gemini CLI|https://ai.google.dev/gemini-api/docs/pricing"
  "Codex CLI|https://openai.com/chatgpt/pricing/"
)

SCHEMA=$(cat schemas/pricing-comparison.md)

# Phase 1: parallel extraction — one background agent per competitor
PIDS=()
for entry in "${COMPETITORS[@]}"; do
  IFS='|' read -r name url <<< "$entry"
  slug=$(echo "$name" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
  claude -p "You are a competitive intelligence analyst. \
Navigate to $url using the playwright browser tools. \
Extract pricing data per this schema: $SCHEMA \
Company: $name. Output as markdown." \
    > "$OUTPUT_DIR/${slug}-raw.md" &
  PIDS+=($!)
done

# Wait for every extraction to finish before merging
for pid in "${PIDS[@]}"; do
  wait "$pid"
done
echo "Extraction complete. Merging..."

# Phase 2: merge and compare
# Note: \n inside double quotes is literal in bash, so the combined
# document is built with $'\n' (ANSI-C quoting) instead
COMBINED=""
for entry in "${COMPETITORS[@]}"; do
  IFS='|' read -r name url <<< "$entry"
  slug=$(echo "$name" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
  COMBINED+="### $name"$'\n'"$(cat "$OUTPUT_DIR/${slug}-raw.md")"$'\n\n'
done

claude -p "Compare these competitor pricing analyses. \
Generate a unified comparison table and key differentiators. \
Data: $COMBINED" \
  > "$OUTPUT_DIR/comparison-report.md"

echo "Report saved to $OUTPUT_DIR/comparison-report.md"
```
Scheduling It
Add the script to your crontab for Monday mornings:
```
# Run competitive analysis every Monday at 8:00 AM
0 8 * * 1 cd /path/to/project && bash weekly-competitive-analysis.sh
```
Or if you prefer a manual trigger with a consistent workflow, create an alias:
```bash
alias comp-analysis='cd /path/to/project && bash weekly-competitive-analysis.sh'
```
Tracking Changes Over Time
Each run saves to a date-stamped directory. To detect what changed between weeks:
```bash
diff competitive-intel/2026-03-15/comparison-report.md \
     competitive-intel/2026-03-22/comparison-report.md
```
Or ask Claude Code to summarize the differences in plain language:
```bash
claude -p "Compare these two weekly competitive reports and highlight \
what changed: pricing updates, new tiers, removed features, positioning shifts. \
Week of March 15: $(cat competitive-intel/2026-03-15/comparison-report.md) \
Week of March 22: $(cat competitive-intel/2026-03-22/comparison-report.md)"
This gives you a changelog of competitor moves — price changes, new tiers, feature additions — without reading every page yourself. Over months, these diffs build into a timeline of your competitive landscape.
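Picking the two most recent report directories to diff can be scripted as well. A sketch that relies on the `YYYY-MM-DD` directory names sorting chronologically:

```shell
# Print the two most recent date-stamped report directories.
# Plain lexicographic sort works because names are YYYY-MM-DD.
latest_two() {
  ls -d competitive-intel/*/ | sort | tail -n 2
}

# Usage: diff $(latest_two | sed 's|$|comparison-report.md|')
```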
Extending Beyond Pricing
The pattern is the same regardless of what you are extracting. Swap the schema and the URLs. Everything else stays.
Feature comparison: Change the schema to extract feature lists, integration support, platform availability. Navigate to each competitor's feature page instead of the pricing page.
Changelog monitoring: Point agents at /changelog or /releases pages. Extract dates, version numbers, and feature descriptions. Diff against last week's extraction to see what shipped.
Job postings analysis: Scrape competitor career pages to understand their hiring priorities. A company that suddenly posts five ML engineer roles is signaling a strategic shift before they announce anything publicly.
Documentation depth: Count pages, sections, and code examples in competitor docs. Documentation quality correlates with developer adoption. A competitor with twice your code examples is winning the onboarding battle.
Each variation only requires a new schema file and updated URLs. The extraction, parallel execution, and comparison pipeline stays the same.
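For example, a feature-comparison schema might look like this — an illustrative sketch following the same shape as the pricing schema, not a prescribed format:

```markdown
## Extraction Schema: Feature Page

For each feature listed on the page, extract:

- **Feature name**
- **Availability** (which tiers include it)
- **Platform support** (OS, IDE, CI integrations)
- **Last verified date** (today's date)

Output as a markdown table with one row per feature.
```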
Practical Limits and Workarounds
Rate limiting: Some sites block rapid automated requests. The Playwright MCP server behaves like a real browser — it runs full Chromium with a real user agent — which avoids most bot detection. If you do hit rate limits, instruct the agent: "Wait 3 seconds between page loads."
JavaScript-heavy pages: Single-page apps that render pricing dynamically work fine. Chromium executes JavaScript fully. The agent can wait for elements to render before extracting data.
Login-gated content: If competitor data requires authentication, you can configure the Playwright session with pre-set cookies or have the agent perform a login flow. This is more fragile and requires maintaining credentials — use it only for data that is genuinely behind a login wall.
Accuracy verification: AI extraction is not infallible. For critical business decisions, spot-check the extracted data against the actual pages. The agent includes source URLs in its output for exactly this purpose. Trust but verify.
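One low-effort verification pass: pull every source URL out of a report and open a couple at random. A sketch:

```shell
# List the unique source URLs an agent cited in its output,
# ready for a manual spot-check. The regex stops at whitespace,
# closing brackets, parentheses, and quotes.
cited_urls() {
  grep -oE 'https?://[^] )"]+' "$1" | sort -u
}

# Usage: cited_urls output/claude-code-pricing.md
```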
The Workflow Summary
| Step | What Happens | Time |
|---|---|---|
| 1. Configure MCP | One-time Playwright MCP setup | 5 min (once) |
| 2. Define schema | Write extraction schema for your use case | 10 min (once) |
| 3. Parallel extract | N agents scrape N competitors simultaneously | 1-2 min |
| 4. Merge and compare | One agent synthesizes all data | 1-2 min |
| 5. Review output | Spot-check the comparison table | 2-3 min |
| **Total per run** | | **5-7 min** |
Compare this to the manual workflow: 2-5 hours per analysis, results that go stale before you finish, and no structured way to see what changed between weeks.
Where This Fits in the Bigger Picture
Competitive analysis is one workflow. The AI Agent Workflow Guide covers the full range of terminal-based automation patterns — from daily standup reports to data analysis pipelines to multi-agent code review. Each one follows the same structure: define the task, configure the tools, let agents execute in parallel, merge the results.
The cost of running this workflow is minimal. Three Claude Code sessions extracting one page each uses a small fraction of your daily message allocation. If cost is a concern, the AI CLI Cost Optimization Guide shows how to route different tasks to different tools based on complexity. For competitive analysis specifically, Claude Code's structured output quality justifies the cost — the Free AI CLI Tools Ranking covers alternatives if you want to experiment with free options first.
The parallel execution pattern — multiple agents, each handling one competitor, all running simultaneously — is where Termdock earns its place. Split your terminal into three panes, launch an agent in each, and watch all three extract data at the same time. A task that used to eat your Tuesday morning now finishes while you are still on your first cup of coffee. Try it once. You will not go back to tabs and spreadsheets.
Ready to streamline your terminal workflow?
Multi-terminal drag-and-drop layout, workspace Git sync, built-in AI integration, AST code analysis — all in one app.