The Bottom Line
You can build a terminal pipeline that reads an RFP, matches its requirements against a library of past successful proposals, generates a first draft that sounds like your organization, and exports it in markdown, PDF, and DOCX -- all in under five minutes. This article shows you exactly how: the proposal library structure, the CLAUDE.md configuration, the generation script, and the format conversion step.
The Problem: Every Proposal Starts From Scratch
Your team has won dozens of proposals over the past three years. Somewhere in shared drives and email attachments, you have executive summaries that landed, technical approaches that scored well, past performance narratives that evaluators highlighted. But when a new RFP arrives on Monday morning, someone opens a blank document and starts typing.
The numbers are predictable. Roughly 60% of any proposal is reusable content: company overview, team qualifications, methodology descriptions, compliance matrices, past performance citations. The other 40% is requirement-specific: tailored technical approach, specific staffing plan, pricing aligned to the Statement of Work. Yet teams spend 80% of their time on the 60% that already exists somewhere, because finding and adapting it is harder than rewriting it.
This is not a writing problem. It is a retrieval problem with a formatting problem bolted on top. The content exists. The challenge is matching it to new requirements and reshaping it to fit a different format, page limit, and evaluation criteria.
The Concrete Scenario: SaaS Vendor Evaluation RFP
To make this tangible, consider a real workflow. Your company provides cloud infrastructure consulting. A mid-size enterprise issues an RFP for a SaaS vendor evaluation -- they need a partner to assess five enterprise SaaS platforms, recommend a shortlist, and manage the procurement process.
The RFP specifies:
- Section 1: Executive Summary (2 pages max)
- Section 2: Understanding of Requirements (5 pages)
- Section 3: Technical Approach and Methodology (10 pages)
- Section 4: Past Performance -- three relevant projects (3 pages each)
- Section 5: Team Qualifications and Key Personnel (5 pages)
- Section 6: Pricing (separate volume)
- Submission format: PDF, with an editable DOCX copy
You have won similar evaluations before. Two past proposals for SaaS assessments, one for ERP vendor selection, and several for general IT advisory. The raw material exists. The pipeline needs to find it, adapt it, and package it.
Architecture: Library, Context, Generation, Export
The pipeline has four stages:
- Proposal Library -- a structured directory of past proposals, tagged by type and section
- Context Assembly -- CLAUDE.md encodes your organization's voice, boilerplate, and the library index so the agent knows what is available
- Requirement-Matched Generation -- the agent reads the RFP, identifies which sections map to which library entries, and generates a draft
- Format Export -- Pandoc converts the markdown draft into PDF and DOCX
Each stage is a discrete step. You can run them independently, inspect intermediate output, and re-run any stage without starting over.
Stage 1: Build the Proposal Library
The library is a directory tree. Each past proposal lives in its own folder, with sections broken into separate markdown files and a metadata file that describes what the proposal was for.
proposals/
  library/
    2025-saas-assessment-acme/
      metadata.yaml
      executive-summary.md
      technical-approach.md
      past-performance.md
      team-qualifications.md
    2025-erp-selection-globex/
      metadata.yaml
      executive-summary.md
      technical-approach.md
      past-performance.md
      team-qualifications.md
    2024-it-advisory-initech/
      metadata.yaml
      executive-summary.md
      technical-approach.md
      past-performance.md
  templates/
    executive-summary-template.md
    compliance-matrix-template.md
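If you are starting from zero, a small bootstrap script creates the skeleton. This is a sketch; the folder names are the example proposals above -- substitute your own:

```shell
#!/bin/sh
# Scaffold the library layout shown above. Each past proposal gets
# its own folder with an empty metadata.yaml to fill in.
for p in 2025-saas-assessment-acme 2025-erp-selection-globex 2024-it-advisory-initech; do
  mkdir -p "proposals/library/$p"
  touch "proposals/library/$p/metadata.yaml"
done
mkdir -p proposals/templates proposals/drafts
```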
The metadata file is what makes retrieval work:
# metadata.yaml
project_name: "SaaS Platform Assessment for Acme Corp"
client: "Acme Corporation"
date: "2025-06-15"
outcome: "won"
contract_value: "$340,000"
type: "saas-assessment"
tags:
  - vendor-evaluation
  - saas
  - procurement
  - cloud
sections:
  executive-summary:
    page_limit: 2
    score: "outstanding"
    evaluator_feedback: "Clear value proposition, specific metrics"
  technical-approach:
    page_limit: 8
    score: "good"
    evaluator_feedback: "Thorough methodology, would benefit from more detail on scoring criteria"
  past-performance:
    relevance: "high"
    projects_cited: 3
Two things matter here. First, the section-level feedback from past evaluations: when the agent generates a new executive summary, knowing that past evaluators praised "specific metrics" tells it to prioritize quantitative claims. Second, the outcome field. The library should contain only successful proposals -- or at minimum, the agent should weight wins more heavily than losses.
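Before trusting the agent's matching, you can sanity-check library coverage from the shell. A minimal sketch, assuming the flat metadata.yaml layout above; find_wins is a helper name invented here, not part of the pipeline:

```shell
# find_wins TAG: print library entries that were wins and carry TAG.
# Plain grep is enough because the metadata files are small and flat.
find_wins() {
  tag="$1"
  for meta in proposals/library/*/metadata.yaml; do
    [ -f "$meta" ] || continue             # glob matched nothing
    if grep -q 'outcome: "won"' "$meta" && grep -q -- "- $tag" "$meta"; then
      dirname "$meta"                      # print the proposal folder
    fi
  done
}
# Usage: find_wins vendor-evaluation
```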
Converting existing proposals into this format is a one-time investment. Ask Claude Code to do it:
claude -p "Read the PDF at ./raw-proposals/acme-saas-2025.pdf. \
Break it into sections (executive summary, technical approach, \
past performance, team qualifications). Save each section as a \
separate markdown file in proposals/library/2025-saas-assessment-acme/. \
Create a metadata.yaml with the project details." \
--allowedTools "Read,Write,Bash"
Repeat for each past proposal. A library of 10 proposals takes about 30 minutes to build, and you only do it once. New proposals get added after each win.
Stage 2: Configure CLAUDE.md as Organizational Memory
The CLAUDE.md file turns Claude Code into a proposal writer that knows your organization. This is the context layer -- it encodes everything the agent needs that is not in the RFP or the library files.
## Role
You are a proposal writer for [Company Name], a cloud infrastructure
consulting firm. You write in a professional but direct tone. Avoid
buzzwords. Prefer concrete metrics over vague claims. When citing past
performance, include contract value, duration, and measurable outcomes.
## Organization Facts
- Founded: 2018
- Employees: 85 (42 technical consultants, 12 project managers)
- Key certifications: AWS Advanced Partner, Azure Expert MSP, ISO 27001
- Win rate (last 12 months): 38% on competitive bids
- Notable clients: [redacted for public proposals, use "a Fortune 500
financial services firm" pattern]
## Proposal Library
Past successful proposals are in proposals/library/. Each has a
metadata.yaml with project details, outcome, and evaluator feedback.
When generating new proposal sections:
1. Search the library for proposals with matching tags
2. Prioritize sections that scored "outstanding"
3. Adapt language and specifics -- never copy verbatim
4. Preserve the structural patterns that evaluators praised
## Writing Style Rules
- Active voice. "We will conduct" not "An assessment will be conducted."
- Quantify everything. "Reduced procurement cycle by 34%" not "Significantly
improved procurement timelines."
- One idea per paragraph. Evaluators skim. Make it easy.
- Section headers must mirror the RFP's section titles exactly.
- Compliance matrix: every "shall" statement in the RFP gets a row with
proposal section reference and compliance status.
## Format Constraints
- Body text: 11pt, single-spaced
- Margins: 1 inch all sides
- Headers: bold, 13pt
- Page numbers: bottom center
- These will be applied during Pandoc export. In markdown, focus on content.
This CLAUDE.md functions as a RAG-like knowledge layer. Instead of retrieving from a vector database, the agent reads the library index and metadata files to find relevant past proposals. The structured metadata means the agent can match by type (saas-assessment), by tag (vendor-evaluation), and by quality (score: outstanding). This is simpler than setting up a full retrieval pipeline, and for a library of 10-50 proposals, it works reliably because the entire index fits in the context window.
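Because the whole index fits in the context window, it can help to pre-flatten the metadata into a single file the agent reads in one pass instead of globbing the library each session. A sketch; build_index and library-index.md are names invented for illustration:

```shell
# build_index: concatenate every metadata.yaml into one index file
# the agent can read with a single Read call.
build_index() {
  {
    for meta in proposals/library/*/metadata.yaml; do
      [ -f "$meta" ] || continue
      echo "## $(dirname "$meta")"   # one heading per proposal folder
      cat "$meta"
      echo
    done
  } > proposals/library-index.md
}
# Usage: build_index && wc -l proposals/library-index.md
```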
Stage 3: The Generation Script
Here is the script that reads an RFP and generates a proposal draft.
#!/bin/bash
set -euo pipefail
# ── Configuration ────────────────────────────────────────────
RFP_FILE="${1:?Usage: ./generate-proposal.sh <rfp-file>}"
PROJECT_NAME="${2:?Usage: ./generate-proposal.sh <rfp-file> <project-name>}"
OUTPUT_DIR="./proposals/drafts/$PROJECT_NAME"
mkdir -p "$OUTPUT_DIR"
# ── Step 1: RFP Analysis ────────────────────────────────────
echo "[1/4] Analyzing RFP requirements..."
claude -p "
Read the RFP document at $RFP_FILE. Extract:
1. All required proposal sections with page limits
2. Every 'shall' statement (mandatory requirements)
3. Evaluation criteria and their weights
4. Submission format requirements
5. Key dates (questions deadline, submission deadline)
6. Any special instructions or restrictions
Output as JSON with keys: sections, requirements, eval_criteria,
format_requirements, key_dates, special_instructions.
Use extended thinking to ensure no requirements are missed.
" --output-format json > "$OUTPUT_DIR/rfp-analysis.json"
echo " RFP analysis saved."
# ── Step 2: Library Matching ─────────────────────────────────
echo "[2/4] Matching requirements to proposal library..."
claude -p "
Read the RFP analysis at $OUTPUT_DIR/rfp-analysis.json.
Read all metadata.yaml files in proposals/library/.
For each required proposal section, identify:
1. Which past proposals have relevant content (match by type and tags)
2. Which specific sections scored highest with evaluators
3. What gaps exist (sections with no library match)
Output as JSON with keys: section_matches (array of {section, matches,
best_source, confidence}), gaps (array of sections needing original content).
" --output-format json > "$OUTPUT_DIR/library-matches.json"
echo " Library matching complete."
# ── Step 3: Section Generation (Parallel) ────────────────────
echo "[3/4] Generating proposal sections..."
# Read the sections from the RFP analysis
SECTIONS=$(claude -p "
Read $OUTPUT_DIR/rfp-analysis.json. List only the section names,
one per line, no numbering.
" 2>/dev/null)
# Generate each section in parallel
PIDS=()
while IFS= read -r section; do
  [ -n "$section" ] || continue  # skip blank lines in the section list
  slug=$(echo "$section" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
  claude -p "
  You are writing the '$section' section of a proposal for $PROJECT_NAME.
  Context:
  - RFP analysis: $OUTPUT_DIR/rfp-analysis.json
  - Library matches: $OUTPUT_DIR/library-matches.json
  - Past proposals: proposals/library/
  Instructions:
  1. Read the library matches for this section
  2. Read the best-matching past proposal sections
  3. Generate a new section that:
     - Addresses every requirement from the RFP relevant to this section
     - Adapts language and structure from the highest-scoring past sections
     - Includes specific metrics and past performance data
     - Stays within the page limit specified in the RFP
  4. Use extended thinking to plan the section structure before writing
  Output as markdown. Start with the section heading.
  " > "$OUTPUT_DIR/${slug}.md" &
  PIDS+=($!)
done <<< "$SECTIONS"

# Wait for all sections
for pid in "${PIDS[@]}"; do
  wait "$pid"
done
echo " All sections generated."
# ── Step 4: Assemble and Convert ─────────────────────────────
echo "[4/4] Assembling final document..."
# Combine all sections into one markdown file
claude -p "
Read all .md files in $OUTPUT_DIR/ (exclude any non-section files).
Combine them into a single proposal document in the order specified
by $OUTPUT_DIR/rfp-analysis.json.
Add:
- A title page with project name, your company name, and today's date
- A table of contents
- Section dividers
- Page break markers (use \pagebreak for Pandoc)
Write the assembled document to $OUTPUT_DIR/proposal-complete.md
"
echo " Assembly complete."
echo ""
echo "Draft saved to $OUTPUT_DIR/proposal-complete.md"
The parallel generation in Step 3 is where the real time savings land. A six-section proposal where each section takes 30-60 seconds to generate would take 3-6 minutes sequentially. In parallel, it takes the duration of the longest section -- typically under 90 seconds.
Extended thinking is used in two places: the initial RFP analysis (to ensure no requirements are missed) and each section generation (to plan structure before writing). For a complex RFP with dozens of "shall" statements, extended thinking catches requirements that a single-pass read misses.
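One refinement worth considering: Step 3 spends a model call just to list section names. If you trust the Step 1 output, jq can extract them deterministically and for free -- assuming (the script never pins this schema) that the analysis JSON has a top-level sections array whose entries carry a name field:

```shell
# Deterministic alternative to Step 3's "list the section names" call.
# The .sections[].name path is an assumed schema, not one the script
# guarantees; adjust it to match your actual rfp-analysis.json.
analysis='{"sections":[{"name":"Executive Summary"},{"name":"Technical Approach"}]}'
sections=$(printf '%s' "$analysis" | jq -r '.sections[].name')
printf '%s\n' "$sections"
```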
Stage 4: Format Export with Pandoc
The markdown draft needs to become a PDF and a DOCX. Pandoc handles both conversions with a single reference document for consistent styling.
# ── Format Conversion ────────────────────────────────────────
# Create a Pandoc reference template (one-time setup)
# This encodes your formatting: fonts, margins, header styles
pandoc -o proposals/templates/reference.docx --print-default-data-file reference.docx
# Convert to PDF (via LaTeX)
pandoc "$OUTPUT_DIR/proposal-complete.md" \
-o "$OUTPUT_DIR/proposal-complete.pdf" \
--pdf-engine=xelatex \
-V geometry:margin=1in \
-V fontsize=11pt \
-V mainfont="Times New Roman" \
--toc \
--toc-depth=2
# Convert to DOCX (using reference template)
pandoc "$OUTPUT_DIR/proposal-complete.md" \
-o "$OUTPUT_DIR/proposal-complete.docx" \
--reference-doc=proposals/templates/reference.docx \
--toc \
--toc-depth=2
echo "Exported:"
echo " $OUTPUT_DIR/proposal-complete.pdf"
echo " $OUTPUT_DIR/proposal-complete.docx"
The reference DOCX template is a one-time setup. Open it in Word, set your fonts, margins, heading styles, and header/footer. Every future Pandoc conversion inherits those styles. This means your proposal formatting is version-controlled alongside the content -- no more "which template version did we use for the Acme bid?"
If the RFP specifies unusual formatting (different margins for appendices, landscape pages for compliance matrices), encode those as Pandoc metadata or LaTeX commands in the markdown. The agent can insert \newpage and \begin{landscape} directives during assembly.
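For landscape pages specifically, the landscape environment comes from the pdflscape LaTeX package, which Pandoc does not load on its own; pull it in through header-includes. A minimal sketch of a landscape compliance-matrix page (the requirement row is illustrative):

```markdown
---
header-includes:
  - \usepackage{pdflscape}
---

\newpage
\begin{landscape}

| Req # | RFP "shall" statement | Proposal section | Status |
|---|---|---|---|
| R-01 | Vendor shall provide a scoring rubric | 3.2 | Compliant |

\end{landscape}
```

The raw LaTeX passes through untouched in the DOCX export, so landscape sections may still need a manual fix in Word.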
The Full Run
Here is what the complete pipeline looks like:
./generate-proposal.sh ./rfps/enterprise-saas-eval-2026.pdf "saas-eval-meridian"
Output:
[1/4] Analyzing RFP requirements...
RFP analysis saved.
[2/4] Matching requirements to proposal library...
Library matching complete.
[3/4] Generating proposal sections...
All sections generated.
[4/4] Assembling final document...
Assembly complete.
Draft saved to ./proposals/drafts/saas-eval-meridian/proposal-complete.md
Exported:
./proposals/drafts/saas-eval-meridian/proposal-complete.pdf
./proposals/drafts/saas-eval-meridian/proposal-complete.docx
Total wall-clock time: 3-5 minutes for a six-section proposal. Compare that to the typical 2-3 day first-draft cycle with a manual process.
The output is a first draft, not a final submission. A senior proposal manager still reviews for strategic positioning, verifies past performance claims, and adjusts the pricing narrative. But the mechanical work -- finding relevant past content, adapting boilerplate, maintaining consistent voice, formatting -- is done.
Encoding Proposal Patterns as Claude Code Skills
For organizations that respond to proposals frequently, wrapping the pipeline into a Claude Code Skill makes it reusable across projects without re-explaining the workflow each time.
Create a .claude/skills/proposal-generator.md:
## Skill: Proposal Generator
### Trigger
When the user says "generate proposal" or provides an RFP document
and asks for a proposal draft.
### Steps
1. Read the RFP document the user provides
2. Run RFP analysis: extract sections, requirements, evaluation criteria
3. Search proposals/library/ for matching past proposals
4. Generate each section in parallel, using best-matching library entries
5. Assemble into a single document with title page and TOC
6. Export to PDF and DOCX via Pandoc
### Rules
- Follow all writing style rules from CLAUDE.md
- Never fabricate past performance data. Only cite projects from the library.
- If a section has no library match, flag it as "needs original content"
and generate a skeleton with placeholders.
- Include a compliance matrix mapping every RFP "shall" to a proposal section.
With this Skill, generating a proposal becomes conversational:
> Here is the RFP for the Meridian SaaS evaluation project.
> Generate a proposal draft.
The agent follows the Skill's steps, reads the library, and produces the draft. No script invocation needed -- though the script remains available for batch processing or CI integration.
Multi-Pane Proposal Review
The generation step produces a draft. The review step determines whether it ships. Reviewing a proposal effectively means checking multiple things simultaneously: does the executive summary align with the technical approach? Does the past performance section cite the same metrics the methodology promises to deliver? Does the compliance matrix cover every "shall"?
The productive layout for proposal review:
- Left pane (50%): The generated proposal draft, scrolling through sections
- Top-right pane (25%): The original RFP, open to the evaluation criteria section
- Bottom-right pane (25%): A past proposal from the library that scored "outstanding," for comparison
You read a paragraph in the draft, glance right to verify it addresses the RFP requirement, then glance down to compare the structure against what won last time. This three-way comparison catches misalignments that sequential reading misses -- the executive summary promises "weekly status reports" but the methodology section says "bi-weekly."
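If your terminal does not support this kind of layout natively, tmux approximates it. A sketch using this article's example paths -- the top-right pane shows the extracted analysis JSON rather than the source RFP, since the RFP is a PDF, and the -l 50% size syntax assumes tmux 3.1 or later:

```shell
# Three-pane review: draft left, RFP analysis top-right,
# past "outstanding" section bottom-right.
tmux new-session -d -s review \
  "less proposals/drafts/saas-eval-meridian/proposal-complete.md"
tmux split-window -h -l 50% \
  "less proposals/drafts/saas-eval-meridian/rfp-analysis.json"
tmux split-window -v -l 50% \
  "less proposals/library/2025-saas-assessment-acme/executive-summary.md"
tmux attach -t review
```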
When you find an issue, you edit the markdown directly and re-run the Pandoc export. The feedback loop is tight: edit, export, check PDF, adjust. No waiting for a document to render in a cloud editor. No version conflicts because someone else is editing the same Google Doc.
Beyond the SaaS Scenario: Where This Pattern Applies
The library-match-generate-export pattern works for any templated document generation:
Client proposals. Consulting firms responding to Requests for Proposal. The library contains past winning proposals; the agent matches by industry, service type, and engagement size.
Grant applications. Research groups applying for funding. The library holds past funded proposals; the agent adapts specific aims, methodology, and budget justifications to new funding announcements.
RFP responses. Government contractors responding to solicitations. The library maps past performance to NAICS codes and contract vehicles; the agent generates compliant responses with proper FAR/DFAR references.
Sales proposals. SaaS companies responding to enterprise procurement questionnaires. The library stores approved answers to security, compliance, and integration questions; the agent matches questions to existing approved responses.
In each case, the structure is identical: build a library of past successes, encode organizational knowledge in CLAUDE.md, let the agent do the retrieval and adaptation, and export in the required format.
Cost and Time Comparison
| | Manual First Draft | AI Pipeline |
|---|---|---|
| Time to first draft | 2-3 days (team of 2-3) | 3-5 minutes |
| Content reuse rate | Low (hard to find past content) | High (structured library search) |
| Voice consistency | Varies by writer | Governed by CLAUDE.md rules |
| Requirement coverage | Often misses items | Systematic extraction of every "shall" |
| Format conversion | Manual in Word/InDesign | Automated via Pandoc |
| API cost per proposal | N/A | ~$1-3 in tokens |
The pipeline does not replace the proposal team. It replaces the first 60-70% of their effort -- the retrieval, adaptation, and formatting work that consumes days but adds no strategic value. The team focuses on the 30-40% that matters: strategic positioning, win themes, pricing strategy, and executive review.
Getting Started
The minimum viable setup:
- Create the library directory. Convert 3-5 past winning proposals into the section-based markdown format with metadata.yaml files.
- Write the CLAUDE.md. Encode your organization's voice, facts, and writing rules.
- Run the script. Point it at a real RFP. Review the output critically.
- Iterate on the CLAUDE.md. The first draft will reveal what rules are missing -- add them and re-run.
- Install Pandoc. Set up the reference DOCX template for your brand styling.
After two or three proposals, the library is strong enough that first drafts need only light editing rather than major rewrites. The CLAUDE.md accumulates your team's proposal knowledge -- what evaluators respond to, what language scores well, what formatting mistakes cause point deductions. That institutional knowledge stops living in people's heads and starts living in a file that every agent session reads.
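A preflight sketch covering steps 1 and 5 of the setup -- it creates the directories from Stage 1 and reports whether Pandoc is installed:

```shell
#!/bin/sh
# Preflight: create the pipeline's directory tree and check for Pandoc.
mkdir -p proposals/library proposals/templates proposals/drafts
if command -v pandoc >/dev/null 2>&1; then
  echo "pandoc: $(pandoc --version | head -n 1)"
else
  echo "pandoc: missing (install it before the export stage)" >&2
fi
```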
Ready to streamline your terminal workflow?
Multi-terminal drag-and-drop layout, workspace Git sync, built-in AI integration, AST code analysis — all in one app.