# The Mirror Does Not Ask Follow-Up Questions
She had practiced her answer fourteen times. Standing in the bathroom, laptop balanced on the sink counter, resume taped to the mirror beside the Post-it notes. "Tell me about a time you led a team through a difficult project." She answered. The mirror stared back. She answered again, better this time. The mirror did not notice.
The problem with practicing in front of a mirror is that the mirror is polite. It never says, "Wait -- you mentioned a conflict with the PM but you skipped the resolution. What happened?" It never catches the gap between what you said and what you meant. It never follows a vague answer with a sharper question.
Her friends tried to help. They played interviewer for twenty minutes at a coffee shop, nodded encouragingly, and said "that sounded great" to an answer that would have gotten her rejected in the first round. Friends are kind. Interviewers are not.
What she needed was an opponent. Not hostile -- fair. Someone who had read her resume, noticed the one-year gap, and was absolutely going to ask about it. Someone who would listen to her system design answer and say, "Your database choice does not handle the scale you described. Walk me through the math." Someone who remembered that she mentioned event-driven architecture ten minutes ago and now wanted to know if she actually understood the tradeoffs.
That opponent can exist in your terminal. Not as a list of questions from a website. As a conversational partner that reads your background, generates questions specific to you, follows up on weak spots, and tells you exactly what to fix -- with severity levels so you know what matters.
This article shows you how to build it.
## What You Are Building
An AI interview partner with three modes:
- Technical interview -- coding concepts, system design, architecture decisions. The AI asks questions based on the technologies on your resume, not generic trivia.
- Behavioral interview -- STAR method coaching. The AI listens to your stories, identifies missing components (Situation, Task, Action, Result), and pushes you to be specific.
- Pitch practice -- investor meetings, client presentations, oral exams. Feed it your deck or thesis outline. It asks the hard questions your audience will ask.
The key mechanism: the AI remembers your answers within a session. If you give a weak answer to question three, question seven will circle back to the same gap. This is what separates it from a static question list. A list does not adapt. This does.
## Prerequisites
- Claude Code installed and authenticated. If you have not set it up, the first hour tutorial gets you running in under 60 minutes.
- A Claude Pro subscription ($20/month) or an Anthropic API key.
- Your resume, pitch deck, or study material in a text-based format (PDF, markdown, plain text).
- Basic terminal comfort. If you can type `cd` and `ls`, you have enough.
## Step 1: Set Up Your Interview Workspace (5 Minutes)
Open your terminal and create a project folder.
```bash
mkdir interview-prep && cd interview-prep
```
Now put your background material in this folder. This is the raw material the AI will use to generate questions specific to you. The more context you provide, the sharper the questions.
For a job interview, create a file called `background.md`:
```markdown
# My Background

## Resume Summary
- 3 years as a backend engineer at a fintech startup
- Built a payment processing pipeline handling 50k transactions/day
- Migrated monolith to microservices (Java to Go)
- Led a team of 3 junior developers
- BS Computer Science, National Taiwan University

## Target Role
Senior Backend Engineer at a mid-size SaaS company

## Technologies I Claim to Know
- Go, Java, Python
- PostgreSQL, Redis, Kafka
- Docker, Kubernetes, AWS (ECS, Lambda, SQS)
- gRPC, REST API design

## Things I Am Nervous About
- I have not done system design interviews before
- My distributed systems knowledge is mostly practical, not theoretical
- I left my last job after 1.5 years -- short tenure
```
For a pitch practice, the file might contain your executive summary, key metrics, and the ask. For an oral exam, paste your study notes or thesis outline.
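For instance, a pitch file might look like the sketch below. Every name and number here is an invented placeholder -- the point is the shape, not the content:

```markdown
# Pitch: Acme Analytics

## One-Sentence Pitch
We help mid-size SaaS companies cut churn by predicting which accounts will cancel.

## Key Metrics
- 2,000 monthly active users, growing 15% month-over-month
- $18k MRR, 9-month runway

## The Ask
Raising $1.5M seed to hire two engineers and a founding salesperson.

## Things I Am Nervous About
- A direct competitor just raised a Series B
- Our churn-prediction accuracy claim is based on a small pilot
```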
The honesty in "Things I Am Nervous About" is important. A real interviewer will probe your weak spots whether you admit them or not. Telling the AI where you feel vulnerable means it can help you prepare for exactly those moments.
## Step 2: Write the Interviewer's Brain (10 Minutes)
This is the core of the system. You are going to write a CLAUDE.md file that turns Claude Code into an interview partner. This file defines who the interviewer is, how they behave, and what they pay attention to.
```bash
claude
```
Inside Claude Code, say:
```
Create a CLAUDE.md file in this directory with the following interviewer
system. Read background.md first to understand my profile.

The CLAUDE.md should define three interview modes:

1. TECHNICAL MODE
   - Ask questions based on technologies listed in my background
   - Start with a warm-up question, then escalate difficulty
   - For system design: give a real scenario, ask me to design it, then
     probe my choices ("why this database?", "what happens at 10x scale?")
   - Track which topics I answer confidently vs weakly
   - After each answer, give brief feedback with a severity tag:
     [STRONG] - good answer, minor polish needed
     [GAP] - missing something important, explain what
     [RED FLAG] - this answer would hurt you in a real interview, here is why

2. BEHAVIORAL MODE
   - Use the STAR method (Situation, Task, Action, Result)
   - After I tell a story, identify which STAR component is weak or missing
   - Push for specifics: numbers, timelines, outcomes
   - Common behavioral categories: leadership, conflict, failure, ambiguity
   - Flag when my answer sounds rehearsed vs authentic

3. PITCH MODE
   - Act as a skeptical but fair investor/examiner
   - Ask about market size, differentiation, unit economics, team
   - Challenge assumptions: "You say the market is $2B -- show me the math"
   - Test for clarity: can I explain the core idea in one sentence?
   - After each round, rate: clarity (1-5), conviction (1-5), gaps identified

General rules for all modes:
- Read background.md at the start of every session
- Remember all my answers within the session
- If I give a weak answer, return to that topic later with a harder question
- After every 5 questions, give a progress summary
- At the end of a session (when I say "end"), give a full debrief:
  overall readiness score, top 3 strengths, top 3 areas to improve,
  and specific practice recommendations
- Be tough but fair. Do not be encouraging for the sake of it.
- If an answer is good, say so briefly and move on. If an answer is bad,
  explain exactly why and what a strong answer looks like.
```
Claude Code will read your background file, understand your profile, and generate a CLAUDE.md that encodes the interviewer logic. This file is the interviewer's brain. Every future conversation in this folder will follow these rules.
## Step 3: Run a Technical Interview (20 Minutes)
Start a new Claude Code session in the same folder.
```bash
claude
```
Type:

```
Start a technical interview. Focus on system design and backend architecture.
```
The AI will read your CLAUDE.md (the interviewer rules) and your background.md (your profile). It will start with something manageable -- maybe a question about Go concurrency patterns since you listed Go as a primary language. Then it will escalate.
A typical session looks like this:
AI: "You migrated from a Java monolith to Go microservices. Walk me through how you decided which services to extract first."
You: "We started with the payment service because it had the clearest boundaries..."
AI: [STRONG] Good instinct -- bounded contexts are the right starting criterion. But you did not mention how you handled data that was shared between the payment service and the old monolith during migration. How did you manage the transition period?
This is the feedback loop. The AI heard your answer, identified what was good (bounded context reasoning), flagged what was missing (data migration strategy), and asked a follow-up that probes the gap. A static question list cannot do this. It does not know what you said.
After five questions, you will get a progress summary:
```markdown
## Progress After 5 Questions

**Confident areas**: Service decomposition, API design, Go concurrency
**Gaps detected**: Database migration strategy, distributed tracing,
failure handling at scale
**Recommendation**: Next questions will focus on failure scenarios
and observability.
```
## Step 4: Run a Behavioral Interview (20 Minutes)
Same folder, new session.
```bash
claude
```

Type:

```
Start a behavioral interview. Focus on leadership and conflict resolution.
```
The AI will ask a behavioral question. You tell your story. Then it dissects the story.
AI: "Tell me about a time you disagreed with a technical decision made by someone senior to you."
You: "My tech lead wanted to use MongoDB for our transaction data. I thought PostgreSQL was better because we needed ACID compliance. I showed him benchmark data and he agreed to switch."
AI: [GAP] You have a clear Situation and Action, but the Task and Result are thin. Specifically: what was at stake if the wrong database was chosen? You said "benchmark data" -- what benchmarks? What was the outcome after switching? Did the relationship with your tech lead change? The best behavioral answers include measurable results. Try again with numbers.
This is STAR method coaching in real time. The AI is not asking you to memorize a framework. It is listening to your actual story and telling you which parts are weak.
## Step 5: Run a Pitch Practice (15 Minutes)
If you are a founder preparing for investors, or a student preparing for an oral defense, this mode applies.
Add a `pitch.md` file with your deck content or thesis summary, then:
```bash
claude
```

Type:

```
Start a pitch practice session. Read pitch.md for my deck content.
Act as a skeptical Series A investor.
```
The AI will ask the questions investors actually ask. Not "tell me about your product" -- that is a softball. More like:
AI: "Your deck says you have 2,000 monthly active users growing 15% month-over-month. At that rate you do not reach 10,000 users for nearly a year, yet your burn rate suggests you need funding in four months. What happens if growth slows to 8%?"
This forces you to think on your feet. The AI reads your materials, does the math, and finds the tension points. If your pitch has a logical gap, it will find it before an investor does.
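It helps to run this kind of compounding math on your own deck before a session does it for you. A minimal sketch, using the figures from the example above:

```python
import math

def months_to_reach(target: float, current: float, mom_growth: float) -> float:
    """Months of compounding needed so that current * (1 + g)^m >= target."""
    return math.log(target / current) / math.log(1 + mom_growth)

# The example pitch: 2,000 MAU today, targeting 10,000
print(round(months_to_reach(10_000, 2_000, 0.15), 1))  # ~11.5 months at 15% MoM
print(round(months_to_reach(10_000, 2_000, 0.08), 1))  # ~20.9 months at 8% MoM
```

If the milestone in your deck arrives after your runway ends, that is the tension point an investor will find first.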
## Why This Works Better Than a Question List
The internet has thousands of "top 50 interview questions" lists. They are useful for exactly one thing: knowing what topics might come up. They are useless for the actual hard part of interviewing, which is answering follow-up questions under pressure.
A question list is a shooting range with stationary targets. An AI interview partner is a sparring match. The difference:
Question list: "Design a URL shortener." You think about it. You check the model answer. You move to the next question.
AI partner: "Design a URL shortener." You describe your approach. The AI says, "You chose a hash-based approach, but what is your collision rate at 1 billion URLs? And you mentioned Redis for caching -- what is your eviction policy when the cache is full?" You did not prepare for those specific follow-ups because they depend on your specific answer. That is the point.
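The collision question has a concrete answer you can work out in advance. A birthday-problem estimate, assuming uniformly random short codes (an approximation, not a simulation):

```python
def expected_collisions(n_urls: int, code_len: int, alphabet_size: int = 62) -> float:
    """Birthday-problem approximation: expected collisions ~ n^2 / (2 * keyspace)."""
    keyspace = alphabet_size ** code_len
    return n_urls ** 2 / (2 * keyspace)

# 7-character base62 codes at 1 billion URLs
print(f"{expected_collisions(10**9, 7):,.0f}")  # roughly 142,000 expected collisions
```

So "collisions are rare" is not an answer at this scale; on the order of 10^5 collisions means you need a stated retry or dedup strategy.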
The AI remembers. It adapts. It circles back. This is what makes it closer to a real interview than any static resource.
## Making It Better Over Time
After each session, ask:
```
End the interview. Give me the full debrief.
```
You will get a structured report: readiness score, strengths, gaps, and specific things to practice. Save these debriefs. Over multiple sessions, you will see patterns -- maybe you consistently struggle with failure-mode questions, or your STAR stories always lack quantified results.
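Pasting each debrief into a dated file by hand works fine; if you want it scripted, here is one possible helper (the filename scheme is just a suggestion):

```python
from datetime import date
from pathlib import Path

def save_debrief(text: str, mode: str, folder: str = "debriefs") -> Path:
    """Save a pasted debrief to debriefs/YYYY-MM-DD-<mode>.md for later comparison."""
    Path(folder).mkdir(exist_ok=True)
    path = Path(folder) / f"{date.today().isoformat()}-{mode}.md"
    path.write_text(text, encoding="utf-8")
    return path

# Example: save_debrief("...pasted debrief text...", "technical")
```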
You can also refine the system:
- Update `background.md` as you learn more about the target role. Add the job description. Add the company's tech stack from their engineering blog.
- Add a `previous-sessions.md` file summarizing past feedback so the AI builds on previous sessions instead of starting fresh.
- Adjust difficulty by editing `CLAUDE.md`. Tell it to focus on your weakest areas. Tell it to simulate a FAANG-level bar or a startup-style conversation.
The system grows with you. Each session makes the next one more targeted.
## What You Learned
- Context makes questions sharp. A generic "tell me about yourself" is useless practice. An AI that has read your resume and asks about the specific gap in your work history is real practice.
- Follow-up questions are the hard part. The first question is predictable. The third follow-up is not. An AI that remembers your answers and probes weaknesses trains the skill that actually matters.
- Feedback needs severity levels. Not every gap is fatal. Knowing the difference between a `[STRONG]` answer that needs polish and a `[RED FLAG]` answer that needs a complete rewrite is how you prioritize limited practice time.
- The system improves with use. Update your background, save your debriefs, adjust the difficulty. Each iteration makes the practice more realistic.
The mirror does not ask follow-up questions. Your friends are too kind. A list of questions does not adapt. But an AI that has read your resume, remembers what you said three answers ago, and is configured to be tough but fair -- that is the practice partner most people need and never find.
Open your terminal. Feed it your resume. Start talking.