Ask First, Code Later

Daniel Pyrathon
Software Engineer at Farcaster • Founder of Bountycaster

I've always been the kind of developer who adds detailed docstrings and explanatory comments to my code. As Joe Armstrong put it in his interview for Coders at Work: "I think the code is the answer to a problem. If you don't have the spec or you don't have any documentation, you have to guess what the problem is from looking at the answer."

For a long time, this habit felt like a good practice—helpful for the team, but not transformative. It kept things organized, yet the real value seemed limited.

Then, LLMs came along.

Feed a model like Claude Opus a file or a section of code, and those notes suddenly become incredibly useful. The AI can provide clear explanations, often faster and more consistently than asking a colleague. In my experience, question-and-answer prompts succeed about 90% of the time on the first try, while code generation might only hit around 50%. This is just my informal observation, but the difference has been noticeable in my daily work.

How I Ramped Up on Farcaster's Codebase

When I joined Farcaster, my React knowledge was mostly from personal projects—not enough to dive straight in. Instead of using AI to generate new code, I focused on having it explain the existing codebase to build my understanding.

Here's the simple loop I followed:

  1. Ask for a report: "You are a staff front-end engineer. Explain how the Bottom Sheet works, including key examples, data flow, and the main layers of the stack."

  2. Stash the answer: The model outputs a structured Markdown document, sometimes with Mermaid diagrams for visuals. I save it in a temporary folder like /scratch/ai-notes/.

  3. Iterate: I add follow-up questions to the same document for deeper insights.

  4. Code with context: Once I have a solid grasp (verified by my own review), I move to writing or editing code.

These notes are meant to be temporary—if something proves especially useful, I incorporate it into the project's official documentation. For the setup, I use Claude Code with a simple CLI script that feeds file contents or entire folders into the Anthropic API, allowing the model to analyze the full context.
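
To make that setup concrete, here's a minimal sketch of such a script in Python. It isn't my exact tool: the script name, file layout, and model ID are placeholders, and it assumes the official anthropic package is installed with an ANTHROPIC_API_KEY in your environment.

```python
#!/usr/bin/env python3
"""Ask a question about one or more source files and stash the answer as a Markdown note."""
import sys
from pathlib import Path

import anthropic  # assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the environment

MODEL = "claude-opus-4-20250514"  # placeholder: use whichever Claude model you have access to
NOTES_DIR = Path("/scratch/ai-notes")  # the temporary notes folder mentioned above


def gather_sources(paths: list[str]) -> str:
    """Concatenate file contents (or every file under a folder) into one context blob."""
    chunks = []
    for raw in paths:
        p = Path(raw)
        files = sorted(p.rglob("*")) if p.is_dir() else [p]
        for f in files:
            if f.is_file():
                chunks.append(f"--- {f} ---\n{f.read_text(errors='ignore')}")
    return "\n\n".join(chunks)


def main() -> None:
    # Usage: python ask.py "How does the Bottom Sheet work?" src/components
    question, *paths = sys.argv[1:]
    context = gather_sources(paths)

    client = anthropic.Anthropic()
    response = client.messages.create(
        model=MODEL,
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": f"{question}\n\nRelevant source files:\n\n{context}",
        }],
    )

    # Stash the Markdown answer so follow-up questions can be appended later.
    NOTES_DIR.mkdir(parents=True, exist_ok=True)
    note = NOTES_DIR / "report.md"
    note.write_text(response.content[0].text)
    print(f"Saved report to {note}")


if __name__ == "__main__":
    main()
```

Run it with a question and one or more paths, then keep appending follow-up questions to the saved report as you iterate.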

This approach helped me get up to speed quickly, turning what could have been weeks of confusion into a more manageable process.

Why This Works

In my view, this method plays to the strengths of current LLMs: they're great at digesting and summarizing large amounts of context, but less reliable when synthesizing entirely new code, where errors or inconsistencies can arise more easily.

Starting with information gathering creates a structured workflow—gather facts first, then decide on actions, and finally implement. This keeps the human in control, directing follow-ups and ensuring the direction aligns with the project's needs. It also allows the AI to handle cross-file analysis efficiently, something that would take much longer manually.

Additionally, the generated reports can be reused as context in future prompts, improving overall accuracy without starting from scratch each time.
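
For example, reusing a saved report is just a matter of prepending it to the next prompt. A minimal sketch, continuing from the script above (same client, MODEL, and NOTES_DIR; the follow-up question is made up for illustration):

```python
# Reuse an earlier report as context for a follow-up question.
prior_report = (NOTES_DIR / "report.md").read_text()

follow_up = client.messages.create(
    model=MODEL,
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": (
            "Here is a report you produced earlier about this codebase:\n\n"
            f"{prior_report}\n\n"
            "Follow-up question: which layer of the stack should a new feature hook into?"
        ),
    }],
)
print(follow_up.content[0].text)
```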

The Exact Prompt I Use

Here's a template I often start with—feel free to adapt it to your needs:

### Prompt – “AI Interrogation”

Persona: You’re Alex, a new hire exploring <FOCUS-AREA>.  
Goal: Produce a concise **Markdown report** covering  
1. High-level flow  
2. Key files (with paths)  
3. Data & APIs  
4. Edge cases / performance risks  
5. Open questions  

Visuals: Include **Mermaid diagrams** for call graphs or state flows.  
Style: Bullet-dense, ≤ 400 words, `###` headings.  
Output: One markdown doc—no extra chatter.  
Context: You can read the entire repo.  
Now produce the report.

Feedback

I'd love to know if this resonates—try running a similar prompt on your codebase and share in the comments: What's your hit rate, and how does your AI-assisted workflow look? Your experiences could inspire others or even spark ideas for future posts!