How I’m building things in August of 2025

I'm working on two software projects right now. One is called Lucive (which I've introduced before on the newsletter). The other is called Hometrace, an app aimed at home inspectors and homeowners that makes home maintenance easier.
I recently adopted Claude Code as my primary tool for software development because it accelerates my work to a degree I could not have previously imagined, and in my experience it works better than agent-augmented IDEs. I've stopped using Cursor and stopped paying for tools like Kiro, even though I still use Kiro as my primary dev environment (idk, I find the colors and layout a bit friendlier). I'm just entirely using Claude Code in the terminal these days. Having it live there seems weird at first compared to the chat-based GUIs everyone is familiar with, but after a little while I realized this was actually a superpower. It can talk to any command-line tool that exists, like Git or the AWS CLI or whatever. It can do pretty much anything I want on my computer.
When I say this stuff, I know I'm merely adding to the endless, annoying-ass chorus of people praising LLMs. But... it really has been a game changer. I've been able to move ~50x faster than I would have been able to at the beginning of this year, and feasibly run two products with ambitious roadmaps simultaneously.
Sometimes I compare the current moment to the Crypto Craze of five or so years ago. To be honest, I never really understood where crypto was genuinely superior to older technologies like normal databases or normal money, though I pretended to. And even in the places where it might have been better, the friction of using it always seemed too high.
Some iterations of blockchain ephemera were kind of pointless, or used simply to flex. Remember when celebrities were all hawking NFTs or bragging about their Bored Apes and making them their Twitter profile pics? Where did that all go? I did speculate on Bitcoin because it was fun, but the psychosis over "all things decentralized" that swept through society seemed like a bubble even at the time. I think it was.
However, the hype has rightfully exploded around the platform models from OpenAI and Anthropic over the last three years because they're just so obviously useful. And they happen to be really good at coding.
The catch is that despite their skill, it's tough to get dependable results. You're left reiterating the same codebase context over and over. I introduced my wife to Claude so she could prototype projects in actual code for her job, and she became immensely frustrated almost immediately. It couldn't get designs right. It would boldly declare that it had implemented everything to spec when it obviously had not. It would drift wildly off course.
You have to play the context game
It's become apparent that when you give LLMs too much information (like a whole codebase), they tend to do worse. I've told Fiona that working with these models feels like working with a herd of genius amnesiacs. Literally, when you exceed their context windows, they start forgetting things you've said before. And even if you stay within the token limits, performance degrades when you ask them to hold a lot of stuff in memory at once.

Kiro tries to fix this by building spec-driven development directly into its VS Code-based experience. It enforces guardrails that emulate best practices for managing this squirrelly context problem, using built-in workflows to create steering documents and break projects down into smaller, bite-sized tasks. This definitely works, but I've found I don't need to pay for it. With a little self-discipline, you can recreate the same thing with a system of Markdown documents plus a CLAUDE.md file. Since adopting this, I've noticed more consistency from my agents, far less repetition of key context in my prompts, and better outputs overall.
Artifacts in my projects
Here's how the project looks when I'm done setting it up:
project-root/
├── CLAUDE.md # AI assistant configuration
├── .context/ # System knowledge base
│ ├── ARCHITECTURE.md # System design & structure
│ ├── BOUNDARIES.md # Development constraints
│ ├── CHANGELOG.md # Project history
│ ├── DATA_MODEL.md # Database & data structures
│ ├── RUNBOOK.md # Operational procedures
│ └── STYLE_GUIDE.md # Code standards
├── .prompts/ # AI workflow templates
│ ├── implement.md # Implementation prompts
│ ├── research.md # Research prompts
│ └── write-spec.md # Specification prompts
├── src/ # Source code
├── tests/ # Test files
├── docs/ # Additional documentation
└── [other project files]
How to prompt an agent to create all of these
Check out my GitHub repository here for all context doc prompts.
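If you just want the general shape of those prompts, here's a simplified sketch of the kind of thing I feed Claude to generate one of the context docs (paraphrased, not the exact text in the repo):

```
Read through this codebase (package.json, src/, tests/, and any existing docs)
before writing anything. Then create .context/ARCHITECTURE.md covering the
high-level system design, the technology stack, and how the major modules
relate to each other. Keep it short. If something isn't clear from the code,
list it as an open question instead of guessing. Do not modify any source files.
```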
Doc descriptions
In every new project, I set up two directories in the root:
- One is a .context directory
- The other is a .prompts directory
Inside the context directory go six documents:
- ARCHITECTURE.md - Technical system design and structure
  - High-level system architecture and component relationships
  - Technology stack decisions and design patterns
  - Module organization and data flow diagrams
- BOUNDARIES.md - Development constraints and restricted areas
  - Code areas requiring special permissions or review before modification
  - Security-sensitive components and critical system boundaries
  - Guidelines for what changes require additional approval
- CHANGELOG.md - Project history and version tracking
  - Chronological record of changes and key decisions
- DATA_MODEL.md - Database and data structure documentation
  - Entity relationships and database schemas
  - Data flow between components and systems
  - Migration strategies and data consistency rules
- RUNBOOK.md - Operational procedures and developer workflows (e.g. all my npm commands; see the sketch after this list)
  - Build processes, deployment steps, and testing procedures
  - Environment setup and configuration instructions
  - Troubleshooting guides and maintenance procedures
- STYLE_GUIDE.md - Code standards and development conventions
  - Programming language conventions and formatting rules
  - Naming patterns, architectural guidelines, and best practices
  - Code review criteria and quality standards
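To make one of these concrete, here's a trimmed-down sketch of what a RUNBOOK.md might start out as. The commands and section names are illustrative placeholders, not the actual file from my projects:

```
# RUNBOOK

## Local development
- `npm install` to install dependencies
- `npm run dev` to start the local dev server
- `npm test` to run the test suite

## Deployment
- Deploys go out from main via CI; never deploy from a feature branch.

## Troubleshooting
- If the dev server won't start, delete node_modules and reinstall.
```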
In the prompts directory go three documents:
- research.md - Guidelines for gathering technical context and creating feature implementation briefs
- write-spec.md - Procedures for transforming feature briefs into detailed implementation specifications
- implement.md - Instructions for executing technical specifications with precision
I also keep the CLAUDE.md document in the project root. The version I use is very prescriptive about reading the context documents before doing any research, spec writing, or implementation.
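The gist of it looks something like this (an abbreviated sketch, not my full file):

```
# CLAUDE.md

Before doing ANY research, spec writing, or implementation:
1. Read every file in .context/ (ARCHITECTURE, BOUNDARIES, DATA_MODEL,
   RUNBOOK, STYLE_GUIDE, CHANGELOG).
2. Follow the matching workflow file in .prompts/ for the task at hand.
3. Do not touch anything flagged in BOUNDARIES.md without asking first.
4. After a change lands, add an entry to CHANGELOG.md.
```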
My workflow
Again, these steps all reference the docs in the repo I linked above. I make heavy use of sub-agents, which pretty much every coding assistant provider offers at this point. I really like Claude Code's sub-agents because each one spins up its own context window, so you're not stepping on the context of the agent in the main thread.
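For reference, a Claude Code sub-agent definition is just a Markdown file with a bit of YAML frontmatter (at least as of this writing), usually living in .claude/agents/. Here's a rough sketch of what my feature-brief-researcher agent looks like; the frontmatter values and the body are simplified placeholders, not the real file:

```
---
name: feature-brief-researcher
description: Researches the codebase and writes a feature brief before any implementation work.
tools: Read, Grep, Glob
---

You are a research agent. Before writing anything, read .context/ARCHITECTURE.md,
.context/DATA_MODEL.md, and .context/BOUNDARIES.md, then follow .prompts/research.md.
Your only output is a feature brief. Do not write or modify code.
```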
Phase 1: Research & Analysis
Agent: feature-brief-researcher (from /agents/feature-brief-researcher.md)
- Input: Feature request or requirement
- Process: Analyzes codebase, identifies patterns, assesses risks and constraints
- Output: Detailed technical brief with implementation context
- Reference: Use /prompts/research.md to give the agent guidelines on what to research and create
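For a sense of what that reference looks like, the research prompt is short; something along these lines (paraphrased, not the verbatim file):

```
# Research

Given a feature request:
1. Read the relevant .context/ docs and the parts of src/ the feature touches.
2. Identify existing patterns the implementation should follow.
3. List risks, constraints, and open questions.
4. Output a feature brief: goal, affected files, proposed approach, unknowns.

Do not write any code in this phase.
```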
Phase 2: Specification Planning
Agent: spec-planner (from /agents/spec-planner.md)
- Input: Feature brief from Phase 1
- Process: Transforms brief into executable specification with task breakdown
- Output: Complete implementation spec with testing strategy
- Reference: Use /prompts/write-spec.md to create the spec
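A spec that comes out of this phase is basically a checklist. The skeleton looks something like this (an invented, shortened example, not a real spec from my projects):

```
# Spec: Add maintenance reminders

## Context
Summary of the feature brief and the relevant architecture notes.

## Tasks
1. Add a reminders table to the data model (see DATA_MODEL.md).
2. Add an API endpoint to create and list reminders.
3. Add the reminders screen to the app.

## Testing strategy
- Unit tests for the endpoint.
- A manual check in the UI for each task above.
```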
Phase 3: Implementation
Agent: general-purpose (from /agents/general-purpose.md)
- Input: Implementation specification from Phase 2
- Process: Executes tasks precisely following established patterns
- Output: Working, tested feature implementation
- Reference: Use /prompts/implement.md for implementation guidance
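The implement prompt is the strictest of the three. Roughly (again, paraphrased rather than the exact file):

```
# Implement

Work through the spec task by task, in order.
1. Before each task, re-read the relevant .context/ docs and the spec section.
2. Follow existing patterns in src/ and match STYLE_GUIDE.md.
3. After each task, run the tests listed in the spec and report the results.
4. If a task is ambiguous or conflicts with BOUNDARIES.md, stop and ask.
```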
Phase 4: Check your work
After all the phases of a feature are complete, I have the main agent check the output of the implementation against the spec. More often than not, we'll find areas that need improvement or things that just didn't get implemented at all. Then I test things by hand and have the main agent fix anything that's not working right. If the implementation is far enough off course, I'll often just scrap everything, hard reset to the last Git commit, and start over, this time breaking the work down into smaller pieces. The key is to keep each implementation task tightly scoped and to give the model easy ways to confirm it did the task right.
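The check itself doesn't need to be elaborate. I'll give the main agent something like this (illustrative wording, not a saved prompt):

```
Compare the implementation against the spec, item by item. For each requirement,
say whether it is fully implemented, partially implemented, or missing, and point
to the file and function that proves it. Do not fix anything yet; just report.
```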
Setup Workflow
For New Projects:
- Generate Context Documentation: Use prompts in /context/ to create your project's .context/ directory
- Create CLAUDE.md: Copy and customize /CLAUDE.md for your project
- Set up Agent Definitions: Copy agent files from /agents/ into Claude Code
For Existing Projects:
- Audit Documentation: Use /context/ prompts to fill gaps in project documentation
- Adopt Workflow: Integrate the 3-phase process into your development practices
- Customize Templates: Adapt prompts and agents for your specific tech stack
Simplified view of the process
Feature Request → Research Agent → Feature Brief → Spec Agent → Implementation Spec → Implementation Agent → Working Feature → Test and rework if necessary.
Tool stack
This is the set of tools that I'm using right now for different tasks in the build cycle.
- Wispr Flow - Amazing little tool that dramatically cuts down the time it takes to type out anything. The beautiful thing is that it looks for the intent of what you're trying to say rather than just straight-up transcribing what you speak. It also works across any app on your machine that takes text. I use it to talk to Claude in the terminal and it saves me so much time.
- Claude Code - I simply find this to be the best AI coding platform that exists right now, and the fact that it lives in the terminal makes it superior to almost anything else I've tried. I've tried the Codex CLI and the Gemini CLI as well, but I like the predictability of the Claude Max 20x plan, and I generally just like the output from Claude better.
- Figma Make and the Figma MCP for translating designs into code for Claude without having to explain every little detail of a screen.
- Playwright MCP - It's often very helpful for Claude to be able to go out and see the actual UI that it's building, and I use Playwright to pass it screenshots and console errors.
- Midjourney for making artwork that supplements the applications I'm building.
- Kiro - IDE. As I said before, I just really like the look and feel of it.
- Vercel V0 - I actually use this quite a bit less these days because I find that I can just prototype static sites or components with Claude directly. But sometimes it's really nice to be able to just type in a prompt and get something visual really really fast. I find that it's still better than Figma Make in many cases.
Conclusion
Building software with AI at this point in 2025 is equal parts exhilarating and maddening. The tools are powerful enough to make me feel like I can ship at an impossible pace, but fickle enough that I have to create entire systems just to keep them on track. That tension is the reality of this moment: the builders who figure out how to harness the chaos without getting lost in it are the ones who will move the fastest and ship the best products.