# Regenerate, Don't Patch
In [Just Give Us the Prompt](/just-give-us-the-prompt.md), I argued that prompts are becoming the new source code: the deeper artifact people want when they see generated code. But if that's true, it changes how we should work.
## The Old Workflow
The traditional way to build software:
1. Write code from scratch
2. Edit it over time
3. Accumulate patches, fixes, workarounds
4. Eventually the codebase becomes unwieldy
5. Rewrite from scratch (if you're lucky enough to get buy-in)
Each edit is constrained by what came before. You're not designing; you're patching.
And here's the thing: at the time of creation, not all issues are visible. Bugs surface through complexity that no developer can hold in their head at once. You ship, discover problems, patch them. But each patch is made without the full picture the original design had. The system drifts from its initial coherence.
## The New Workflow
If prompts are the real source, and code is just compiled output:
1. Write prompts that describe your intent
2. Generate code in one shot
3. Track your prompts alongside the code (tools like Entire do this)
4. When you need changes, replay your prompts with an improved model
5. Get a fresh, holistic generation instead of accumulated patches
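The replay loop above can be sketched as a small helper. This is a minimal sketch, not a real tool's API: `PromptLog`, and the `generate` and `verify` callables you plug into it, are hypothetical names standing in for whatever model client and test harness you actually use.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PromptLog:
    """A versioned record of the prompts that produced a codebase."""
    entries: list[tuple[str, str]] = field(default_factory=list)  # (version, prompt)

    def record(self, version: str, prompt: str) -> None:
        self.entries.append((version, prompt))

    def replay(
        self,
        generate: Callable[[str], str],   # e.g. a call to a newer model
        verify: Callable[[str], bool],    # e.g. run tests against the output
    ) -> list[str]:
        """Regenerate every artifact from its prompt; fail fast if any output doesn't verify."""
        outputs = []
        for version, prompt in self.entries:
            code = generate(prompt)
            if not verify(code):
                raise RuntimeError(f"regenerated output for {version} failed verification")
            outputs.append(code)
        return outputs

# Usage with stand-in stubs for the model and the test suite:
log = PromptLog()
log.record("v1", "Build a CLI that greets the user by name")
fresh_code = log.replay(
    generate=lambda p: f"# generated from: {p}\nprint('hello')",
    verify=lambda code: "print" in code,
)
```

The point of the shape: the prompts are the durable input, and `generate` is swappable, so rerunning `replay` with a better model is the upgrade path.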
You ride the AI improvement curve. The same prompts generate better code as models get smarter.
## Why This Makes Sense Now
This sounds expensive. Regenerating an entire codebase instead of editing a few lines?
But the cost of generation is dropping fast. Token prices fall. Models get faster. The gap between "edit a file" and "regenerate the whole thing" shrinks.
At some point, regeneration becomes cheaper than the accumulated cost of patches: the bugs from hasty edits, the technical debt, the time spent understanding tangled code.
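A back-of-envelope comparison makes the trade concrete. Every number below is an illustrative assumption, not a measurement; substitute your own token prices and developer costs.

```python
# All numbers are illustrative assumptions, not measurements.
tokens_per_regen = 500_000        # assumed size of a full-codebase regeneration
price_per_million = 10.0          # assumed $ per million generated tokens
regen_cost = tokens_per_regen / 1_000_000 * price_per_million

hours_per_patch = 2.0             # assumed time to understand, edit, and review one patch
hourly_rate = 75.0                # assumed developer cost per hour
patch_cost = hours_per_patch * hourly_rate

# How many manual patches does one regeneration cost?
break_even = regen_cost / patch_cost
print(f"regeneration ${regen_cost:.2f} vs one patch ${patch_cost:.2f}; "
      f"ratio {break_even:.3f}")
```

Under these assumptions a full regeneration costs a few dollars while a single careful manual patch costs over a hundred; the token bill is not where the argument is decided.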
## The Holistic Advantage
There's a quality difference too. Code generated in one pass is coherent: the AI designs holistically, with the full context in mind. But edits are constrained. The AI has to work around existing structures. It's architecture vs patches.
Regeneration lets you get back to architecture every time.
## What This Requires
For this workflow to work, you need:
- **Intent tracking**: Your prompts must be preserved and versioned. Entire, git commit messages, or a dedicated prompt log.
- **Reproducibility**: Same prompts should produce similar results (or better, as models improve).
- **Verification**: You need confidence that regenerated code works. Tests, type checking, CI.
The last one is key. You can only regenerate fearlessly if you can verify quickly.
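The verification gate itself can be tiny. A sketch, assuming your checks are ordinary shell commands: the specific commands shown (a test runner, a type checker) are placeholders for whatever your project already runs in CI.

```python
import subprocess

def gate(checks: list[list[str]]) -> bool:
    """Accept a regenerated codebase only if every check command exits 0."""
    return all(
        subprocess.run(cmd, capture_output=True).returncode == 0
        for cmd in checks
    )

# In practice the checks would be e.g. ["pytest"] or ["mypy", "src/"];
# here, stand-in commands that pass and fail:
gate([["python", "-c", "pass"]])                      # passing check
gate([["python", "-c", "import sys; sys.exit(1)"]])   # failing check
```

If `gate` is fast, regeneration is cheap to attempt: generate, gate, and keep the old code whenever the gate rejects.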
## Open Questions
- How do you handle state? Databases, user data, migrations?
- What about the parts of your codebase that aren't AI-generated?
- How do you version prompts meaningfully?
---
*Draft - needs more concrete examples and practical workflow details*