Code like a bobe
Fourteen-ish years ago, if you wanted to build a digital product, you had to code. That was it. No LLM you could talk to, no friendly layer translating vague thoughts into working apps, and definitely no button that said "deploy." If you wanted a thing to exist, you learned to code.
I always liked products, and I liked building things. So I learned software engineering. But I wasn't one of those people who fell in love with the craft itself. I truly respect people who love code the way musicians love music but I'm just not that guy. My relationship with code has always been utilitarian. I picked up languages and used whichever felt easiest for what I was trying to do. The language never mattered as much as the outcome.
That's why "The hottest new programming language is English" hits. I don't remember syntax particularly well, I can't keep up with every framework that pops up, and I'm definitely not interested in learning a new abstraction every six months just to stay current. What I am interested in is taking something that exists vaguely in my head and turning it into something that exists in the world.
LLMs didn't change the why or what I build but they revolutionised the how. The bottleneck has moved upstream. The hard part now isn't syntax, it's figuring out what you want, breaking it into steps that won't turn your LLM into a shit factory, and knowing when the output is actually right. Syntax is cheap today but judgment is still precious.
Why this works for me (and why it might not work for you)
Turns out those theoretical software engineering best practices ended up being useful. I'm a decent software engineer. Not exceptional. Just decent. I understand how the pieces of a stack fit together: what databases do, how APIs behave, where auth lives (Supabase handles this for me now), how front-end and back-end responsibilities split. 95% of this can be learned by reading the Technically blog.
That baseline matters more than I expected, because working with LLMs feels like working with a very fast, slightly nervous junior engineer. You tell it what you want. It produces something. Then you test it, poke it, and decide whether it's actually doing what you asked. And when it's wrong, which is often, you need to isolate where the problem is, add logging, and narrow the blast radius.
This isn't about replacing engineering skill. It's about leverage. LLMs amplify system intuition, but they don't magically install it in your brain. I hope this doesn't discourage anyone without a software background: all of this is easy to learn and pick up, even during the practice and process of vibecoding, and these are more general problem-solving skills than anything domain-specific to computer science or software engineering.
The stack collapsed at the same time as LLMs got good
For most of my career, I didn't build things end-to-end and deploy them myself. I worked on parts of systems. Deployment was either a different team's responsibility or, honestly, a source of dread.
What changed wasn't just "LLMs are good now." The surrounding tooling matured at the same time. Supabase quietly removed a massive amount of backend and database friction. Vercel made deployment feel like a button click rather than a ritual you would almost definitely mess up. It's still conceptually crazy that we have whole teams of people (devops) just making sure code on a machine in a server room acts the same as code on your laptop. A whole layer of friction quietly disappeared. Things that used to feel like "real engineering work", like databases, auth, and deployment, stopped being gates and started being defaults.
When you combine that with LLM-generated code, something beautiful clicks. It becomes genuinely easy to build something useful, deploy it, and share it. I've started to think deployment is actually a thinking tool. We used to treat it like the end of the process but with the removal of friction I realised that thinking was backwards. Seeing something live forces clarity in a way no amount of local iteration does.
So, onto the actual workflow.
Step zero: I talk. I don't type.
Every project starts the same way: I talk to ChatGPT about the idea. I literally mean speak. I press the transcribe button (if I'm not in the ChatGPT app, e.g. when I'm working in v0, Cursor, or Claude, I use Wisprflow to do the exact same thing) and brain-dump for five to ten minutes like I'm explaining the idea to a friend. It's unstructured. I tangent. I contradict myself. I say something, then immediately say, "actually no, that's wrong." I also tell ChatGPT exactly what I'm doing. I'll say: I'm about to dump a bunch of thoughts. I want you to be a brainstorm partner. Don't jump straight to building.
I'm not trying to be clear or elegant at this point; I want everything out of my head and into the LLM's context window. Speaking helps because it's faster and it stops me from over-editing my own thinking too early.
The reason this works is alignment. What you're trying to do at this stage is get the model's understanding as close as possible to what's actually in your head, as identical to your mental model of the idea as you can make it. The fastest way to do that is volume over precision. Every tangent, every half-formed idea, every sub-detail you might normally censor becomes useful once it's in that context window. I use this verbal-vomit technique with ChatGPT for more than just coding; it's such an approachable way of getting this robo-genius started on an idea you have in your head without you having to do much more than blabber. Emails, research, even making memes of my friends. They all start with this.
With the context windows we have now, there's basically no such thing as too much information. You're not going to overload anything. You're just giving the model more surface area to latch onto, which means fewer misunderstandings later.
Chunking the idea down into testable steps
Once ChatGPT and I have aligned and refined the idea, the next step is breaking it down into something that can be built incrementally.
This is where I used to mess up vibe coding. If you ask an LLM to build the entire app in one go, you're basically inviting it to create a beautiful, fragile mess. It will overreach. It will invent abstractions you don't need. It will glue things together in ways that feel clever until you try to change literally anything. Or it will time out.
So I force the build to happen in stages. Each stage produces something testable and useful and the following stage builds on the last.
I think of this as an MVP of an MVP: the smallest slice that still connects to real external services early. Database? Add it early. API? Add it early. Anything that could screw you later should show up immediately. (I understand that building in the integrations this early seems to contradict what I just said about the MVP of an MVP, but I've found that v0 is good at these integrations and has quite a few important ones built in, e.g. Supabase. It also means your MVP stages should test that those integrations actually work.)
Once that build order is clear, I ask ChatGPT or Claude to write the actual prompt I'll feed into v0. Pretty much any prompt I use for chunky LLM sessions will be made by an LLM itself. All the context already exists in the window, and LLMs know LLMs better than I do so it makes sense that it's a better prompt builder than I am.
Back to the conversation about building in stages: it's important to ask ChatGPT to make multiple v0 prompts, one per stage.
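To make that concrete, here's roughly the shape of that request. This is a made-up example (assuming a hypothetical habit-tracker idea), not my actual wording:

// In the same ChatGPT conversation, once the stages are agreed:
We've settled on three stages: 1) the tracking UI with hard-coded data, 2) persisting habits to Supabase, 3) simple auth so each user only sees their own habits. Write me one v0 prompt per stage. Each prompt should assume the previous stage already exists and works, tell v0 exactly what to build and what to leave alone, and end with what I should manually test to confirm the stage is done.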
v0 could not be more perfectly named
I use v0 because it's good at front-end work and you see the UI and the code side-by-side. You can click buttons, test flows, try it on mobile, try it on desktop. You stop arguing with yourself in your head and start reacting to something real. That immediacy matters more than raw intelligence at the beginning.
v0 also deploys automatically, which people weirdly underweight. Starting in v0 means your project already lives in a real environment hosted by Vercel (who owns v0). And even when you eventually leave v0 it will continue deploying your project for you every time you update the Git repo without you doing ANY extra work. Honestly it's like magic.
Inside v0, I still keep things scoped. One stage at a time. Each prompt corresponds to a step in the MVP plan. Things break constantly, layouts go weird, fixing one thing breaks another, etc., but v0's versioning makes this tolerable. You can jump back, compare versions, and move forward again, since v0 creates a new version every time you prompt it. Again, it's magical. v0 has done an incredible job on its interface and functionality, which is where the "iTs JUst a CLauDe wRAPper" argument completely flops.
If you're brand new to this, honestly skip all the ChatGPT ideation meta-process at first. Just go to v0 and play. Make four or five throwaway projects. Learn what it's good at and what it's not. One of the things you should test is connecting your project to a database via Supabase; most ideas will require you to save data, and knowing how to connect v0 to Supabase is the way to go here. I learned this by creating a todo app, which was good because there were very few other variables to focus on. That intuition comes from misuse, and what's important to you and to me will be different, so map the pros and cons to your own style and needs.
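For what it's worth, the actual wiring is tiny. Here's a minimal sketch of the kind of Supabase call you end up with, assuming a hypothetical todos table and the usual NEXT_PUBLIC_SUPABASE_URL / NEXT_PUBLIC_SUPABASE_ANON_KEY environment variables the integration sets up:

import { createClient } from "@supabase/supabase-js";

// The Supabase integration exposes these env vars to the project
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// Hypothetical "todos" table: write one row, read everything back, log the result
export async function smokeTestSupabase() {
  await supabase.from("todos").insert({ title: "check the integration works", done: false });
  const { data, error } = await supabase.from("todos").select("*");
  if (error) console.error("Supabase read failed:", error.message);
  else console.log("todos:", data);
}

If that round trip works, the database part of your MVP is real and everything else can build on it.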
When v0 starts becoming the wrong tool
Over time, v0 starts writing code like a greedy algorithm. Whether it's fixing a bug or adding a feature, it finds the laziest path that makes the thing work right now. The result is massive files and horrible structure. Normally, I wouldn't give a shit. Messy code has never bothered me much if the product works.
But LLMs change the cost of disorder. Structure stops being just a human concern and becomes a form of documentation for a future LLM navigating your codebase. The worse your codebase is organised, the more confused the models get. v0 really struggles here both in structuring the codebase and then understanding the spaghetti structure it fucked into existence.
I also don't think v0 is great at more complicated backend work. That might be imagined (like everything else I've said in this post), but it's where friction shows up for me. When I can see future changes clearly but don't trust v0 to implement them cleanly, that's my signal to leave.
v0 writing "bad code" early is actually fine; honestly, it's a feature. It optimises for momentum, similar to blabbering into ChatGPT, but at some point it starts to slow you down, frustratingly.
Leaving v0 and cleaning things up
Leaving v0 is easy. A couple of clicks and the whole project is pushed to GitHub and at that point, the code is fully yours. You can pull it (download it) locally and treat it like any other codebase.
This is where I move to Claude Code. (Moving to Claude Code = adding the v0 project to a Git repository, cloning the repo onto my laptop, navigating to it in my terminal, and then literally typing claude. Claude Code was a bit scary to me initially because it seemed weird to talk to an LLM through your terminal, but you get used to it. It's obviously not a better interface than the Claude app, but it does the job perfectly well when you're working with a codebase.) The first thing I do is ask Claude Code to clean up the codebase. Not to make it perfect, just to make it coherent. Break up massive files. Restore structure.
// In plan mode:
Can you go through this project and tell me if it's effectively broken up, it seems to me that some files are way too big and should be broken up into smaller components. also tell me what's inefficient or unused. i want to make this codebase much more readable so that even a junior dev who may not know how react or next works will be able to figure it out.
// Once it shares the plan and nothing looks crazy, I send it (ask Claude Code to execute it)

Then I ask it to document the codebase. This is a little less about helping humans and more about helping machines. Documentation gives any future LLM a map of the system instead of forcing it to reverse-engineer chaos every time. I also ask it to document any glaring bugs or issues it sees in the codebase.
Because I build partly as a way of learning, this also helps me understand the codebase and what each part of the software actually does, which eventually makes me better at debugging later.
Can you go through this repository and document how all of this works. Put it in a folder called docs and write it in markdown. The purpose of this should be for a junior engineer who may not be familiar with next or react to be able to understand the codebase, as well as for LLM code generators like claude to also understand the codebase.
In a separate file within the docs folder I also want you to document any problems you see with this codebase and tell me why things like [known bugs you might have seen from the v0 stage].

I use Claude Code for this stage because, even though I flip-flop between whether Claude Code or ChatGPT Codex is better at coding, I've always found that Claude Code is better at documentation. And since the final thing I'll do is ask Claude to fix the bugs and issues it's identified, it helps to have better documentation across the board.
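The exact output varies per project, but as a rough illustration (hypothetical file names), the docs folder ends up looking something like:

docs/
  overview.md: what the app does and how the main pieces fit together
  architecture.md: pages, components, API routes, and how data flows between them
  database.md: the Supabase tables and what each column is for
  known-issues.md: bugs and smells carried over from the v0 stage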
What's the right tool for the job once you hit steady state
Once the codebase is cleaned up and documented, it reaches what I think of as a steady state. At this point the workflow stops being linear and I choose the tool based on what I want to do next.
Big front-end changes? v0. I don't know front-end particularly well, so I still rely on that workflow where I can see the UI side by side with the code and immediately poke at it. (I'm increasingly convinced this isn't optimal and that I should probably just be using Claude Code + Cursor here too, but for now, it's still v0.)
Changes that span both front-end and back-end usually mean Claude Code plus Cursor. I'll make the first pass in Claude Code, get the structure right, test it locally, and then move into iteration mode in Cursor.
Most iterations end up being front-end tweaks anyway, and that's where Cursor really shines. It's fast, cheap, and great when you already have a sense of which file you need to touch. If I don't know where the issue is, I'll go back to Claude briefly, add some debugging or logging to isolate the problem, and then return to Cursor once I know where to operate.
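By "debugging or logging" I don't mean anything fancy. Here's a sketch of the kind of throwaway logging I mean, using a hypothetical Next.js route handler as the suspect:

// app/api/checkout/route.ts: hypothetical route handler with temporary logging
export async function POST(request: Request) {
  const body = await request.json();
  console.log("[checkout] incoming body:", body); // is the front-end even sending what I think it is?

  const price = body.quantity * body.unitPrice;
  console.log("[checkout] computed price:", price); // is the bug in the calculation or further down?

  if (Number.isNaN(price)) {
    return Response.json({ error: "bad input" }, { status: 400 });
  }
  return Response.json({ price });
}

A couple of lines like that usually tell me which file actually owns the problem, and then Cursor takes over.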
Codex fits into this loop a bit differently. I tend to use it when I have a feature idea and want to let it run end-to-end, especially if the idea hits when I'm not at my computer. It's less about tight iteration and more about seeing what a full first pass might look like without me babysitting every step.
That difference shows up immediately in how you work with it. Claude Code operates directly on your local codebase, whereas Codex effectively spins up a new branch in your GitHub repository and works asynchronously within the interwebs. (This async-in-the-web part is pretty cool because you can either set it and forget it until you're back at a computer while it thinks / codes for 20+ minutes, or even ask it to implement multiple completely different features, which will be ready for you on different branches to test as soon as they're independently concocted. It's like you have unlimited engineers at your disposal. Something weirdly empowering about that.) To test Codex's output, you have to pull that branch down and run it locally, and once you start manually editing that branch, Codex won't iterate on it anymore.
In practice, that makes the interaction much more binary: either the output is good enough to keep, or you throw the branch away and move on. Because of that, I mostly treat Codex as a way to generate a starting point for a feature, which I then refine using the Claude Code + Cursor combo. Or I use it when Claude Code and Cursor can't figure something out, and hope that the OpenAI brain is wired differently for the particular problem I'm fighting with. Technically, Claude Code now has a web interface (and maybe even a mobile app) and ChatGPT Codex now has a CLI (terminal) tool, so you can use either company's model in either interface. I just use them this way because it's how I started and I like the "second opinion" aspect.
Through this hopefully infinite loop of feature iteration, one thing you need to keep in mind is chonking: how "big" a feature should you give the LLM to run away with? You'd be surprised by how well they perform on seemingly complex tasks, but when you start, and if you plan on extending this codebase with peace of mind, a good measure is to pick something that makes a concrete change while only requiring you to test one extra user flow to confirm the LLM nailed it. I say this because you always want to move forward with confidence that it didn't break anything else in the process, which is easier to confirm when you're only testing one new thing. At some point in your journey you'll feel a lot more confident about how big a chonk you can give an LLM to execute correctly while still understanding what changed across your codebase, and you can start getting cheeky with the magnitude of the feature requests without much repercussion.
Every now and then, I repeat the cleanup step. Even when coding with Claude and Codex, that messy entropy is real, especially with my dumb ass conducting them. Documentation happens even more frequently: any chance I get (or, more accurately, every time I remember), the code gets re-documented, and for any change Claude makes I ask it to update the documentation automatically.
Version control is your safety net, use it
One thing that makes this whole workflow feel safe is version control. With AI generating code for you, it's your fallback mechanism when changes go irrevocably bad.
Every attempt at a new feature lives in its own branch. My main branch is the steady state, the version I know works exactly how I want. Before I start something new, I make sure I'm not on that main branch.
This matters because AI can and will fuck things up in ways that are hard to unwind. Sometimes you hit a point where every "fix" introduces a new weirdness. In that case, the right move is often to throw the branch away, go back to main, and try again with what you learned.
Once you're off v0, version control stops being as intuitive, so GitHub becomes what protects your known-good version while you let AI run wild elsewhere.
Escape velocity
This is just how I've been using this stuff so far. I'm not claiming it's the optimal workflow or that it generalises perfectly to everyone. The biggest shift is how quickly I get to that escape velocity. So much of the heavy lifting is done for me that I'm not spending days just getting that first thing to exist. I get to something interesting fast. And once I'm there, I'm in the zone.
That's the addictive part. I'm seeing progress constantly. Each small improvement suggests the next one, and there's no clean stopping point; at least for me, I feel like I have to be dragged away from the computer.
Of course, there are plenty of "why the fuck isn't this working" moments. That's when I start switching tools. Claude. Cursor. Codex. Try all of them. Very often, one of them will unlock whatever I'm stuck on. And when none of them do, I go back a few steps to the last known good version and try again with what I learned.
Along the way, I end up learning anyway: about the code, the frameworks, and how these models actually behave in practice. Not because I set out to study, but in the process of building something I care about.
Honestly, this is the most fun I've had building. It feels like a very good time for the idea guy.
What a time to be alive.