I Don't Vibe Code. But If You Watch Me, You'll Think I Do.

141 archived tasks. 78 database migrations. 1,090 tests. And someone will still call this vibe coding.


Last week I shipped a heartbeat monitoring system for 15 background workers. Claude wrote most of the code. I reviewed it, corrected two edge cases in the Redis key expiry logic, ran the tests, updated the architecture docs, and pushed.

If you'd watched me work, you'd have seen a guy typing natural language into a terminal while code appeared on screen. You'd have said: "Oh, he's vibe coding."

I'm not.

But I understand why it looks that way. And I think the confusion is going to cost a lot of people their time, their money, and maybe their shot at actually learning something useful.


The Three Faces of Vibe Coding

There are three profiles in the vibe coding community right now, and I want to name them plainly.

The Marketing Guru. This person sells courses. Tweets threads about building a $10K MRR SaaS in a weekend with zero code. They show you a landing page and a Stripe checkout and call it a product. The product doesn't have tests. It doesn't have error handling. It doesn't have a database migration strategy. It has a demo and a payment link. The guru doesn't care. They already made their money — from you, not from the product.

The Innocent Fool. This person genuinely believes the gold rush is permanent. Engineering careers are over, they say. Why learn algorithms when ChatGPT can write them? They're not being cynical — they truly believe that the ability to prompt an AI replaces the ability to understand what the AI is doing. They'll build something, it'll work for a while, and when it breaks in a way the AI can't explain, they won't know where to start. I feel for them. They were told the ladder was gone, so they never climbed.

The Lost in Translation. This is the one I have hope for. They tried to learn coding before — maybe a bootcamp, maybe a YouTube series, maybe a CS degree they didn't finish. It never quite stuck. Now AI tools give them a second chance. They're not lazy and they're not naive. They just haven't found the path that works for them yet. If they meet the right mentor, the right project, the right moment of friction that makes them want to understand instead of just ship — they'll make it. These people can become real engineers. The other two can't, because the other two don't want to be.


Three Modes, One Screen

Here's what I think people actually mean when they lump everything together.

Mode 1: Vibe Coding

You open Cursor. You type "build me a dashboard with auth and a database." Code appears. You click run. It works — or it doesn't, and you prompt again until it does. You don't read the code. You don't understand the schema. You don't write tests because you don't know what a test would even verify.

You are a prompt engineer now. Always chasing the best prompt, the best model, the best system instruction. You trust the output because you can't evaluate it. You don't learn — either because you believe engineering is doomed and there's no point, or because you believe this product will be the breakthrough that makes you rich.

This is duct-tape programming. Or as we say in Brazil — GoHorse: ship fast, pray it works, think never. The only difference from 2015 GoHorse is that the horse now writes the code for you.

Mode 2: AI-Assisted Coding

A real developer uses AI to accelerate. This has levels.

Level 1 — The Rubber Duck. You have ChatGPT open in a tab. You're writing code yourself — every line. When you're stuck, you describe the problem. The AI suggests approaches. You evaluate, adapt, implement. I did this from 2022 to late 2023. The AI never touches your codebase. It's a conversation partner. A very fast Stack Overflow that understands your context.

Level 2 — The Copilot. The AI has access to your code. Cursor, Windsurf, Codex, GitHub Copilot — pick your tool. You ask localized questions: "refactor this function," "write the SQL for this query," "what's wrong with this test." You're still the decision maker. You're still the one navigating the codebase, choosing the architecture, deciding what to build next. The AI is a supercharged IDE. In my day job, I mostly work at this level.

Level 3 — Agentic Engineering. This is what I do on Novelist. And this is where it gets confusing — because from the outside, it looks exactly like vibe coding.


A view of the task-log directory — 141 archived task files, one per feature.

What Agentic Engineering Actually Looks Like

I take the back seat on implementation. I architect. I plan. I review. I test. I make product decisions. Claude does 95-99% of the typing.

But here's what a vibe coder would never do:

I maintain 141 task files — each with a dev log, a decisions table, a difficulties table, and acceptance criteria. Every task is tracked from scope to completion. When I pick up work tomorrow, I read the task log first, not because Claude needs it, but because I need to know what happened and why.

I wrote 727 Cypress E2E tests and 532 Go integration tests. Not because someone asked for them, not because it's a best practice checklist item — because I've watched AI-generated code pass on the happy path and break on every edge case. The tests are how I know the system works. They're how I catch Claude's mistakes. They're how I sleep at night.

I maintain architecture docs — 200-novelist.architecture.md, 210-publix.architecture.md — and update them with every major feature. Not for show. Because two weeks from now, when I'm debugging a notification that didn't fan out, I need to know how the Herald realm works without re-reading 445 Go files.

The project has:

  • 68,215 lines of Go across 24 realms, 78 migrations, ~185 API endpoints
  • 72,764 lines of TypeScript/CSS across 205 React components, 26 pages
  • 1,164 lines of Rust for CPU-intensive image processing
  • 15 background workers with heartbeat monitoring and archive-before-prune data retention
  • 4 production nodes with Tailscale mesh VPN, Caddy TLS, and staging deployed

I built all of this with Claude. But I also spent 3 hours one day debugging a pgx timezone bug because Claude passed time.Time to a DATE column and the timezone offset shifted the date. Claude didn't catch it. The tests didn't initially catch it. I caught it because I understood what pgx does with time values on a non-UTC server. And then I wrote a rule so it never happens again.

That's not vibing. That's engineering with a very fast pair.


Terminal output: 1,090 tests passing across publix, rustix, and cypress.

The Test I Wish Vibe Coders Would Take

Here's a thought experiment.

Tomorrow, Anthropic and OpenAI change their pricing. Instead of per-token billing, it's a flat fee: the same annual cost as a mid-senior engineer in your market. $80K, $120K, whatever the number is where you live.

Could you still use it?

Not "could you afford it" — could you use it effectively? Could you review its output? Could you catch when it's wrong? Could you architect a system that it implements correctly? Could you debug the production incident at 2am when the AI is just as confused as you are?

If you can't, then what you have isn't a skill. It's a subscription. And subscriptions get cancelled.

If you learned nothing in the process — if you don't understand why your app works, how your database is structured, what your background workers do, why your tests exist — then you didn't build a product. You rented one.


Who This Is Actually For

I'm not writing this to dunk on anyone. The Marketing Gurus will keep selling. The Innocent Fools will learn the hard way or pivot to the next trend. That's not my problem.

I'm writing this for the Lost in Translation person. The one who tried, didn't stick, and is now wondering if prompting is enough.

It's not. And I'm not going to sugarcoat it — the path from "vibe coder" to "engineer" is not short. It's not easy. It took me years of reading errors I didn't understand, debugging systems I didn't build, learning patterns that only make sense after you've seen them fail. That's the work. That's why engineering pays well. Not because the syntax is hard — because the judgment is hard. Because knowing what to build, how to structure it, when to test it, and why it broke at 2am is knowledge that only comes from doing the work.

AI doesn't skip that path. It can accelerate it — if you're willing to learn along the way. Start at Level 1. Use ChatGPT as a rubber duck. Touch the code yourself. When something breaks, don't just re-prompt — read the error. Understand it. Fix it manually at least once. Then let the AI fix it next time, but watch what it does and check if it's right.

The AI is not going to replace engineers. It's going to replace people who can't engineer but pretend to. There's a difference — and it's exactly the difference that justifies the salary.

My career is going pretty well, thanks.


Building Novelist in public. novelist.ws
