A post hit the HN front page this morning titled "I'm going back to writing code by hand". It documents archiving seven months of vibe-coded work on a Kubernetes GPU TUI called k10s and rewriting from scratch. 75 points, 31 comments in the first hour. The top HN comment caught something worth pulling out:
> Title says "back to writing code by hand," but what they are doing is "doing the design work myself, by hand, before any code gets written." So... Claude still is generating the code I guess?
Two activities on the same project. The title conflates them. The body, read carefully, describes a different and more interesting thing.
What the piece actually shows
Seven months of single-session prompting produced a working tool and a 1690-line `model.go` that ate itself. One Go struct with thirty-plus fields holding UI widgets, Kubernetes client state, per-view caches, navigation history, mouse handling, log streaming, and fleet-view internals — all dispatched through a 500-line `Update()` function with 110 switch-case branches. Each prompt landed cleanly in isolation. Each prompt also added another conditional inside the generic resource loader, another `m.x = nil` cleanup line, another `if m.currentGVR.Resource == "..."` discriminator. The complexity accumulated invisibly while the velocity metric said shipping.
The post then extracts five tenets from the wreckage. Each tenet ends with a concrete block of CLAUDE.md or AGENTS.md text — directives that go into the file the AI reads on every prompt. The tenets are useful. Read them in the original. They generalize past the Bubble Tea / Go specifics, and the directives are reusable as a starter set.
The five tenets, generalized
The pattern across all five is the same. Find the failure mode AI gravitates toward by default. Write the inverse rule down. Put it in the agent's session-start file. Let the agent see the constraint on every invocation.
- AI builds features, not architecture. The model satisfies the immediate prompt and ignores the forty-nine other features sharing the same state. Fix: architectural invariants in CLAUDE.md — interface boundaries, ownership rules, what's allowed to depend on what. The AI follows them once they exist. It does not invent them on its own.
- The god object is the default AI artifact. Single-struct-holds-everything is the shortest path to satisfying a prompt. Fix: state ownership rules in CLAUDE.md — each view owns its own state, no fields on the central app struct for view-specific data, each view declares its own key bindings.
- Velocity illusion widens scope. Vibe-coding makes each new feature feel free. It is not free. Complexity is a budget; line count is not. Fix: an explicit scope-boundary section in CLAUDE.md naming who the project is and is not for, with specific feature classes rejected ahead of time.
- Positional data is a time bomb. The AI defaults to `[]string` because it satisfies the table widget immediately. Six months later, sort functions are reading `row[3]` for "Alloc" and one column insertion breaks every render path silently. Fix: a typed-data directive in CLAUDE.md — no flattening into positional arrays, all data flows as structs until the render call, column identity comes from field names.
- AI doesn't own state transitions. Background closures mutating shared state directly is the shortest path to "working code." It also produces races that corrupt the display 1% of the time in ways that look like hallucinations. Fix: concurrency rules in CLAUDE.md — background work produces typed messages, only the main loop applies mutations, render is pure.
Each one is a rule a human writes once, in plain language, and the agent honors on every subsequent prompt.
This is what the running thread has been saying
Last week I argued that supervision belongs in artifacts, not in a developer's working memory. The day after, I argued those artifacts form an architecture rather than a flat file — behavioral guardrails at the top, project-specific rules delegated to per-domain files, hard-won lessons in an append-only log the agent reads at session start.
The k10s tenets are a project-specific instance of exactly that pattern. Five rules extracted from a specific seven-month failure, encoded in a file the AI reads on every prompt, generalizing the supervisory layer beyond what any individual prompt could carry. The work is the same shape as the running argument. The difference is the post calls it going back to writing code by hand — which is what the HN top comment correctly pushed back on.
Why the title is misleading
"Going back to writing code by hand" suggests abandoning AI as a code generator. The author is still prompting Claude to write the implementation. What changed is that the architecture — interfaces, ownership rules, message types, concurrency model — gets written down by a human first, in CLAUDE.md, before any prompt fires.
Senior engineers have always written designs before code. The new part is that the design now lives in a file the AI reads continuously, and the AI generates the implementation against it. Both activities are happening on the same project. The title gestures at one of them.
What this means for anyone in the same spot
If you're seven months into a project that started as a vibe-coded prototype and is starting to feel like a god object, the move is not to throw out AI coding. The move is to pause, read what the AI built, extract the invariants you wish had been enforced, and put them in CLAUDE.md before the next prompt.
The k10s tenets are a working starter set. The CLAUDE.md blocks the post includes are copy-pastable. Generalize them to your stack. Add your own from your own incidents. Use them as a seed for the kind of Mistakes Become Rules pattern where the file grows with each correction and the agent inherits the corrections on every subsequent session.
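The post's own CLAUDE.md blocks are the authoritative version; as a hedged illustration of the shape only, a generalized fragment might look like:

```markdown
## State ownership
- Each view owns its own state; no view-specific fields on the central app struct.
- Background work produces typed messages; only the main loop applies mutations.

## Data flow
- No flattening into positional arrays; data flows as structs until the render call.
- Column identity comes from field names, never from index position.

## Scope
- Name who this tool is for and is not for; reject feature classes outside
  that boundary before they reach a prompt.
```

The wording here is a sketch, not the post's text — the point is the structure: each rule is the inverse of a default AI failure mode, written once, read on every invocation.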
The next-version rewrite is the upstream architecture work that was always going to need to happen — now made explicit, encoded once, and enforced by the file the agent reads on every turn.
The clean reading
Writing the architecture down before the first prompt is the upstream activity senior engineers have always done. Doing it explicitly, in a file the AI reads on every invocation, is the new part. The five tenets in the k10s post are a working example. The HN top comment did the framing work the post's title should have.