There once was a junior who spent a week on one unit test. Thirty-seven commits. Each one broken. His senior approved the thirty-seventh because the sprint was ending.
The missing equal sign story hits because everyone who's approved code has done this. Not the same bug maybe. But the same math. Speed over scrutiny. Deadline over durability. And then the rollback.
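For anyone who hasn't hit it, this is the shape of that bug (a representative sketch in TypeScript; the post doesn't show the actual code, so the names here are invented):

```typescript
// Representative sketch of the single-character bug class (names invented).
const expectedRole = "admin";
let role = "viewer";

// What shipped: a single = assigns "admin" to role, and the resulting
// string is truthy, so the branch runs for every user.
// if (role = expectedRole) { grantAccess(); }

// What was meant: strict comparison.
if (role === expectedRole) {
  grantAccess();
}

function grantAccess(): void {
  console.log("access granted");
}
```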
You're right that AI didn't create this. It just removed the last excuse. Before, you could blame slow typing or unclear requirements. Now the code appears instantly. The bottleneck isn't generation anymore. It's judgment. And judgment was already the weak link.
The thirty-seven commits example is brutal. Not because the junior kept failing. Because the senior kept approving. The system didn't just allow broken code. It rewarded it. Sprint ended. PR merged. Dashboard green. Nobody asked if the test actually tested anything.
That's not a tool problem. That's a permission problem. And AI just made permission cheaper to grant.
The foundation question is the real one. What were you allowing before? Not what you said you allowed. What actually shipped. The slop in your codebase didn't appear last month. It accumulated over years of "good enough for now." AI just makes more of it, faster. And cleaner. Which makes it harder to spot.
The mirror is clearer now. That's uncomfortable. But blaming the mirror won't fix the reflection.
You got it. The thirty-seven commits are the part that still haunts me. Not the junior. The senior.
We had permission structures that rewarded green dashboards over actual understanding. That existed before AI. AI just made the gap between "approved" and "comprehended" wider and harder to see.
The mirror line at the end ... that is the real point. The reflection was always there. AI just made it impossible to ignore.
If you've got AI code review, I'm pretty sure you wouldn't ship a single equal sign when you meant ===. I still think it's about the harness you wrap around your development process. An AI can be a reviewer just as easily as it can be an author. Making sure the human in the loop is operating at the right level is the important thing today.
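For the = vs === class specifically, the guard doesn't even need AI. A minimal sketch of the mechanical layer of that harness, assuming ESLint v9 flat config (both rule names are real core rules):

```typescript
// eslint.config.ts — minimal sketch of a mechanical harness layer.
// (TypeScript config files need ESLint v9+ plus the jiti package;
// an eslint.config.js with the same contents works everywhere.)
export default [
  {
    rules: {
      eqeqeq: "error", // forbid == / != in favor of === / !==
      "no-cond-assign": "error", // catch if (a = b) where a === b was meant
    },
  },
];
```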
The harness is everything. We run AI review on my team ... and we still found a bug last month that made it through three human reviewers and the AI. Single character error. The AI flagged it as "low confidence" but the human merged anyway.
The issue is not whether AI catches things. It is whether the human in the loop knows when to stop and look closer. AI gives you speed. Speed without comprehension is just faster debt accumulation.
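What we changed afterwards, roughly: treat the AI's stated uncertainty as a gate instead of a footnote. A hypothetical sketch (the names and the threshold are invented, not our actual tooling):

```typescript
// Hypothetical merge-gate policy: low AI confidence forces escalation,
// never a faster merge. All names and the 0.7 threshold are invented.
interface ReviewResult {
  confidence: number; // 0..1, as reported by the review bot
  findings: string[];
}

function mergeDecision(
  review: ReviewResult,
  humanApproved: boolean
): "merge" | "escalate" {
  if (review.confidence < 0.7) {
    return "escalate"; // "low confidence" becomes a hard stop, not a note
  }
  return humanApproved ? "merge" : "escalate";
}
```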
This is why I started my project from scratch instead of forking an existing codebase. The temptation to fork Substrate or Tendermint was real, but you inherit years of decisions that don't match your use case. Starting clean is slower, but every line exists for a reason you understand.
Starting from scratch is a luxury. Most of us inherit.
The real skill is not choosing between clean slate and legacy. It is learning to see which decisions in the legacy code were load-bearing and which were accidents of timing. That takes time sitting with the system ... the thing we trade for speed.
Fork or fresh, the debt follows the same rule. It accumulates where comprehension was traded for velocity.
That's fair. Starting from scratch was only possible because there was nothing to inherit yet. If I was joining an existing chain I'd be doing exactly what you described, figuring out which decisions were intentional vs accidental. The debt rule applies either way.
This resonates. The pattern I'm seeing in AI/LLM projects specifically is that people skip input validation entirely because "the model handles it." Same energy as skipping SQL parameterization because "the ORM handles it" fifteen years ago. The debt isn't new, it's just accumulating faster because the iteration speed is higher and the failure modes are less visible — a prompt injection doesn't throw a stack trace, it just silently changes behavior.
This is exactly the pattern. "The model handles it" is the new "the framework handles it."
We learned this the hard way with our first agent rollout. Assumed the LLM would validate inputs because ... well, it seemed smart. Found out in staging that it would happily accept malformed data if the prompt was phrased confidently.
Now we validate at boundaries. Same as any other system. The model is not a boundary. It is a component.
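Concretely, for us that means schema validation before anything reaches the model. A minimal sketch using zod (the schema fields and handleTicket are invented for illustration):

```typescript
import { z } from "zod";

// Invented schema for illustration; the point is that validation
// happens at the boundary, before the model sees anything.
const TicketInput = z.object({
  customerId: z.string().uuid(),
  body: z.string().min(1).max(4000),
});

// Placeholder for whatever LLM call sits behind the boundary.
declare function callModel(input: z.infer<typeof TicketInput>): Promise<string>;

export async function handleTicket(raw: unknown): Promise<string> {
  const parsed = TicketInput.safeParse(raw);
  if (!parsed.success) {
    // Reject at the boundary; the model never sees malformed input.
    throw new Error(`invalid ticket input: ${parsed.error.message}`);
  }
  return callModel(parsed.data);
}
```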
Exactly. "The model is not a boundary, it is a component" — that's the line I wish more teams internalized before their first agent rollout. Glad you caught it in staging and not production.
Sprint deadline as cover for bad reviews ... that story is painfully familiar. I've been in the "we'll clean it next sprint" conversation more times than I can count.