AI promises speed and productivity, but it comes with serious issues.
The other day I came across a senior coder who quit AI coding after realizing that:
- AI was like a sloppy coder who bypassed tests and wrote bad code
- AI had stolen all the joy of solving problems from him
After experimenting with AI, I realized I was becoming lazier than usual. Since then, I've set simple rules to avoid losing my skills.
But those aren't the biggest problems.
A failed experiment revealed a more serious problem
Recently I learned about an experiment by a team of coders that revealed a deeper problem.
After trying to build a feature at work through prompting alone, they realized:
[Even when AI is capable of writing all of our code], a huge issue remains: I lose my mental model of the codebase.
...
Until I can trust the AI completely, I need to keep my own mental model alive. Otherwise, every time I need to do something myself, it feels like joining a new company.
It takes time to get familiar with a complex codebase.
At past jobs, it took me about a year to feel confident. Of course, your mileage may vary.
But once that confidence is there, it feels like driving through familiar streets with only one hand on the wheel:
- You know the architecture, the folder structure, and even a rough sketch of code blocks.
- You know how modules connect and what to touch or avoid.
- You can even remember file and function names.
You've built the mental models and gained all the context. Without them, it feels like walking into a dark room.
When AI writes our code, it steals that context and the feeling of knowing a codebase like the back of your hand.
Use AI if you want, but be the one dictating what to do.
Draw the boundaries of the solution, then let AI fill in the details.
Be the pilot and let AI be your copilot.
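If you want a concrete picture of what "drawing the boundaries" can look like, here's a minimal sketch in Python. The `normalize_scores` function and its tests are hypothetical, invented purely for illustration: you write the signature, the contract, and the tests yourself, and delegate only the body.

```python
# You own the boundary: the signature, the docstring, and the tests.
# Only the implementation details are delegated, so the mental model stays yours.

def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores linearly into the 0..1 range; an empty list stays empty."""
    # --- delegated to AI: implementation only ---
    if not scores:
        return []
    low, high = min(scores), max(scores)
    if high == low:
        return [0.0 for _ in scores]
    return [(s - low) / (high - low) for s in scores]

# You own the verification, too: the tests encode your intent.
assert normalize_scores([]) == []
assert normalize_scores([2.0, 2.0]) == [0.0, 0.0]
assert normalize_scores([0.0, 5.0, 10.0]) == [0.0, 0.5, 1.0]
```

The point of this split is that the contract and the verification stay in your head, so even if the middle is generated, your mental model of what the code promises survives.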
When AI can handle syntax, it's time to work on skills it can't, like collaboration, clear communication, and problem-solving. That's why I wrote Street-Smart Coding, the roadmap I wish I had on my journey from junior to senior.
Top comments (4)
That's the point of our jobs: problem-solving automation. If AI steals the problem-solving part of your job, then what's left?
The answer is a frustrated programmer.
AI has its use cases: helping beginners, learning a new language, understanding a large codebase, summarizing documentation. That's its purpose, for what it is: just a tool.
The fact that we love problem solving is more an individual issue than an AI issue. If you love jogging places, you wouldn't be upset because bikes exist. AI doesn't take away the problem solving; it just solves the cheap problems, so we can focus on more important things. Or no?
This is the way!
Love your reframe. The problem is when we become so dependent on the tool that a faster, brighter tool comes out and leaves us behind.
The mental model erosion point is the one I keep coming back to, because it's not really about AI at all — it's about what happens when the act of building and the act of understanding become decoupled. That used to be the same thing. You typed the code, you wrestled with the architecture, you built the mental model because you had to. Now it's possible to ship something that works without ever forming a coherent picture of why.
What I find myself thinking about is that this isn't entirely new. Senior engineers leading teams have always had to maintain mental models of codebases they didn't write every line of. The difference is that with a human team, the process of reviewing PRs, asking questions, and debugging together gradually transfers understanding from the people who wrote the code to the people who didn't. There's a social mechanism for mental model construction. With AI-generated code, that social mechanism is missing — there's no one to ask "why did you structure it this way," no intent to interrogate. The code arrives fully formed and silent about its reasoning.
So maybe the skill that matters isn't "resist using AI" but deliberately rebuild those social mechanisms. Talk through the AI's output with teammates. Force yourself to explain the architecture to someone. The understanding doesn't have to come from typing the code yourself, but it does have to come from somewhere, and the somewhere needs to be intentional now in a way it didn't before. Curious how teams are actually doing this in practice — scheduled walkthroughs, architecture discussions, or just hoping it happens organically.