TLDR: AI is an abstraction for coding. And like all abstractions, it never replaces the need for the underlying knowledge.
The Theory
There's an ongoing theory that AI will take most jobs, and definitely all software development jobs. I think this is highly exaggerated for a few reasons. In this post I want to look at one of the most basic ideas in software development that contradicts this.
The Law of Leaky Abstractions
This concept was coined by Joel Spolsky, who explained it in his article of the same name. It states:
All non-trivial abstractions, to some degree, are leaky.
I would go a step further and say that "all abstractions are leaky"; the trivial ones are just naturally absorbed.
This means that every abstraction attempts to hide some process from the person using it, and fails. To be fair, the leak tends to show up in edge cases, but it's usually just a matter of time (or scale) until you bump into one.
Some of the classical examples in software development of this are:
ORM (Object-Relational Mapping): Still need to know a lot about SQL.
SQL: Still need to know about DB indexes, B-trees and how the DB implementation interprets the SQL statements.
Programming languages: Still need to worry about memory usage, CPU cycles, and code injection.
This is even applicable to abstractions outside software development, like driving cars or washing machines.
The article mentioned above explains some of these examples in more detail.
Mo Abstractions, Mo Problems
What a big famous philosopher once said about money applies to abstractions too: the more complex the abstraction, the more it tends to leak.
Think about the given examples of SQL and ORM. If you are building your SQL statements by hand like the good old Vikings, you do need to know about the indexes and the underlying implementation of the database of choice (Postgres, obviously).
But if you value suffering and decide to use an ORM (let's say Hibernate), you now need to know not only all of that, but waaaay more when you call your getResultList():
"Why isn't this working? What SQL statement does this result in?"
"What annotation should I use on this @Entity marked class field to make it unique?"
"What the hell does this @Strategy mean?"
And you still need to know SQL. That's not me saying it; it's the Hibernate documentation:
Throughout this document, we’ll assume you know SQL and the relational model, at least at a basic level. HQL and JPQL are loosely based on SQL and are easy to learn for anyone familiar with SQL.
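The classic way this leak bites is the N+1 query problem: a lazy-loading relationship that quietly issues one extra SELECT per parent row. This isn't Hibernate, just a toy stand-in in Python over `sqlite3` (the `authors`/`posts` tables and the `lazy_posts` helper are made up for illustration), but the shape of the leak is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
""")
conn.executemany("INSERT INTO authors (name) VALUES (?)",
                 [(f"author{i}",) for i in range(50)])
conn.executemany("INSERT INTO posts (author_id, title) VALUES (?, ?)",
                 [(i, f"post{i}") for i in range(1, 51)])

queries = []
conn.set_trace_callback(queries.append)  # log every statement actually sent to the DB

def lazy_posts(author_id):
    # What a lazy-loading ORM relationship does behind your back:
    # one extra SELECT per parent object you touch.
    return conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,)
    ).fetchall()

for (author_id,) in conn.execute("SELECT id FROM authors").fetchall():
    lazy_posts(author_id)

orm_style = len(queries)  # 1 (authors) + 50 (lazy loads) = 51 round trips

queries.clear()
conn.execute(
    "SELECT a.id, p.title FROM authors a JOIN posts p ON p.author_id = a.id"
).fetchall()
hand_written = len(queries)  # a single JOIN does the same job

print(orm_style, hand_written)
```

The ORM-style loop looks like innocent object traversal in your code, but the database sees 51 round trips instead of 1. Spotting and fixing that requires exactly the SQL knowledge the ORM was supposed to hide.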
You may argue this is an anecdotal fallacy, but give some honest thought to all the SDKs, JavaScript frameworks, and low-code platforms out there. While some scream "SKILL ISSUES", I calmly say "leaky abstraction".
So it seems that the further away we are from the most basic operations, the more we need to know about the abstracted "world". In other words, the more complex an abstraction is, the leakier it tends to be.
Is this an issue?
Should we abolish all abstractions, since they are cursed, radioactive close talkers that spit when they talk?
Well... no. Abstractions are not the issue. Abstracting something isn't intrinsically bad; it can hide some of the complexity from you for a long time and let you move faster. The issue is concluding that you no longer need the abstracted knowledge. An abstraction should be seen as something that speeds you up, not as a replacement for understanding.
The original article mentions the TCP abstraction: most people can do a lot without understanding the "flakiness" of IP. Or they rely on HTTP/HTTPS without understanding TCP or TLS/SSL.
Most developers working in garbage-collected languages don't understand heap memory, pointers, or CPU cycles (they should). Anyone doing simple enough tasks in these languages probably doesn't think about them much.
Developers also hardly ever need to consider how the chip itself is built or operates when programming: transistor materials, voltages, or temperature. Maybe L2/L3 cache sizes and the number of cores.
But none of this "completely" removes the need to understand the underlying, abstracted process. If you've been living like this so far, either your tasks are simple enough (in your context) or the quality bar for your deliveries is low enough.
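The garbage-collection case is a good illustration: the GC abstracts memory away, but it can only free what nothing references. The classic "memory leak" in a garbage-collected language is just a forgotten strong reference. A minimal Python sketch (the `Payload` class and the `cache` are made up for illustration):

```python
import gc
import weakref

class Payload:
    """A hypothetical heavy object, roughly 1 MB."""
    def __init__(self):
        self.data = bytearray(1_000_000)

cache = {}  # a well-meaning cache holding strong references

def load(key):
    obj = Payload()
    cache[key] = obj  # the reference everyone forgets about
    return obj

report = load("report")
tracker = weakref.ref(report)  # lets us observe whether the object is freed

del report       # we "let go" of the object...
gc.collect()
still_alive = tracker() is not None  # ...but the GC can't free it: the cache still points to it

cache.clear()    # drop the last strong reference
gc.collect()
freed = tracker() is None

print(still_alive, freed)
```

You never call free(), yet you still need the mental model of references and reachability to explain why memory keeps growing. The abstraction moved the problem; it didn't delete it.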
The AI Case
So what does AI have to do with this? I argue that AI is just an abstraction for coding. Many tasks actually, but let's focus on coding.
AI operates like any other program. It takes an input and provides an output. If you are using it to code, it also works like any other programming language. You pass instructions that are "parsed" into instructions a computer can understand and execute.
Since the parsed instructions in this case are "code", it's fair to say that AI, in this context, is just abstracting the "code" away from you.
As with other abstractions, it helps you get further without the underlying knowledge, but it doesn't "remove" it from the equation.
The next logical argument would be: "Ok, but if it takes you far enough, it might get you where you need to go."
This is true, and indeed I think it does. Using AI to implement personalized software or automated processes for a single user is a reality. The truth is that you can't go much further than this right now, unless you understand the abstraction underneath.
Many developers using AI right now see its capabilities in the hands of someone experienced (themselves) and assume it will be the same for an inexperienced person. It won't.
You might be thinking that it will keep improving, and that the current model "is the worst it will ever be". This is true, but it doesn't mean as much as it sounds. That's a topic for another chapter.
In order to truly replace all developers, AI would have to break the "Leaky Abstraction Law".
I don't think it will.
That doesn't mean it won't change the industry. It has already, but many things have before. Your day-to-day tasks might not be the same, but the underlying knowledge you accumulated over the years is still important, and no abstraction will ever "remove" the need for that.
Keep coding. Until next time.
Top comments (11)
IDK, the advances that happened from last April to this April are pretty fundamental, sure it's not perfect and sure it would be bad in the hands of a total amateur, but I'm not sure that's lasting. Right now, in a well architected and documented code base, below average developers can build something amazing, presuming they are using tools across the piece - architecture review, code review, plan review, ideation support etc. That's right now. On greenfield it's a mess still, but that won't last.
I totally agree. Well architected and documented code bases are easy to expand if you are really good at pattern matching, which LLMs are. But with every change there's a possibility of diverging from that state. Someone with a deep understanding of such practices is required to prevent that. How long can that last? Quite a while, maybe. Again, if the requirements are simple enough, that code base is probably fine.
You're right. AI is just another layer. Like ORMs didn't kill SQL. They just made the pain of not knowing SQL show up differently. Same with AI. It'll write code fast. But when it breaks, you still need to know why. The leak doesn't go away. It just moves.
Interesting perspective. A lot of the conversation focuses on replacement, but in practice it feels more like reshaping how work gets done. The bigger shift is in how people adapt their thinking and workflows around AI.
Exactly. How do we make the best use of it? When should we rely on it, and when shouldn't we?
For me, AI works best when the task is repetitive, research-heavy, pattern-based, or needs a first draft. But I’d be careful relying on it for judgment, context, ethics, final decisions, or anything where being “technically correct” is not enough.
So maybe the question is less “will AI replace people?” and more “which parts of the work should stay human by design?”
leaky abstractions is a solid lens but it cuts both ways. when abstractions fail at scale, teams build a better layer on top, not drop down. AI will probably evolve the same way - abstracting its own failures into the next layer.
Sure. But again, the leak won't stop; it will just let you get further without the knowledge. And it adds up to the point where you need to ask yourself: is it worth it?
that inflection point is real — and it hits PMs faster than engineers. we are making scope and risk calls without the depth to validate AI outputs. at some point it stops being about abstraction and becomes: do you understand what you are shipping.
Great point about leaky abstractions!
This is exactly what I see working with ML systems - AI tools are helpful until they hit edge cases, then suddenly you're digging into the fundamentals. The ORM comparison is spot on. It's not about replacing developers, but changing where we actually need to apply our expertise.
Keep the posts coming!💪
Thanks! That also means changing what skill to craft as a developer. Fast typing is becoming less useful.