I stopped caring which language someone uses. Somewhere in the last eighteen months, that happened without me deciding it.
Not because I became a better person. Because the argument stopped mattering.
In August 2025, TypeScript surpassed both Python and JavaScript as the most-used language on GitHub for the first time ever. Not because developers sat down and decided TypeScript won. Because AI tools handle it better, so it spread. The debate didn't resolve. The ground shifted underneath it and most people are still fighting on the old map.
The War That Already Ended
The Python vs JavaScript argument ran for a decade. Rust evangelism became a personality type. C++ veterans looked down on everyone. The fight was never really about syntax — it was about belonging. Who gets to call themselves a real developer. Who gets filtered out at the interview. Who gets taken seriously in the architecture meeting.
That argument is over.
Not because anyone won. Because something else became the constraint.
What Replaced It
The new constraints aren't linguistic. Tokens — how much context a session can hold before the model starts forgetting what it's building. Context windows — how much of your codebase an agent can actually see at once. Prompt discipline — whether your instructions are tight enough that the agent doesn't guess. Three things. None of them are in any job description yet.
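To make the token-budget constraint concrete, here is a minimal sketch of the kind of bookkeeping this skill involves: deciding which files fit in a context window before handing them to an agent. Everything here is illustrative, not from the article — the 4-characters-per-token ratio is a rough rule of thumb (real tokenizers vary by model), and `pack_context` is a hypothetical helper, not a real library call.

```python
# Illustrative only: a crude context-budget check before packing files for an agent.
# Assumes ~4 characters per token, which is a rough heuristic, not a real tokenizer.

CHARS_PER_TOKEN = 4  # rule-of-thumb ratio; actual tokenization varies by model

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def pack_context(files: dict[str, str], budget_tokens: int) -> list[str]:
    """Greedily keep files that fit the budget.

    `files` maps filename -> contents; insertion order is treated as
    relevance priority (most relevant first). Returns the names that fit.
    """
    used = 0
    selected = []
    for name, body in files.items():
        cost = estimate_tokens(body)
        if used + cost > budget_tokens:
            continue  # this file would blow the budget; skip it
        used += cost
        selected.append(name)
    return selected

files = {
    "spec.md": "x" * 4000,    # ~1000 tokens
    "main.py": "x" * 20000,   # ~5000 tokens
    "utils.py": "x" * 2000,   # ~500 tokens
}
print(pack_context(files, budget_tokens=2000))  # → ['spec.md', 'utils.py']
```

The point of the sketch is the discipline, not the arithmetic: you choose what the model sees, and what you leave out shapes what it can get right.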
Nobody voted on this shift. There was no announcement. It just became true while we were arguing about whether Rust was worth learning.
The developer who ships consistently now isn't the one who knows the most syntax. It's the one who can structure a spec tightly enough that the agent doesn't hallucinate the requirements, manage a context window without losing architectural coherence across sessions, and catch what the model got confidently wrong before it reaches production.
I’ve been experimenting heavily with this in my own production AI agents and real-browser automation workflows.
That's a different skill. No bootcamp teaches it yet. Most job descriptions don't list it.
The Gate Didn't Disappear. It Moved.
Language gatekeeping excluded people by syntax preference. You didn't know pointers? Not a real programmer. You used PHP? Embarrassing. You learned with a framework instead of from scratch? Shortcuts.
The new gatekeeping is quieter. You're not excluded for your language anymore.
You're excluded for your context budget.
Token limits are a billing problem dressed as a technical one. But knowing how to structure prompts, manage agent memory, and stay coherent across a long multi-step workflow — these compound. The developer who can do this produces dramatically better output than the one who can't. The gap is real and it grows with complexity.
Same exclusion mechanism. Different surface. Less visible, which makes it harder to name and harder to argue against.
The old gatekeeping was at least honest about what it was filtering for. The new one looks like a productivity difference.
What Doesn't Change
Not everything shifted.

Someone still has to decide what gets built, and stand behind it when it breaks. That person is still you.
The things that actually matter — judgment, accountability, knowing when the confident answer is wrong — those don't change with the terrain. They get more important as generation gets cheaper.
Uncle Bob Martin, who spent months coding with Claude and wrote about it publicly, noticed something: Claude codes faster, holds more details, but can't hold the big picture. It doesn't foresee the disaster it's creating. Someone still has to see that. Someone still has to slow down and ask whether this is right, not just whether it compiles.
But the marker of competence shifted. The proxy changed. The new proxy is harder to fake than the old one.
You can memorize syntax. You can pass a whiteboard interview on language trivia. You can't fake knowing how to structure a ten-step agent workflow without the context collapsing at step seven, or how to write a spec that gives an agent something real to work with instead of something it'll interpret five different ways.
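The "context collapsing at step seven" failure mode can be sketched in a few lines. This is a hypothetical illustration of one common mitigation, not the article's method: instead of carrying the full transcript forward, each step's output is compacted into a short summary before the next step runs. `run_step` and `summarize` stand in for real model calls.

```python
# Hypothetical sketch: keeping a multi-step agent workflow coherent by
# compacting history between steps rather than forwarding the raw transcript.

def summarize(text: str, max_len: int = 80) -> str:
    """Stand-in for a model-generated summary: naive truncation."""
    return text if len(text) <= max_len else text[:max_len] + "..."

def run_workflow(steps, run_step):
    """Run steps in order; each step sees compacted history, not everything."""
    history = []  # one short summary per completed step
    for i, step in enumerate(steps, start=1):
        context = "\n".join(history)      # what the agent actually sees
        output = run_step(step, context)  # stand-in for a model call
        history.append(f"step {i} ({step}): {summarize(output)}")
    return history

# Toy run with a fake agent that just echoes its inputs.
fake_agent = lambda step, ctx: f"did {step} with {len(ctx)} chars of context"
result = run_workflow(["parse spec", "write code", "run tests"], fake_agent)
```

The design choice being illustrated: context is a budget you spend, so later steps get a digest of earlier ones instead of the whole accumulated conversation. Real systems replace the truncation with an actual summarization call.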
This is exactly why I built my own SEO automation agent that runs unsupervised on Cloudflare.
The old gate was about what you'd memorized. The new one is about how you think.
The Part That's Still Unresolved
I don't know if the new gate is better than the old one.
The old gatekeeping protected a social hierarchy more than it protected code quality. CS degrees, whiteboard interviews, years-of-experience requirements — they controlled access. They decided who got to call themselves real engineers. That architecture was never really about quality.
The new constraints are at least about something real. Context discipline, prompt structure, verification habits — these produce actual output differences. The filter is less arbitrary.
But "less arbitrary" isn't the same as "fair." Token budgets cost money. The developer in Lagos with a $20 API limit and the developer in San Francisco with a $200 plan are not operating in the same environment. The new constraint is technical and financial simultaneously. That's not a coincidence — it's just the old hierarchy in different clothes.
We spent years arguing about languages. Now the argument is about how well you can give instructions.
That's not obviously worse. It's just different.
And we haven't decided yet whether the new gate is better than the old one, or just less visible.
Top comments (7)
You use a very narrow field of view in this post.
Sure, as people we are going to have our opinions, but I think the one thing we can agree on is that some languages are missing features an application might need, anything from running threads to producing small executables.
There is not one language that is best at everything.
The ability to see the big picture, to know when to drop it or push forward, and the way you communicate are what make you a developer/engineer and not a code parrot.
With AI, that part of the job has moved closer to centre stage, but it has always been there.
The only change is that you now need to explain your thoughts to a program instead of to people. But both punish you when your explanation leaves openings.
I understand the feeling. But that way of thinking is based on the hype of AI providers that want to sell you subscriptions.
Local LLMs are getting better all the time. It is up to us to get the most use out of them.
I always laugh when I see language wars. As developers we want to use the best tools available.
The war is for people who want to benefit from it. "Use language X because it is better" translates in my head to "use language X because that makes me more money."
The local LLM point is real and worth separating. Running Ollama on decent hardware does sidestep the subscription problem; I've done it. But the gap that matters isn't API cost, it's model capability at the task level. A local model handling a ten-step agent workflow with cross-session memory and reranking isn't the same conversation as a frontier model doing the same thing. The ceiling is different. That gap closes over time, but "getting better all the time" is doing a lot of work in that sentence right now.
On languages still mattering for technical fit: agreed. Nothing in the article argues otherwise. The point is that the war about belonging, about who counts as a real developer, was never actually about which language handles threads better. It was social. That argument ended. The technical constraints never went away.
The "always been there" framing is where I'd push back slightly. Yes, judgment has always mattered. But when syntax was the primary filter, you could fake competence at the interview stage and learn the real skill on the job. The new filter is harder to fake in a production context. Same skill, tighter feedback loop...
You are right, frontier models that run in a data farm will always have an advantage over local LLMs because of scale and development.
The question is: do many applications and workflows need long or complicated prompting?
Not everything needs an agent.
That war still exists, it just moved from languages to anti-AI, mixed AI use, or vibe coders.
As long as people can form groups they will exist.
I would argue that it is easier to fake competence with AI. Sure, when you are at work your token budget is going to show your AI use. But do you really want that as the biggest parameter of your competence?
The belonging war rebranding point is right. Anti-AI versus vibe coder is the same tribal mechanic in different clothes. I'd update that paragraph if I were writing it today.
On faking competence: you're identifying something real, but I think it runs in a different direction. AI makes faking output easier. A plausible-looking PR, a tutorial that reads correctly. What it doesn't make easier is faking competence under pressure, when the system breaks in production and the model's confident about the wrong fix. That's the filter I meant: not token budget as a metric, but what survives when generation gets cheap and verification becomes the actual work.
Token budget as a proxy for competence would be a bad metric. The article wasn't arguing for it as one. The point was narrower: that the financial constraint changes who gets enough reps at complex workflows to build real judgment. Not that spending more makes you better.
That is the same as using the argument "someone on Stack Overflow solved it that way."
Delegating knowledge never was, and never will be, a viable way to prove competence.
That was not my thinking when I mentioned token budget as a metric.
It is the other way around: the one who uses the fewest tokens for the greatest output is the best developer.
In trivial cases, yes, that person is ideal. But when the complexity starts piling up, token budget might not be the right measurement, because running more simulations could produce better results in the long run.
I get your argument. There was a study here where they looked at how your environment influences the steps you need to take to better yourself. They found that the better off your parents were, the further you could get in life.
This has always been a problem in society. And it is sad to see that society has begun to devalue people again, after a period when more people got better chances.
Interesting:
"That's a different skill. No bootcamp teaches it yet"
I wonder when the first bootcamps will start teaching it - seems inevitable ...
But yes, I do believe that "the new way" (AI coding) awards more "higher level" thinking over "memorizing" - at the same time, the $$$ it costs creates inequalities (but: if you're not rich, you gotta be clever, haha).
Old world versus new world, which one was/is better? I'm on the fence TBH, but your basic premise, that languages and frameworks matter a lot less now - yes, I certainly believe that!
Bootcamps will teach it when employers screen for it — that's how curriculum has always lagged.
The access gap you're pointing at is actually in the article: the Lagos developer on a $20 API limit and the San Francisco developer on $200 aren't playing the same game. But the deeper issue isn't the subscription fee. It's exposure to complex systems worth interrogating. You can't develop judgment about 10-step agent workflows without running a few.
The clever-without-resources path still exists but it requires getting to production-level problems somehow. The fence I'm less sure about: whether the new constraint is actually harder to game than the old one or just harder to see.