Three years ago, if you asked me "what do you do?" I had an answer: "I'm a software developer. I write code. I fix bugs. I solve problems."
Confident. Clear. No hesitation.
Last week, a junior developer asked me the same question: "What do you actually do?"
I opened my mouth. Nothing came out. Not because I forgot. Because I genuinely didn't know anymore. "I write code," I finally said. "But AI writes most of it."
"So you're a prompt engineer?" they asked.
I laughed. Then I stopped. Because the question wasn't wrong. Three years ago, I knew who I was. Today, I'm not sure.
This isn't an anti-AI article. It's not about going back. It's about waking up one day and realizing you don't know what to call yourself anymore.
Am I still a developer? Or did I trade the craft for a faster way to ship?
What I Used to Say
A few years ago, if someone asked what I did, the answer came easily.
"I'm a developer. I build software. I solve problems with code."
That answer had weight. It described not just what I did but who I was. There was something solid in it, something that felt earned.
I'd spend weekends on side projects nobody asked for. I'd refactor the same function three times not because it needed it, but because making it elegant was its own reward. I'd debug for hours, not because it was the efficient choice, but because finding the bug felt like winning something. A small private lottery that only I knew I'd entered.
The code was mine. The struggle was mine. The satisfaction was mine.
I'd read other people's code just to see how they thought. I'd have opinions about architecture. Strong ones. I'd argue about naming conventions longer than was reasonable, because the names mattered to me, because the code mattered, because I was in it.
That person feels like a different person now.
The Shift I Didn't Notice
It didn't happen overnight. That's what makes it hard to point to.
First, I used AI for boilerplate. The tedious stuff: scaffolding, config files, the repetitive patterns I'd written a hundred times. No identity loss there. Smart move.
Then, I used it for functions I could write but didn't want to. Faster. Still felt fine.
Then, I used it for functions I should have known. This is where I should have paused. I didn't.
Then, I stopped writing code first. I started prompting first. Why struggle with something for twenty minutes when AI can produce a working version in ten seconds?
Then, I stopped evaluating the output carefully. I started skimming it. Shipping it.
Then, last week, a junior developer asked me "what do you actually do?" and I had nothing.
The shift wasn't a decision I made. It was a thousand small yeses, each one feeling like efficiency, none of them feeling like losing something, until I looked back and couldn't find the person I used to be.
That's the thing about gradual loss. You don't feel it happening. You only notice it's gone.
So... What Am I Now?
A prompt engineer writes prompts. A developer builds systems.
I still do both. I still think about architecture. I still care about edge cases. I still debug, though less often, and less deeply than I used to. I still have opinions about how things should be built.
But I also spend a significant part of my day generating, skimming, accepting, and shipping code I didn't fully think through. Code that works. Code that isn't really mine.
So where's the line?
Here's the honest answer I've landed on, after weeks of not wanting to say it out loud: I'm both. And neither. And the ratio is what actually matters.
I'm a developer when I'm designing the system: when I'm reasoning about trade-offs, when I'm catching what the AI missed, when I'm asking "is this the right solution?" instead of just "does this work?"
I'm a prompt engineer when I'm just generating and shipping. When I've outsourced not just the typing, but the thinking.
The title doesn't matter. The ratio does.
Am I spending most of my time thinking and using AI to express those thoughts? Then I'm a developer who uses AI.
Am I spending most of my time prompting and occasionally skimming? Then I'm a prompt engineer who used to be a developer.
The terrifying part is that the ratio shifts quietly. You don't notice it moving until someone asks a simple question and you don't have an answer.
What I'm Actually Doing About It
I'm not quitting AI. That's not the answer, and honestly it's not what I want. AI has made me faster at the parts of development I find least interesting, which in theory should free me up for the parts I find most interesting.
The problem is that "in theory" is doing a lot of work in that sentence.
So I'm trying small things. Not a productivity system. Not a manifesto. Small things.
One hour, no AI, every morning. The first hour of my coding day: no Copilot, no Cursor, nothing. Just me and the problem. It's slower. Sometimes frustrating. It's also mine in a way that the rest of the day often isn't.
One honest question at the end of each day. "Did I think today, or did I just generate?" No audience. No performance. Just an honest answer to myself.
Building things nobody will ever see. No metrics. No deployment. No PR approvals. Just creation for the sake of creating, which turns out to be harder than it sounds when you've spent years optimizing for output.
Remembering the junior's question. Not to feel guilty. To stay honest about the answer.
Will these things fix the identity crisis? Probably not. But they slow the drift. And right now, slowing the drift feels like enough.
The Hard Truth
Here's what I've accepted: I'll never be the developer I was before AI. That version of me is gone, not because AI took something from me, but because I gave it away. One shortcut at a time. One skipped debugging session at a time. One prompt where there used to be thinking.
But I don't think that makes me just a prompt engineer.
It means I need a new, honest answer to the question. One that accounts for what I've lost and what I've actually gained. One that doesn't pretend the craft is exactly what it used to be, but doesn't write it off either.
"Developer who uses AI" feels close.
"Developer who still cares about the difference" feels closer.
One Question Before You Go
What do you call yourself now? Developer, prompt engineer, something in between, something you're still figuring out?
And more importantly: does the title actually matter, or is it only the work that does?
I've been thinking about this for weeks and I still don't have a clean answer. I'd genuinely like to hear yours.
I'll go first in the comments.
Your turn.
The junior developer conversation is real. I used AI to help structure my thoughts for this, which is either ironic or exactly the point.
Top comments (111)
I still consider myself a developer. I have more time to work on architectural decisions; using AI pretty much frees me up from low-level programming. But the real big help is not code generation, but the fact that AI never gets tired when I talk to it about my ideas. Not even crazy ideas. That's how I created a mordor project file format. In my daily work, AI is a big help when you need to understand a mysterious DOM/CSS problem on a runtime web application instance, where each partner is free to create additional CSS to make the web application appear under their own brand. Having partners with different skill levels in different countries causes a lot of headaches; Copilot helps speed up the process in this case.
On the other hand, I'm somewhere between a programmer and a graphic designer. I drew a lot digitally when I was young, and another big way AI helps me is content creation. I think AI video technology has advanced this year, so I'll be revising my sci-fi stories at some point.
One last new thing I've come across at this age is that I've been able to write down my cognitive map as code.
So this will lead me to write a book about that.
Finally, I keep being a Vibe Archeologist.
The "prompt" is not the thing I'm ever engineering. It's just a vessel for getting my ideas out, and hardly the most important part.
My own problem-solving, big picture thinking, and domain expertise are the important parts, not the prompts.
I think AI is also going to expose the consequences of poor architecture over time. Right now, much of the conversation around AI-assisted development focuses on speed, rapid iteration, and getting features shipped as quickly as possible. That approach can absolutely produce short-term results, but it can also encourage systems that are loosely structured, difficult to maintain, and heavily dependent on constant human correction.
As AI-generated code becomes more common, the quality of the underlying architecture will matter even more, not less. Teams with clear contracts, modular systems, predictable patterns, and well-defined boundaries will be able to scale AI usage far more effectively than teams relying on fragmented or inconsistent codebases. Otherwise, development risks turning into an endless cycle of generating, patching, debugging, and refactoring unstable systems.
In that sense, AI may become less of a replacement for good engineering and more of a stress test for it. Poor architecture can be hidden for a while when projects are small or teams are manually compensating for technical debt, but AI accelerates output so aggressively that weak foundations become visible much faster. Strong architecture, standards, and intentional system design will likely become one of the biggest differentiators between software that simply ships quickly and software that remains stable, scalable, and maintainable long term.
I love AI-generated text. "even more, not less". Like, yeah, I know what more means, little llm. So hilarious.
Exactly! Since the AI boom, I've been forced to think larger and in more detail about all the stuff I wasn't used to...
Peter, this is one of the most unique comments I've ever received.
A mordor project file format? I need to know more about this.
And "Vibe Archeologist"? That's going on a T-shirt someday.
You're not just a developer or a prompt engineer. You're something else entirely: someone who uses AI to explore every direction: code, design, cognitive maps, sci-fi stories.
The thing that struck me most: "AI never gets tired when I talk to it about my ideas. Not even crazy ideas."
That's the underrated superpower. Not speed. Not efficiency. Patience. AI doesn't roll its eyes when you're excited about something no one else understands.
Keep being a Vibe Archeologist. The world needs more of that.
Loved the honesty here, but I think the identity crisis has a simple answer: are you in control of the code?
Not "did you write it," but can you own it, defend it, and debug it when it breaks at 2am?
Developers have always abstracted the craft:
Machine code → Assembly → High-level languages → Frameworks & Libraries → AI
Each step felt like "losing something." It never was. The craft just moved up a level. AI is no different.
The real danger you're describing isn't AI, it's outsourcing your thinking, not just your typing.
That's a discipline gap, not a tool problem.
Simple rule: are you using AI to express your thinking, or to replace it?
If you're in control, you're still a developer. The tool doesn't change that.
Disclaimer: This comment was generated with the help of Claude, but the thoughts, direction, and intent are fully mine. I knew exactly what I wanted to say. AI just helped me say it better. Which, ironically, is exactly the point.
Afaq, I'll be honest: I almost didn't publish this reply because your comment hit something uncomfortable.
You're right about the craft moving up a level. Machine code → Assembly → High-level languages → AI. The pattern is clear. But knowing the pattern doesn't make the identity crisis less real.
Here's the thing that bothers me, and I think you know this too:
Everyone is using AI for writing now. Everyone. But almost no one admits it.
Look at any comment section. Look at any LinkedIn post. Everyone's voice is starting to sound the same. Polished. Structured. Bullet-pointed. We're all using the same tools, the same models, the same tone.
And yet no one talks about it. We just pretend these are our raw, unfiltered thoughts.
Your disclosure was refreshing. "This comment was generated with the help of Claude, but the thoughts are mine."
That's the honesty most people skip. Not because they're hiding something. Because admitting you used AI feels like admitting you couldn't do it yourself.
So here's my honest answer to your question, "are you in control of the code?"
Not always. But I'm trying to be.
Some days I'm in control. Some days the AI is. The difference is whether I can honestly say "I know why this works," not just "it works."
Thank you for the push. And thank you for the honesty most people avoid.
Glad you liked the disclosure... I sometimes don't understand myself why developers don't admit to using AI.
It's not that nobody talks about it. It's not unnoticeable either. The written-by-Claude / written-by-ChatGPT style is one of many aspects of the current LLM AI hype that keeps turning me away from DEV, one of the few social media communities that I used to enjoy before.
Ingo, I hear you. And honestly, I don't disagree.
The written-by-Claude style is everywhere now. Same tone. Same structure. Same rhythm. It's noticeable. And for someone who loved DEV for its authentic voices, that must be frustrating.
I want to be honest with you and with anyone else reading this who feels the same.
I use AI for about 20% of my writing. Structuring thoughts, finding better ways to say something, cleaning up messy paragraphs. The ideas are mine. The experiences are mine. The junior developer conversation really happened.
I also always disclose when I use AI. Not because I have to. Because I want to.
What I don't do: pretend my AI-polished draft is my raw, unfiltered voice.
And here's the thing: most people don't disclose. They paste, they publish, they move on. You'd never know. At least with me, you know exactly what you're getting.
But here's where I think we might align.
I recently started a newsletter. It's just me. No AI. No structure help. No polish. Raw thoughts, written the way they come out of my head. Imperfect. Messy. Human.
If what you miss is the unpolished, unfiltered voice, you might find it there.
No pressure. No pitch for DEV. Just an invitation to read something written entirely by hand, by someone who still cares about the difference.
Either way, thank you for the honesty. People like you are why DEV used to be special. And people like you are why it can still get there again.
When you outsource not just the typing but ALSO the thinking (and the checking), then you're "vibe coding" - you're doing what an 'end user' or 'business user' does, you're not a developer anymore ...
It's the "low code/no code" thing from before, but using a different technique.
Anything slightly more complex or 'critical' however does need the thinking and the checking, requires going deeper - and then you're a "developer" again :-)
Leob, you've drawn the line that the article was circling but couldn't quite land on.
"When you outsource the thinking and the checking, you're doing what an end user does."
That's it. That's the threshold. Not the tool. Not the output. The process.
A business user describes what they want. A developer builds it. If you're just describing and shipping without understanding, you've switched seats without noticing.
"Anything critical requires going deeper, and then you're a developer again."
That's the hope I needed to hear. The title isn't permanent. It's contextual. The same person can be a prompt engineer in the morning and a developer in the afternoon depending on what the task requires and how much of themselves they bring to it.
"Low code/no code with a different technique" is a fair point. But maybe the difference is that vibe coding feels like real coding in a way drag-and-drop never did. The output looks real. That's what makes it dangerous.
Thanks for the clarity, as always.
Nice! Yeah you're just switching seats (or hats?) - being a "business user" (vibe coding - being an "AI passenger"), or being a "developer" (using AI, but being in the "driver's seat") ...
P.S. I also like the metaphor of being an "AI passenger" versus an "AI director/architect" :-)
"AI passenger" vs "AI director/architect": that's even better than switching hats.
One is along for the ride, watching the scenery go by. The other is holding the map, deciding which turns to take. Same vehicle. Completely different relationship to the journey.
I think that's what I was reaching for with "the ratio matters." Not just how much you use AI, but what seat you're sitting in while you use it.
Thanks for evolving the metaphor, Leob. This thread is becoming the real article.
Yep and both are (in principle) legitimate ways of using AI - but for different purposes, and/or with different outcomes - it's fine, as long as we're aware!
Exactly!
I explain it to myself like this:
If before there was a horse and a plow and you were a farmer, and now there are combines and crop-dusting planes - who are you? You're still the same farmer, just now you can do more. You still need to understand how plants grow, that they need watering, and that you still have to deal with BUGS, and so on.
This is such a wise analogy, thank you.
Same farmer. Better tools. Still knows the soil. Still watches the weather. Still deals with bugs.
"Same farmer, just now you can do more": that's the reframe I needed.
Thank you for this.
For me it plays out on a much bigger scale.
To continue with the farming analogy: is it good that we are going from many small farmers to a few big farmers?
I feel the same about the code; now we have more time to generate and create more. Would that mean that our brains are more scattered?
We're starting to not know the code as we used to, same as Harsh was talking about. Before, we OWNED it, we cared. Now... we have an AI generating it.
The scope is getting bigger for each individual developer. Is that a good thing? I don't know, I never thought about it until I read your comment :D
Nice analogy!
As far as I can tell, AI will soon become an autonomous robot capable of handling every part of farming on its own... and when that day comes (which will be very soon), will we need the farmer then?
Everything flows, everything changes - some things disappear, others emerge.
Even the sun will burn out one day, but for now, we enjoy the good weather
I don't think the identity changed as much as the workflow did.
Before, we proved we understood something by writing it from scratch. Now we prove it by reviewing, shaping, and catching what AI gets wrong.
The risky part isn't using AI, it's skipping that second step. That's where the "prompt engineer vs developer" line starts to show up for me.
Shubhra, this is such a clear, balanced take. Thank you.
"The risky part isn't using AI, it's skipping that second step."
That's the whole thing in one sentence. The workflow changed. The job didn't disappear; it moved up the stack. From writing to reviewing. From generating to shaping. From hoping it works to catching what the model couldn't see.
The "prompt engineer vs developer" line appears exactly at the moment you stop doing the second step.
"Before, we proved understanding by writing from scratch. Now we prove it by reviewing, shaping, and catching mistakes."
Same proof. Different method. Same person. Different workflow.
This is the most useful reframe in the whole thread. Thank you.
Exactly, Harsh. "The job moved up the stack" is a better way to put it.
From the outside it can look like "just prompting", but the real work is in the judgment after that. Catching what the model missed, deciding what to trust, shaping it into something that actually holds up.
People who skip that part usually don't notice immediately. It shows up later, when something subtle breaks and no one knows why.
"It shows up later, when something subtle breaks": exactly the part that doesn't make it into the demo video.
Thanks for adding this. You've made the thread richer.
True, that's the part people don't see in demos. Everything looks fine until edge cases start showing up.
This was an amazing read! Thank you!
What I really take from this is: "First hour, it's just me and VS Code, nothing else."
I talked a lot with my team about wanting them to avoid using AI as much as possible when they are writing the code they were hired to write; in this case, that's Web, API, and Mobile automation. Use AI to guide you and answer questions about problems you meet. Have it as a mentor, not as someone doing what we hired you to do :)
If you are writing a helper tool with a UI and you really don't care about building it and aren't interested in the tech, go ahead, use AI!
Dennis, this is such a clear, actionable framework. Thank you.
"Use AI as a mentor, not as someone doing what we hired you to do."
That's the line. A mentor guides. A mentor explains. A mentor doesn't take the keyboard and do the work for you. The line between helping and replacing is thin, and you've drawn it well.
The distinction you're making is similar to what I was trying to get at with "the ratio." But you've turned it into a rule teams can actually follow.
"Helper tool with a UI you don't care about? Go ahead, use AI."
Yes. This is the nuance most conversations miss. Not all coding is the same. Some code is craft. Some code is chore. AI for chore is smart. AI for craft is complicated.
I'm taking this back to my team. Thank you.
Glad you liked it :)
It is like you said as well: it was not overnight that AI took over more and more of what you do and what you love to do. It is sneaky and always asking: "You can do this, would you like me to implement it?"
I just set up a system where Claude Code is my full-time developer, for $200 a month. Of course, I still have to review his work and point out any mistakes he's made, but I'd rather adapt than keep working as a solo developer.
Boon, this is the pragmatic take the article didn't have. No crisis. Just a working arrangement.
"Claude Code is my full-time developer, for $200 a month. Of course, I still have to review his work."
That's the key line. You're not pretending Claude is autonomous. You're not pretending you're not involved. You've just shifted from doer to supervisor, and you're honest about both roles.
"I'd rather adapt than keep working as a solo developer."
Not "this is good" or "this is bad." Just "this is the trade I'm making."
The article asked "Am I a developer or a prompt engineer?" You're answering: "I'm whoever gets the job done."
That's not confusion. That's clarity in a different key.
Thank you for the grounded take.
Your interpretation of my comment is very interesting and very well written; you've hit the nail on the head when it comes to my true feelings!
That means a lot, glad I got it right. Thanks for the honest take, Boon.
A vibe engineer here. It's gotta have a name.
"Vibe engineer": adding that to the LinkedIn headline.
Thanks for reading!
framing's wrong though - 'developer' was never about typing syntax. it was always about debugging prod at 2am and deciding when the simple version won't scale. AI just changes the input method.
Mykola, you're not wrong. And honestly, this is the healthy way to see it.
"Developer was never about typing syntax. It was about debugging prod at 2am."
That's the job. Hasn't changed.
"AI just changes the input method."
If that's true, then why does it feel like more than that? Why does the identity crisis exist at all?
Maybe because typing wasn't just input. For many of us, it was evidence of understanding. Not the job. Just proof to ourselves.
You're right about the definition. But the feeling isn't wrong either.
Thanks for the clarity; it's a useful counterweight.
because debugging prod at 2am is changing too. AI can trace errors, surface root cause, suggest the fix, before you've even reproduced it locally. the loop that defined the identity is compressing. it's not about syntax, it's about whether your judgment still sits at the center.
Mykola, "whether your judgment still sits at the center": that's the real question now.
Not "can you fix it?" but "are you still the one deciding what good looks like?"
Thanks for taking the conversation deeper. This thread has been genuinely valuable.
yes, and staying at the center takes deliberate effort now. AI gets the symptom right most of the time. I decide whether the fix actually fits the system we built, not just the system as written. that gap is where judgment still lives. glad this thread went somewhere real.
Ironically, AI made me appreciate actual engineering more.
Generating code is cheap now. Understanding consequences is not.
The people who can reason about architecture, tradeoffs, failure modes, UX friction, operational complexity - those people suddenly became much more important, not less.
Kirill, this is the counterbalance the conversation needed.
"Generating code is cheap now. Understanding consequences is not."
That's the whole shift in two sentences. The commodity dropped in price. The rare skill didn't. AI didn't make engineering less valuable; it made thinking more valuable, because thinking is now the only thing the AI can't do for you.
"The people who can reason about architecture, tradeoffs, failure modes: those people became more important, not less."
This is the hopeful version of the article I could have written. Not "what did we lose" but "what's now worth more." The floor dropped. The ceiling lifted.
I wrote about the identity crisis. You wrote about the opportunity. Both are true. Both need to be said.
Thank you for this, genuinely.