AI vs Non-AI: Building the Same Project Twice

Fernando Fornieles on May 06, 2026

When I was implementing my weather station system, I asked myself: what if I built it again but this time using AI? The idea I had in mind was t...
Mykola Kondratiuk

second build always goes faster - you’ve already solved the domain problems once. the interesting comparison would be two teams starting cold, not one dev running two passes. AI gets credit for your own prior learning.

Fernando Fornieles

You're completely right. I mention in the article that the time comparison may not be as impressive as it seems, for exactly the reason you're stating.

This is a personal project I've implemented in my free time. I don't have a team, unless my son learns to program :-)

Mykola Kondratiuk

Solo project changes the calculus completely — no team history to control for. Future pair programmer incoming.

Vic Chen

Really enjoyed the honesty in this comparison. The part that stood out to me was your point that AI looked much faster partly because the problem was already well understood, while the non-AI version included the real learning curve and refactoring cost. Also interesting that Sonar showed lower technical debt for the AI version but much higher cognitive complexity — that feels very true in practice. As an AI founder, I think this is the right framing: AI compresses iteration time, but it doesn’t remove the need for judgment.

Fernando Fornieles

That is the most important thing for me when developing with AI: avoiding the loss of judgment that business pressure for velocity can provoke.

Vic Chen

Exactly — velocity pressure is its own form of technical debt for judgment. When you're building under time pressure, you stop asking "is this the right abstraction?" and start asking "does it work right now?"

I've noticed AI amplifies whatever decision-making mode you're already in. If you're in slow/deliberate mode, it helps you build more. If you're in fast/deadline mode, it helps you skip the uncomfortable questions faster.

The discipline of stopping to think — "why did the AI suggest this?" — is what separates good AI-assisted dev from accumulated invisible debt.

Fernando Fornieles

100% agree!

Vic Chen

The meta-discipline of pausing to ask that question gets harder under pressure, precisely when it matters most. One pattern I've found useful: treat it like a code review rule — never merge AI-generated code without being able to explain why it works, not just that it works. The explanation itself surfaces the judgment gaps. The projects where AI hurt the most were the ones where we shipped fast and never had that reckoning. The debt didn't show up in the code — it showed up in the team's mental model of the system.

Fernando Fornieles

As someone once said: "With great power comes great responsibility" 😉

david duymelinck

Great breakdown of your experience! From the rewrite approach section I knew it was going to be fair and balanced.

Fernando Fornieles

Thanks for your kind comment. I'm glad you liked it! 😊

Jack

This is one of the most realistic comparisons of AI vs non-AI development I’ve read. The time-saving benefits of AI are impressive, but your experience clearly shows that speed doesn’t always mean better quality or easier debugging. I especially agree with the point that vague specifications can create more confusion with AI than with human developers. AI works best as a coding assistant, not a replacement for problem-solving and engineering judgment. The balance between human expertise and AI tools will probably define the future of software development.

Fernando Fornieles

Exactly, knowing when to use AI and when not to is key.

Regarding specifications, I remember reading hundreds of pages of software requirement specification documents back in the early 2000s. They aimed to be precise enough for developers to translate them into code without problems, but in reality those documents always contained gaps, and sometimes contradictory statements depending on which page you were reading.

Thanks for your comment Jack!

Leonard Schaden

This is one of the more useful AI coding comparisons because it separates “time saved” from “confidence in the result.” The part about AI being great for bounded tasks but stressful for whole-project ownership feels very accurate. The hidden cost is not just code review, it is rebuilding enough understanding to trust what was generated.

Fernando Fornieles

Indeed! I'm more concerned about the hidden cost, in the form of stress or cognitive debt, than about the outcome of the AI itself.

Thanks Leonard!

Pururva Agarwal

Your token limit struggles on Gemini/OpenRouter are relatable. Generic LLMs lack deep context for specialized tasks, hitting a wall beyond general knowledge.

This is crucial for our drug-interaction graph.

Fernando Fornieles

You're right, maybe I should use Claude or Codex, but OpenCode worked quite well.

I don't know how it compares with the others but, at least for learning and for what I needed in this personal project, it was fine for me.

Anupam Sekhar C

Really appreciated how honest and balanced this was. A lot of AI discussions focus only on speed, but you highlighted something equally important, the mental overhead of trusting and maintaining generated code. The part about AI struggling with authentication flows and repeatedly missing the actual root cause felt very real.

Also loved your point that writing extremely precise specs can sometimes feel harder than just coding the thing yourself 😅 Great read.

Fernando Fornieles

Cognitive debt, frustration, accountability... these hidden costs that are hard to measure, but that you can see and feel, are the most concerning things about using AI from my point of view.

Thanks Anupam!

Theo Valmis

These side-by-side experiments are the most useful AI content right now, mostly because they expose the second-order effects nobody quantifies. The AI build often finishes first but accumulates implicit decisions the author can't reconstruct three weeks later, and that's the cost line that bites on the second iteration, not the first. The race-to-MVP comparison flatters AI, the maintenance-six-months-in comparison usually doesn't.

Fernando Fornieles

Technical and cognitive debt, reliability, accountability... downsides that most people who blindly follow AI don't want to take into account, because... velocity!

99Tools

Really enjoyed the honest comparison. AI definitely helps speed things up, but your examples show why real dev experience still matters.

Fernando Fornieles

Thanks!