DEV Community

AI Can Write the Code. It Still Forgets the Decisions That Matter.

Mary Olowu on May 14, 2026

A lot of AI coding advice quietly assumes the same thing: if the output is bad, you probably need a better model, a better prompt, or more tooling...
GnomeMan4201

This is the exact mindset I have about AI. From this post alone, you can tell you are very good at turning what I consider a complex concept into a clear narrative. That also suggests you have a lot of hands-on, "in the trenches" experience. I don't really know what I'm getting at, but I just want to say this is a legit post to read.

Mary Olowu

Really appreciate that. A lot of this came from hitting the same failure mode over and over on real project work, so I wanted to explain it in plain terms instead of treating it like magic. Glad it landed for you.

xytras
Comment deleted
Mary Olowu

Yes, this is exactly the failure mode I keep seeing. I really like your framing of decisions as records with provenance plus explicit supersedes links, because that turns memory into something queryable instead of something buried in old chat logs. Even a small SQLite decisions table gets a team surprisingly far.
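To make that concrete, here is a minimal sketch of such a decisions table using Python's built-in sqlite3. The schema (table name, columns like `source_url` and `supersedes`) is purely illustrative, not a prescribed format:

```python
import sqlite3

# Illustrative schema: each decision is a record with provenance
# (who made it, where it was discussed) and an optional
# "supersedes" link to the decision it replaces.
conn = sqlite3.connect(":memory:")  # use a real file in practice
conn.execute("""
    CREATE TABLE decisions (
        id          INTEGER PRIMARY KEY,
        title       TEXT NOT NULL,
        rationale   TEXT NOT NULL,
        author      TEXT NOT NULL,
        source_url  TEXT,                               -- provenance: PR, issue, doc
        supersedes  INTEGER REFERENCES decisions(id),   -- link to replaced decision
        created_at  TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

conn.execute(
    "INSERT INTO decisions (title, rationale, author) VALUES (?, ?, ?)",
    ("Use Postgres", "Team already runs it in prod", "mary"),
)
conn.execute(
    "INSERT INTO decisions (title, rationale, author, supersedes) VALUES (?, ?, ?, ?)",
    ("Use SQLite for the CLI tool", "No server needed locally", "mary", 1),
)
conn.commit()

# Query only the decisions that are still current (not superseded).
current = conn.execute("""
    SELECT d.title FROM decisions d
    WHERE NOT EXISTS (
        SELECT 1 FROM decisions s WHERE s.supersedes = d.id
    )
""").fetchall()
print(current)
```

The `supersedes` column is what makes this queryable history rather than a changelog: you can always ask "what is current?" without rereading old threads.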

Mykola Kondratiuk

this is the session boundary problem, and better prompts don't fix it. treating decisions as artifacts you explicitly pass in each time actually helps: ADRs as a context prepend, spec files. the model isn't forgetting - it was never told.

Mary Olowu

Exactly. “The model isn’t forgetting, it was never told” is probably the cleanest way to put it. ADRs, spec files, and other durable artifacts are what make context portable across sessions instead of trapping it in one chat window.
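The "context prepend" idea above can be sketched in a few lines of Python. The `docs/adr` directory is just a common ADR convention I'm assuming here, not anything from the post:

```python
from pathlib import Path

def build_context(adr_dir: str = "docs/adr") -> str:
    """Concatenate ADR markdown files into a block that gets
    prepended to the first prompt of every session."""
    sections = []
    for path in sorted(Path(adr_dir).glob("*.md")):
        # One section per ADR, labeled with its filename for provenance.
        sections.append(f"## {path.name}\n{path.read_text()}")
    return "ARCHITECTURE DECISIONS (read before coding):\n\n" + "\n\n".join(sections)

# Hypothetical usage: the model is told the decisions up front.
# prompt = build_context() + "\n\nTASK: " + user_request
```

Nothing clever happening, which is the point: the durable artifact does the remembering, and the session just gets handed it.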

Mike Talbot ⭐

It's entirely a tooling problem. If you don't have the proper memory or the proper insights, it's like bringing in a new developer each time and asking them to take the next step.

Mary Olowu

That “bringing in a new developer each time” analogy is spot on. The painful part is not that the model can’t code, it’s that it doesn’t inherit the expensive decisions unless the tooling gives it durable state. That’s the gap I was trying to point at.

Andy Stewart

This hits the nail on the head. No matter how high the AI's IQ, without deterministic state management, it’s just "stochastic mediocrity." The solution isn't blindly stacking compute, but building a persistent context that outlives the chat window. An Agent without memory is just a code monkey; one that preserves engineering decisions is a true digital partner.

Mary Olowu

Persistent context that outlives the chat window is the key. Once engineering decisions are preserved somewhere stable, the agent stops feeling like a fast stateless assistant and starts feeling much closer to a real collaborator.

Elmar Chavez

AI context is expensive. Memory is expensive. That's why they enforce these limits. AI companies are bleeding money left and right, and from what I've heard, revenue isn't catching up.

Mary Olowu

I think cost is definitely part of why context limits exist. The part I keep coming back to, though, is that even with a bigger window, the workflow still breaks if the important decisions are not externalized somewhere durable. More tokens help, but better state management helps more.