DEV Community

Modularity - An Overrated Anti-Pattern? The Power of the Monolithic Script in the Age of AI.

EmberNoGlow on February 21, 2026

Disclaimer: This post isn't a manifesto against modularity, but rather a description of a temporary approach for solo developers actively using LL...
david duymelinck

This feels bad on so many levels.

First: user/developer experience. Regions won't help you when you don't know which regions exist, and generic regions are more visual noise than anything else.
A long script adds more distraction than a short one; that is why people recommend keeping classes and functions small.

Second: function/class visibility. When all the code is in one file, how do you handle classes and functions that are only used by specific classes? How do you handle functions and classes that could have had generic names in a module? You don't want 80-plus-character names in your code, right?

Third: useless AI information. The goal of context engineering is to give the LLM just enough information to generate the desired output, no more, no less. By feeding the LLM the whole application every time, you are filling the context window with superfluous information.
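To make the context-window point concrete, here is a rough back-of-the-envelope sketch. The file sizes, the ~4-characters-per-token rule of thumb, and the 128k window are all illustrative assumptions, not figures from the original post:

```python
# Rough sketch: how much of a context window a whole project consumes
# vs. a single module. Assumes ~4 characters per token (a common rule
# of thumb) and a 128k-token window; file sizes are made up.

CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 128_000  # tokens

project_files = {          # hypothetical project, sizes in characters
    "app.py": 40_000,
    "models.py": 25_000,
    "utils.py": 15_000,
    "api.py": 30_000,
}

def tokens(chars):
    """Crude token estimate from a character count."""
    return chars // CHARS_PER_TOKEN

whole_project = tokens(sum(project_files.values()))
one_module = tokens(project_files["utils.py"])

print(f"whole project: ~{whole_project} tokens "
      f"({100 * whole_project / CONTEXT_WINDOW:.1f}% of window)")
print(f"one module:    ~{one_module} tokens "
      f"({100 * one_module / CONTEXT_WINDOW:.1f}% of window)")
```

Even on these toy numbers, pasting the whole project multiplies the token cost several times over compared with sending only the relevant module.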

I looked at your project, and you don't even follow your own advice. Why do you want to convince other people it is a good idea?

leob

Yeah, makes zero sense to me what the author advocates - I've been breaking code into multiple files (with self-describing names) since I took my first baby steps in programming - so much easier to navigate your code, it's a no-brainer ...

EmberNoGlow

I agree with your position: in traditional development and teamwork, modularity is the right solution.

My point, however, concerns the development workflow with LLMs. When we use AI as the primary engine for generating complex, interconnected logic, it's harder for the model to effectively process a dependency graph scattered across 10 files than a single, coherent piece of code where all local constants and functions are immediately visible.

This isn't a rejection of good design, but a pragmatic solution for speeding up AI iteration. In the release (as I mentioned), I'll return to modularity, but for the generation phase, a monolith proved more effective.

leob

"it's more difficult for it to effectively process a dependency graph scattered across 10 files" - but is that really the case? I believe that modern tools like Copilot/Cursor/Claude are pretty good at working with projects containing dozens of files - but, if you say that for your particular use case a 'monolith' worked better, then I believe you!

EmberNoGlow

Thank you! You're right, modern tools have become much better at working with multi-file projects.

However, the problem I encountered was precisely that the LLM forgot about side effects between files. For example, Copilot repeatedly forgot to call a function that updated variables in another script before calling the function in the current file. This led to errors I had to fix manually. When I sent the patch back, it acknowledged the error, but in the next iteration, to my surprise, it fell into the same trap. I don't know whether the problem was my laziness in explaining the error in more detail, or whether it somehow misread the prompt.
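The cross-file side effect described here might look something like the following minimal sketch (the module split, names, and logic are hypothetical, inlined into one listing so it is self-contained):

```python
# Hypothetical two-module layout illustrating the failure mode:
# "state.py" maintains a module-level cache with a refresh() side
# effect, and "main.py" must call refresh() before reading from it.
# The step an assistant kept dropping is marked below.

# --- state.py (inlined for a self-contained sketch) ---
_cache = {}

def refresh(source):
    """Side effect: repopulates the shared cache from `source`."""
    _cache.clear()
    _cache.update(source)

def lookup(key):
    return _cache.get(key)

# --- main.py ---
def process(records):
    # The call the assistant repeatedly omitted; without it,
    # lookup() below returns None and the code crashes.
    refresh({r["id"]: r for r in records})
    return [lookup(r["id"])["value"] for r in records]

print(process([{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]))
```

With both "modules" in one file, the `refresh()` dependency sits in the same context window as its caller, which is exactly the visibility the monolith approach trades on.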

leob

Makes sense - you just found a practical solution to a real, practical problem. "In theory" it should have worked with multiple files; in reality it didn't ... I'm a pragmatist myself - when confronted with an issue and I see a quick workaround, I'm inclined to go with it, unless it would violate an important piece of the architecture - theory is for the academics!

Harsh

Finally someone said it! 🙌

I've been thinking the same thing while working with AI. The moment I break code into 20 different files, Claude and GPT start hallucinating imports, forgetting functions, and losing context.

But here's the catch: 1000-line scripts are great for prototyping. For production? Nightmare.

My current workflow: prototype in one massive script → get it working → THEN refactor with AI's help. Best of both worlds.

What do you think - refactor at the end or never refactor? 🤔

Harsh

Finally someone said it! 🙌

I've been doing the same thing lately. When I'm prototyping with Cursor, keeping everything in one monolithic script actually helps the AI understand the full context. The moment I split things into modules, the AI starts hallucinating imports and forgetting function signatures.

But yeah, once the prototype is stable, refactoring into modules becomes this satisfying "paying back technical debt" session.

Question: How do you decide when it's time to break the monolith?

EmberNoGlow

Thank you! You can break up the monolith once you're generally confident the project has achieved most of its goals. I always create a short roadmap, and once most of the features are complete, you can split the project up, provided you're confident your API won't undergo any major changes.

Hala Kabir

This is a really thoughtful post — and I appreciate that you clearly framed it as a context-specific approach, not a manifesto against modularity. That nuance matters.

I especially agree with this line:

“AI is not a project manager. It is an executor.”

That’s such an important distinction in the LLM era.

For solo developers who iterate fast and heavily rely on AI for refactoring or generation, a monolithic script does reduce cognitive overhead. One file = one context window. No dependency graph gymnastics. No “wait, which file defines this?” moments. From an AI-collaboration standpoint, that’s a very real productivity boost.

That said, I see the monolith not as an anti-pattern — but as a phase.

In rapid prototyping:

- Speed > architecture purity
- Context completeness > separation of concerns
- Iteration > long-term scalability

But once a project stabilizes, modularity becomes less about structural aesthetics and more about:

- Testability
- Replaceability of components
- Onboarding clarity
- Long-term maintainability

I also liked your point about “illusion of structure.” I’ve seen projects where splitting into 10 folders created more mental overhead than clarity. Structure without boundaries is just fragmentation.

One thing I’d add:
Instead of thinking monolith vs modular, maybe we think in terms of cognitive load management — for both humans and AI. The real question becomes:

At this stage of the project, what structure minimizes friction?

For solo + AI-driven experimentation, a well-organized monolithic script with logical sections (regions, clear grouping, strong naming) can absolutely be the most pragmatic choice.

EmberNoGlow

Thanks!