Keith MacKay

Originally published at tlcmentor.substack.com

We're Linear Thinkers in an Exponentially-Changing World

The more time I spend in the AI ecosystem, the more convinced I become that the pace of change isn’t just fast—it's explosive…and increasingly so. Most people still think in terms of linear change while the world is accelerating exponentially.

That mismatch is where disruption happens.


We’re linear thinkers living in an exponential world

We weren’t built to intuit compounding curves. It’s why exponential progress feels like it comes out of nowhere.

Most AI charts are logarithmic for a reason: they compress the 10X-after-10X reality into something our brains can process, turning accelerations we can't intuit into nice straight lines.
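
To make the mismatch concrete, here's a minimal sketch (plain Python, purely illustrative numbers) comparing a linear trend with a 10X-per-year one. Taking log10 of the exponential column is exactly what a logarithmic chart does:

```python
import math

# Purely illustrative: linear growth vs. 10X-per-year compounding.
for year in range(6):
    linear = 1 + 10 * year        # "10 units better each year"
    exponential = 10 ** year      # "10X better each year"
    # A log chart plots log10(exponential), which is just `year`:
    # the runaway curve becomes a tame straight line.
    print(f"year {year}: linear={linear:>3}  "
          f"exponential={exponential:>9,}  log10={math.log10(exponential):.0f}")
```

Five steps in, the linear column has crawled to 51 while the exponential one sits at 100,000; the log10 column is the nice straight line our charts show us.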

And this is why the fast eat the slow. By the time a large company finishes planning, the curve has already bent again.


I first learned this over 20 years ago—Kurzweil rewired my thinking

At a Ray Kurzweil talk at MIT in the mid-2000s, I heard him describe the “Law of Accelerating Returns.” It permanently rewired how I think about the pace of technological change.

He wasn’t the first to notice this pattern:

  • Henry Adams – law of acceleration
  • Buckminster Fuller – ephemeralization (doing more and more with less and less)
  • Moore’s Law – exponential chip complexity
  • Hans Moravec – robotics advancing at Moore’s-law speed

Kurzweil unified and expanded these ideas to encompass all technology and evolution, and even applied them to his business strategy. He claimed that he began designing products years ahead of time, calculating when the enabling technologies would become available. He developed a photocopier-sized text scanner for the blind in the 1970s and began designing a handheld version soon afterwards; he was in production within months of the enabling tech finally becoming small and performant enough for the product. Is it true? I don’t know, but the fact that it stands out as a striking example of true exponential thinking shows how rarely such thinking occurs.


Critics point to Kurzweil’s dates being off in some cases by a decade. But I would argue that in most cases, the limiting factor wasn’t technological capability—it was political will, regulation, or distribution. Exponential growth of technical capability is the rule rather than the exception. Rapid adoption of what's possible is the exception rather than the rule.

As William Gibson said: “The future is already here—it’s just not evenly distributed.”

It's never been more true. For example, Kurzweil’s predictions for 2009 (made in 1999!) included “self-driving cars”. While the cars weren’t yet available to consumers, Google had in fact logged over 200,000 miles with its self-driving technology by 2009, and Nevada was putting self-driving vehicle laws on the books. That future wasn't yet evenly distributed.


Today’s AI acceleration makes those early curves look quaint

Consider just a few signals as outlined in the Stanford AI Index Report 2025[1]:

  • Hardware costs have been dropping 30% per year
  • Energy efficiency has been improving 40% per year
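
Compounded over a typical planning horizon, those rates get dramatic. Here's a quick sketch; the two annual rates come from the report, while the horizons and the assumption that the rates hold steady are mine:

```python
# Compound the AI Index's reported annual rates over a planning horizon.
# Illustrative sketch; assumes the rates hold steady year over year.
COST_DECLINE = 0.30       # hardware costs: ~30% cheaper per year
EFFICIENCY_GAIN = 0.40    # energy efficiency: ~40% better per year

for years in (3, 5, 7):
    cost = (1 - COST_DECLINE) ** years
    efficiency = (1 + EFFICIENCY_GAIN) ** years
    print(f"{years} years: cost x{cost:.2f} ({1 / cost:.1f}X cheaper), "
          f"efficiency x{efficiency:.1f}")
```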

And then on the software and training sides of the equation:

  • Google’s Gemini 3.1 showed that models can still gain intelligence through smarter training, not just more parameters, as the recent trend emphasized (combine smarter training with more parameters, and there’s no reason for the capability curves to flatten out yet)
  • Breakthroughs (like Mixture-of-Experts, 1-bit quantization, tabular foundation models, etc.) emerge constantly

And finally, practitioners are learning new ways to get the most from the tools, and increasing efficiency while reducing costs:

  • I code with AI completely differently than I did a few months ago.
  • And I expect the same a few months from now.

Every layer of the ecosystem is growing. And doing so faster.


10X improvement per year changes everything

Most of my clients invest with a 3–7 year horizon. If AI capability continues at the ~10X per year that we’ve seen since at least GPT-2:

  • 3 years → 1,000X more capable
  • 7 years → 10,000,000X more capable

Even “only” 5 more years of this curve gives you 100,000X improvement.
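
The arithmetic is deliberately trivial, which is part of the point; a one-line sketch of the compounding:

```python
# Capability multiple after n years at 10X per year is simply 10**n.
for years in (3, 5, 7):
    print(f"{years} years -> {10 ** years:,}X more capable")
```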

Put simply: software moats have largely evaporated over the past year.

  • In a recent diligence project, I replicated ~80% of the target’s product in a single weekend (in spare time) for something like $60 of Claude Code time. When the target wrote the product two years ago, it was ground-breaking. Now it was a weekend’s part-time work.
  • Another colleague did something similar on a recent project.
  • We’re both experienced software developers, deep in the context-engineering rabbit hole. But a motivated junior engineer could very likely do this in under a week.

It's never technology, always psychology

My colleagues and I have successfully used context engineering principles and the latest generation of AI coding tools and LLMs to:

  • rebuild significant legacy systems into modern stacks with full test infrastructure and mature coding practices
  • build greenfield apps at unbelievable speed with legit UI frontend work
  • create agents, skills, commands, and developer workflow tools to further accelerate our own work with these tools
  • analyze legacy codebases and plan monolith decomposition and modularization
  • read, analyze, and fix bugs in open source projects
  • develop documentation and visualization for codebases, with no prior exposure to the code

The hardest part?

Change management. Humans can’t mentally accelerate at the same rate as the tools.

Rote tasks? AI is already there. That future just isn’t yet evenly distributed. Creative tasks? They’re next.


So what should you do?

Three things matter more than ever:

  1. Master the tools
  2. Stay flexible and experiment constantly
  3. Build moats around relationships, distribution, and trust—not code

Because the curve is still bending upward. And it’s bending faster than most people realize.


[1] Nestor Maslej, Loredana Fattorini, Raymond Perrault, Yolanda Gil, Vanessa Parli, Njenga Kariuki, Emily Capstick, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, Toby Walsh, Armin Hamrah, Lapo Santarlasci, Julia Betts Lotufo, Alexandra Rome, Andrew Shi, Sukrut Oak. “The AI Index 2025 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2025. https://doi.org/10.48550/arXiv.2504.07139
