
koichim2 for Google Developer Experts


[Google Cloud Next '26 Recap #3] Anthropic's Vision for "After Software"

This is the third post in my Google Cloud Next '26 (Las Vegas) recap series.

You can find the previous posts here πŸ‘‡

In Parts 1 and 2, I covered my experiences on the EXPO floor. This time, I'd like to switch gears and share one of the sessions I attended at Next '26 β€” specifically, an especially memorable session by Anthropic.


Session Overview

The session I attended was in the Spotlights category.

  • Title: After software: Anthropic's vision for the next era of enterprise AI
  • Speaker: Eric Burns (Anthropic)
  • Session ID: SPTL021

What is a "Spotlight" session?

According to Next's session type definitions, Spotlights are described as follows:

Take part in these dedicated keynote-style sessions featuring live demos, new tech, and customer success stories, delivered by Google leaders and partners.

In other words, Spotlights are positioned as special, keynote-style sessions packed with live demos, the latest technology, and customer stories.

The session video is available on the Google Cloud Next page.

The session's premise

Quoting from the YouTube video description, the session's premise was as follows:

Enterprise software will change more in the next two years than it has in the past twenty. The model is being inverted. Top-down planning, where organizations define and prioritize what to build, is shifting to bottom-up execution, where AI agents rapidly solve problems as teams discover them. Anthropic shares what's happening at the frontier. Teams are already deploying agents on Vertex AI that autonomously define, build, and iterate. We make this vision concrete, grounded in real examples and practical frameworks.

A pretty bold message, I'd say. From here on, I'll walk through the points that especially stood out to me during the session.


Highlights from the Session

Anthropic's presence in the coding market

The first thing that grabbed my attention was the figure that Anthropic holds 54% of the 2025 coding market share. The vague sense of "Anthropic is well-supported by developers" was reinforced by an actual number.

On top of that, the evolution of how AI is used at work was framed with these keywords:

  • 2023: Chat
  • 2025: Code
  • 2026: Cowork

And the phrase that symbolized where things are heading was "No terminal". Claude's benefits won't stay confined to engineers; they'll extend into non-engineering domains as well. That possibility came through clearly.

The evolution of Claude in a single video

For the evolution of Claude itself, the session played this YouTube video:

πŸŽ₯ https://www.youtube.com/watch?v=PnX30ZXxKco

The video gives the same task β€” "clone claude.ai" β€” to each generation of Claude models and compares the results. The progress of Claude across generations was packed into a short clip, and you could clearly see the quality of the output rising with each version.


"The Intelligence Explosion" (AI's growth curve)

The next section was "The Intelligence Explosion".

Under the heading "The artificial intelligence exponential", the session walked through AI's growth using emblematic LLM products from various companies across each year.

When it comes to interpreting AI's growth curve, on one hand there's the view of:

  • The Pure Exponential

while on the other hand, there are skeptical views that frame it as:

  • The Sigmoid (an S-curve)
  • The Hard Ceiling

Which curve AI will actually trace from here is genuinely thought-provoking.


"The Dawn of Agents" (What is an Agent?)

The next section, "The Dawn of Agents", finally dove into the discussion of Agents.

The definition of Agent introduced here was particularly memorable for me:

An Agent is a Large Language Model that uses tools in a loop to pursue a goal

Simple, but it captures the essence of an Agent neatly β€” a phrasing I find genuinely useful as a mental anchor.
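That one-line definition maps almost directly onto code. Here's a minimal toy sketch in Python of "a model that uses tools in a loop to pursue a goal". The `stub_model` and `calculator` tool are my own stand-ins for a real LLM call and real tools; only the loop structure (act, observe, feed back, stop) is the point:

```python
# Toy sketch of the Agent definition: an LLM using tools in a loop.
# stub_model is a hypothetical stand-in for a real model call.

def stub_model(goal, history):
    """Pretend LLM: requests a tool call until the answer appears in history."""
    if any("42" in obs for obs in history):
        return {"type": "final", "answer": "The result is 42."}
    return {"type": "tool", "name": "calculator", "args": "6 * 7"}

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool: evaluate arithmetic
}

def run_agent(goal, model, tools, max_steps=10):
    history = []
    for _ in range(max_steps):
        action = model(goal, history)
        if action["type"] == "final":       # the model decides it is done
            return action["answer"]
        observation = tools[action["name"]](action["args"])
        history.append(observation)         # feed the tool output back in
    return None                             # gave up within the step budget

print(run_agent("compute 6 * 7", stub_model, TOOLS))  # -> The result is 42.
```

Everything an agent framework adds (planning, memory, multi-tool routing) is elaboration on this same loop.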


"The Solutions Machine" (Agent-driven problem solving in practice)

The next section, "The Solutions Machine", presented concrete real-world examples of Agents in action.

Example 1: Building a C compiler with multiple Agents in parallel

A case study in which multiple Agents are run in parallel to build a C compiler. You can read the details here:

πŸ”— https://www.anthropic.com/engineering/building-c-compiler

Example 2: Phased development by 3 role-specialized AI Agents

A case where three role-specialized AI Agents collaborated to develop software in phases. The full write-up is here:

πŸ”— https://www.anthropic.com/engineering/harness-design-long-running-apps

The mental model behind the Solutions Machine

What I found especially compelling was the mental model used to describe the Solutions Machine:

Feed in Problems and Capital, then iterate Work and Test to produce a Self-improving solution

Just throw in problems and capital, and Agents will autonomously experiment and refine the solution β€” that future image came into much sharper focus.


"The Great Inversion" (the center of value flips)

The next section was "The Great Inversion".

In the traditional approach to problem-solving:

  • "Identify, define, and scope the problem; Decide to solve it" was relatively cheap, while
  • "Implement, validate, deploy, and maintain the solution" carried the heavy cost.

But in the new pyramid at 10x Scale, this cost structure is being inverted.

As someone who works at a company traditionally rooted in SI (Systems Integration), this was a slightly uncomfortable message to sit with. The weight of implementation, deployment, and operations is shrinking, while the importance of deciding "what to solve" is growing β€” and we, too, will need to evolve in response to this shift.


"The Three Doors" (Three compounding opportunities ahead)

In the latter half of the session, "The Three Doors" introduced "Three compounding opportunities" β€” three directions for putting AI to use going forward:

  1. Empower every employee
  2. Supercharge software development
  3. Create revenue streams

What stood out was that the framing goes beyond engineering efficiency to include empowering every employee across the organization and creating new sources of business value. I'll definitely want to refer back to this framework when thinking about future AI strategy.

All in all, it was a dense, learning-packed session.


Bonus: "Introducing Cowork" at the Anthropic EXPO Booth

Now back to the EXPO for a moment.

Anthropic also had a booth at the EXPO, where a number of in-booth sessions were running throughout the event. I attended one of them, so let me briefly share that as well.

The session I joined was titled "Introducing Cowork". The content covered an explanation of how Claude Cowork works along with a live demo of producing Word, Excel, and PowerPoint deliverables with Cowork. Watching documents come together right in front of me was genuinely exciting β€” a demo that perfectly embodied the "2026 is the year of Cowork" message from earlier in the day.

The Anthropic booth ran several other sessions as well, with crowds large enough that people were standing in the back. The level of attention Anthropic was getting from attendees was tangible.


Wrap-Up

The Anthropic session I covered today was packed with content:

  • The dramatic changes coming in the next 2 years
  • The essence of Agents and the "Solutions Machine" concept
  • "The Great Inversion" of where value sits
  • Concrete opportunities through "The Three Doors"

It was a learning-packed session that nudged me to reconsider what "after software" might look like β€” and how my own work might need to adapt.

The session video is publicly available, so if any of this caught your interest, definitely give it a watch.

To be continued in #4.
[Google Cloud Next '26 Recap #4] Live Report from the Two Keynotes

Top comments (2)

PEACEBINFLOW

The "Great Inversion" section β€” where the cost structure flips and the expensive part becomes deciding what to solve rather than building the solution β€” is the kind of idea that sounds abstract until you realize it's already happening in small ways. I've noticed that the bottleneck on my own projects has shifted. It used to be implementation time. Now it's more often clarity time: do I actually understand the problem well enough to hand it to an agent, or am I using the act of building as a way to figure out what I'm building?

If that inversion is real, it rewrites something fundamental about how a lot of us were trained. The craft of engineering education is still mostly organized around the implementation side β€” how to build things correctly. There's comparatively little structured teaching around problem definition, scoping, or deciding what's worth building in the first place. Those were always important skills, but they were allowed to develop slowly because implementation took long enough that you could refine your thinking while you built. If implementation time collapses, that luxury disappears. You either arrive with a clear problem definition or you burn capital having agents build the wrong thing very quickly.

What I'm not sure about is whether "deciding what to solve" is something that can be taught systematically, or if it's one of those things that only really develops through experience β€” which would make it hard to scale at the pace the inversion seems to demand. Curious if the session touched on whether Anthropic sees this as a tooling problem or a training problem, or both.

Theo Valmis

The 'after software' framing is useful precisely because most takes underweight what doesn't transfer. Models keep getting cheaper and more capable, but the institutional why behind any given system stays expensive and stays inside an org. Whatever comes after software still has to ingest that context to make decisions a team will trust. That's the gap we're closing at Mneme.