
The AGI-in-2027 Thesis

Some researchers are convinced that we are on the cusp of superintelligence. Are they right?




There is a meaningful divide among technologists in AI. One side believes that the technology is an iterative, genuinely useful improvement that will enable new use cases for software. AI is the next cloud, the next smartphone. I’ve covered this perspective extensively over the last few months with the latest Google, Microsoft, and Apple AI product roadmaps. 

The other side is more…extreme. It believes that artificial intelligence will someday become artificial general intelligence (AGI) and, from there, self-improve to the point of being 10 times smarter than us. When that will happen is debated. Whether it kills or empowers us is debated. How the technology will be able to self-improve is debated. But the belief is held strongly enough that every major AI research group has at least a few adherents. 

Even if you, like most investors and founders today, dismiss the second scenario out of hand, it is worth considering its validity, because the biggest companies in the world are acting as if it is true. Elon Musk believes that we will have AGI in less than two years. Microsoft is drawing up plans for a $100 billion supercomputer, intended to be finished in about six years, to train models big enough for AGI. The world’s most important AI companies—OpenAI, Google DeepMind, Anthropic—have AGI as the explicit mission of their organizations.

I’m covering this topic today because of this chart, pulled from a 165-page series of essays from Leopold Aschenbrenner, who was recently fired from OpenAI’s Superalignment research team. The series—entitled "Situational Awareness"—is his argument for why this second group is right and what we should do as a result. 

Source: Situational Awareness.

The chart’s basic thesis is this: Previous generations of LLMs have shown forecastable, log-linear growth in their capabilities. Each time we increase the investment in training by 10x, we get a predictably large leap in LLMs’ capabilities. Aschenbrenner believes that we are about two to three orders of magnitude (OOMs) of investment away from AI being able to conduct AI research, making it recursively self-improving. He says that it will be able to do so by making a “hundred million autonomous machine learning researchers.” Sure! Why not?
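For a sense of scale: one OOM is a 10x multiple, so being “two to three OOMs away” means training runs 100 to 1,000 times larger than today’s. Here is a minimal sketch of that arithmetic; the FLOP budgets below are placeholder assumptions for illustration, not figures from the essays.

```python
import math

def ooms_between(flop_now: float, flop_target: float) -> float:
    """Orders of magnitude (powers of 10) separating two training-compute budgets."""
    return math.log10(flop_target / flop_now)

# Placeholder budgets, chosen only to illustrate the arithmetic.
current_run = 1e25  # hypothetical FLOP for a frontier training run today
target_run = 1e28   # hypothetical FLOP three OOMs out

gap = ooms_between(current_run, target_run)  # 3.0 OOMs
print(f"{gap:.1f} OOMs away -> {10 ** gap:,.0f}x today's training compute")
```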

Aschenbrenner’s choice to publish these essays alongside a podcast interview and a bunch of Twitter threads comes embedded with his own incentives. To me, the writing reads like that of a spurned lover trying to reclaim what was lost—his position of authority in the AI community after getting the boot from OpenAI. He is doubly incentivized to make a splash with them because he “recently founded an investment firm focused on AGI, with anchor investments from Patrick Collison, John Collison, Nat Friedman, and Daniel Gross.” (Online PDFs are a surprisingly effective way to drum up interest in investment vehicles.) 

He is so confident in his thesis that he threw down the gauntlet to technology equity analysts like me, saying, “Virtually nobody is pricing in what's coming in AI.” First off, rude. Second, maybe he’s right. If superhuman intelligence really is about to be achieved in three years, maybe those five-year financial models I’ve been making are bunk. If nothing else, his writing is a useful framing tool for examining the AGI thesis. At its core, his argument is built on three assumptions. 

The three levers of AGI

Let’s start with the y-axis of the chart: “Effective Compute.” This label summarizes the improvements and investments in LLMs along three dimensions:

  1. Compute: The size of the “computer used to train these models,” aka how many GPUs were used
  2. Algorithmic efficiencies: The quality improvement in techniques that make training these models more efficient and powerful
  3. “Unhobbling” gains: The tools and techniques that we have developed to give LLMs additional powers (like browsing the internet) 

You’ll note that only one of these three variables is actually, like, based on numbers and neatly trackable in a spreadsheet—compute, or how many GPUs are humming. Algorithmic efficiency and “unhobbling” are bets on continuous scientific breakthroughs, not on capital expenditures. 
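Because these three levers multiply, their contributions add in log space: the total effective-compute gain is just the sum of the OOMs from each. Here is a minimal sketch of that bookkeeping, in which every number is an assumed placeholder; notice that only the first could ever come out of a spreadsheet.

```python
# Effective compute = raw compute x algorithmic efficiency x unhobbling gains.
# Multiplying the levers means adding their OOMs (powers of 10).
raw_compute_ooms = 1.5   # measurable: GPUs bought, training FLOP spent
algorithmic_ooms = 1.0   # assumed: bets on future research breakthroughs
unhobbling_ooms = 0.5    # assumed: tooling and scaffolding gains, hard to quantify

effective_ooms = raw_compute_ooms + algorithmic_ooms + unhobbling_ooms
print(f"Effective compute gain: 10^{effective_ooms:.1f} = {10 ** effective_ooms:,.0f}x")
```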

That does not disqualify Aschenbrenner’s calculations! There is precedent for this sort of continuous breakthrough. The oft-cited Moore’s Law—the observation that the number of transistors on a microchip doubles approximately every two years—relies on both the explicitly quantifiable investment in sophisticated chip fabrication and wholly unquantifiable scientific breakthroughs in transistor technology. Aschenbrenner is arguing for a variation on Moore’s Law, in which OOM gains in GPU compute and continued scientific breakthroughs enable a corresponding increase in AI capabilities. 
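For comparison, the same log-scale arithmetic applied to Moore’s Law itself: a doubling every two years works out to one OOM roughly every six and a half years. A back-of-envelope sketch, not a claim about any actual fab roadmap.

```python
import math

doubling_period_years = 2.0  # Moore's Law: transistor counts double about every two years
years_per_oom = doubling_period_years * math.log2(10)  # log2(10) ~ 3.32 doublings per 10x

print(f"One OOM of transistor density every {years_per_oom:.1f} years")
```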

His evidence is that, to date, models have gotten “predictably, reliably” better with each OOM increase in effective compute. This has been the case in video generation: 

Source: Situational Awareness.

Similar jumps in improvement have occurred in the GPT series of models, which can now outperform the vast majority of humans on standardized tests. 
