If you want to understand how AI progress will unfold over the coming decade, a good historical analogy is the cat-and-mouse dynamic that ruled the PC industry in the 1980s and ’90s.
Back then, computers were good enough to attract millions of users, but their limited speed and storage were often a pain. Demand for increased performance was enormous. But as soon as a new generation of more powerful computers was released, developers quickly built applications that took advantage of the new capacity and made computers feel slow and space-constrained all over again, fueling demand for the next generation of improvements.
The main bottleneck was the central processing unit (CPU). Everything a computer does has to flow through it, so if it’s overloaded, everything feels slow. Therefore, whoever made the fastest CPUs had an extremely desirable asset and could charge a premium for it.
Intel was the dominant player. Its ecosystem of compatible devices, library of patents, integration of design and manufacturing, economies of scale, partnerships, and brand recognition (even among consumers!) all came together to make it the largest and most profitable supplier of CPUs in the ’90s. It is hard to overstate Intel’s dominance: it maintained 80–90% market share through this whole era. It took the platform shift to mobile, where battery efficiency suddenly mattered, for its Goliath status to unwind. [1]
Today we are seeing the emergence of a new kind of “central processor” that defines performance for a wide variety of applications. But instead of performing simple, deterministic operations on 1’s and 0’s, these new processors take natural language (and now images) as their input and perform intelligent probabilistic reasoning, returning text as an output.
Like CPUs, they serve as the central “brain” for a growing variety of tasks. Crucially, their performance is good enough to attract millions of users, but their current flaws are pronounced enough that improvements are in desperate demand. Every time a new model comes out, developers take advantage of its new capabilities and push the system to its limits again, fueling demand for the next generation of improvements.
Of course, the new central processors I’m talking about are LLMs (large language models). Today’s dominant LLM supplier is OpenAI. It released its newest model, GPT-4, yesterday, and the new model surpassed all of its predecessors on performance benchmarks. But developers and users are still hungry for more.
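To make the “new central processor” idea concrete, here is a minimal sketch of what calling one looks like from a developer’s point of view, using OpenAI’s Python client. The prompt is purely illustrative, and the exact client interface has changed over time, so treat this as a sketch rather than a canonical integration: natural language goes in, text comes out.

```python
# A minimal sketch of the "new central processor" in action:
# natural language in, text out.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY set in the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize this support ticket in one sentence: "
                "'My invoice shows two charges for March.'"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The specific library isn’t the point. What matters is that every request, whatever the application, routes through the same general-purpose model, much as every instruction once routed through the CPU.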
Although LLMs have a lot in common spiritually with CPUs, it’s too early to know whether the LLM business will be as profitable or defensible as the CPU business has been. Competition is coming for OpenAI. If it falters, that wouldn’t be the first time the pioneer of a complex new technology ended up losing market share.
Case study: Intel’s memory failure
Once again, we can look to Intel’s past to glean insight into OpenAI’s future. The early years at Intel are a cautionary tale of commodification. Because Intel was the dominant supplier of CPUs in the ’80s, ’90s, and early 2000s, it’s easy to forget that it actually started out in the memory (RAM) business, and even helped create that market in the 1970s. Its superior technology and design enabled it to capture the market in the early years, but as time went on, it had trouble staying competitive in RAM.
Just check out the brutal drop in Intel’s market share (highlighted in yellow):
Comments
Hi Nathan,
I do agree LLMs are becoming the new focal point for computing. But I don't think the memory analogy is apt. As you note, memory is a fully fungible component. When buying from Intel, TI, Samsung, etc., you are buying a product with equivalent performance for a given specification. There is no qualitative difference. But for LLMs, each model outputs very different things. Swapping one for another immediately produces tangible effects. It's more like a Coke and Pepsi situation: you can taste the difference! And given that the product spec is entirely open-ended and infinite in scope, I don't think it's even possible to produce a spec-equivalent LLM. Talent, data, data processing, training/inference infra, and distribution (via MSFT) are immense tailwinds that will produce product differentiation that's very tangible and lasting.
It's not a given that 'best models' will continue to be accessed via API, especially if you include cost. Since LLMs can train other LLMs, machine IQ can be effectively photocopied.
https://www.artisana.ai/articles/leaked-google-memo-claiming-we-have-no-moat-and-neither-does-openai-shakes