
When it comes to the software bundled into their operating systems, Microsoft and Apple are usually like oil and water.
Microsoft had Office, Apple had iWork. Microsoft had Internet Explorer, Apple had Safari. Microsoft had Windows Media Player, Apple had iTunes.
Always built in-house, always viciously competing to differentiate with their biggest rival.
Now, though, Microsoft has ChatGPT. And after yesterday’s flurry of WWDC announcements, Apple has…ChatGPT. When Apple’s newly beefed-up Siri gets a question that it can’t answer, it can call in ChatGPT for backup—no installation required.
It’s kind of crazy to see Apple integrate, by default, a mission-critical software service that is so closely allied with Microsoft and run on its cloud service. (There is some precedent. Twenty-five years ago, Apple bundled Microsoft’s Internet Explorer into Mac OS for a time—but only to save the company from potential financial ruin at the beginning of Steve Jobs’s second tour of duty as CEO.)
So, how did this arrangement happen?
Paul Graham famously remarked about OpenAI CEO Sam Altman’s dealmaking, “You could parachute him into an island full of cannibals and come back in five years and he'd be the king.”
In this case, it looks like Altman parachuted into a long-running war between two cannibal kings and turned them—through a very complicated set of deals—into reluctant partners. It’s impressive.
Who will win the race, Apple or Microsoft? Or will Sam eventually be the king?
Power moves
While ChatGPT will be integrated into iOS devices, Apple also introduced a full suite of what it calls Apple Intelligence features that boost Siri’s capabilities. Siri can now understand natural speech instead of just short commands, it has more context on who you are, and it can even find and fill out documents for you. ChatGPT is positioned as the backup option in case Siri fails.
It’s an interesting pattern—local on-device AI with cloud fallback—that both Microsoft and Apple have adopted. Yes, both companies are integrating OpenAI technology in various ways. Microsoft’s integration is very deep, while Apple hands off to ChatGPT the queries it isn’t confident it can answer correctly (happily outsourcing that risk to OpenAI).
But both tech giants are also paying significant attention to processing LLM requests on-device using their own proprietary software and hardware, and only resorting to OpenAI’s cloud-based models for more complicated requests that the local devices can’t handle.
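To make that pattern concrete, here’s a minimal sketch in Swift of what the routing logic might look like. Every name in it (LanguageModel, LocalModel, CloudModel, Router) is hypothetical; neither Apple nor Microsoft has published an API like this. It’s an illustration of the on-device-first, cloud-fallback idea, not anyone’s actual implementation.

```swift
// Hypothetical sketch of on-device-first routing with a cloud fallback.
// None of these types correspond to a real Apple or OpenAI API.

protocol LanguageModel {
    func respond(to prompt: String) async throws -> String
}

struct LocalModel: LanguageModel {
    // Stand-in for a small on-device model: fast, private, free to run.
    func respond(to prompt: String) async throws -> String {
        "on-device answer to: \(prompt)"
    }

    // Placeholder confidence check. The real decision (can the small
    // model handle this request?) is the hard part; this is just a stub.
    func canHandle(_ prompt: String) -> Bool {
        prompt.count < 500
    }
}

struct CloudModel: LanguageModel {
    // Stand-in for a frontier model reached over the network.
    func respond(to prompt: String) async throws -> String {
        "cloud answer to: \(prompt)"
    }
}

struct Router {
    let local = LocalModel()
    let cloud = CloudModel()

    // Prefer the on-device model; escalate only when it can't cope.
    func respond(to prompt: String) async throws -> String {
        if local.canHandle(prompt) {
            return try await local.respond(to: prompt)
        }
        return try await cloud.respond(to: prompt)
    }
}
```

The interesting engineering lives in canHandle: a real router would decide based on task type, prompt complexity, and the small model’s confidence in its own answer, not a crude length check like the stub above.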
On the one hand, this was fairly predictable. In December 2022, I wrote that power would collect in four layers in the AI ecosystem:
- The operating system layer
- The browser layer
- The layer of models that are willing to return risky results to users
- The copyright layer
After WWDC, you can see why power will collect at the operating system layer. Apple’s deep AI integrations into all parts of the operating system will erode some of the demand for typing ChatGPT, Anthropic’s Claude, or any other web-based LLM into a browser.
It’ll be easier and faster to use something that’s baked into your device’s operating system, which has access to the maximum amount of context from all of your apps and websites. And as I’ve been writing for a while, the more context it has, the better its results are going to be, even if its intelligence level is lower.
What I didn’t predict, however, is that the big tech players would judge local LLMs to be efficient, powerful, and fast enough to do significant work for consumers so soon. I thought Apple and Microsoft would have to use cloud-based frontier models for a long time. But they both seem to be betting heavily on on-device models, with frontier models as backups.
It remains to be seen whether that’s a good bet. The local models that Apple is relying on seem to be competitive with other models of similar size based on human preference ratings:
Source: Apple.

But these models are still pretty stupid. Apple has fine-tuned them for specific use cases like text summarization, image editing, or form filling, which should make them more useful. But for the time being, the kinds of more complex requests—like requests with multiple reasoning steps, or those that need to accurately process very long prompts—that we’re accustomed to using ChatGPT and Claude for are not going to be handled by the on-device models.
Instead, they’ll be sent first to Apple’s own cloud models (what Apple calls Private Cloud Compute), and then to ChatGPT as a fail-safe. So, the question is: Why ChatGPT?
Why did Apple integrate ChatGPT in the first place?
You have to imagine that ChatGPT wasn’t Apple’s first choice. If the company has built an entire ecosystem of local and cloud models to serve its users, why does it need ChatGPT?
Well, Apple has some benchmarks showing that its models perform comparably to GPT-4 in, for example, following instructions:
Source: Apple.

But take a look at its human preference benchmarks (i.e., how much humans preferred the responses of Apple models vs. GPT-class models):
Source: Apple.

Apple’s cloud models win only 41 percent of the time against GPT-3.5-Turbo. But GPT-3.5-Turbo is already not a very good cloud model by today’s standards: It frequently hallucinates, and its reasoning capabilities are poor. Anyone who is used to GPT-4o—which is now free in ChatGPT—is going to be seriously disappointed when they use Siri on its own.
It seems likely, based on these benchmarks, that Apple turned to OpenAI because it had to in order to build the best possible experience. OpenAI has the combination of frontier model capability and server capacity (thanks to Microsoft!) to be able to handle an Apple-sized request volume.
Where does this leave the state of play between big AI vendors?
State of play
OpenAI is in a great position: It has the number-one consumer AI app in ChatGPT, which is being integrated in various ways into Microsoft and Apple’s products. So long as it keeps its technology edge, it’ll continue to have a lot of power.
But Apple already announced yesterday that ChatGPT is just the first of several third-party models it will make available to its users. And the big tech companies are modifying their hardware and operating systems to push more AI workloads away from the cloud and onto individual devices. For now, because the power of the on-device models is fairly limited, that shouldn’t be much of a threat to OpenAI.
Over the next year or two, nerds like me will still get excited for the latest GPT release, but GPT-4-class models are probably already more than powerful enough for most consumer needs. As soon as models with that level of intelligence can be run on-device, Apple and Microsoft will be significantly less reliant on and vulnerable to OpenAI for many of their users’ AI requests. I think that could be a significant threat to ChatGPT outside of power users and businesses.
Ultimately, the features and models Apple released yesterday are useful, but, as Ethan Mollick points out, they’re fairly conservative. Will that approach be enough?
We’ll see, but Apple isn’t standing on the sidelines of AI anymore.
Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.