7 min read

What's in it for OpenAI?

Last week at WWDC, Apple announced Apple Intelligence. There had been significant pressure from investors for Apple to do something in AI ever since the launch of ChatGPT and the wave of AI models that followed from Google, Meta, Anthropic and others. There's a strong argument that this pressure came from investors who don't understand how Apple operates; it's perfectly in keeping with their approach to take their time and launch a product that nails the use cases and user experience rather than rush some fancy new tech into their products. They have done exactly that with Apple Intelligence, nailing the implementation in a way no one else has.

As impressive as this is, though, what's in this for OpenAI is not so obvious. It had been clear for a while (thanks to Mark Gurman) that OpenAI would be a part of this announcement, but most people (myself included) expected them to be a more significant part. They were the junior party and joined the platform on Apple's terms. But before we can dig into that, it's worth understanding how Apple Intelligence works (at a high level).

Most of what I say below focuses on iOS and the iPhone, Apple's most significant product. However, everything applies equally well to macOS and iPadOS.

The architecture

There are three groups of models that form Apple Intelligence as it stands today:

  1. On-device models: this is a small, local model developed by Apple. It has an interesting modular architecture that I might write about in future. Still, this model is well suited for 'adjustment' activities, such as summarisation and proofreading. It's not doing any true generation, and given its size, it probably wouldn't do so particularly well. This model also decides where a particular request is routed (i.e. whether it will be handled on-device, in the Apple cloud, or by OpenAI).
  2. Apple's server models: Apple is building a private cloud, seemingly based entirely on its own silicon, to run larger models that cannot (at least at the moment) be run on-device. Again, the architecture is interesting in that it allows them to maintain user privacy while still offloading to the cloud, and again, this is something to discuss in a future post.
  3. Third-party models: finally, we have third-party models, which at the moment means just ChatGPT, though Apple made clear they would like to include other models in future, including Google's Gemini. Any query Apple decides it can't serve (or doesn't want to - perhaps for reputational reasons) gets sent to ChatGPT. The flow makes it very clear that you're sending data to a non-Apple service every time, with a big scary prompt that also signals that Apple is not responsible for the result.
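The routing between these three tiers can be sketched in code. To be clear, this is purely illustrative: none of these names are real Apple APIs, and Apple hasn't published the actual decision logic. It just models the shape of what's described above, with light 'adjustment' tasks staying on-device, heavier work going to Apple's private cloud, and everything else going to a third party only after the user consents.

```python
# Hypothetical sketch of the three-tier routing; no real Apple APIs here.
from enum import Enum, auto

class Tier(Enum):
    ON_DEVICE = auto()      # small local model: summarise, proofread
    PRIVATE_CLOUD = auto()  # larger Apple-run model on Apple silicon
    THIRD_PARTY = auto()    # e.g. ChatGPT, behind a consent prompt

# Tasks the small on-device model is assumed to handle well.
ADJUSTMENT_TASKS = {"summarise", "proofread", "rewrite"}

def route(task: str, needs_world_knowledge: bool) -> Tier:
    """Decide which tier serves a request, mirroring the list above."""
    if task in ADJUSTMENT_TASKS:
        return Tier.ON_DEVICE
    if not needs_world_knowledge:
        return Tier.PRIVATE_CLOUD
    return Tier.THIRD_PARTY  # the user is warned before data leaves Apple

def handle(task: str, needs_world_knowledge: bool, user_consents: bool) -> str:
    """Serve a request, honouring the consent prompt for third parties."""
    tier = route(task, needs_world_knowledge)
    if tier is Tier.THIRD_PARTY and not user_consents:
        return "declined"  # nothing is ever sent without consent
    return tier.name.lower()
```

The key design point this captures is that the third-party model only ever sees what the first two tiers choose not to handle, and never without an explicit opt-in per request.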

There are a couple of key points to make here. First, ChatGPT/OpenAI is not deeply integrated into iOS; it's being called over an API, and the user knows when that happens. There's a lot of talk online about the security risks of ChatGPT being deeply embedded in the OS, which I assume is a wilful misunderstanding from people with a grudge against one or both of these companies. The second point is that while ChatGPT will be available through iOS, it will be on Apple's terms. OpenAI will receive only the queries Apple decides not to handle itself and, in the future, will be competing against other models to do so.

Benefits for OpenAI

  • Users: The clearest benefit for OpenAI is that it now has access to a vastly larger user base than before. Despite being the fastest-growing website ever, ChatGPT seems to have around 180 million users. In contrast, there are over one billion iPhones in the world. Many of those are too old to run Apple Intelligence, but Apple also ships over 200 million new phones yearly. From the next generation, it's safe to assume every iPhone sold will run Apple Intelligence. There are at least a couple of benefits to this:
    • Firstly, ChatGPT has managed to monetise some percentage of its user base. According to The Information, the company had an annualised revenue run rate of $3.4 billion in June 2024. Currently, they're investing so much that they're not profitable and still pretending to be a non-profit, but the point is they have a way to make money from having users.
    • Secondly, defaults do matter, particularly for non-technical users. Many of these users will stick with the default if it's good enough. ChatGPT is about to become the default third-party AI model for a huge number of iPhone users, giving OpenAI a user base that the other models, if/when they join Apple Intelligence, will have to work hard to coax away by being either significantly better or significantly cheaper.
  • Revenue/conversion: This was already partially addressed above. Apple Intelligence and the OpenAI integration are free to anyone with a good enough device. The full details haven't been announced yet, but it seems likely that the ChatGPT integration will operate on the same terms as the free version currently available on the web, i.e. you get 10 messages per day and cannot access the most powerful model. As with the web version, some percentage of users will be convinced to upgrade to remove these restrictions. It seems very likely that you'll be able to subscribe to OpenAI very quickly using Apple Pay on your phone, which means two things:
    • The friction to subscribing is significantly reduced: the first time you hit your daily limit, it'll only be a matter of seconds to pay for a subscription and keep using the service, making it more likely people will do so
    • Apple will get their cut: as with all iPhone transactions, Apple will take a percentage, likely the usual 30%.
  • Brand: Finally, ChatGPT is synonymous with AI in the minds of the general public. Whatever their business model ends up being (subscriptions, ads, devices, or something else), that brand will be an incredibly valuable asset. Being Apple's 'chosen' partner will only reinforce that brand image.

One benefit I explicitly don't see for OpenAI is access to more data, which they could use to improve their models. They undoubtedly will get some when an individual sends some data for a specific query, but Apple will always have the advantage here regarding data specific to the user. As the on-device models and Apple's cloud improve, OpenAI will have access to less and less of that data.

Risks for OpenAI

  • Being commoditised: a commodity is a good or service that is undifferentiated and available from multiple suppliers, who are then forced to compete on price because there's nothing else to compete on. Given the proliferation of AI models at or around the same level of functionality, there's good reason to think we're heading towards AI models being a commodity. Sure, OpenAI got there first, giving them a hugely valuable brand and an existing user base, but there's increasingly little to set them apart from other models in terms of capability. Apple Intelligence now sits between iPhone users and the third-party model they use. While users can change that model, they'll likely do so infrequently, if at all. Assuming all the third-party models available are good enough, and I strongly suspect they will be, users will choose the cheapest. OpenAI, therefore, is likely in a position where they're about to compete in a commodity marketplace with some incredibly powerful and rich businesses, like Google. Two things might help OpenAI here:
    • Timing: Apple Intelligence will arrive in September, about three months from now. If OpenAI is still the only third-party AI available, it will become the default for many people. If other models are on board at that point, then there is a chance (depending on the implementation) that users will have to actively choose which model to use. If that's the case, OpenAI can't rely on inertia to keep users.
    • Privacy: Apple users are more privacy-conscious, with Apple both leaning into and amplifying the importance of privacy. Google is not seen as a company that cares about privacy, so this may be the user base Google has more difficulty winning over. However, Google has captured search on iOS (albeit by paying to be the default), so maybe this isn't too big of a deal.
  • Harder to push into devices: there are ongoing rumours that Sam Altman/OpenAI are talking to Jony Ive's firm LoveFrom to design new AI hardware. I would love to see what they come up with, but I'm also very sceptical that they can make this a success. There are two possible versions of this:
    1. A device that is an accessory to your phone: this potentially gives a new way of interacting with AI, but is ultimately reliant on the phone for processing, data, etc. Meta has already done this with their Ray-Ban Smart Glasses, and such a device could be popular, but ultimately the power still lies with the phone. Phone manufacturers (and yes, particularly Apple) may decide to keep access to the best bits of their APIs to themselves to ensure they have the best accessory.
    2. A device that replaces your phone: which means it needs to offer some huge advantage over your phone (e.g. access to an AI you can't get elsewhere) while also being good enough at all the other things your phone can do. This feels like the path for OpenAI to become a true consumer tech giant. However, the fact that they're willing to plug into Apple's ecosystem, and all the arguments about commoditisation above, suggest they don't believe their models will be sufficiently differentiated to draw users away. That's certainly my viewpoint.
  • Costs without revenue: as above, I expect there to be a way for users to subscribe to ChatGPT through this interface, giving them access to better models and more than 10 queries per day. The problem for OpenAI is that they're just picking up what Apple's models can't deal with. In the early days, there will potentially be a lot of traffic while new users experiment and Apple's models remain limited. But over time, maybe 10 queries per day will be enough. They'll have taken on a load more infrastructure costs upfront to deal with that traffic, without a proportional increase in revenue.

Conclusion

So what does all this mean? Did OpenAI do the right thing by signing on to join Apple Intelligence? My view is that it's a gamble, but a reasonable one to make. They need to continue to capture users and mindshare in the AI space, and this move gives them a huge new user base. The risk of losing that mindshare, or of another AI model suddenly becoming the default for a huge number of users, is arguably a much bigger threat.

What I hope for OpenAI, though, is that when Apple Intelligence does launch, they are still the only third-party AI available (or at least the default). Giving users the choice between competing models on day one gives them a chance to realise that, really, any model will probably do.