Apple Pays Google $1 Billion/Year for New Siri with Gemini
Apple, the company that has always prided itself on controlling every piece of its ecosystem, from chip to software, just signed a check to Google for US$1 billion per year. The reason: Siri needs Gemini to finally function as a true artificial intelligence assistant.
It is no exaggeration to say that this agreement redefines the landscape of consumer AI. The world's largest hardware company has implicitly admitted that it cannot compete in language models with companies whose core business is building them. And instead of launching another average product under the "Apple Intelligence" label, it decided to pay billions to have the best AI engine available underneath Siri.
Let's break it all down: how much it costs, how privacy works, what Siri will be able to do, when you'll see it on your iPhone, and what this move means for the AI market as a whole.
1. The deal: $1 billion a year for Gemini
According to sources close to the negotiations, Apple reached a multi-year agreement with Google to license a customized version of Gemini as Siri's main artificial intelligence engine. The estimated value is US$1 billion per year, which may vary based on the volume of use.
To put it in perspective: Apple already pays Google between US$18 and US$20 billion per year to make Google the default search engine on Safari. This $1 billion AI deal is smaller in absolute value, but potentially more transformative for Apple products.
Agreement structure
- Duration: multi-year (estimates point to 3-5 years), with annual term reviews
- Base model: a custom version of Gemini (not the public Gemini, but a variant optimized for Apple's needs)
- Processing: on Apple servers, within the Private Cloud Compute infrastructure. User data does not go to Google
- Partial exclusivity: Apple cannot use competing models (GPT, Claude) as the main engine for Siri during the term of the agreement, but it can offer secondary options
- Payment: annual fixed fee plus a variable component based on volume of requests
Historical context: Apple briefly used Bing in 2023 for search results in Spotlight, but later backtracked. With AI, the company has apparently learned that compromise doesn't work: either you have the best model available or it's not worth launching.
Why Gemini and not GPT?
That's the question everyone asked. OpenAI, with GPT-4o and GPT-5, was the obvious choice. But three factors weighed in Google's favor:
- Native multimodality: Gemini was designed from the ground up to process text, images, audio and video simultaneously. GPT added multimodality later. For Siri, which needs to understand what's on the screen, hear voice commands and interpret photos, Gemini's architecture is more natural
- On-device efficiency: Google has experience with models that partially run on the device (Gemini Nano). This aligns with Apple's philosophy of processing as much as possible locally
- Existing relationship: Apple and Google already have a massive commercial relationship (the search agreement). Adding AI to this partnership is a natural extension, with contracts and legal terms already mapped out
There was also, according to rumors, a control issue. Apple wanted a partner that would provide the model but not try to dominate the user experience. With OpenAI, there were fears that ChatGPT would become more visible than Siri. With Google, the agreement is clearly infrastructure -- Gemini is invisible to the end user.
2. Siri's problem: it only works 2/3 of the time
To understand why Apple made such a drastic decision, you need to understand the size of Siri's problem.
Apple internal tests (leaked in 2025) showed that Siri successfully completes only 2 out of every 3 user requests. In other words, 1 out of every 3 times you talk to Siri, it fails: it doesn't understand, responds incorrectly, or says "I can't help you with that".
The numbers are embarrassing
| Assistant | Success Rate (2025) | User satisfaction |
|---|---|---|
| ChatGPT (mobile app) | ~89% | 4.7/5 |
| Google Assistant | ~85% | 4.3/5 |
| Alexa | ~78% | 4.0/5 |
| Siri | ~67% | 3.2/5 |
Siri is the default assistant for more than 2 billion active Apple devices. Having the worst AI assistant on the market isn't just embarrassing; it's an existential risk to Apple's narrative as a premium technology company.
What goes wrong
- Natural language understanding: Siri still operates with predefined intents. If you ask for something outside the command catalog, it doesn't understand
- Reasoning: Siri doesn't reason. It maps input to output. Requests that require inference or context consistently fail
- Memory: Siri doesn't remember what you asked 30 seconds ago. Each interaction is isolated
- Multi-step tasks: "send a message to João with the address of the restaurant I went to yesterday" requires crossing data from Messages, Maps and history. Siri can't do it
- Integration with apps: despite the App Intents framework, few apps implement deep Siri support (a minimal sketch of what that integration looks like follows below)
The result is that iPhone users have trained themselves not to use Siri for anything but timers, alarms and "hey Siri, play music". The assistant that should be the iPhone's main interface has become a basic utility.
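For context, "deep Siri support" today means adopting the App Intents framework. Below is a minimal sketch of what an app has to declare for Siri to drive it; the protocol and property wrappers are Apple's real App Intents API, while the coffee intent itself is a hypothetical example.

```swift
import AppIntents

// A minimal App Intent: the app declares an action that Siri can invoke.
// "OrderCoffeeIntent" and its parameter are hypothetical; the framework
// types (AppIntent, @Parameter, IntentResult) are the real Apple API.
struct OrderCoffeeIntent: AppIntent {
    static var title: LocalizedStringResource = "Order Coffee"
    static var description = IntentDescription("Orders your usual coffee.")

    @Parameter(title: "Size", default: "medium")
    var size: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // App-specific ordering logic would run here.
        return .result(dialog: "Ordered a \(size) coffee.")
    }
}
```

Siri can only trigger actions that apps explicitly declare this way, which is part of why coverage is so thin today.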
3. Siri 2.0: what changes with Gemini underneath
With Gemini as its engine, Siri gains capabilities that were technically impossible with the previous architecture. It's not an incremental update -- it's a rebuild.
Capabilities confirmed (via leaks and internal sources)
- Natural conversation: Siri will truly understand natural language, with context, ambiguity, and nuance. "That Italian restaurant Marco recommended" will work even if you don't say the name of the restaurant
- Session memory: Siri will remember the context within a conversation and, in some cases, between sessions. "Compare it to that hotel I researched yesterday" will work
- Reasoning: requests that require inference, comparison and analysis. "Which of my flights this month has the best price-duration ratio?" Siri will cross-reference data from Mail, Calendar and Maps
- Long text generation: Siri will be able to compose complete emails, summarize long documents and create structured texts
- Multimodality: "What is this plant?" while pointing the camera. "Summarize this PDF" while showing a document. "What's wrong with this code?" while pasting a screenshot
What doesn't change (for now)
- Siri's voice: Apple will keep its own voices and will not use Google's
- Visual identity: no Google branding will appear in the interface. For the user, it's just Siri, period
- Basic tasks: timers, alarms, HomeKit and simple commands continue to use Apple's on-device engine, without sending anything to servers
4. Privacy: Gemini customized on Apple servers
The biggest concern with this deal was predictable: privacy. Apple has built its entire brand around "your data stays on your device." Using a Google model inside Siri seems to contradict this.
Apple's solution is ingenious and complex at the same time.
Private Cloud Compute + Gemini
Apple is not sending user data to Google's servers. What Apple did was:
- Licensed model weights: Apple received a copy of the customized Gemini and runs it on its own servers
- Private Cloud Compute infrastructure: the servers that process Siri requests use Apple Silicon chips, with hardware security (Secure Enclave) and no access for Google
- Ephemeral processing: user data is processed in memory, never stored on disk, and deleted after the request is completed
- Independent audit: third parties can verify that the servers actually do what Apple says (the Private Cloud Compute audit program has been around since 2025)
In practice, Gemini runs inside an Apple "vault". Google provided the model but does not have access to the data that passes through it. There is a clear separation between technology provider (Google) and data controller (Apple).
A simple analogy: it's like buying a BMW engine to put in a Porsche. BMW sold the technology, but it has no access to the car, the driver, or where they're going. Porsche controls everything.
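As a conceptual sketch of what "ephemeral processing" means in practice, the pseudo-handler below holds the request only in memory and persists nothing. Every name is a hypothetical illustration; Apple has not published Private Cloud Compute internals, and the real guarantees also rely on hardware attestation and the audits mentioned above.

```swift
import Foundation

// Conceptual sketch: the request exists only in memory and is discarded
// once the response is produced. All names here are hypothetical.
struct EphemeralRequest {
    let prompt: Data   // user input, held in RAM only
}

func handle(_ request: EphemeralRequest,
            model: (Data) -> Data) -> Data {
    // Inference runs entirely in memory; nothing is written to disk
    // and nothing is logged.
    let response = model(request.prompt)
    // The request goes out of scope here and is deallocated.
    return response
}
```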
What this means for users
For the end user, nothing changes in terms of privacy. Your requests to Siri continue to be processed by Apple, on Apple servers, in accordance with Apple's privacy policies. The fact that the AI engine comes from Google is invisible to you: you don't have to accept Google's terms, create a Google account, or share anything with Google.
The only difference is that for complex requests that require cloud processing, the model running on Apple's servers is now Gemini instead of an inferior in-house model.
5. Siri as a complete chatbot
One of the most visible changes for the user is Siri's transformation from a voice command assistant into a complete conversational chatbot. Think ChatGPT, but natively integrated into the iPhone.
What does this mean in practice
Today, interaction with Siri is transactional: you ask, it responds (or fails), the end. With Gemini, Siri will support multi-round conversations:
- You:"I want to plan a trip to Lisbon in June"
- Crab:"I found flights from Sao Paulo to Lisbon in June. The cheapest is R$3,200 with TAP, leaving on the 8th. Do you want me to check hotels too?"
- You:"Yes, close to the center, up to R$500 per night"
- Crab:"Three options: [list]. Hotel X has a 4.8 rating and is 5 minutes from Chiado. Do you want me to book the flight and hotel?"
- You:"Book the hotel, but remind me tomorrow to decide on the flight"
- Crab:"Hotel booked. Reminder created for tomorrow at 10am about the flight to Lisbon."
This conversation involves searching the web, comparing prices, integrating with a booking app, creating a reminder and context memory. The current Siri would not be able to complete even the second line. Siri 2.0 with Gemini does everything naturally.
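What makes a dialogue like this work is session memory: every turn is appended to the context the model sees on the next request, which is what lets "Yes, close to the center" resolve correctly. A minimal sketch of the idea, using hypothetical types rather than any real Apple API:

```swift
// Each turn is stored and replayed as context on the next request.
// "Turn" and "Conversation" are hypothetical illustrations.
struct Turn {
    let role: String      // "user" or "assistant"
    let text: String
}

struct Conversation {
    private(set) var turns: [Turn] = []

    mutating func add(_ role: String, _ text: String) {
        turns.append(Turn(role: role, text: text))
    }

    // The full history accompanies every request, so the model can
    // resolve references like "the hotel I researched yesterday".
    var promptContext: String {
        turns.map { "\($0.role): \($0.text)" }.joined(separator: "\n")
    }
}
```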
Revamped visual interface
Leaks indicate that Siri will gain an expanded visual interface, with information cards, visual comparisons and formatted responses -- similar to what Google does with Gemini on Android. Siri's colorful orb should remain, but the response area will take up more space on the screen to accommodate rich, interactive responses.
6. Screen awareness and multi-step tasks
Perhaps Siri 2.0's most ambitious feature: screen awareness. Siri will be able to "see" what's on your screen and act on it.
Examples of screen awareness
- You are reading an article in Safari: "Siri, summarize this article". Siri reads the content of the page and generates a summary without you needing to copy and paste anything
- You received a photo on WhatsApp: "Siri, what's in this photo?" Siri analyzes the image directly from the app
- You are in an email with an invitation: "Siri, add this event to my calendar". Siri extracts the date, time, location and description from the email and creates the event
- You are viewing a product on a website: "Siri, find it cheaper". Siri identifies the product and searches for prices on other websites
This is possible because Gemini is natively multimodal -- it processes images and text simultaneously. Siri can send a “screenshot” of what’s on the screen to the model and receive a contextual response.
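Conceptually, the flow is: capture the screen, pair it with the spoken question, and send both in a single request. The sketch below assumes a hypothetical `queryMultimodalModel` function as a stand-in; it is not a real Apple or Google API.

```swift
import UIKit

// Illustrative stub standing in for the model call; the real pipeline
// is not public.
func queryMultimodalModel(text: String, image: Data) async throws -> String {
    fatalError("stub for illustration only")
}

// Screen awareness sketch: one request carries both modalities, and the
// model answers in the context of what is visible on screen.
func answerAboutScreen(question: String,
                       screenshot: UIImage) async throws -> String {
    guard let imageData = screenshot.pngData() else {
        throw NSError(domain: "ScreenAwareness", code: 1)
    }
    return try await queryMultimodalModel(text: question, image: imageData)
}
```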
Multi-step tasks with confirmation
In addition to understanding the screen, Siri 2.0 will be able to perform sequences of actions across multiple apps:
- "Siri, take the address of the restaurant that Ana sent in iMessage and create an event on the calendar for Friday at 8pm with a route on Maps"
- Siri identifies Ana's message, extracts the address, creates the event and adds the route
- Before executing, it shows a preview: "I'm going to create: Dinner at [restaurant], Friday 8pm, with a 25min route. Confirm?"
- You confirm and everything is executed
The confirmation step is crucial. Apple doesn't want Siri to perform potentially erroneous actions without checking. The "preview + confirmation" model is the middle ground between full autonomy (risky) and the current Siri (useless for complex tasks).
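Here is a minimal sketch of that preview-plus-confirmation pattern, with hypothetical types (Siri's actual implementation is not public):

```swift
// "Preview + confirmation": plan first, show the user exactly what will
// happen, execute only after explicit approval. All types are
// hypothetical illustrations.
struct PlannedAction {
    let description: String   // e.g. "Create event: Dinner, Friday 8pm"
    let execute: () async throws -> Void
}

func run(_ plan: [PlannedAction],
         confirm: (String) async -> Bool) async throws {
    // 1. Build a human-readable preview of every step in the plan.
    let preview = plan.map(\.description).joined(separator: "\n")
    // 2. Nothing executes until the user approves the whole plan.
    guard await confirm(preview) else { return }
    // 3. Only after confirmation do the steps run, in order.
    for action in plan {
        try await action.execute()
    }
}
```

The design choice matters: because the whole plan is assembled and shown before any step runs, a misunderstood request costs the user one tap instead of a wrong booking.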
7. Apple admits that it cannot do AI alone
This is the part that no one at Apple wants to discuss publicly, but that the agreement makes clear: Apple was unable to build competitive language models internally.
It's not for lack of money. Apple has US$162 billion in cash. It's not for lack of talent -- the company has hired hundreds of AI researchers over the past three years. The problem is structural.
Why Apple Failed at AI
- Hardware culture: Apple is, in its DNA, a hardware company. The internal culture values industrial design, custom chips and physical experience. Language models are pure software, and the company has never had software of this kind as a top priority
- Privacy as a limitation: the insistence on processing everything on-device limited the size of the models Apple could use. On-device models need to be small and efficient. Competitive language models need to be huge and run in data centers
- Late start: while Google and OpenAI had been investing in transformers since 2017, Apple was focused on other priorities. A 5+ year gap in language model research is almost impossible to close
- Counterproductive secrecy: Apple's culture of secrecy prevented the company from participating in the open AI research community. Apple researchers couldn't publish papers, go to conferences, or collaborate externally. This isolated the AI team from the field's progress
- Talent retention: many AI researchers hired by Apple were frustrated by the lack of freedom to publish and the slowness in bringing research to product
The irony: Apple, famous for never depending on third parties for critical components, now depends on Google for the most important technology of the next decade. The total control that has defined Apple for 20 years simply doesn't apply to language models.
The hybrid strategy
Apple hasn't completely given up on its own AI. The strategy is hybrid:
- On-device (Apple): small models for quick tasks such as text autocomplete, photo classification, keyboard suggestions and object detection in the camera
- Cloud (Gemini): complex language tasks, reasoning, multimodal analysis and long text generation
- Long-term research: Apple continues to invest in its own foundational models, but accepts that it will not have parity with Google/OpenAI/Anthropic for at least 3-5 years
It is a pragmatic decision. Instead of launching a mediocre Siri with its own models, Apple preferred to have the best Siri possible with a third-party model. The end user doesn't care who made the model -- they care if Siri works.
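As a rough illustration of how such a hybrid split could be routed, consider the sketch below; the heuristic and the names are assumptions, not Apple's actual logic.

```swift
// Hybrid routing sketch: simple commands stay on-device, anything
// needing reasoning, memory, or generation goes to the server model.
enum Engine {
    case onDevice   // small local model: timers, alarms, classification
    case cloud      // large server model (Gemini on Private Cloud Compute)
}

func route(_ request: String) -> Engine {
    // A toy heuristic: known simple command prefixes run locally.
    let simplePrefixes = ["set a timer", "set an alarm", "turn on", "play"]
    let isSimple = simplePrefixes.contains {
        request.lowercased().hasPrefix($0)
    }
    return isSimple ? .onDevice : .cloud
}
```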
8. Timeline: iOS 26.4, iOS 27 and WWDC 2026
When will you see this new Siri on your iPhone? The timeline has not yet been officially confirmed, but leaks and analyses converge on a likely scenario:
WWDC 2026 (June)
Apple is expected to present Siri 2.0 as a highlight of the June keynote. Historically, WWDC is where Apple introduces new software features. The new Siri will be this year's "one more thing" -- or the central theme of the entire presentation.
Expect:
- Live demonstration of new capabilities (natural conversation, screen awareness, multi-step tasks)
- Announcement of the framework for developers to integrate with the new Siri
- Probably no mention of Google or Gemini -- Apple will position everything as next-generation "Apple Intelligence"
iOS 26.4 (second half 2026)
The first public version of Siri 2.0 should arrive as a point update in iOS 26, probably iOS 26.4. This follows Apple's pattern of releasing AI features as incremental updates (as it did with the original Apple Intelligence in iOS 18.1, 18.2, 18.4).
In this version, expect basic Gemini capabilities: improved natural conversation, text generation and simple reasoning. Advanced features like screen awareness and multi-step tasks may come later.
iOS 27 (September 2027)
The full version of Siri 2.0, with all announced capabilities, should be mature in iOS 27. This gives Apple a full year to iterate, fix bugs and expand integration with third-party apps.
| When | What to expect |
|---|---|
| Jun 2026 (WWDC) | Official announcement, live demo, beta for devs |
| Sep-Nov 2026 (iOS 26.4) | Natural conversation, text generation, basic reasoning |
| 2027 (iOS 27) | Screen awareness, multi-step tasks, deep integration with apps |
9. What does Google gain from this
Google didn't make this deal just for the money (although $1 billion a year isn't negligible). The motivation is strategic.
Massive distribution
Gemini, as a consumer product (the Gemini app), competes directly with ChatGPT and is losing in adoption. But if Gemini is Siri's engine, it will be running on 2 billion Apple devices without users even knowing. That's distribution no marketing campaign can buy.
Usage data (anonymized)
Although Google does not have access to individual users' data, the agreement likely includes aggregated and anonymized telemetry. Google can receive metrics such as: most common types of requests, success rates by category, language distribution and conversation lengths. This data helps improve Gemini without compromising individual privacy.
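If such telemetry exists, it would presumably take the form of category-level aggregates with no user identifiers or request contents. The schema below is purely a hypothetical illustration of what the article describes.

```swift
// Hypothetical sketch of aggregated, anonymized telemetry: only
// category-level counts leave the operator, never request contents
// or user identifiers.
struct CategoryMetrics: Codable {
    let category: String     // e.g. "calendar", "web_search"
    let requestCount: Int
    let successCount: Int

    var successRate: Double {
        requestCount == 0 ? 0 : Double(successCount) / Double(requestCount)
    }
}
```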
Validation and lock-in
Apple choosing Gemini over GPT is the most public validation Google could receive for its AI models. This strengthens the narrative that Gemini is competitive (or superior) to GPT for practical applications. And once Siri depends on Gemini for 3-5 years, Apple will have a hard time switching -- classic platform lock-in, but inverted.
Pressure on OpenAI
The agreement is also a message to OpenAI: Google does not need to win in consumer apps (ChatGPT vs the Gemini app) if it can dominate the infrastructure. If Gemini runs in Siri (Apple) and Google Assistant (Android), it is in practically every smartphone on the planet. OpenAI is limited to the ChatGPT app, which is popular but not integrated into anyone's operating system.
10. Market impact: what changes for you
If you work in technology, marketing or any area that uses digital tools, this agreement has concrete implications:
For iPhone users
- Siri will actually work: for the first time in years, it will be worth using Siri for tasks other than timers. The "there's no point in asking, it doesn't understand" barrier will fall
- Less need for separate AI apps: if Siri does what ChatGPT does, many users will stop opening ChatGPT for everyday tasks
- Privacy maintained: unlike third-party AI apps, Siri with Gemini on Apple servers keeps privacy inside the ecosystem
For marketing professionals
- Voice search will grow: with a competent Siri, more people will search for information by voice. If you're not optimizing for voice search, start now
- Assistants as a channel: think of Siri as another distribution channel. If Siri recommends restaurants, hotels and products, being visible to AI assistants will become as important as SEO
- Corporate chatbots need to improve: when the user's reference point is a Gemini-powered Siri, mediocre chatbots on corporate websites will look even worse
For developers
- SiriKit and App Intents become more important: apps that integrate deeply with Siri will have a competitive advantage. If Siri can perform actions inside your app via voice command, your app becomes exponentially more useful (see the shortcut sketch after this list)
- Language models as a commodity: the fact that Apple licensed Gemini instead of building its own reinforces that language models are becoming infrastructure, like servers or databases. The differentiator is not having the model, but how you integrate it
- Multi-model and the future: Apple may use Gemini for Siri but Claude for development (Xcode) and GPT for research. The multi-model world is already a reality
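As a sketch of that deeper integration, this is how an app registers natural phrases for an intent using Apple's real AppShortcutsProvider API; the intent and phrases here are hypothetical, reusing the coffee example from earlier in the article.

```swift
import AppIntents

// Registers natural phrases so Siri can trigger the intent by voice.
// AppShortcutsProvider and AppShortcut are real framework types; the
// intent, phrases and symbol are hypothetical examples.
struct CoffeeAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: OrderCoffeeIntent(),
            phrases: [
                "Order my coffee with \(.applicationName)",
                "Get me a coffee in \(.applicationName)"
            ],
            shortTitle: "Order Coffee",
            systemImageName: "cup.and.saucer"
        )
    }
}
```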
The message for those working in AI: it doesn't matter if you are team OpenAI, team Google or team Anthropic. What matters is knowing how to use the tools. Apple, the most valuable company in the world, just admitted that it can't do everything alone. If Apple needs AI partners, you also need the best tools available.
2026 is shaping up to be the year in which AI stopped being "a cool feature" and became critical infrastructure across all major platforms: Microsoft with autonomous agents, Apple with Gemini in Siri, Google dominating the consumer AI backend, Anthropic expanding agents with Claude Code. Those who master the tools now have a compounding advantage for the coming years.
FAQ
How much is Apple paying Google for Gemini?
According to sources close to the agreement, approximately US$1 billion per year. The agreement is multi-year, with exclusivity clauses for voice assistants on consumer devices. The price may vary based on actual usage volume.
Does Google get access to Siri users' data?
Not directly. Apple uses a customized version of Gemini that runs on its own servers, within the Private Cloud Compute infrastructure. User data does not go to Google. Apple maintains full control over privacy, using Gemini as a language engine but with its own layers of security.
When will the new Siri be released?
The expectation is that it will be presented at WWDC 2026 (June) and launched with iOS 26.4 in the second half of 2026, with the full version arriving in iOS 27 in September 2027. Apple has not yet officially confirmed this, but consistent leaks point to this timeline.
Has Apple given up on building its own AI?
Not completely. Apple continues to develop smaller models that run on-device (on iPhone and Mac). The agreement with Google covers advanced capabilities that require large models on servers. The strategy is hybrid: proprietary models for simple tasks and Gemini for complex tasks. But Apple accepts that it won't have parity with Google or OpenAI for at least 3-5 years.