
Apple Pays Google $1 Billion/Year for New Siri with Gemini

minhaskills.io · 4 Apr 2026 · 16 min read

Apple, the company that has always prided itself on controlling every piece of its ecosystem -- from chip to software -- just signed a US$1 billion-per-year check to Google. The reason: Siri needs Gemini to finally function as a true artificial intelligence assistant.

It is no exaggeration to say that this agreement redefines the landscape of consumer AI. The largest hardware company in the world has implicitly admitted that it cannot compete in language models with companies that build them as a core business. And instead of launching another mediocre product under the "Apple Intelligence" label, it decided to pay billions to put the best available AI engine underneath Siri.

Let's break it all down: how much it costs, how privacy works, what Siri will be able to do, when you'll see it on your iPhone, and what this move means for the AI market as a whole.

1. The deal: $1 billion a year for Gemini

According to sources close to the negotiations, Apple reached a multi-year agreement with Google to license a customized version of Gemini as Siri's main artificial intelligence engine. The estimated value is US$1 billion per year, which may vary based on usage volume.

To put it in perspective: Apple already pays Google between US$18 and US$20 billion per year to make Google the default search engine on Safari. This $1 billion AI deal is smaller in absolute value, but potentially more transformative for Apple products.

Agreement structure

Historical context: Apple briefly used Bing in 2023 for search results in Spotlight, but later backtracked. With AI, the company has apparently learned that compromise doesn't work -- either you have the best model available or it's not worth launching.

Why Gemini and not GPT?

That's the question everyone asked. OpenAI, with GPT-4o and GPT-5, was the obvious choice. But three factors weighed in Google's favor:

  1. Native multimodality: Gemini was designed from the ground up to process text, images, audio and video simultaneously. GPT added multimodality later. For Siri, which needs to understand what's on the screen, hear voice commands and interpret photos, Gemini's architecture is a more natural fit.
  2. On-device efficiency: Google has experience with models that partially run on the device (Gemini Nano). This aligns with Apple's philosophy of processing as much as possible locally.
  3. Existing relationship: Apple and Google already have a massive commercial relationship (the search agreement). Adding AI to this partnership is a natural extension, with contracts and legal terms already mapped out.

There was also, according to rumors, a control issue. Apple wanted a partner that would provide the model but not try to dominate the user experience. With OpenAI, there were fears that ChatGPT would become more visible than Siri. With Google, the agreement is clearly infrastructure -- Gemini is invisible to the end user.

2. Siri's problem: it only works 2/3 of the time

To understand why Apple made such a drastic decision, you need to understand the size of Siri's problem.

Apple internal tests (leaked in 2025) showed that Siri only successfully completes 2 out of every 3 user requests. In other words, 1 out of every 3 times you talk to Siri, it fails -- it doesn't understand, responds incorrectly or says "I can't help you with that".

The numbers are embarrassing

Assistant              Success rate (2025)   User satisfaction
ChatGPT (mobile app)   ~89%                  4.7/5
Google Assistant       ~85%                  4.3/5
Alexa                  ~78%                  4.0/5
Siri                   ~67%                  3.2/5

Siri is the default assistant for more than 2 billion active Apple devices. Having the worst AI assistant on the market isn't just embarrassing -- it's an existential risk to Apple's narrative as a premium technology company.

What goes wrong

The result is that iPhone users have trained themselves not to use Siri for anything beyond timers, alarms and "hey Siri, play music". The assistant that should be the iPhone's main interface has become a basic utility.

3. Siri 2.0: what changes with Gemini underneath

With Gemini as its engine, Siri gains capabilities that were technically impossible with the previous architecture. It's not an incremental update -- it's a rebuild.

Capabilities confirmed (via leaks and internal sources)

What doesn't change (for now)

Stay ahead with updated skills

The AI race doesn't stop. Those who have ready-made skills in Claude Code adapt more quickly to each new development. 748+ skills covering marketing, dev, SEO, copy and automation.

View the Mega Bundle — $9

4. Privacy: Gemini customized on Apple servers

The biggest concern with this deal was predictable: privacy. Apple has built its entire brand around "your data stays on your device." Using a Google template inside Siri seems to contradict this.

Apple's solution is ingenious and complex at the same time.

Private Cloud Compute + Gemini

Apple is not sending user data to Google's servers. What Apple did was:

  1. License model weights: Apple received a copy of the customized Gemini and runs it on its own servers
  2. Private Cloud Compute infrastructure: the servers that process Siri requests use Apple Silicon chips, with hardware security (Secure Enclave) and no access from Google
  3. Ephemeral processing: user data is processed in memory, never stored on disk, and deleted after the request is completed
  4. Independent audit: third parties can verify that the servers actually do what Apple says (the Private Cloud Compute audit program has been around since 2025)

In practice, Gemini runs inside an Apple “vault”. Google provided the model, but does not have access to the data that passes through it. It is a clear separation between technology provider (Google) and data controller (Apple).

Simple analogy: it's like buying a BMW engine to put in a Porsche. BMW sold the technology, but it has no access to the car, the driver, or where it's going. Porsche controls everything.
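To make the separation concrete, here is a minimal Python sketch of the ephemeral-processing flow described above. Every name in it (the `PrivateCloudCompute` class, the stand-in `model`) is illustrative; the real Private Cloud Compute internals are not public.

```python
# Toy model of the flow: Apple-controlled servers run a licensed Gemini copy,
# process each request in memory, and keep nothing afterwards.
class PrivateCloudCompute:
    def __init__(self, model):
        self.model = model     # licensed weights, hosted by Apple, not Google
        self.disk_writes = 0   # "ephemeral processing": this must stay at 0

    def handle(self, request: str) -> str:
        buffer = {"request": request}             # held in memory only
        response = self.model(buffer["request"])  # Google never sees this data
        buffer.clear()                            # deleted once the request completes
        return response

# Stand-in "model": in reality this would be the customized Gemini.
pcc = PrivateCloudCompute(model=lambda text: f"answer to: {text}")
print(pcc.handle("what's on my calendar?"))  # → answer to: what's on my calendar?
print(pcc.disk_writes)                       # → 0 (nothing persisted)
```

The point of the sketch is the ownership boundary: the model object is Google's technology, but the object that receives, holds and deletes user data is entirely Apple's.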

What does this mean for users

For the end user, nothing changes in terms of privacy. Your requests to Siri continue to be processed by Apple, on Apple servers, in accordance with Apple's privacy policies. The fact that the engine is Google's is transparent to you -- you don't have to accept Google's terms, create a Google account, or share anything with Google.

The only difference is that for complex requests that require cloud processing, the model running on Apple's servers is now Gemini instead of an inferior in-house model.

5. Siri as a complete chatbot

One of the most visible changes for the user is the transformation of Siri from a voice-command assistant into a complete conversational chatbot. Think ChatGPT, but natively integrated into the iPhone.

What does this mean in practice

Today, interaction with Siri is transactional: you ask, it responds (or fails), the end. With Gemini, Siri will support multi-round conversations:

This conversation involves searching the web, comparing prices, integrating with a booking app, creating a reminder and context memory. The current Siri would not be able to complete even the second line. Siri 2.0 with Gemini does everything naturally.
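A multi-round conversation of that kind depends on context memory: each new turn is interpreted with the full history of previous turns. A minimal Python sketch, with a stand-in model (this is not the real Gemini API):

```python
# Illustrative sketch of multi-turn conversation with context memory.
class Conversation:
    def __init__(self, model):
        self.model = model
        self.history = []                  # context carried across turns

    def ask(self, message: str) -> str:
        self.history.append(("user", message))
        reply = self.model(self.history)   # the model sees every prior turn
        self.history.append(("assistant", reply))
        return reply

# Stand-in model: just reports how many turns of context it received.
chat = Conversation(model=lambda h: f"reply #{len(h) // 2 + 1}")
print(chat.ask("find flights to Lisbon"))  # → reply #1
print(chat.ask("only the cheapest one"))   # → reply #2 (turn 1 is in context)
```

The current Siri effectively discards `history` after each exchange; the Gemini-backed version is expected to keep it, which is what makes follow-ups like "only the cheapest one" resolvable.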

Revamped visual interface

Leaks indicate that Siri will gain an expanded visual interface, with information cards, visual comparisons and formatted responses -- similar to what Google does with Gemini on Android. Siri's colorful orb should remain, but the response area will take up more space on the screen to accommodate rich, interactive responses.

6. Screen awareness and multi-step tasks

Perhaps Siri 2.0's most ambitious feature: screen awareness. Siri will be able to "see" what's on your screen and act on it.

Examples of screen awareness

This is possible because Gemini is natively multimodal -- it processes images and text simultaneously. Siri can send a “screenshot” of what’s on the screen to the model and receive a contextual response.

Multi-step tasks with confirmation

In addition to understanding the screen, Siri 2.0 will be able to perform sequences of actions across multiple apps:

  1. "Siri, take the address of the restaurant that Ana sent in iMessage and create an event on the calendar for Friday at 8pm with a route on Maps"
  2. Siri identifies Ana's message, extracts the address, creates the event and adds the route
  3. Before executing, it shows a preview: "I'm going to create: Dinner at [restaurant], Friday 8pm, with a 25min route. Confirm?"
  4. You confirm and everything is executed

The confirmation step is crucial. Apple doesn't want Siri to perform potentially erroneous actions without checking. The "preview + confirmation" model is the middle ground between full autonomy (risky) and the current Siri (useless for complex tasks).
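The "preview + confirmation" pattern described above is easy to sketch. Everything here is hypothetical (the `Action` class, `run_with_confirmation`); it only illustrates the control flow, not any Apple API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str

    def execute(self) -> str:
        return f"done: {self.description}"

def run_with_confirmation(actions, confirm):
    """Show a preview of every planned action; execute only if the user approves."""
    preview = "I'm going to: " + "; ".join(a.description for a in actions)
    if not confirm(preview):        # nothing runs without explicit approval
        return []
    return [a.execute() for a in actions]

plan = [
    Action("create event: Dinner, Friday 8pm"),
    Action("add 25min route on Maps"),
]
# Auto-approving "user" for the demo; a real UI would display the preview.
results = run_with_confirmation(plan, confirm=lambda preview: True)
print(results)
```

The design choice is that the assistant plans everything first, then gates execution behind a single approval: if the user declines, no partial side effects have happened.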

7. Apple admits that it cannot do AI alone

This is the part that no one at Apple wants to discuss publicly, but that the agreement makes clear: Apple was unable to build competitive language models internally.

It's not for lack of money. Apple has US$162 billion in cash. It's not for lack of talent -- the company has hired hundreds of AI researchers over the past three years. The problem is structural.

Why Apple Failed at AI

The irony:Apple, famous for never depending on third parties for critical components, now depends on Google for the most important technology of the next decade. The total control that has defined Apple for 20 years simply doesn't apply to language models.

The hybrid strategy

Apple hasn't completely given up on its own AI. The strategy is hybrid:

It is a pragmatic decision. Instead of launching a mediocre Siri with its own models, Apple preferred to have the best Siri possible with a third-party model. The end user doesn't care who made the model -- they care if Siri works.
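The hybrid strategy boils down to a routing decision: simple requests stay on the device with Apple's small models, complex ones go to Gemini on Apple's servers. A hypothetical sketch (the intent list and routing heuristic are invented for illustration):

```python
# Illustrative routing between on-device and cloud processing.
# Real Siri intent classification is far more sophisticated than substring matching.
SIMPLE_INTENTS = {"timer", "alarm", "play music"}

def route(request: str) -> str:
    """Send simple requests to the local model, complex ones to the cloud."""
    if any(intent in request.lower() for intent in SIMPLE_INTENTS):
        return "on-device model"    # fast, private, works offline
    return "cloud Gemini (Private Cloud Compute)"

print(route("set a timer for 10 minutes"))   # handled locally
print(route("plan a weekend trip to Rome"))  # needs the large model
```

This split explains why the deal is about "advanced capabilities": the on-device tier already works, and Gemini only has to cover the long tail of requests the local models cannot handle.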

8. Timeline: iOS 26.4, iOS 27 and WWDC 2026

When will you see this new Siri on your iPhone? The timeline has not yet been officially confirmed, but leaks and analyses converge on a likely scenario:

WWDC 2026 (June)

Apple is expected to present Siri 2.0 as a highlight of the June keynote. Historically, WWDC is where Apple introduces new software features. The new Siri will be this year's "one more thing" -- or the central theme of the entire presentation.

Expect:

iOS 26.4 (second half 2026)

The first public version of Siri 2.0 should arrive as a point update in iOS 26, probably iOS 26.4. This follows Apple's pattern of releasing AI features as incremental updates (as it did with the original Apple Intelligence in iOS 18.1, 18.2, 18.4).

In this version, expect basic Gemini capabilities: improved natural conversation, text generation and simple reasoning. Advanced features like screen awareness and multi-step tasks may come later.

iOS 27 (September 2027)

The full version of Siri 2.0, with all announced capabilities, should be mature in iOS 27. This gives Apple a full year to iterate, fix bugs and expand integration with third-party apps.

When                     What to expect
Jun 2026 (WWDC)          Official announcement, live demo, beta for devs
Sep-Nov 2026 (iOS 26.4)  Natural conversation, text generation, basic reasoning
2027 (iOS 27)            Screen awareness, multi-step tasks, deep integration with apps

9. What does Google gain from this

Google didn't make this deal just for the money (although $1 billion a year isn't negligible). The motivation is strategic.

Massive distribution

Gemini, as a consumer product (the Gemini app), competes directly with ChatGPT and is losing in adoption. But if Gemini is Siri's engine, it will be running on 2 billion Apple devices without users even knowing. That's distribution that no marketing campaign can buy.

Usage data (anonymized)

Although Google does not have access to individual users' data, the agreement likely includes aggregated and anonymized telemetry. Google can receive metrics such as: most common types of requests, success rates by category, language distribution and conversation lengths. This data helps improve Gemini without compromising individual privacy.

Validation and lock-in

Apple choosing Gemini over GPT is the most public validation Google could receive for its AI models. This strengthens the narrative that Gemini is competitive (or superior) to GPT for practical applications. And once Siri depends on Gemini for 3-5 years, Apple will have a hard time switching -- classic platform lock-in, but inverted.

Pressure at OpenAI

The agreement is also a message to OpenAI: Google does not need to win in consumer products (ChatGPT vs the Gemini app) if it can dominate the infrastructure. If Gemini runs in Siri (Apple) and Google Assistant (Android), it is in practically every smartphone on the planet. OpenAI is limited to the ChatGPT app, which is popular but not integrated into anyone's operating system.

10. Market impact: what changes for you

If you work in technology, marketing or any area that uses digital tools, this agreement has concrete implications:

For iPhone users

For marketing professionals

For developers

The message for those working in AI: It doesn't matter if you are team OpenAI, team Google or team Anthropic. What matters is knowing how to use the tools. Apple -- the most valuable company in the world -- just admitted that it can't do everything alone. If Apple needs AI partners, you also need the best tools available.

2026 is shaping up as the year in which AI stopped being "a cool feature" and became critical infrastructure for all major platforms. Microsoft with autonomous agents, Apple with Gemini in Siri, Google dominating the consumer AI backend, Anthropic expanding agents with Claude Code. Those who master the tools now have a compound advantage for the coming years.

Don't wait for the next news. Act now.

While companies launch new models, you can be using the best of them with professional skills. Claude Code + 748+ skills = maximum productivity. $9.

I Want the Skills — $9
SPECIAL OFFER — LIMITED TIME

The Largest AI Skills Package on the Market

748+ Skills + 12 Bonus Packs + 120,000 Prompts

748+
Professional Skills
Marketing, SEO, Copy, Dev, Social
12
GitHub Bonus Packs
8,107 skills + 4,076 workflows
100K+
AI Prompts
ChatGPT, Claude, Gemini, Midjourney
135
Ready-Made Agents
Automation, data, business, dev

Was $39

$9

One-time payment • Lifetime access • Free updates

GET THE MEGA BUNDLE NOW

Install in 2 minutes • Works with Claude Code, Cursor, ChatGPT • 7-day guarantee

✓ SEO & GEO (20 skills) ✓ Copywriting (34 skills) ✓ Dev (284 skills) ✓ Social Media (170 skills) ✓ n8n Templates (4,076)

FAQ

How much is Apple paying for Gemini?

According to sources close to the agreement, approximately US$1 billion per year. The agreement is multi-year, with exclusivity clauses for voice assistants on consumer devices. The price may vary based on actual usage volume.

Does my Siri data go to Google?

Not directly. Apple uses a customized version of Gemini that runs on its own servers, within the Private Cloud Compute infrastructure. User data does not go to Google. Apple maintains full control over privacy, using Gemini as a language engine but with its own layers of security.

When does the new Siri launch?

The expectation is that it will be presented at WWDC 2026 (June) and launched with iOS 26.4 in the second half of 2026, with the full version arriving in iOS 27 in September 2027. Apple has not yet officially confirmed this, but consistent leaks point to this timeline.

Has Apple given up on its own AI models?

Not completely. Apple continues to develop smaller models to run on-device (on iPhone and Mac). The agreement with Google is for advanced capabilities that require large models on servers. The strategy is hybrid: proprietary models for simple tasks and Gemini for complex tasks. But Apple accepts that it won't have parity with Google or OpenAI for at least 3-5 years.
