Can Google Conquer Its Nvidia Dependency?
Gemini was trained on Google’s in-house chips, which look like a cheaper and more efficient alternative to Nvidia’s cutting-edge products.

So far at least, the AI hype has been a rising tide that lifts all boats. But now? Google’s gain is suddenly Nvidia’s drain.
The debut of Google’s flashy new Gemini 3 AI model impressed the industry with capabilities approaching rival products from OpenAI, Anthropic and Meta. Perhaps more important, however, is that Gemini was trained on Google’s in-house Tensor Processing Units (TPUs), which look like a cheaper and more efficient alternative to Nvidia’s cutting-edge chips. And all of a sudden, investors are starting to see the hazy outlines of a new king who, in fact, looks like the old king, dethroned and humiliated by Sam Altman’s Game of Thrones power play.
Have Their Cake and Eat It TPU
After all, it was Google scientists who published a landmark research paper in 2017, called “Attention is All You Need,” which introduced the “transformer” deep-learning architecture that has served as the bedrock of the AI boom. Then came ChatGPT, quickly followed by a freakout that chatbots could kill Google search and, soon after, the rushed debut of Bard, a PR debacle that vaporized $100 billion of Google’s market value in early 2023. Now, even Altman is saying Gemini’s advancements will cause “the vibes out there to be rough for a bit” for OpenAI, according to a memo seen by The Information.
Even worse vibes, however, are being felt by Nvidia. The successful deployment of the TPUs obviously reduces Google’s reliance on Nvidia. But Google’s steadily growing cloud business, which offers compute power using both its own TPUs and Nvidia’s chips, could cut the legs out from under cloud providers (cough cough, Oracle) that have spent the past year-plus amassing a stockpile of costly Nvidia chips. And already, Google’s TPU business has customers lining up:
- In October, Anthropic announced that it had agreed to buy up to 1 million Google TPUs in a deal valued at tens of billions of dollars. On Monday, The Information reported that Meta is planning to use the TPUs in data centers scheduled to be built in 2027.
- Meanwhile, Google’s cloud business recently reported third-quarter revenue of $15.2 billion. That remains comfortably behind Microsoft’s Azure and Amazon Web Services, but still marks a 34% year-over-year increase.
“Several of our largest inference customers are suddenly asking about TPUs” on Google Cloud Platform, Laurent Gil, co-founder and president of cloud management firm Cast AI, told The Daily Upside. “They view TPUs as both more available and less expensive than competing for [Nvidia] H100 capacity. They see [Google’s TPUs] as a fast path to scale. The catch is the software ecosystem. Nvidia still has an advantage with CUDA [Compute Unified Device Architecture], but TPUs are a real player now.”
Cut Loose: Google is hardly alone in its fight for chip independence. Meta, too, has poured cash into designing chips in-house. Ditto Microsoft and OpenAI. Still, the AI war is far from a zero-sum game … for now at least. “We’re not at the point where we have to worry about who’s winning and who’s losing,” Bernstein senior analyst Stacy Rasgon said on CNBC earlier this week. “The pie is still hopefully getting bigger.”