Google Declares All-Out AI Semiconductor War, with Meta Emerging as a Trillion-Won Customer


By Global Team

Google has officially challenged NVIDIA in the artificial intelligence (AI) semiconductor market.

The company has changed its strategy to supply its proprietary chip, the Tensor Processing Unit (TPU), which was previously used only for its cloud services, to external companies.

Tech outlet The Information reported on the 24th (local time) that "Google is pursuing a plan to install its AI chips directly in customer data centers," naming Meta as the first major customer for the plan.

Google declares full-fledged AI semiconductor war… Meta emerges as a major customer worth billions

This is Google's first attempt to sell TPUs, previously reserved for internal cloud use, to outside companies, a strategic shift that could shake NVIDIA's long-standing dominance of the AI semiconductor market.

According to reports, Meta is negotiating with Google over directly integrating Google TPUs into its data centers starting in 2027. The deal is expected to be worth billions of dollars. Additionally, Meta is said to be considering renting TPU computing resources from Google Cloud from 2026.

Currently, Meta relies on NVIDIA's Graphics Processing Units (GPUs) for AI training and large-scale data processing. However, with AI demand surging and NVIDIA's chip supply falling short, Meta appears to be making this strategic choice to secure an alternative supply chain.

If Meta actually adopts TPUs, there could be a significant shift in the AI semiconductor ecosystem currently centered around NVIDIA. Key components in data centers that enable AI model training could partially transition from GPUs to TPUs.

Google has so far rented TPUs only within its cloud platform ‘Google Cloud.’ However, it now proposes a method where companies can directly install TPUs in their on-premises data centers. This allows AI computations to be performed within an enterprise’s own secured network without transferring AI data to an external cloud.

Google particularly targets financial institutions and high-frequency trading firms, which prioritize security and prefer internal servers due to data security and regulatory compliance reasons.

Google explained, “By directly installing TPUs on-site, we can meet security and compliance demands while allowing high-performance AI training.”

This strategy signals an intention to directly enter the semiconductor hardware market, extending beyond merely renting cloud services.

The competition for leadership in the AI chip market is becoming increasingly fierce. Most AI model development currently relies on NVIDIA GPUs. However, NVIDIA chips are expensive, and their supply is limited. Thus, Google aims to expand its own chips to capture a portion of NVIDIA’s market share.

The Information cited internal discussions at Google predicting that expanding TPU supply could capture up to 10% of NVIDIA's annual revenue.

NVIDIA's annual revenue stands at approximately $80 billion as of 2024, so capturing just 10% would amount to around $8 billion (roughly 11 trillion won).

The market immediately reacted following this report. Alphabet’s (Google’s parent company) stock price rose 2.1% in after-hours trading on the 25th (local time), while NVIDIA’s stock price fell 1.8%. Investors appear to perceive Google’s expansion into AI semiconductors as a tangible threat.

Experts view this agreement as going beyond a simple hardware supply contract, with the potential to alter the power structure of the AI industry. While the computational resources needed for AI training are growing explosively, the supply chain is still dominated by NVIDIA. If Google opens up its TPUs externally, companies will have more options when choosing chips.

An industry insider highlighted, “If Google opens up its chips, cloud competitors like Amazon and Microsoft are likely to follow suit,” adding that “the AI chip market may shift from an NVIDIA-dominated system to a multipolar structure.”

Google is currently expanding its AI semiconductor ecosystem around the latest 'v5' series of TPUs. TPUs are dedicated chips specialized for machine learning, offering higher energy efficiency than conventional GPUs and optimization for large-scale parallel computation.

Citing their speed and cost advantages in training AI models, Google promotes them as "the key infrastructure for corporate AI innovation."

If Meta actually adopts Google’s TPUs starting in 2027, the AI infrastructure market is expected to enter a new phase. NVIDIA’s absolute influence may weaken, and Google will extend its footprint from a cloud company to a semiconductor manufacturer. Meanwhile, experts predict that as competition in the AI chip market intensifies, technological innovation will accelerate, but price competition is likely to intensify as well.

The ‘multi-chip strategy,’ in which companies combine various types of chips to reduce AI computation costs, may also spread.

AI semiconductors are no longer merely a technological competition but are emerging as strategic assets influencing supremacy across cloud, data, and industry sectors.

Google’s recent move is expected to be recorded as the initial step in shifting the balance of power in the AI industry.
