
50% Faster and 60% More Efficient: Google Axion CPU Compared to x86

Google claims that Axion offers up to 30% better performance than the fastest existing general-purpose Arm-based virtual machines in the cloud

Introduction

During the Google Cloud Next event in Las Vegas, Google Cloud unveiled its first Arm-based custom silicon chip, the Axion CPU. Industry experts have eagerly anticipated this announcement, with many wondering when, rather than if, Google Cloud would develop its own chips. The reveal closely follows Microsoft’s introduction of its Cobalt CPU just a few months prior.

This news also comes about 18 months after the cloud service provider announced a strategic engagement with Arm-CPU vendor Ampere to deploy its chips across Google Cloud data centers.


The Birth of Axion CPU

Google’s unveiling of the Axion CPU marks a significant milestone in the company’s effort to enhance its AI infrastructure. Tailored for AI workloads in data centers, the Axion CPU is built on Arm’s high-performance Neoverse V2 platform.

Here are some key takeaways about the Google Axion:

Improved Performance and Efficiency: Google claims that Axion offers up to 30% better performance than the fastest existing general-purpose Arm-based virtual machines in the cloud, along with up to 50% better performance and 60% better energy efficiency compared to current x86-based systems.

Focus on Data Centers: The chip is designed for Google Cloud and will be available to Google Cloud customers later in 2024.

Part of a Larger Trend: Google’s announcement comes amidst a growing trend of big tech companies like Amazon and Microsoft developing their own custom silicon chips to meet the demands of artificial intelligence (AI) and cloud computing.

This is a significant development in the tech world, as it shows Google’s commitment to building its own hardware infrastructure for its cloud services.

Google Axion CPU Design

Axion’s design is aimed at delivering optimal performance for Google’s AI services while ensuring a seamless transition for customers with existing Arm-based workloads.

Arm Neoverse is a family of 64-bit Arm processor IP cores licensed by Arm Holdings. These cores are specifically designed for data center, edge computing, and high-performance computing applications.

Neoverse V-Series: These processors are geared toward high-performance computing.

Neoverse N-Series: These processors are designed for core datacenter usage.
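One practical aspect of the “seamless transition” for existing Arm-based workloads is that build and deployment scripts sometimes need to know which architecture they are running on. A minimal sketch using only the Python standard library (the name sets below are assumptions covering common OS spellings, not an exhaustive list):

```python
import platform

# Common machine-name strings for 64-bit Arm and x86 hosts.
# These vary by OS; the sets below are illustrative, not exhaustive.
ARM64_NAMES = {"aarch64", "arm64"}
X86_64_NAMES = {"x86_64", "amd64", "AMD64"}

def host_arch() -> str:
    """Normalize the host machine name to 'arm64' or 'x86_64'."""
    machine = platform.machine()
    if machine in ARM64_NAMES:
        return "arm64"
    if machine in X86_64_NAMES:
        return "x86_64"
    return machine  # pass anything else through unchanged

print(host_arch())
```

A script like this might, for example, select a prebuilt arm64 or x86_64 binary, so the same deployment pipeline works on Axion-based and x86-based instances alike.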

Google Axion CPU: Performance and Potential


Reports suggest that the Axion CPU is poised to outperform the fastest general-purpose Arm-based instances by up to 30%, while offering up to 50% better performance and up to 60% better energy efficiency than comparable current-generation x86-based instances. This leap in performance underscores Google’s commitment to providing cutting-edge solutions for AI-centric tasks.
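Taken together, the quoted figures imply a rough energy picture. A sketch of the arithmetic, assuming “50% better performance” means 1.5x throughput and “60% better energy efficiency” means 1.6x performance per watt, both versus the same x86 baseline workload:

```python
# Illustrative arithmetic based on the article's quoted figures.
# Assumption: both ratios apply to the same workload and baseline.

perf_ratio = 1.5           # Axion throughput vs. x86 baseline
perf_per_watt_ratio = 1.6  # Axion performance per watt vs. x86 baseline

# Power draw relative to x86 while delivering 1.5x the throughput:
power_ratio = perf_ratio / perf_per_watt_ratio  # 1.5 / 1.6 = 0.9375

# Energy to finish a FIXED amount of work (power x time, time = 1/perf):
energy_per_task_ratio = power_ratio / perf_ratio  # = 1 / 1.6 = 0.625

print(f"Relative power draw:      {power_ratio:.4f}")
print(f"Relative energy per task: {energy_per_task_ratio:.4f}")
```

In other words, if the claims hold, an Axion instance would draw slightly less power than its x86 counterpart while doing 1.5x the work, and would use about 37.5% less energy to complete the same task.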

The projected performance boost paves the way for enhanced efficiency and scalability across various Google Cloud services, from Google Compute Engine to Google Kubernetes Engine.

That’s why Google has already begun running services such as Bigtable, Spanner, BigQuery, Blobstore, Pub/Sub, Google Earth Engine, and the YouTube Ads platform on current-generation Arm-based servers, and plans to deploy and scale these services and more on Axion soon.

Read more: Can ARM processors Dethrone x86 king or Will they Learn to Coexist? – techovedas

TPU v5p: Powering Generative AI Models

In tandem with the Axion CPU, Google introduces an upgraded version of its Tensor Processing Units – TPU v5p. These specialized AI chips are purpose-built for training large and demanding generative AI models.

A single TPU v5p pod integrates a staggering 8,960 chips, more than double the capacity of its predecessor, the TPU v4 pod, significantly amplifying Google’s AI training capabilities.

Since 2015, Google has released five generations of Tensor Processing Units (TPUs). In 2018, it released its first Video Coding Unit (VCU), achieving up to 33x greater efficiency for video transcoding. In 2021, it doubled down on custom compute by investing in system-on-a-chip (SoC) designs and released the first of three generations of Tensor chips for mobile devices.

Differentiation in Approach

Google’s venture into custom silicon follows similar moves by industry peers. Microsoft recently unveiled its own custom silicon chips tailored for cloud infrastructure, while Amazon has long offered Arm-based servers powered by its custom Graviton CPUs, most recently Graviton3.

Where Google differs is in its integration strategy: it plans to integrate Axion into its own cloud services rather than selling the chips directly to customers, giving businesses the opportunity to rent and utilize advanced AI capabilities seamlessly.

Read more: 3.5x Faster: Meta Unveils Monster AI Chip – techovedas

Industry Implications and Future Prospects

Google’s bold move into custom silicon development is indicative of the increasing importance of AI in cloud computing. As AI workloads continue to proliferate, the demand for specialized hardware solutions is poised to soar. Google’s Axion CPU and TPU v5p meet current AI needs and push the company to the forefront of AI infrastructure innovation. This highlights Google’s dedication to advancing AI and improving the scalability and performance of its cloud services.

