NVLink Fusion: Nvidia Unveils AI Supercharge with Qualcomm, Fujitsu, and MediaTek Collaborations

Introduction

Nvidia is taking a major step to widen its AI ecosystem with the launch of NVLink Fusion, a program that allows other chipmakers to integrate their custom CPUs and AI accelerators with Nvidia’s AI infrastructure.

Announced at Computex 2025 in Taipei, the initiative brings in partners like Qualcomm, Fujitsu, and MediaTek, enabling them to connect their processors directly to Nvidia GPUs.

This move aims to drive AI scalability and efficiency across massive data centers.


NVLink Fusion: 5 Key Points to Know

Open Interconnect: NVLink Fusion allows non-Nvidia chips to communicate directly with Nvidia GPUs.

Major Partners: Qualcomm, Fujitsu, MediaTek, and Marvell are onboard to integrate NVLink in their custom CPUs and AI accelerators.

Rack-Scale AI: The program targets large-scale AI systems with mixed silicon architectures.

High-Speed Communication: NVLink Fusion offers up to 14x the bandwidth of PCIe.

AI Software Integration: Nvidia Mission Control software streamlines system management and validation.


What is NVLink Fusion?

NVLink Fusion is a strategic move by Nvidia to extend its proprietary NVLink technology to external chipmakers.

Traditionally, Nvidia’s NVLink served as a high-bandwidth interconnect for its own GPUs, enabling faster communication compared to PCIe.

With NVLink Fusion, Nvidia opens the interface to other chipmakers. This allows custom CPUs and AI accelerators from partners like Qualcomm and Fujitsu to connect seamlessly with Nvidia’s GPUs in rack-scale AI systems.

Background: Why NVLink Matters for AI

Nvidia’s NVLink is a game-changing interconnect technology that delivers ultra-high-speed data transfer between GPUs and CPUs.

By opening it up to third-party silicon through NVLink Fusion, Nvidia is positioning itself to dominate the AI infrastructure landscape.

Here’s why NVLink is crucial for AI systems:

Exceptional Bandwidth: NVLink delivers up to 900 GB/s of bandwidth, far exceeding PCIe 5.0, ensuring faster data transfer between GPUs and CPUs.

Direct Communication: It enables GPU-to-GPU and CPU-to-GPU communication, reducing latency and maximizing throughput for AI workloads.

Scalability for AI Clusters: NVLink allows data centers to connect multiple GPUs across massive clusters, enhancing scalability and efficiency.

Proprietary to Nvidia: Previously, only Nvidia’s hardware utilized NVLink, keeping it exclusive to its AI infrastructure.

NVLink Fusion Expands Access: The new NVLink Fusion program opens the interconnect to external chipmakers, allowing third-party CPUs and AI accelerators to integrate seamlessly with Nvidia’s AI ecosystem.
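The "up to 14x PCIe" and 900 GB/s figures above can be sanity-checked with back-of-envelope arithmetic. The numbers below are assumptions drawn from public specs (NVLink 4 on Hopper-class GPUs at 900 GB/s total; PCIe 5.0 x16 at roughly 64 GB/s per direction), not from this article:

```python
# Rough bandwidth comparison behind the "up to 14x" claim.
# Assumed figures (public specs, not from this article):
NVLINK_GBPS = 900      # GB/s, total NVLink bandwidth per GPU (NVLink 4)
PCIE5_X16_GBPS = 64    # GB/s, PCIe 5.0 x16 link, one direction

speedup = NVLINK_GBPS / PCIE5_X16_GBPS
print(f"NVLink vs PCIe 5.0 x16: ~{speedup:.0f}x")
```

Comparing 900 GB/s against a single direction of a PCIe 5.0 x16 link yields roughly 14x, which matches the figure cited for NVLink Fusion.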

Partners and Ecosystem Expansion

Nvidia has lined up an impressive array of partners for NVLink Fusion. Here’s a breakdown:

Partner     | Role                             | Contribution
Qualcomm    | Custom Server CPUs               | Integrates NVLink in new server CPUs
Fujitsu     | Arm-Based CPUs                   | 144-core Monaka CPU with NVLink
MediaTek    | AI Accelerators (ASICs)          | Custom AI ASICs with NVLink
Marvell     | AI Accelerators (ASICs)          | NVLink-connected AI chips
Cadence     | Chip Design Software             | Design tools for NVLink integration
Synopsys    | Chip Design Software             | Provides design IP for NVLink
Astera Labs | Specialized Interconnect Silicon | NVLink-specific connectivity chips

Inside Qualcomm and Fujitsu’s AI Push

Qualcomm plans to roll out a custom server CPU designed for AI workloads. By integrating NVLink, the company aims to position its CPU as a viable competitor to Nvidia’s Grace CPU in AI data centers.

Meanwhile, Fujitsu’s 144-core Monaka CPU targets high-performance computing with extreme power efficiency. Built on Arm architecture using a 2nm process, Monaka will utilize NVLink to connect directly with Nvidia GPUs, forming an integrated AI processing ecosystem.

Vivek Mahajan, CTO of Fujitsu, highlighted the significance:

“Connecting our 2nm Monaka CPU with Nvidia’s architecture opens up unprecedented scalability for AI systems. This partnership aligns with our vision to deliver sovereign AI systems powered by world-leading computing technology.”

Competitive Landscape: AMD and Broadcom Absent

Notably, Nvidia’s primary AI competitors, AMD, Broadcom, and Intel, are absent from the NVLink Fusion ecosystem.

These companies have aligned with the Ultra Accelerator Link (UALink) consortium, which aims to develop an open, industry-standard interconnect to counter Nvidia’s proprietary NVLink.

AMD recently announced the Instinct MI450X, a rack-scale AI accelerator with IF128 interconnect technology, which directly challenges Nvidia’s NVLink-powered GPU clusters.


Mission Control Software: Unifying AI Infrastructure

In addition to NVLink Fusion, Nvidia launched Mission Control, a software suite designed to manage AI workloads across NVLink-connected systems.

Mission Control consolidates system orchestration, performance monitoring, and validation, allowing data center operators to streamline AI deployments.

Nvidia claims the software reduces deployment times by up to 30%, a critical advantage for enterprises managing large AI clusters.


Conclusion: Nvidia’s Strategic Shift

With NVLink Fusion, Nvidia is transforming its proprietary interconnect into an open platform for AI hardware vendors.

For data center operators, NVLink Fusion offers new opportunities to mix and match processors and AI accelerators, optimizing performance for specific AI workloads.

As AI infrastructure evolves, Nvidia’s strategy to broaden NVLink’s reach could reshape the AI hardware landscape.


Kumar Priyadarshi

Kumar joined IISER Pune after qualifying IIT-JEE in 2012. In his fifth year, he travelled to Singapore for his master’s thesis, which yielded a research paper in ACS Nano. Kumar then joined GlobalFoundries as a process engineer in Singapore, working at the 40 nm process node. As a senior scientist at IIT Bombay, Kumar led the team that built India’s first memory chip with the Semiconductor Laboratory (SCL).

