
NVIDIA Bans CUDA Translation Layers on Third-Party GPUs, Stirring China's GPU Makers

Recently, Nvidia updated its user license agreement to restrict the use of CUDA with software layers that enable compatibility on competing GPUs.

Introduction:

In recent developments within the GPU market, Nvidia’s latest move to ban the use of CUDA on non-Nvidia hardware has sent shockwaves through the global AI and chip communities, particularly in China.

This action, outlined in the user license agreement for CUDA 11.6, prohibits the running of CUDA through translation layers on non-Nvidia hardware platforms.

The implications of this decision are far-reaching, sparking significant controversy and discussions about the future of GPU ecosystems and competition dynamics.


The Nvidia CUDA Dependency Dilemma:

Nvidia’s CUDA platform has long been the cornerstone of accelerated computing, enabling developers to harness the power of GPUs for various AI and scientific computing tasks. The seamless integration of CUDA with Nvidia GPUs has created a dependency among developers, who rely on its stability and performance optimizations. This dependency has solidified Nvidia’s dominance in the GPU market, as GPUs and CUDA have become synonymous with high-performance computing.
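For readers who have never seen it, “CUDA code” is ordinary C++ extended with kernel and launch syntax, written against Nvidia’s runtime API and compiled with nvcc. The minimal, illustrative vector-add below (a sketch, not taken from Nvidia’s documentation) shows the kind of source that ties projects to the CUDA toolchain.

```cuda
// Minimal CUDA vector add: each GPU thread handles one array element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Managed (unified) memory keeps the example short.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);    // launch: grid of 256-thread blocks
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                     // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Once a codebase accumulates thousands of kernels and calls like these, moving off Nvidia hardware means either rewriting them or finding something that can run them elsewhere, which is exactly where translation layers come in.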


What’s the Issue with the Nvidia CUDA Ban?

Nvidia’s CUDA is a popular toolkit that allows programmers to leverage the power of Nvidia GPUs for AI applications. The updated user license agreement now bars pairing CUDA with software layers that provide compatibility on competing GPUs.


According to reports from Tom’s Hardware, this restrictive language is absent from the documentation of CUDA versions 11.4 and 11.5, suggesting it was introduced with version 11.6.

CUDA, a computational platform by NVIDIA tailored for its GPUs, excels in efficiency when paired with NVIDIA hardware. However, some users employ CUDA on non-NVIDIA platforms.

Here’s a breakdown:

  • CUDA: It’s a powerful toolkit from Nvidia that allows programmers to leverage the parallel processing capabilities of Nvidia GPUs for tasks like AI development.
  • Software License Agreement (EULA): This is a legal document that outlines the terms and conditions for using Nvidia’s software, including CUDA.
  • Restriction on Translation Layers: The EULA now explicitly prohibits using CUDA with tools called “translation layers” on hardware other than Nvidia GPUs. These layers essentially act as interpreters, allowing CUDA code to run on hardware it was never designed for (a simplified sketch of the idea follows this list).
  • Documentation Change: Nvidia has now added this restriction to the documentation installed with the CUDA toolkit itself, whereas previously it appeared only in the online EULA on Nvidia’s website. The change makes the restriction harder for users to miss.
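To make the “interpreter” idea concrete, here is a deliberately simplified sketch of the API-forwarding half of a translation layer: a replacement library exports functions under the same names the application already calls and redirects them to a different backend. The backend here is plain host memory purely so the sketch is self-contained, and the return types are simplified to int; this is an assumption-laden illustration, not how ZLUDA or any real project is actually implemented.

```cuda
// Conceptual sketch of API interposition. On Linux, a library like this could
// be injected (e.g., via LD_PRELOAD) so an unmodified CUDA binary calls it
// instead of Nvidia's runtime. Real signatures use cudaError_t and enums;
// they are simplified to int here.
#include <cstdlib>
#include <cstring>

// Stand-in backend: host memory only, so the sketch compiles and links.
// A real translation layer would call the other vendor's GPU driver here.
static int backendAlloc(void** ptr, size_t bytes) {
    *ptr = std::malloc(bytes);
    return *ptr ? 0 : 1;
}
static int backendCopy(void* dst, const void* src, size_t bytes) {
    std::memcpy(dst, src, bytes);
    return 0;
}

extern "C" {

// Exported under the same names the CUDA application expects to call.
int cudaMalloc(void** devPtr, size_t size) {
    return backendAlloc(devPtr, size);      // allocate via the substitute backend
}

int cudaMemcpy(void* dst, const void* src, size_t count, int kind) {
    (void)kind;                             // a real layer would honour the copy direction
    return backendCopy(dst, src, count);
}

}  // extern "C"
```

Forwarding runtime calls is the easy part; the compiled GPU kernels themselves (PTX) still have to be translated for the other architecture, which is where most of the engineering effort in projects like ZLUDA lies, and it is precisely this kind of work the new EULA clause targets.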

The imposed restriction seems strategically crafted to deter endeavors such as ZLUDA, in which both Intel and AMD have recently engaged. More significantly, it aims to impede certain Chinese GPU manufacturers from utilizing CUDA code through translation layers.

Why Did Nvidia Ban CUDA on Non-Nvidia Platforms?

Tom’s Hardware outlines two methods for utilizing CUDA on alternative platforms: recompiling code or employing a translation layer like “ZLUDA,” which simplifies the process. Notably, multiple Chinese GPU manufacturers have acknowledged leveraging CUDA.
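The recompile route can be illustrated with AMD’s HIP, which mirrors most of the CUDA runtime API closely enough that porting tools such as hipify largely perform mechanical renames before the code is rebuilt for the other vendor’s GPUs. Below is what the earlier vector-add sketch would look like after such a conversion (illustrative only; it assumes a ROCm/HIP toolchain and is not taken from any vendor’s porting guide).

```cuda
// The earlier vector add after a hipify-style rename; it rebuilds with hipcc.
// Kernel code and the <<<...>>> launch syntax are unchanged; mostly only the
// runtime prefix changes (cuda* -> hip*).
#include <cstdio>
#include <hip/hip_runtime.h>                        // was: <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    hipMallocManaged(&a, n * sizeof(float));        // was: cudaMallocManaged
    hipMallocManaged(&b, n * sizeof(float));
    hipMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    hipDeviceSynchronize();                         // was: cudaDeviceSynchronize
    printf("c[0] = %f\n", c[0]);
    hipFree(a); hipFree(b); hipFree(c);             // was: cudaFree
    return 0;
}
```

Recompiling does not rely on Nvidia’s tools at run time, but it requires access to the source code and ongoing porting effort, which is why translation layers that run existing CUDA binaries are the more attractive shortcut, and the one the new clause targets.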

Among those Chinese manufacturers, Denglin Technology, for instance, is developing a processor with a computing architecture compatible with programming models such as CUDA and OpenCL. Given the complexity of reverse-engineering Nvidia’s GPUs, there is speculation that a translation layer is involved.

Moore Threads is reportedly working on a translation tool named “MUSIFY” aimed at running CUDA code on its own GPUs. However, it remains unclear whether MUSIFY qualifies as the kind of translation layer the EULA targets.

Tom’s Hardware highlights the ambiguity surrounding NVIDIA’s prohibition of CUDA usage on alternative platforms. It’s uncertain whether this ban responds to the activities of Chinese manufacturers or anticipates future developments.

Nevertheless, Tom’s Hardware suggests that the proliferation of such translation layers is what likely prompted NVIDIA’s prohibition, since they could challenge its dominance in high-performance computing, particularly in AI applications.


The Rise of Alternative Ecosystems:

Chinese GPU manufacturers, such as Moore Threads and Biren Technology, are leading the charge in developing independent GPU ecosystems that offer alternatives to CUDA. Moore Threads, in particular, has emphasized the autonomy of its MUSA architecture, which is not bound by Nvidia’s EULA. By offering a fully independent GPU architecture and development tools, Moore Threads aims to provide developers with a viable alternative to CUDA.

Hygon, another key player in the Chinese GPU market, has built its ecosystem based on ROCm, offering a CUDA alternative at a lower cost. By leveraging the power of the local open-source community, Hygon has created a robust ecosystem that provides developers with the tools they need for AI and scientific computing tasks.

Take a look at the original release from Nvidia here.

The Road Ahead:

As the GPU landscape continues to evolve, Nvidia’s stricter CUDA policy is likely to shape the competitive dynamics of the market. While it reinforces Nvidia’s dominance in accelerated computing, it also fuels innovation and competition among GPU manufacturers. Chinese companies, in particular, are pressing ahead with alternative ecosystems that give developers greater flexibility and choice.

In conclusion, Nvidia’s decision to tighten restrictions on CUDA usage underscores the evolving nature of the GPU market and the competitive pressures facing GPU manufacturers. While it presents challenges for competitors, it also spurs innovation and drives the development of alternative ecosystems that could reshape the future of accelerated computing. As the industry continues to evolve, it will be essential for stakeholders to adapt to these changes and embrace the opportunities they present.
