AI Goes Analog: How Analog AI Chips Are More Energy Efficient

Unveil the future as AI takes an analog turn, unlocking unparalleled energy efficiency in computing.

Introduction

Artificial intelligence (AI) has woven itself seamlessly into our daily lives, enhancing everything from email tone adjustments to generating captivating visuals. However, the environmental toll of training large language models, such as ChatGPT, on conventional digital computer chips poses serious sustainability challenges. In a bid to address this, researchers are exploring the potential of analog AI as a more energy-efficient alternative.

This article explains why analog chips are more energy efficient than their digital counterparts. We’ll also explore the challenges involved in building an analog processor.


The Digital Dilemma

Traditional digital computers transfer data between a central processing unit (CPU) and memory with every computation, resulting in significant energy dissipation during data shuttling.

This energy can be anywhere between 3 times and 10,000 times that required for the actual computation, depending on where the memory is located relative to the processing unit. This inefficiency becomes particularly pronounced in the context of large-scale neural-network computations for AI.

This is because ‘multiply–accumulate’ operations, which make up the bulk of neural-network computations, typically require circuits comprising hundreds or thousands of transistors when implemented in standard digital logic.
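
To make the operation concrete, here is a minimal software sketch of a single multiply–accumulate; the function and numbers are illustrative, not taken from any particular chip. On a conventional digital processor, every weight and activation in this loop must first be fetched from memory before the arithmetic can happen.

```python
# A minimal sketch of the multiply-accumulate (MAC) operation that dominates
# neural-network inference. On a digital chip, each weight and activation
# below must be fetched from memory before it can be multiplied and added.

def multiply_accumulate(inputs, weights):
    """Return the dot product sum_i(inputs[i] * weights[i])."""
    acc = 0.0
    for x, w in zip(inputs, weights):
        acc += x * w  # one multiply and one add per weight
    return acc

# A single neuron with four inputs needs four weight fetches; a full layer
# repeats this for every output neuron, and a large model has millions of them.
print(multiply_accumulate([0.5, -1.0, 2.0, 0.25], [0.1, 0.4, -0.3, 0.8]))  # about -0.75
```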

Read More: What is Artificial Intelligence (AI) Memory Bottleneck and How to fix it? – techovedas

Analog AI: A Paradigm Shift

The solution lies in placing processing units closer to or within memory, a task challenging for standard digital circuits due to the complex nature of neural-network computations. Enter analog computing, where simplicity reigns supreme. Analog schemes, requiring only a few resistors or capacitors, can seamlessly integrate with memory, eliminating the need for energy-intensive data movement.
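
As an illustration of that simplicity, the sketch below simulates how a single column of resistive elements can compute a multiply–accumulate in place: weights are stored as conductances, inputs arrive as voltages, and the output current is their dot product by Ohm’s and Kirchhoff’s laws. The numbers are illustrative, and real hardware uses pairs of devices to represent negative weights.

```python
# An illustrative simulation (not any specific chip's implementation) of an
# in-memory multiply-accumulate: Ohm's law gives I = V * G for each device,
# and Kirchhoff's current law sums the currents on a shared output wire,
# so the dot product is computed right where the weights are stored.

def analog_mac(voltages, conductances):
    """Column output current = sum_i(V_i * G_i), a dot product done by physics."""
    return sum(v * g for v, g in zip(voltages, conductances))

inputs_as_voltages = [0.5, -1.0, 2.0, 0.25]      # activations encoded as voltages
weights_as_conductances = [0.1, 0.4, -0.3, 0.8]  # weights stored as conductances
# (Negative conductances are a simplification; hardware uses device pairs.)

print(analog_mac(inputs_as_voltages, weights_as_conductances))  # same dot product as before
```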

Phase-Change Memory Breakthrough

Ambrogio and colleagues at IBM Research in California have spearheaded an analog AI chip based on phase-change memory technology. This innovative approach leverages a material that switches between amorphous and crystalline phases in response to electrical pulses.

Whereas digital computers store only the 1s and 0s, this technology introduces a novel element: states in between the two, which encode so-called synaptic weights. This allows multiply–accumulate operations to be performed without moving data, marking a significant leap in energy efficiency.
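
As a toy illustration of such an in-between state, the snippet below maps a weight onto an intermediate conductance between a fully amorphous and a fully crystalline cell. The conductance values are assumptions chosen for illustration, not device figures from the paper.

```python
# A toy mapping (with made-up device parameters) from a synaptic weight to an
# intermediate conductance between the fully amorphous (high-resistance) and
# fully crystalline (low-resistance) states of a phase-change memory cell.

G_AMORPHOUS = 0.1e-6     # assumed conductance in siemens, fully amorphous
G_CRYSTALLINE = 25.0e-6  # assumed conductance in siemens, fully crystalline

def weight_to_conductance(w):
    """Map a weight in [0, 1] linearly onto the available conductance range."""
    return G_AMORPHOUS + w * (G_CRYSTALLINE - G_AMORPHOUS)

for w in (0.0, 0.25, 0.5, 1.0):
    print(f"weight {w:4.2f} -> conductance {weight_to_conductance(w) * 1e6:6.2f} uS")
```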

Impressive Energy Efficiency

The analog AI chip developed by Ambrogio and team contains 35 million phase-change memory devices, storing 45 million preprogrammed synaptic weights. Their implementation achieves a remarkable energy efficiency of 12.4 trillion operations per second for each watt of power, far surpassing the most powerful CPUs and GPUs.
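
A quick back-of-the-envelope conversion puts that figure in perspective: 12.4 trillion operations per second per watt works out to roughly 80 femtojoules per operation. The GPU figure used for comparison below is an assumed order-of-magnitude value, not a number from the article.

```python
# Back-of-the-envelope arithmetic: at 12.4 trillion operations per second per
# watt, each operation costs about 1 / 12.4e12 joules.

analog_ops_per_sec_per_watt = 12.4e12                  # figure reported for the chip
energy_per_op_joules = 1.0 / analog_ops_per_sec_per_watt
print(f"analog chip: ~{energy_per_op_joules * 1e15:.0f} fJ per operation")   # ~81 fJ

# Assumed order-of-magnitude GPU efficiency for comparison (not from the article).
gpu_ops_per_sec_per_watt = 1.0e12
print(f"assumed GPU: ~{1.0 / gpu_ops_per_sec_per_watt * 1e15:.0f} fJ per operation")
```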

The Road Ahead: Three Critical Challenges in Analog Circuits

While analog AI holds immense promise, the path towards commercially viable analog AI products involves six steps. The first three involve choosing the memory technology, designing the circuits, and establishing the chip’s architecture. Current research predominantly focuses on these foundational aspects.

However, the journey towards analog AI products also demands a compiler that can translate code for the new hardware, algorithms tailored to analog chips, and applications optimized for this innovative technology.

1. Memory Technology Choices

The choice of memory technology is a pivotal decision in analog AI development. While phase-change memories offer non-volatility—retaining information even without power—they pose challenges if frequent reconfiguration of weights is necessary. Volatile memories are being considered for their potential higher efficiency in certain scenarios, adding another layer of complexity to the decision-making process.

Read More: What are Emerging Memories: Types and Advantages – techovedas

2. Circuit Innovation: Bridging the Analog-Digital Gap

Circuit innovation is the next frontier. A flaw in most analog-AI implementations so far is that they focus only on the multiply–accumulate operation and leave all other computing tasks in the digital domain.

This means that data need to be converted from analog to digital, and vice versa, which slows computing down and limits performance. To overcome this, researchers either need to invent new techniques for converting data or bring more digital operations into the analog domain.
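
The toy sketch below shows one reason why that boundary hurts: every time an analog result crosses into the digital domain, it must pass through an analog-to-digital converter, which costs time and energy and rounds the value to a limited number of bits. The full-scale range and bit widths are illustrative assumptions.

```python
# A toy model of the analog-to-digital boundary: an n-bit ADC quantizes each
# analog multiply-accumulate result, trading precision for bits. Parameters
# are illustrative, not taken from any real converter.

def adc(value, full_scale=4.0, bits=8):
    """Quantize an analog value in [-full_scale, full_scale] to an n-bit code."""
    levels = 2 ** bits
    step = 2 * full_scale / levels
    code = round((value + full_scale) / step)
    code = max(0, min(levels - 1, code))       # clip to the representable range
    return code * step - full_scale            # reconstructed digital value

analog_result = 1.2345
for bits in (4, 8, 12):
    print(f"{bits:2d}-bit ADC reads {adc(analog_result, bits=bits):.4f}")
```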

3. Architectural Design: Striking the Balance

The architectural design of analog AI chips is crucial. In the early 2010s, it became clear that GPUs were more efficient than CPUs for some applications. Analog AI chips represent the next step in this evolution: their throughput and efficiency are considerably better than those of CPUs and GPUs, but this comes at the expense of flexibility.

A proposed solution is a hybrid analog–digital architecture, leveraging the flexibility of digital components to fill gaps that analog devices may struggle with.
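
A minimal sketch of that idea, assuming a simple two-stage split: the multiply–accumulate runs in the (simulated) analog domain, while the nonlinear activation and any control logic stay digital. The structure and names are illustrative, not IBM’s architecture.

```python
# A conceptual sketch of a hybrid analog-digital pipeline: matrix-vector
# products are done "in memory" in the analog domain (simulated here), while
# nonlinearities and control remain digital. Entirely illustrative.

import math

def analog_matvec(matrix, vector):
    """Pretend each row's dot product is computed by an analog crossbar column."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def digital_activation(values):
    """Digital post-processing: a standard sigmoid applied to each output."""
    return [1.0 / (1.0 + math.exp(-v)) for v in values]

weights = [[0.2, -0.5, 0.1],
           [0.7,  0.3, -0.2]]
inputs = [1.0, 0.5, -1.0]

hidden = analog_matvec(weights, inputs)   # analog domain (simulated)
outputs = digital_activation(hidden)      # digital domain
print(outputs)
```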

These three steps would provide the hardware foundation for an analog AI chip. To unleash the full potential of analog ICs, the remaining three steps must also be tackled.

Maximizing Efficiency: Compiler and Algorithms

The impressive efficiency of the chip developed by Ambrogio and colleagues reflects a theoretical maximum; in practice, only a fraction of the analog AI hardware may actually be used during a given computation.

A customized compiler is therefore essential: it segments tasks and maps each one efficiently to the available hardware, maximizing performance.
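
As a rough sketch of the kind of work such a compiler does, the toy mapper below splits a model’s weight matrices across a fixed set of analog tiles so that little capacity sits idle. The layer sizes, tile capacity and greedy policy are all illustrative assumptions, not a real toolchain.

```python
# A toy illustration of one compiler task for analog AI: partition a model's
# weights across fixed-size crossbar tiles so that as much of the hardware as
# possible does useful work. Sizes and the greedy policy are assumptions.

TILE_CAPACITY = 512 * 512   # assumed number of weights per analog tile

layers = {"embedding": 300_000, "rnn_1": 1_050_000,
          "rnn_2": 1_050_000, "output": 120_000}   # illustrative layer sizes

def greedy_map(layers, tile_capacity):
    """Assign each layer to tiles in order, packing small layers together."""
    mapping, tiles_used, free_in_tile = {}, 0, 0
    for name, n_weights in layers.items():
        remaining, assigned = n_weights, []
        while remaining > 0:
            if free_in_tile == 0:                # open a fresh tile when needed
                tiles_used += 1
                free_in_tile = tile_capacity
            take = min(remaining, free_in_tile)
            assigned.append((tiles_used, take))  # (tile index, weights placed)
            free_in_tile -= take
            remaining -= take
        mapping[name] = assigned
    return mapping, tiles_used

mapping, tiles = greedy_map(layers, TILE_CAPACITY)
for name, chunks in mapping.items():
    print(f"{name:>10}: spread over {len(chunks)} tile(s)")
print(f"total tiles used: {tiles}")
```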

Tailored Algorithms

Tailored algorithms are equally vital. Analog computing is inherently prone to generating errors because it is vulnerable to problems such as thermal noise, manufacturing imperfections and variations in the thermal and electrical environment of the device. The random electron motion in conductors at finite temperatures introduces fluctuations in voltage or current, impacting the precision of analog signals.

What does this noise vulnerability indicate?

This means that performance might be compromised when analog-AI chips run algorithms designed for conventional digital computing, an issue for which there are two promising solutions. First, researchers can mitigate the impact of errors by using algorithm-optimization techniques that relax the required computational precision. Alternatively, they can embrace algorithms that exploit analog errors, such as those used in Bayesian neural networks, which apply statistical inference methods to improve the performance of ordinary neural networks.
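
The toy sketch below illustrates both the problem and the statistical flavour of the second solution: Gaussian noise stands in for device and thermal variation on the stored weights, and averaging repeated noisy evaluations recovers much of the lost precision. The noise level is an assumption, not a measured device figure.

```python
# A toy illustration of analog weight noise and a simple statistical remedy:
# perturb the stored weights with Gaussian noise (standing in for thermal noise
# and device variation) and average several noisy evaluations. The noise level
# is assumed for illustration only.

import random

def noisy_mac(inputs, weights, sigma=0.05):
    """Dot product with each weight perturbed by zero-mean Gaussian noise."""
    return sum(x * (w + random.gauss(0.0, sigma)) for x, w in zip(inputs, weights))

inputs = [0.5, -1.0, 2.0, 0.25]
weights = [0.1, 0.4, -0.3, 0.8]
ideal = sum(x * w for x, w in zip(inputs, weights))

single_pass = noisy_mac(inputs, weights)
averaged = sum(noisy_mac(inputs, weights) for _ in range(100)) / 100

print(f"ideal {ideal:+.3f} | one noisy pass {single_pass:+.3f} | mean of 100 {averaged:+.3f}")
```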

The Final Leap: Dedicated Applications

The last step towards commercial viability is the development of dedicated applications for analog AI chips. This is challenging: it took decades to shape the computational ecosystems in which CPUs and GPUs operate successfully, and it will probably take years to establish the same sort of environment for analog AI. Thankfully, researchers like Ambrogio and his colleagues are charting the course towards realizing this goal.

Conclusion

In conclusion, analog AI stands as a beacon of promise in addressing the sustainability challenges associated with AI. Ambrogio et al.’s research marks a significant stride towards a future where efficient and sustainable AI is not just a possibility but a reality. As the technology matures and researchers navigate the complexities, analog AI may well become the cornerstone of the next era in computing.

Kumar Priyadarshi

Kumar Priyadarshi is a prominent figure in the world of technology and semiconductors. With a deep passion for innovation and a keen understanding of the intricacies of the semiconductor industry, Kumar has established himself as a thought leader and expert in the field. He is the founder of Techovedas, India’s first semiconductor and AI tech media company, where he shares insights, analysis, and trends related to the semiconductor and AI industries.

Kumar joined IISER Pune after qualifying IIT-JEE in 2012. In his fifth year, he travelled to Singapore for his master’s thesis, which yielded a research paper in ACS Nano. Kumar then joined GlobalFoundries in Singapore as a process engineer working at the 40 nm node. Not finding joy in fab work, he moved back to India. As a senior scientist at IIT Bombay, Kumar led the team that built India’s first memory chip with the Semiconductor Lab (SCL).
