Analogue chips can slash the energy used to run AI models

AI research consumes vast amounts of energy, but a new study shows that analogue devices can run models far more efficiently thanks to their unusual ability to store and process data in the same place

An analogue computer chip can run an artificial intelligence (AI) speech recognition model 14 times more efficiently than traditional chips, potentially offering a solution to the vast and growing energy use of AI research and to the worldwide shortage of the digital chips normally used for the task.

The device was developed by IBM Research, which declined New Scientist’s request for an interview and didn’t provide any comment. But in a paper outlining the work, researchers claim that the analogue chip can reduce bottlenecks in AI development.

There is a global rush for GPUs, the graphics processing chips originally designed to run video games that have traditionally been used to train and run AI models, with demand outstripping supply. Studies have also shown that the energy use of AI is rapidly growing, rising 100-fold from 2012 to 2021, with most of that energy derived from fossil fuels. These issues have led to suggestions that the constantly increasing scale of AI models will soon reach an impasse.

Another problem with current AI hardware is that it must shuttle data back and forth from memory to processors in operations that cause significant bottlenecks. One solution to this is the analogue compute-in-memory (CiM) chip that performs calculations directly within its own memory, which IBM has now demonstrated at scale.
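The principle behind such chips can be illustrated with a toy simulation. In a typical analogue crossbar design, a model's weights are stored as conductances in the memory array, input values are applied as voltages, and the multiply-accumulate operations at the heart of a neural network happen in place: Ohm's law multiplies, and the summing of currents on each wire adds. The sketch below is a minimal, hypothetical illustration of that idea (the function name and example values are invented for illustration), not a description of IBM's actual hardware:

```python
# Toy sketch of analogue compute-in-memory (hypothetical, not IBM's design).
# Weights live in the memory array as conductances G. Applying input
# voltages V to the rows makes each cell pass a current I = G * V
# (Ohm's law), and the currents on each column wire sum automatically
# (Kirchhoff's current law) -- so the matrix-vector multiply happens
# where the data is stored, with nothing shuttled to a processor.

def analogue_matvec(conductances, voltages):
    """Column currents of a crossbar array: I_j = sum_i G[i][j] * V[i]."""
    n_rows = len(voltages)
    n_cols = len(conductances[0])
    return [
        sum(conductances[i][j] * voltages[i] for i in range(n_rows))
        for j in range(n_cols)
    ]

# Hypothetical 2x3 weight array (conductances) and input voltages.
G = [[0.5, 1.0, 0.0],
     [2.0, 0.0, 1.5]]
V = [1.0, 2.0]

print(analogue_matvec(G, V))  # -> [4.5, 1.0, 3.0]
```

In a real device the read-out currents are noisy and must be digitised, but the key point the sketch captures is that the weights never leave memory, which is where the claimed efficiency gain comes from.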
