An international team of researchers created the NeuRRAM neuromorphic chip to compute directly in memory and run artificial intelligence (AI) applications with twice the energy efficiency of platforms for general-purpose AI computing.
The chip moves AI closer to running on edge devices, untethered from the cloud; it also produces results as accurate as conventional digital chips and supports many neural network models and architectures.
"The conventional wisdom is that the higher efficiency of compute-in-memory is at the cost of versatility, but our NeuRRAM chip obtains efficiency while not sacrificing versatility," said former University of California, San Diego researcher Weier Wan.
From UC San Diego News Center
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA