
Communications of the ACM

ACM TechNews

Hardware-Software Co-Design Approach Could Make Neural Networks Less Power Hungry


A team led by researchers at the University of California, San Diego's Jacobs School of Engineering has developed hardware and algorithms that could cut the energy use and time required to train a neural network.

Credit: David Baillot/UC San Diego Jacobs School of Engineering

Researchers at the University of California, San Diego (UCSD) Jacobs School of Engineering have developed a neuro-inspired hardware-software co-design approach that could make neural network training faster and more energy-efficient.

For example, the research could make it possible to train neural networks on low-power devices such as smartphones, laptops, and embedded devices.

The researchers developed hardware and algorithms that allow neural network computations to be performed directly in the memory unit, eliminating the need to repeatedly shuttle data between memory and processor.

The hardware component is a highly energy-efficient type of non-volatile memory that consumes 10 to 100 times less energy than conventional memory technologies.

Said UCSD's Duygu Kuzum, "Overall, we can expect a gain of a hundred- to a thousand-fold in terms of energy consumption following our approach."
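To see how such a gain could arise, the sketch below compares the energy of a conventional design, where every multiply-accumulate pays for a data fetch, against an in-memory design where compute happens where the weights live. All per-operation energy figures are hypothetical placeholders for illustration, not measurements from the study.

```python
# Back-of-envelope energy comparison: conventional (von Neumann) vs.
# in-memory neural network training. All per-operation energies below
# are hypothetical, chosen only to illustrate the data-movement cost.

DRAM_ACCESS_PJ = 100.0   # hypothetical: fetch one weight from off-chip memory
MAC_PJ = 1.0             # hypothetical: one multiply-accumulate in the processor
IN_MEMORY_OP_PJ = 1.0    # hypothetical: one MAC performed inside the memory array

def conventional_energy(num_macs: float) -> float:
    """Each MAC pays for a data fetch plus the arithmetic itself."""
    return num_macs * (DRAM_ACCESS_PJ + MAC_PJ)

def in_memory_energy(num_macs: float) -> float:
    """Compute happens in the memory array; no shuttling cost."""
    return num_macs * IN_MEMORY_OP_PJ

macs = 1e9  # one billion multiply-accumulates, e.g. part of a training step
gain = conventional_energy(macs) / in_memory_energy(macs)
print(f"Estimated energy gain: {gain:.0f}x")  # 101x under these assumptions
```

Under these toy numbers the gain is roughly 100x; pairing in-memory compute with a memory technology that is itself 10 to 100 times cheaper per access would push the combined figure toward the hundred- to thousand-fold range quoted above.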

From Jacobs School of Engineering, University of California, San Diego


Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


