
Communications of the ACM

ACM News

Keeping It Local

Sander Bohté.

Sander Bohté, a Centrum Wiskunde & Informatica researcher and professor of cognitive neurobiology at the University of Amsterdam, helped develop an algorithm that, he says, "makes neural network-based artificial intelligence more than 1,000 times more energy-efficient."

Credit: Sander Bohté

The human brain processes information with remarkable energy efficiency, consuming a mere 20 watts of power. Computers that mimic the brain's neural networks via deep learning have given rise to wonderful applications in recent years, but they consume much more energy than the human brain.

Thanks to an algorithmic breakthrough in training spiking neural networks (SNNs), many applications of artificial intelligence, such as speech recognition, gesture recognition, and the classification of electrocardiograms (ECGs), can be made more energy-efficient by a factor of 100 to 1,000. This will make it possible to put much more artificial intelligence (AI) into chips, allowing applications to run locally on a smartwatch or a smartphone, for example, which until now had to be done in the cloud.

Moreover, by running AI on a local device, the applications become more robust and privacy-friendly. More robust, because there is no longer a need for a network connection to the cloud; and more privacy-friendly, because data can remain local.

The breakthrough was achieved by a research team from Centrum Wiskunde & Informatica (CWI), the Dutch national research center for mathematics and computer science, together with the imec/Holst Research Centre in Eindhoven, also in the Netherlands. The work was published this July in a peer-reviewed paper at the International Conference on Neuromorphic Systems. The algorithm is available as open source on GitHub.

The research team is led by Sander Bohté, a CWI researcher and professor of cognitive neurobiology at the University of Amsterdam (UvA), who discussed the research and its applications.

What is the basis for this breakthrough?

We developed a new algorithm for a so-called spiking neural network. Such networks have been around for a long time, but until now they had the major disadvantage that they were very difficult to train. Therefore, they could not be applied in practice.

Our new algorithm contains two breakthroughs, one for training the spiking neural network and one for adapting the spiking neurons themselves to the task. As a result, the neurons in the network need to communicate with each other much less, and each neuron also needs to calculate less. Together, this makes neural network-based artificial intelligence more than 1,000 times more energy-efficient compared to old-fashioned neural networks, and a factor of 100 times more energy-efficient than the best contemporary neural networks.
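The effect of adapting the neurons themselves can be illustrated with a toy model. The sketch below is a minimal adaptive leaky integrate-and-fire neuron, not the team's published algorithm; all parameter values and the function name `adaptive_lif` are illustrative. Each spike temporarily raises the neuron's firing threshold, so under a steady input the neuron spikes less often, and fewer spikes means less communication and less computation.

```python
def adaptive_lif(inputs, tau_v=20.0, tau_t=50.0, theta0=1.0, beta=0.5, dt=1.0):
    """Leaky integrate-and-fire neuron with an adaptive firing threshold.

    Each spike raises the threshold, which then decays back toward its
    resting value, so a neuron driven by a steady input fires progressively
    less often.  (Illustrative toy model; parameters are arbitrary.)
    """
    v, theta = 0.0, theta0                      # membrane potential, threshold
    spikes = []
    for x in inputs:
        v += dt / tau_v * (x - v)               # leaky integration of input
        theta += dt / tau_t * (theta0 - theta)  # threshold decays back to rest
        if v >= theta:
            spikes.append(1)
            v = 0.0                             # reset after a spike
            theta += beta                       # adaptation: raise threshold
        else:
            spikes.append(0)
    return spikes

# With adaptation, the same constant input produces fewer spikes
# than with a fixed threshold (beta=0).
adaptive = sum(adaptive_lif([1.5] * 200))
fixed = sum(adaptive_lif([1.5] * 200, beta=0.0))
print(adaptive, fixed)  # adaptive < fixed
```

Comparing the two spike counts shows the adaptation at work: the fixed-threshold neuron fires at a constant rate, while the adaptive one spaces its spikes further and further apart.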

What is the difference between your spiking neural network and a classical neural network?

In classic neural networks, the signals are continuous and mathematically easy to handle. SNNs work with pulses, and better resemble the biology of the brain. In theory, that makes them much more energy-efficient. However, because the signals are discontinuous, they are mathematically much more difficult to handle. Special chips already exist that can run classical neural networks; for SNNs, until now there have been only experimental chips, no large-scale commercial ones.
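The mathematical difficulty can be made concrete. A spiking neuron's output is a step function of its membrane potential, whose derivative is zero almost everywhere, so ordinary gradient-based training gets no learning signal. A common workaround in the SNN literature, the so-called surrogate gradient, keeps the step in the forward pass but substitutes a smooth pseudo-derivative in the backward pass; the specific fast-sigmoid shape below is an illustrative choice, not necessarily the one used in the paper.

```python
def spike(v, theta=1.0):
    """Forward pass: the non-differentiable step -- fire (1.0) if the
    membrane potential v reaches the threshold theta, else stay silent."""
    return 1.0 if v >= theta else 0.0

def surrogate_grad(v, theta=1.0, slope=10.0):
    """Backward pass: a smooth stand-in for the step's derivative
    (fast-sigmoid shape, largest at the threshold, fading away from it)."""
    return 1.0 / (1.0 + slope * abs(v - theta)) ** 2

print(spike(0.9), spike(1.1))         # 0.0 1.0
print(surrogate_grad(1.0))            # 1.0, peak exactly at threshold
print(round(surrogate_grad(1.5), 3))  # 0.028, small far from threshold
```

During training, the surrogate derivative routes error signals to neurons whose potential was near threshold, which is what makes backpropagation through the discontinuous spikes possible at all.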

On which hardware are your spiking neural networks going to run?

The type of chip you need is a so-called neuromorphic chip. In 2017, IBM built the TrueNorth neuromorphic chip. It worked for a couple of things, but was very large. Qualcomm promised a similar chip, the Zeroth processor, but never advanced beyond toy demonstrations. After the publication of our paper, we received a lot of positive reactions, both from chip manufacturers and from the academic world. All sorts of companies already have or are working hard on prototype chips, among them our project partner imec/Holst Centre.

How large are your spiking neural networks, and how easily can they be scaled up?

Our SNNs can currently handle about 1,000 neurons. This is significantly less than traditional neural networks can handle at the moment, but enough for a wide range of applications. The next challenge is to scale up our networks to 100,000 or a million neurons. That will expand the application possibilities even further.

Which types of applications do you foresee?

The very first application that we took as an inspiration was the classification of waves in electrocardiograms (ECG). With energy-efficient AI, this can be done on a device like a smartwatch for extended periods of time, instead of collecting and sending data to the cloud for analysis. That means that heart rhythms can easily be monitored on a daily basis at home.

Speech recognition and gesture recognition on smartphones are other concrete applications that are already possible with present SNNs, and I can see many applications in, for example, predictive maintenance. Imagine a tiny AI chip that monitors the current running through some device; when the device starts malfunctioning, the current starts to change and the AI chip can give a warning.
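The monitoring idea can be sketched in a few lines. The detector below is a hypothetical illustration of the principle using plain rolling statistics, not an SNN and not the project's method: it tracks a baseline of recent current readings and warns when a new reading deviates strongly from it.

```python
from collections import deque
import statistics

def drift_alarm(samples, window=50, k=4.0):
    """Flag readings that deviate strongly from a rolling baseline.

    Keeps a rolling mean and standard deviation over the last `window`
    current readings and records an alarm whenever a new reading falls
    more than k standard deviations outside that baseline.
    (Hypothetical sketch of the monitoring principle.)
    """
    buf = deque(maxlen=window)
    alarms = []
    for i, x in enumerate(samples):
        if len(buf) == window:
            mu = statistics.fmean(buf)
            sigma = statistics.pstdev(buf) or 1e-9
            if abs(x - mu) > k * sigma:
                alarms.append(i)
        buf.append(x)
    return alarms

# Healthy current around 1.0 A with small noise, then a sustained jump.
trace = [1.0 + 0.01 * ((i * 7919) % 13 - 6) for i in range(200)] + [1.5] * 20
print(drift_alarm(trace)[0])  # first alarm fires at index 200, the jump
```

An on-device version of this loop needs only a handful of arithmetic operations per sample, which is exactly the regime where an energy-efficient AI chip pays off.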

Generally, our breakthrough is facilitated by a trend called Edge AI: doing AI calculations on small, local devices.

Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.
