Researchers at Japan's Tokyo Institute of Technology (Tokyo Tech) have developed a sparse convolutional neural network (CNN) framework and training algorithms that allow CNN models to run seamlessly on low-power edge devices.
The 40-nanometer sparse CNN chip achieves high accuracy and efficiency through a Cartesian-product multiply-and-accumulate (MAC) array paired with pipelined activation aligners that spatially shift activations onto the regular MAC array.
Tokyo Tech's Kota Ando said, "Regular and dense computations on a parallel computational array are more efficient than irregular or sparse ones. With our novel architecture employing MAC array and activation aligners, we were able to achieve dense computing of sparse convolution."
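The idea Ando describes can be illustrated with a minimal sketch (this is not Tokyo Tech's actual hardware design, and the function name and data are invented for illustration): after pruning, only the non-zero kernel weights are kept, and an "aligner" step gathers the activations that line up with those surviving weights into a contiguous operand vector, so the MAC loop itself stays fully dense with no zero-skipping logic.

```python
def sparse_conv1d_dense_mac(activations, weights):
    """Hypothetical sketch: compute a sparse 1-D convolution as a
    dense MAC by first aligning activations to the non-zero weights."""
    # Keep only the non-zero taps as (offset, value) pairs.
    taps = [(k, w) for k, w in enumerate(weights) if w != 0.0]
    out_len = len(activations) - len(weights) + 1
    outputs = []
    for i in range(out_len):
        # "Activation aligner": gather the inputs that line up with
        # the surviving weights into a contiguous (dense) vector.
        aligned = [activations[i + k] for k, _ in taps]
        # Dense MAC over the aligned operands -- every lane does
        # useful work, with no irregular zero-skipping.
        acc = sum(a * w for a, (_, w) in zip(aligned, taps))
        outputs.append(acc)
    return outputs

# Example: a 5-tap kernel with three of its five weights pruned away.
acts = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
kern = [0.5, 0.0, 0.0, -1.0, 0.0]
print(sparse_conv1d_dense_mac(acts, kern))  # → [-3.5, -4.0]
```

In hardware, the gather step corresponds to the pipelined aligners shifting activations into place, while the dense inner product maps onto the regular Cartesian MAC array.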
From Tokyo Institute of Technology News (Japan)
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA