A new tactile sensing carpet assembled from pressure-sensitive film and conductive thread can estimate human poses without cameras.
Built by engineers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (MIT CSAIL), the system relies on a neural network trained on a dataset of camera-recorded poses; when a person performs an action on the carpet, the network infers their three-dimensional pose from tactile data alone.
More than 9,000 sensors woven into the carpet convert the pressure of a person's feet into electrical signals.
The computational model can predict a pose with an error margin of less than 10 centimeters, and classify specific actions with 97% accuracy.
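The pipeline described above, from a per-sensor pressure frame to 3D joint coordinates, can be sketched in miniature. This is not the authors' model: the sensor count, joint count, and the linear mapping are all illustrative stand-ins for the trained neural network the article describes, chosen only to show the shape of the data flow.

```python
import numpy as np

N_SENSORS = 9_216   # hypothetical grid size; the article says "more than 9,000" sensors
N_JOINTS = 21       # hypothetical skeleton size; the real model's output is not specified here

def infer_pose(pressure: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Map a flattened tactile frame to 3D joint coordinates.

    A linear layer stands in for the trained neural network: the real system
    learns this mapping from tactile frames paired with camera-recorded poses.
    """
    assert pressure.shape == (N_SENSORS,)
    return (weights @ pressure + bias).reshape(N_JOINTS, 3)

# Demo with random (untrained) parameters, just to show the data flow.
rng = np.random.default_rng(0)
frame = rng.random(N_SENSORS)                      # one pressure reading per sensor
W = rng.normal(size=(N_JOINTS * 3, N_SENSORS))     # would be learned in practice
b = np.zeros(N_JOINTS * 3)
pose = infer_pose(frame, W, b)
print(pose.shape)  # (21, 3): x, y, z per joint
```

A real implementation would keep the frame as a 2D pressure image and use a convolutional network over a short window of frames, since gait and balance cues are spatial and temporal; the linear map here only illustrates the input/output contract.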
MIT's Yiyue Luo said, "You can imagine leveraging this model to enable a seamless health monitoring system for high-risk individuals, for fall detection, rehab monitoring, mobility, and more."
From MIT Computer Science and Artificial Intelligence Laboratory
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA