New York University (NYU) researchers have used computer vision and augmented reality to develop ARSL, an application that enables users to capture sign language with a smartphone camera and see a live translation into their native language.
The team says ARSL can also translate spoken language into sign language.
The prototype was developed by researchers working in the NYU Future Prototyping and Talent Development program, a partnership between the NYU Media Lab and Verizon.
"We make magic when we pair leading students with outstanding mentors in the Envrmnt team at our AR/VR lab," says Christian Egeler, director of XR product development for Envrmnt, Verizon's platform for extended reality solutions. "We discover the next generation of talent when we engage them in leading edge projects in real time, building the technologies of tomorrow."
From Next Reality
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA