Researchers at the University of California, San Diego (UCSD) have built a brain-to-tweet interface that predicts the song a finch is going to sing a fraction of a second before it does so.
The system decodes realistic synthetic birdsong directly from neural activity. The researchers describe their work as the first prototype of a decoder for complex, natural communication signals read from neural activity, and note that a similar approach could one day lead to a human thought-to-text interface.
The UCSD team used silicon electrodes implanted in awake birds to record the electrical activity of neurons in a sensory-motor nucleus of the brain, where "commands that shape the production of learned song" originate.
To train the decoder, the researchers fed a neural network both the pattern of neural firing and the actual song that resulted.
The researchers say the system can predict what the bird will sing about 30 milliseconds before it does so.
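The setup described above — learning a mapping from recorded neural firing to the song produced a few tens of milliseconds later — can be sketched in miniature. This is purely an illustrative assumption, not the authors' actual model: it uses synthetic data, a simple ridge-regression decoder in place of their neural network, and invented sizes (40 neurons, 16 spectrogram bins, a 3-frame lag standing in for ~30 ms).

```python
import numpy as np

rng = np.random.default_rng(0)

LAG = 3          # assumed ~30 ms lead at a 10 ms frame step
N_NEURONS = 40   # assumed number of simultaneously recorded units
N_FREQS = 16     # assumed number of spectrogram frequency bins
T = 500          # number of time frames in the training recording

# Synthetic stand-ins for the paired data: the song at frame t is
# driven by the neural command issued LAG frames earlier.
true_map = rng.normal(size=(N_NEURONS, N_FREQS))
firing = rng.poisson(5.0, size=(T, N_NEURONS)).astype(float)
song = np.zeros((T, N_FREQS))
song[LAG:] = firing[:-LAG] @ true_map + 0.1 * rng.normal(size=(T - LAG, N_FREQS))

# Pair each firing-rate vector with the spectrogram frame LAG frames
# later, so the fitted decoder predicts the song before it is sung.
X = firing[:-LAG]
Y = song[LAG:]

# Ridge-regression decoder: W = (X'X + lam*I)^{-1} X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(N_NEURONS), X.T @ Y)

# Predict upcoming song frames from current neural activity alone.
Y_hat = X @ W
r = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(f"decoded-vs-actual correlation: {r:.2f}")
```

With synthetic data this strongly driven by the neural signal, the decoded spectrogram correlates almost perfectly with the actual one; the point of the sketch is only the time alignment, where pairing activity at frame t with sound at frame t+LAG is what lets the decoder run ahead of the bird.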
From Technology Review
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA