University of California, Berkeley researchers have developed an algorithm that can be applied to functional magnetic resonance imaging (fMRI) data to reconstruct the moving images a person is seeing. The study marks the first time that anyone has used brain imaging to determine what moving images a person is watching, and it could help researchers model the human visual system on a computer.
The researchers watched hours of movie previews while lying in an fMRI machine, then deconstructed the resulting data into a specific brain activation pattern for each second of footage.
"Once you do this, you have a complete model that links the plumbing of the blood flow that you do see with fMRI to the neuronal activity that you don't see," says Berkeley researcher Jack Gallant, who co-authored the study with Shinji Nishimoto. Next, the researchers compiled 18 million YouTube video clips to test the model objectively.
The researchers used the YouTube library to simulate what would appear on the fMRI images when a subject watched a new set of movie trailers. The simulated results and the actual fMRI scans were nearly identical.
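The matching step described above can be illustrated with a minimal sketch: given a predicted activation pattern for each candidate clip (produced by some encoding model), identify the clip a subject saw by finding the prediction that best correlates with the observed scan. All names and data below are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 candidate clips, each with a predicted
# activation pattern over 100 voxels (randomly generated here;
# in practice these would come from an encoding model).
n_clips, n_voxels = 5, 100
predicted = rng.standard_normal((n_clips, n_voxels))

# Simulated "observed" fMRI pattern: the response to clip 3 plus noise.
observed = predicted[3] + 0.3 * rng.standard_normal(n_voxels)

def identify_clip(observed, predicted):
    """Return the index of the candidate clip whose predicted
    activation pattern best correlates with the observed pattern."""
    scores = [np.corrcoef(observed, p)[0, 1] for p in predicted]
    return int(np.argmax(scores))

print(identify_clip(observed, predicted))
```

With modest noise, the correct clip index is recovered; the study's reported near-identical match between simulated and actual scans corresponds to this kind of pattern agreement, at far larger scale.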
From Technology Review
Abstracts Copyright © 2011 Information Inc., Bethesda, Maryland, USA