Neuroscientists at Georgetown University Medical Center and the University of California, Berkeley have developed a model that enables artificial intelligence software to function more like a human brain and learn new visual concepts more quickly.
The software identifies relationships between entire visual categories, whereas the standard approach identifies objects using only low- and intermediate-level visual features such as shape and color.
Georgetown's Maximilian Riesenhuber explains, "Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples. We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing."
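The article does not detail the model's architecture, but the idea of learning new visual concepts from few examples by leveraging prior learning is commonly illustrated with a prototype-based few-shot classifier: a feature extractor trained on earlier categories is frozen, and each new concept is represented by the mean embedding of just a handful of labeled examples. The sketch below is an illustrative assumption, not the researchers' actual model; the fixed projection stands in for a pretrained network.

```python
import numpy as np

# Illustrative sketch only: a nearest-prototype few-shot classifier built on
# fixed, previously learned feature embeddings. The Georgetown/Berkeley model
# is not specified in the article; this shows the general idea of reusing
# prior learning to recognize new categories from a few examples.

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for a pretrained feature extractor ("prior learning").
    # A real system would use a network trained on many earlier categories.
    W = np.linspace(-1.0, 1.0, 8 * 4).reshape(8, 4)  # fixed weights
    return x @ W

def fit_prototypes(support_x, support_y):
    # One prototype (mean embedding) per class, from a few labeled examples.
    return {label: embed(support_x[support_y == label]).mean(axis=0)
            for label in np.unique(support_y)}

def classify(protos, query_x):
    # Assign each query to the class with the nearest prototype.
    q = embed(query_x)
    labels = list(protos)
    dists = np.stack([np.linalg.norm(q - protos[l], axis=1) for l in labels])
    return [labels[i] for i in dists.argmin(axis=0)]

# Two new "visual concepts", only three examples each (few-shot learning).
class_a = rng.normal(loc=0.0, scale=0.2, size=(3, 8))
class_b = rng.normal(loc=2.0, scale=0.2, size=(3, 8))
support_x = np.vstack([class_a, class_b])
support_y = np.array([0, 0, 0, 1, 1, 1])

protos = fit_prototypes(support_x, support_y)
queries = np.vstack([rng.normal(0.0, 0.2, (2, 8)),
                     rng.normal(2.0, 0.2, (2, 8))])
print(classify(protos, queries))
```

Because the prototypes are computed rather than trained, adding a new concept requires no gradient updates at all, which is one way prior learning can make few-shot recognition cheap.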
From Georgetown University Medical Center
Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA