Communications of the ACM

ACM TechNews

Machine Vision's Achilles' Heel Revealed By Google Brain Researchers

An original image (left) and an adversarial image.

Google Brain and OpenAI researchers have identified a weakness in machine vision algorithms that allows them to be deceived by image modifications people cannot easily detect.

Credit: Emerging Technology from the arXiv

Machine vision algorithms have a weakness that enables them to be deceived by images modified in ways humans cannot easily detect, according to Google Brain and OpenAI researchers.

"An adversarial example for the face recognition domain might consist of very subtle markings applied to a person's face, so that a human observer would recognize their identity correctly, but a machine learning system would recognize them as being a different person," the researchers note.

Their systematic study of adversarial images has uncovered this vulnerability in machine-vision systems.

The team begins with ImageNet, a database of images classified according to what they display; a standard test involves training a machine vision algorithm on part of this database and then assessing how well it classifies another part of the database.

The team developed an adversarial image database by modifying 50,000 pictures from ImageNet in three distinct ways. One algorithm makes small changes to an image that maximize the classifier's cross-entropy loss; a second applies that kind of change repeatedly, altering the image further with each iteration. The third algorithm alters an image so it steers the machine vision system toward a specific misclassification.
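The article does not give the algorithms' details, but the first two descriptions match well-known gradient-based attacks: a single signed-gradient step that increases cross-entropy loss, and an iterated version of the same step. A minimal sketch, assuming access to the gradient of the loss with respect to the input image (function and parameter names here are hypothetical):

```python
import numpy as np

def single_step_perturb(image, grad_wrt_image, eps=0.03):
    """One-shot attack: nudge every pixel by +/- eps in the direction
    that increases the classifier's cross-entropy loss."""
    adversarial = image + eps * np.sign(grad_wrt_image)
    return np.clip(adversarial, 0.0, 1.0)  # keep pixels in the valid range

def iterative_perturb(image, grad_fn, eps=0.03, step=0.01, iters=5):
    """Iterated variant: take several small signed-gradient steps,
    clipping back into an eps-ball around the original image so the
    total change stays small."""
    adv = image.copy()
    for _ in range(iters):
        adv = adv + step * np.sign(grad_fn(adv))
        adv = np.clip(adv, image - eps, image + eps)  # stay near the original
        adv = np.clip(adv, 0.0, 1.0)
    return adv
```

The targeted third attack would follow the same pattern but step in the direction that *decreases* the loss for a chosen wrong label, pulling the classifier toward that specific misclassification.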

Testing Google's Inception v3 algorithm on these images showed that the first two methods substantially lower its top-5 and top-1 accuracy, while the third algorithm cuts accuracy to zero for all of the images.
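For readers unfamiliar with the metrics: top-1 accuracy counts a prediction correct only if the model's single highest-scoring class is the true label, while top-5 counts it correct if the true label appears anywhere among the five highest-scoring classes. A minimal sketch of the computation (not the researchers' evaluation code):

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of examples whose true label is among the k
    highest-scoring classes.  scores: (n, num_classes); labels: (n,)."""
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k best scores
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))
```

An attack that "cuts accuracy to 0" means the true label never appears in the top-k list for any adversarial image.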

From Technology Review


Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


