Deep neural networks (DNNs) have advanced to the point where they underpin online services from image search to speech recognition, and are now moving into the systems that control robots. Yet numerous experiments have demonstrated that it is relatively easy to force these systems into mistakes that seem ridiculous but could have catastrophic consequences. Recent tests have shown autonomous vehicles could be made to ignore stop signs, and smart speakers could turn seemingly benign phrases into malware.
Five years ago, as DNNs were beginning to be deployed on a large scale by Web companies, Google researcher Christian Szegedy and colleagues showed that making tiny changes to many of the pixels in an image could cause DNNs to change their decisions radically; a bright yellow school bus became, to the automated classifier, an ostrich.
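The mechanism behind such attacks can be illustrated with a toy sketch in the spirit of the fast gradient sign method: every input dimension is nudged by a tiny amount in the direction that most lowers the classifier's score. The linear "classifier" below is a hypothetical stand-in for a real DNN (real attacks use the network's gradient), and all names and values are illustrative assumptions, not code from the research.

```python
import numpy as np

# Toy linear "classifier": predicts class 1 when the score w.x is positive.
# (A hypothetical stand-in for a DNN; real attacks differentiate the network.)
w = np.linspace(-1.0, 1.0, 100)   # fixed model weights
x = 0.5 * w / np.dot(w, w)        # an input scored confidently as class 1

def predict(v):
    return int(np.dot(w, v) > 0)

# Gradient-sign-style perturbation: move every "pixel" by at most
# epsilon in the direction that lowers the score for the true class.
epsilon = 0.02
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # 1  (original input)
print(predict(x_adv))  # 0  (same input, each pixel shifted by only 0.02)
```

Although no pixel moves by more than 0.02, the tiny shifts all align with the model's weights, so their effect on the score accumulates across the whole image and flips the decision, which is why the change is invisible to a human but decisive to the classifier.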