Massachusetts Institute of Technology (MIT) and IBM researchers have demonstrated the vulnerability of compressed, or quantized, artificial intelligence (AI) models to adversarial attack.
They suggest this could be remedied by adding a mathematical constraint during quantization, reducing the odds that a slightly modified image will trick the AI into misclassifying what it sees.
Deep learning models quantized from 32 bits to 8 bits or fewer are more susceptible to adversarial attacks, which cut their accuracy from the 30%-40% range to less than 10%.
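Quantization here means storing each 32-bit floating-point weight as an 8-bit integer plus a shared scale factor, shrinking the model to roughly a quarter of its size at the cost of rounding error, small perturbations of exactly the kind adversarial attacks exploit. A minimal sketch of symmetric per-tensor 8-bit quantization (illustrative only; the abstract does not detail the researchers' actual quantization scheme):

```python
import numpy as np

def quantize_int8(w):
    """Uniformly quantize a float32 weight array to signed 8-bit
    integers; returns the int8 values and the scale that maps them
    back to approximate floats."""
    scale = np.max(np.abs(w)) / 127.0  # 127 = max magnitude an int8 holds here
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float32 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error is bounded by half a quantization step per weight.
max_err = float(np.max(np.abs(weights - restored)))
```

Each quantization step introduces an error of at most half the scale; the article's point is that these small per-layer errors can be amplified layer by layer, widening the opening for adversarial inputs.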
Adding the constraint improved the models' performance under attack, with the smaller models in certain conditions even outperforming the 32-bit model.
Said MIT's Song Han, "Our technique limits error amplification and can even make compressed deep learning models more robust than full-precision models."
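The abstract does not spell out the constraint. One plausible form, in the spirit of "limiting error amplification," is to keep each layer's Lipschitz constant (for a linear layer, its largest singular value) near 1, so a small perturbation at the input cannot grow as it passes through the network. A hedged sketch of such a penalty, estimating the spectral norm by power iteration (the function names and the target value of 1.0 are illustrative assumptions, not the researchers' published method):

```python
import numpy as np

def spectral_norm(w, iters=50):
    """Estimate the largest singular value of weight matrix `w` via
    power iteration; for a linear layer this is its Lipschitz constant."""
    v = np.random.default_rng(1).normal(size=w.shape[1])
    for _ in range(iters):
        u = w @ v
        u /= np.linalg.norm(u)
        v = w.T @ u
        v /= np.linalg.norm(v)
    return float(u @ w @ v)

def lipschitz_penalty(weight_matrices, target=1.0):
    """Hypothetical regularization term pushing each layer's Lipschitz
    constant toward `target`, so quantization noise is not amplified
    layer by layer."""
    return sum((spectral_norm(w) - target) ** 2 for w in weight_matrices)
```

During training, a term like this would be added to the task loss, trading a little clean accuracy for the robustness gain the quote describes.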
From MIT News
Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA