Communications of the ACM

ACM TechNews

AI May Put Private Data at Risk

Personal data may be at risk due to vulnerabilities in machine learning models.

Cornell Tech researchers have determined that current machine learning models are vulnerable to privacy leaks and other attacks.

Cornell Tech's Vitaly Shmatikov has developed models that can tell, with more than 90% accuracy, whether a given piece of data was used to train a machine learning system. That knowledge could potentially be used to leak sensitive genetic or medical information, as well as detailed behavioral and location data.

Shmatikov says tools that enable people to ascertain if a record was used to train an algorithm can help them determine if their data was misused.

He and his colleagues examined cloud services from Google and Amazon, which help customers build machine learning models from their own information.

The team built "shadow models" trained on real or synthetic data and used them to accurately identify the records on which a model was trained, indicating that customers who use these services can easily expose their own training data.
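To make the idea concrete, here is a minimal illustrative sketch of a shadow-model membership inference attack, not the researchers' actual code. It assumes a scikit-learn-style workflow: an attacker trains a shadow model on data whose membership status is known, trains an attack classifier on the shadow model's confidence vectors, and then applies that classifier to a target model's outputs.

```python
# Illustrative sketch of membership inference via a shadow model
# (assumed setup; not the Cornell Tech researchers' implementation).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for sensitive data, split into disjoint pools.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
Xt_in, yt_in = X[:1000], y[:1000]          # target's training records ("members")
Xt_out, yt_out = X[1000:2000], y[1000:2000]  # records the target never saw
Xs_in, ys_in = X[2000:3000], y[2000:3000]    # shadow's training records
Xs_out, ys_out = X[3000:], y[3000:]          # shadow's held-out records

target = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xt_in, yt_in)
shadow = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xs_in, ys_in)

# The attack model learns to separate members from non-members using the
# shadow model's confidence vectors, where membership is known to the attacker.
# Sorting each probability vector makes the attack features label-agnostic.
attack_X = np.vstack([shadow.predict_proba(Xs_in), shadow.predict_proba(Xs_out)])
attack_y = np.concatenate([np.ones(len(Xs_in)), np.zeros(len(Xs_out))])
attack = LogisticRegression().fit(np.sort(attack_X, axis=1), attack_y)

# Apply the attack to the target model, whose training set is unknown
# to the attacker; membership labels here are used only for scoring.
test_X = np.vstack([target.predict_proba(Xt_in), target.predict_proba(Xt_out)])
test_y = np.concatenate([np.ones(len(Xt_in)), np.zeros(len(Xt_out))])
acc = attack.score(np.sort(test_X, axis=1), test_y)
print(f"membership inference accuracy: {acc:.2f}")
```

The attack succeeds to the extent the target model is more confident on records it was trained on than on unseen records, which is why overfit models leak the most membership information.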

From Cornell Chronicle (NY)


Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


