The algorithms that determine what people see on the Web, whether search results, status updates, or product recommendations, are inscrutable not only to users: even the engineers who develop the Web's underlying software do not know exactly how it works. Andrew Moore, dean of computer science at Carnegie Mellon University, notes that machine-learning models train themselves on vast amounts of data about previous users. However, Moore says it is becoming increasingly difficult to know which processes machine-learning models use and which data they collect. The data can range from the color of the pixels on a movie poster to a person's physical proximity to other people who enjoyed the movie, and the bits of information a model might analyze and prioritize could number 2,000 data points or 100,000.
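To see why even a simple model resists easy diagnosis, consider a minimal sketch (not any real recommender system, and far simpler than those the article describes): a linear model trained on a few thousand hypothetical per-item features, standing in for things like poster pixel colors or viewer proximity. All names and the synthetic data below are illustrative assumptions.

```python
import random

random.seed(0)
NUM_FEATURES = 2000  # the article cites models using 2,000 to 100,000 data points


def predict(weights, features):
    """Score an item as the dot product of learned weights and its features."""
    return sum(w * x for w, x in zip(weights, features))


def train(examples, epochs=5, lr=0.01):
    """Perceptron-style updates: nudge every weight toward the correct label."""
    weights = [0.0] * NUM_FEATURES
    for _ in range(epochs):
        for features, label in examples:
            error = label - (1 if predict(weights, features) > 0 else 0)
            for i, x in enumerate(features):
                weights[i] += lr * error * x
    return weights


# Synthetic "users": random feature vectors labeled by an arbitrary hidden rule.
examples = []
for _ in range(50):
    features = [random.random() for _ in range(NUM_FEATURES)]
    label = 1 if features[0] + features[1] > 1.0 else 0
    examples.append((features, label))

weights = train(examples)

# After training, nearly every weight is nonzero, so no single feature
# "explains" a score; a prediction is a sum over thousands of small influences.
influential = sum(1 for w in weights if abs(w) > 1e-6)
print(f"{influential} of {NUM_FEATURES} features carry nonzero weight")
```

Even here, with one linear layer and a known hidden rule, diagnosing which feature drove a given score means reasoning about thousands of interacting weights; production systems add many layers of learned, shifting parameters on top of this.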
Moore says we are "moving away from, not toward, the world where you can immediately give a clear diagnosis" of what a data-fed algorithm is doing with a person's Web behavior. "You might be overestimating how much the content providers understand how their own systems work," Moore says.
As machine-learning systems grow ever more complex, they also could inadvertently hurt people; for example, a model could weigh a piece of information in a way that leads to a loan rejection.
From The Atlantic
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA