A coalition of 46 scientists is calling on the research community to reconsider how it shares new artificial intelligence (AI) technology by specifying its potentially negative societal impacts as well as its benefits.
The researchers, collectively known as the ACM Future of Computing Academy, are urging peer-reviewed journals to reject papers that do not explore such hazards, a suggestion many scientists oppose, arguing it would not eliminate AI's potential dangers.
Concerned U.S. and British scientists and policymakers published a report on weaponized AI in February, while other researchers are building systems to demonstrate how the technology can go awry.
The Massachusetts Institute of Technology's Matt Groh, for example, constructed a system that deletes objects and people from photos, illustrating AI's potential use for disinformation.
Another recent example is a joint Google Brain/DeepMind system that outperforms professional lip-readers, with ramifications for surveillance.
From The New York Times
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA