It may be theoretically impossible for humans to control a superintelligent artificial intelligence (AI), a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it's on the verge of being created.
Slightly less grim is the timetable. By at least one estimate, many decades lie ahead before any such existential computational reckoning could be in the cards for humanity.
Alongside news of AI besting humans at games such as chess, Go, and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. "The question about whether superintelligence could be controlled if created is quite old," says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. "It goes back at least to Asimov's First Law of Robotics, in the 1940s."
The Three Laws of Robotics, first introduced in Isaac Asimov's 1942 short story "Runaround," are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In 2014, philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, not only explored ways in which a superintelligent AI could destroy us, but also investigated potential control strategies for such a machine—and the reasons they might not work.
From IEEE Spectrum