Communications of the ACM

ACM News

Can We Stop Runaway A.I.?

Computer scientist Jeff Clune said as A.I. challenges “dissolve,” more researchers are declaring openly that artificial general intelligence is possible and may pose a destabilizing danger to society.

Credit: Shira Inbar

Increasingly, we're surrounded by fake people. Sometimes we know it and sometimes we don't. They offer us customer service on Web sites, target us in video games, and fill our social-media feeds; they trade stocks and, with the help of systems such as OpenAI's ChatGPT, can write essays, articles, and e-mails. By no means are these A.I. systems up to all the tasks expected of a full-fledged person. But they excel in certain domains, and they're branching out.

Many researchers involved in A.I. believe that today's fake people are just the beginning. In their view, there's a good chance that current A.I. technology will develop into artificial general intelligence, or A.G.I.—a higher form of A.I. capable of thinking at a human level in many or most regards. A smaller group argues that A.G.I.'s power could escalate exponentially. If a computer system can write code—as ChatGPT already can—then it might eventually learn to improve itself over and over again until computing technology reaches what's known as "the singularity": a point at which it escapes our control. In the worst-case scenario envisioned by these thinkers, uncontrollable A.I.s could infiltrate every aspect of our technological lives, disrupting or redirecting our infrastructure, financial systems, communications, and more. Fake people, now endowed with superhuman cunning, might persuade us to vote for measures and invest in concerns that fortify their standing, and susceptible individuals or factions could overthrow governments or terrorize populations.

From The New Yorker
View Full Article



No entries found