In March, two Google employees whose job is to review the company's artificial intelligence products tried to stop Google from launching an A.I. chatbot, believing it generated inaccurate and dangerous statements.
Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.
The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.
The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry's next big thing — generative A.I., the powerful new technology that fuels those chatbots.
From The New York Times