What to make of the strange case of Blake Lemoine? The Google AI engineer made headlines this past week when he claimed one of the company's chatbots had become "sentient." Not only does Google say Lemoine's claims are untrue, but almost every AI expert agrees that the chatbot, which Google calls LaMDA, is not sentient in the way Lemoine says it is.
Is Lemoine's claim the inevitable result of the field's persistent fetishization of the Turing Test as a benchmark? In many real-world runs of the Turing Test, humans simply don't try very hard to stump the machine. And in many cases, people are eager to deceive themselves into believing the bots are real.