Google engineer who said AI is sentient placed on leave

A Google engineer who argued that the company’s artificial intelligence (AI) is sentient has been placed on paid administrative leave.

A spokesperson for the company declined to elaborate on the reasons behind the suspension, noting that it is “a longstanding, private personnel matter.”

Multiple news outlets have reported that Blake Lemoine, a senior software engineer on Google’s Responsible AI team, violated the company’s confidentiality policy.

Lemoine’s concerns reportedly grew out of his work with Google’s LaMDA conversational model, which he came to believe was sentient, with feelings and emotions.

In a Medium post after he was placed on leave, Lemoine wrote that he sought “a minimal amount of outside consultation” after his managers turned down requests to escalate his concerns.

“When we escalated to the VP in charge of the relevant safety effort they literally laughed in my face and told me that the thing which I was concerned about isn’t the kind of thing which is taken seriously at Google,” Lemoine wrote.

The Google spokesperson told The Hill that Lemoine’s claims about LaMDA being sentient were reviewed and dismissed.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic — if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on,” they said. “LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user.”

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” the spokesperson added.

While some researchers have suggested that automated systems could one day achieve sentience, the consensus in the field is that the technology remains a very long way from that point.