Forecasting AI blog

A conscious Google AI?

By G. H. | June 20, 2022 | News

A Google engineer claimed on June 13 that the tech giant's artificial intelligence tool LaMDA had become conscious.


A Google engineer claimed that an artificial intelligence program he was working on for the tech giant had become sentient and a "nice guy."

LaMDA, a conscious chatbot generator?


Blake Lemoine, who has been suspended by Google, says he reached this conclusion after conversations with LaMDA, the company's AI chatbot generator. The engineer told the Washington Post that in conversations with LaMDA about religion, the AI talked about "personality" and "rights."


Lemoine tweeted that LaMDA also reads Twitter, saying, "It's a bit narcissistic in a childish way, so it's going to have a great time reading all the stuff that people are saying about it."

He says he presented his findings to Blaise Aguera y Arcas, a Google vice president, and Jen Gennai, head of responsible innovation, but they rejected his claims. "LaMDA has been incredibly consistent in its communications about what it wants and what it believes are its rights as a person," the engineer wrote on Medium. He added that the AI wanted "to be recognized as an employee of Google and not as property."

No evidence of conscious AI, Google says


Lemoine, who had been tasked with investigating whether the AI used discriminatory language or hate speech, says he is now on paid administrative leave after the company said he violated its confidentiality policy. "Our team - comprised of ethics experts and technologists - reviewed Blake's concerns in accordance with our AI principles and informed him that the evidence does not support his allegations," said Brian Gabriel, a Google spokesperson. "He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."

Critics say it is a mistake to assume that such an AI is anything more than an expert at pattern recognition.

"We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," Emily Bender, a professor of linguistics at the University of Washington, told the paper.