
Google engineer suspended after claiming the LaMDA chatbot achieved sentience

The engineer published transcripts of his conversations with LaMDA


Google suspended one of its engineers, Blake Lemoine, after he claimed the company’s ‘LaMDA’ chatbot system had achieved sentience.

The search giant believes that Lemoine violated the company’s confidentiality policies and placed him on paid administrative leave. Lemoine reportedly invited a lawyer to represent LaMDA, short for Language Model for Dialogue Applications. Additionally, Lemoine reportedly spoke to a representative from the U.S. House Judiciary Committee about alleged unethical activities at Google.

Lemoine works for Google’s Responsible AI organization and was testing whether LaMDA generated discriminatory language or hate speech — something big tech chatbots have had a tendency to do.

Instead, Lemoine believes he found sentience, based on responses LaMDA generated about rights and the ethics of robotics. According to The Verge, Lemoine shared a document with executives titled “Is LaMDA Sentient?” in April. The document contains a transcript of Lemoine’s conversations with LaMDA (after being placed on leave, Lemoine also published the transcript via his Medium account).

In another Medium post, Lemoine shared a list of people he had consulted to help “guide” his investigations. The list included U.S. government employees.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Google spokesperson Brian Gabriel told The Washington Post.

Gabriel went on to explain that AI systems like LaMDA can “imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

Google CEO Sundar Pichai first introduced LaMDA at the company’s 2021 I/O developer conference. At the time, Pichai said the company planned to embed LaMDA in products like Search and Google Assistant. The Post also cited a Google paper about LaMDA from January that warned people might share personal thoughts with chatbots that impersonate humans, even when users know the chatbot isn’t human.

The Post interviewed a linguistics professor, who said it wasn’t right to equate convincing written responses with sentience. Still, some of the responses LaMDA generated are admittedly creepy, whether or not you believe it’s sentient.

The main takeaway here should be that there’s a need for increased transparency and understanding around AI systems. Margaret Mitchell, the former co-lead of Ethical AI at Google, told the Post that transparency could help address questions about sentience, bias, and behaviour. Mitchell also warned of the potential harm something like LaMDA could cause to people who don’t understand it.

Moreover, the focus on sentience may distract from other, more important ethics conversations. Timnit Gebru, an AI ethicist whom Google fired in 2020 (the company says she resigned), tweeted as much, suggesting that discussions of sentience derail more pressing topics.

Source: The Verge, The Washington Post

