LaMDA and the Sentient AI Trap

(Wired) Google AI researcher Blake Lemoine was recently placed on administrative leave after going public with claims that LaMDA, a large language model designed to converse with people, was sentient. At one point, according to reporting by The Washington Post, Lemoine went so far as to demand legal representation for LaMDA; he has said his belief in LaMDA's personhood is based on his Christian faith and on the model telling him it had a soul.

The prospect of AI smarter than humans gaining consciousness is routinely discussed by figures such as Elon Musk and OpenAI CEO Sam Altman, particularly amid recent efforts by companies like Google, Microsoft, and Nvidia to train ever larger language models.

Discussions of whether language models can be sentient date back to ELIZA, a relatively primitive chatbot built in the 1960s. But with the rise of deep learning and ever larger amounts of training data, language models have become far more convincing at generating text that reads as though it were written by a person.

Read more here.
