Is Google LaMDA Artificial Intelligence Sentient? One Engineer Risks His Career To Open The Conversation

Artificial intelligence has found its way into almost every aspect of technology, but for many sci-fi fans and pop culture futurists, AI holds the potential to overtake humanity—eventually. But Google engineer Blake Lemoine may have risked his job to sound the alarm on its AI-powered chatbot, LaMDA (Language Model for Dialogue Applications), which he claims has reached sentience. Google disagrees and has suspended Lemoine for publishing his conversations with the chatbot that he claims thinks it is human.


While Google has been building LaMDA to formulate AI-driven conversations with customers—and unveiled the second generation of the model in May 2022—Lemoine claims that the research tool has developed feelings and awareness. Most experts disagree, but the engineer has elicited his intended response: an open conversation about AI and humanity.

Despite the countless books, movies, and other media portraying a dire eventuality of AI overtaking its human creators, most contemporary scientists have yet to grow alarmed. The very concept of consciousness is still not fully understood, and in LaMDA’s case, more testing and research are needed before jumping to the conclusion that it has gained true sentience.