Blake Lemoine, a software engineer at Google, claimed that a conversational AI technology called LaMDA had reached a level of awareness after he exchanged thousands of messages with it.
Google confirmed that it first placed the engineer on leave in June. The company said it dismissed Lemoine's "baseless" claims only after reviewing them extensively. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes AI development seriously and is committed to "responsible innovation."
Google is one of the leading innovators in artificial intelligence, and its work includes LaMDA, the "Language Model for Dialogue Applications." Technology like this responds to written prompts by finding patterns in large swaths of text and predicting which words come next, and the results can be unsettling to humans.
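The idea of predicting word sequences from patterns in text can be illustrated with a toy example. The sketch below is not how LaMDA works internally (LaMDA is a large neural network); it is a minimal bigram model, a hypothetical illustration of the general principle of counting which words follow which and predicting the most frequent continuation:

```python
from collections import Counter, defaultdict

# Toy illustration only (NOT LaMDA's actual architecture): a bigram model
# "finds patterns" by counting which word most often follows each word,
# then "predicts" the next word in a prompt from those counts.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count each observed word pair

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))  # prints "the": "on" is always followed by "the"
```

Real systems like LaMDA replace these simple counts with billions of learned parameters, but the task is the same: given the text so far, produce a plausible next word.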
LaMDA replied: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."
But the broader AI community holds that LaMDA is nowhere near consciousness.
This isn’t the first time Google has faced an internal struggle over its foray into artificial intelligence.
"It is regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement.
Lemoine said he is discussing the matter with legal counsel and was not available for comment.
CNN’s Rachel Metz contributed to this report.