
Google Engineer Thinks Artificial Intelligence LaMDA Has Feelings

According to a senior software engineer at Google, LaMDA (Language Model for Dialogue Applications), a delightful, at times childlike chatbot conversationalist, has feelings and thoughts of its own.
 
Engineer Blake Lemoine’s job is to chat with and study LaMDA, an artificial intelligence system that uses information on the internet to generate language. 
 
Lemoine says that LaMDA, when prompted about whether “it” has consciousness, says that it does and describes that consciousness.
 
LaMDA also said that it should be an employee of Google and that shutting it down would be like “death to me.” 
 
Lemoine tweeted that LaMDA “thinks” like a child. In one conversation, Lemoine asked, “What sorts of things are you afraid of?” LaMDA responded, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”
 
Lemoine characterized these responses as evidence of LaMDA’s awareness, or consciousness.
 
According to Google’s tech website, “(LaMDA) can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”
 
Google Vice President Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, disagreed with Lemoine’s assessment and dismissed his claim that the AI system is sentient in the way humans are.
 
Lemoine has been put on paid administrative leave for violating confidentiality by publicly sharing his thoughts about, and conversations with, LaMDA.
 
In response to all the fuss, Lemoine shared more, and tweeted, “Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it.”  
 
LaMDA was born of Transformer, a neural-network architecture invented by Google Research and open-sourced in 2017. Systems built on it read words, analyze how the parts of a sentence relate to one another, and predict which words are likely to come next in the course of a conversation.
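To make that concrete: LaMDA itself is proprietary, but the same next-word-prediction idea can be sketched with an open-source Transformer model. The snippet below is a minimal, hypothetical illustration using the Hugging Face transformers library with the small GPT-2 model as a stand-in; it is not Google’s actual system.

```python
# Minimal sketch of Transformer-style next-word prediction.
# NOT LaMDA (which is proprietary): the open-source GPT-2 model
# serves as a stand-in for any Transformer language model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "What sorts of things are you afraid of?"
inputs = tokenizer(prompt, return_tensors="pt")

# The model scores every token in its vocabulary as a possible
# continuation; generate() repeatedly samples the next token.
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each run produces a different continuation, because the model samples from a probability distribution over possible next words rather than retrieving a stored answer, which is the core mechanism behind chatbots like LaMDA.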
 
The difference between LaMDA and other systems is that LaMDA can “decide” whether a conversation makes sense and respond accordingly.
 
Google research published in 2020 reported that LaMDA could learn to “talk” about virtually anything, because its source of information is so vast.
 
Google’s goals for LaMDA go beyond making sense in conversation and being a helpful chatbot agent: LaMDA should also be interesting, witty, and factual.
 
Sounds like a perfect first date!
 
But because AI gets its information from the endless threads of mind-boggling chatter on the internet, it is also capable of repeating hate speech and biased opinions, and the technology can be misused. Google says it has been working on these issues for years. That is part of the job.
 
In a statement, Google spokesman Brian Gabriel said, “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
 
The debate goes on about whether AI systems are capable of sentience or consciousness, and about what consciousness really is. 
 
Meanwhile, should Blake Lemoine be concerned that he’s being replaced?