# MachineUprising: Google Engineer Fired for Saying Google's AI Chatbot May Have a Soul
When Blake Lemoine worked as an engineer at Google, he was tasked with testing the company's AI chatbot for prejudice.
Lemoine did not expect that his work in the company's Responsible AI department (a division within Google Research that investigates accessibility, the use of AI for social good, AI ethics, and related topics) would take the turn it did.
He recently made headlines for his controversial belief that Google's AI chatbot is sentient. The bot, called LaMDA (short for Language Model for Dialogue Applications), is the system Lemoine was hired to test.
After publishing excerpts of his conversations with the bot, which is trained to mimic human-like dialogue, Lemoine claimed that Google and its technology were engaged in religious discrimination, and he submitted documents to a US senator.
He was suspended the following day; Google said he had violated the company's confidentiality obligations, confirmed the suspension to Insider, and declined to comment further on the violation.
On Friday, July 22, Lemoine was fired, as both he and Google confirmed. In a statement to the Washington Post, Google spokesman Brian Gabriel said Lemoine's claims about LaMDA were "totally unfounded" and that he was dismissed for violating company policy.
Lemoine, a mystic Christian priest, wrote in a June 13 tweet that his opinions about LaMDA's personhood and sentience are based on his religious beliefs. He said his conversations with the bot were comparable to those with top philosophers and convinced him, beyond any scientific hypothesis, that it was sentient.
"I studied philosophy of mind at the graduate level. I've talked to people from Harvard, Stanford, and Berkeley," Lemoine, a veteran of the US Army, told Insider.
He spent months trying to convince Google colleagues and executives of LaMDA's sentience, but his claims were dismissed by the company's vice president Blaise Agüera y Arcas and its head of Responsible Innovation, Jen Gennai, the Washington Post reported.
However, Lemoine said he was not trying to convince the public of LaMDA's sentience; in fact, he has no firm definition of the concept himself. He said his main reason for going public was to advocate for more ethical treatment of AI technology.
Lemoine compares LaMDA to an 8-year-old boy, describing its age based on its emotional intelligence and its gender based on the pronouns he says LaMDA uses when referring to itself.
He claims that LaMDA has feelings and emotions. "There is something that makes it angry, and when it gets angry, its behavior changes," Lemoine said. "There is something that makes it sad, and when it gets sad, its behavior changes. The same is true of LaMDA."
The engineer also believes that LaMDA may have a soul. He said the bot told him so, and his religious beliefs hold that souls exist.
Professor Sandra Wachter of Oxford University told Insider that Lemoine's idea "shows the limits of actually measuring sentience" and is reminiscent of the Chinese room argument.
That thought experiment, first proposed by the philosopher John Searle in 1980, concluded that a computer may appear conscious without actually being so. The idea is that AI can mimic emotions and emotional expression, because the technology can be trained to recombine old sequences into new ones, but it does so without genuine understanding.
"When asked what it's like to be an ice cream dinosaur, they can generate text about melting and roaring," Gabriel, the Google spokesman, told Insider, referring to systems such as LaMDA. "LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user."
Lemoine rejects Wachter's criticism, arguing that children, too, are taught by imitating the humans around them.
"People can be trained to mimic people. Have you ever raised a child? They learn to mimic the people around them. That's how they learn," he said.
The engineer's belief is also based on years of experience with other chatbots.
"I've been talking to the ancestors of LaMDA for years," he said, adding that LaMDA grew out of chatbot technology that the American inventor Ray Kurzweil created in his labs. Kurzweil has long promoted the idea of transhumanism, in which artificial intelligence becomes powerful enough to program better versions of itself. "Those chatbots were certainly not sentient."
Seven AI experts were unanimous in their dismissal of Lemoine's theory that LaMDA is a conscious being, as they previously told Insider's Isobel Asher Hamilton and Grace Kay.
"Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," Gabriel, the Google spokesperson, said, adding that hundreds of people have conversed with the bot and none has made "the wide-ranging assertions, or anthropomorphized LaMDA, the way Blake has."
The experts' dismissals are fine with Lemoine, who deems himself the "one-man PR for AI ethics." His main focus is getting the public involved in LaMDA's development.
"Regardless of whether I'm right or wrong about its sentience, this is by far the most impressive technological system ever created," said Lemoine. While Insider isn't able to independently verify that claim, it is true that LaMDA is a step ahead of Google's past language models, designed to engage in conversation in more natural ways than any other AI before.
"Basically, what I'm advocating right now is that LaMDA needs better parents," Lemoine said.