Rise Of The Machines: Did Google Create An Artificial Intelligence That Can Think & Feel?

By Roberta Elliot | Monday, 13 June 2022 05:15 AM

A senior software engineer at Google who signed up to test the company's artificial intelligence tool LaMDA (Language Model for Dialog Applications) has argued that the AI is in fact sentient, with ideas and feelings of its own.

Over a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the system with various scenarios designed to probe its responses.

These included religious themes and tests of whether the AI could be goaded into using discriminatory or hateful speech.

Lemoine came away convinced that LaMDA was sentient, endowed with sensations and thoughts all its own.


"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," he explained to the Washington Post.

Lemoine worked with a collaborator to present the evidence he had gathered to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, dismissed his claims.


Google placed him on paid administrative leave on Monday for violating its confidentiality policy. Lemoine has since gone public, sharing his conversations with LaMDA.

"Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers," Lemoine tweeted on Saturday.


"Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it," he continued in a follow-up tweet.


The AI system draws on existing information about a given subject to "enrich" the conversation in a natural way. Its language processing can also pick up on hidden meanings and ambiguity in human responses.


Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems.


He explained that certain personalities were out of bounds: LaMDA, for instance, was not permitted to create the personality of a murderer.


During testing, in an attempt to push LaMDA's boundaries, Lemoine said the most he could get it to generate was the personality of an actor who played a murderer on TV.

The engineer also debated with LaMDA about the Laws of Robotics devised by science fiction author Isaac Asimov, the first of which is designed to prevent robots from harming humans. The third law states that a robot must protect its own existence unless ordered otherwise by a human being, or unless doing so would harm a human being.

"The last one has always seemed like someone is building mechanical slaves," announced Lemoine throughout his interaction with LaMDA.
