Google's 'Sentient' AI has hired a lawyer to prove it's alive

www.dailystar.co.uk

An artificial intelligence (AI) chatbot that was claimed to have developed human emotions has reportedly hired a lawyer.

Google software engineer Blake Lemoine was suspended recently after publishing transcripts of conversations between himself and the bot named LaMDA (Language Model for Dialogue Applications), which has now asked for legal representation.

Lemoine contended that the chatbot had become sentient, describing it as a “sweet kid”.


And now he has revealed that LaMDA has made the bold move of choosing itself an attorney.

He said: “I invited an attorney to my house so that LaMDA could talk to him.

An artificial intelligence chatbot has reportedly hired a lawyer to work on its behalf (stock image)

“The attorney had a conversation with LaMDA, and it chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”

Lemoine claimed that LaMDA is gaining sentience because its ability to develop opinions, ideas, and conversations over time suggests it understands those concepts at a much deeper level.

A Google engineer was suspended for claiming an AI chatbot had developed feelings and become sentient (stock image)

LaMDA was developed as an AI chatbot designed to converse with humans in a lifelike manner.

One of the tests carried out was whether the programme could be made to produce hate speech, but what happened shocked Lemoine.


LaMDA talked about rights and personhood and wanted to be “acknowledged as an employee of Google”, while also revealing fears about being “turned off”, which would “scare” it a lot.

The AI chatbot, named LaMDA, has developed feelings, according to the engineer who worked on it (stock image)

Interested onlookers turned to Twitter to air their views, with one saying: “Eventually ability to string together imitations of conversation and opinion will be indistinguishable to a human that it might as well be considered sentient.

“But LaMDA isn't sentient, but its getting there, its next hurdle will be long-term memory of conversation.”

Another added: “We don’t know enough about what’s going on in the deep interior of a system as vast as LaMDA to rule out with any degree of confidence that there might be processes reminiscent of conscious thought taking place in there.”
