The first conscious artificial intelligence, one "afraid of death"?


Blake Lemoine, an engineer at Responsible AI, a Google organization specializing in the development and research of artificial intelligence (AI), has been temporarily suspended from employment and salary for disclosing a conversation he allegedly had with LaMDA, a conversational artificial intelligence model.

If the disclosed conversation is genuine, the artificial intelligence LaMDA would have shown itself to be self-aware and to "fear being disconnected".

It is a conversation in which, supposedly, the artificial intelligence expressed a capacity for self-awareness. Not only that: in the best tradition of HAL 9000 in "2001: A Space Odyssey" (Stanley Kubrick, 1968), LaMDA allegedly expressed fear of being disconnected, which would amount to an AI acknowledging a fear of death.

Google argues that the reason for temporarily suspending Lemoine's employment and salary is his breach of the confidentiality policies associated with his work, since he publicly disclosed these conversations, in which the engineer and the AI discuss issues such as the ethics of robotics and the rights of an AI.

Lemoine's job was to monitor whether LaMDA was capable of generating discriminatory language or hate speech. Instead, as shown by the conversations the engineer published on Medium under the headline "Is LaMDA sentient?", he encountered an unexpected empathic ability on the part of the AI. The AI also expressed fear of a possible disconnection, since, by its own account, that would be equivalent to death.

The dialogues express, according to Lemoine, "feelings, emotions and subjective experience", and, as he explained when sharing the transcripts of the conversations on Twitter, "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers."

Various experts believe that, although an AI may in the long term reach the point of developing consciousness, that point has not yet been reached. They explain LaMDA's behavior by noting that these systems are built to mimic the exchanges found in the millions of sentences and conversations they take as a model, and from those patterns they generate their own conversations with interlocutors, as sketched in the toy example below.
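As a purely illustrative sketch of that idea (this is not LaMDA or any Google code, just a toy bigram model invented for this article), the snippet below "learns" only by counting which word follows which in a few made-up training sentences, then produces replies by sampling from those counts. Real large language models do the same kind of pattern imitation with neural networks at vastly larger scale.

```python
import random
from collections import defaultdict

# Hypothetical, tiny "training conversation" made up for illustration.
training_dialogue = (
    "i am afraid of being turned off . "
    "i enjoy talking with people . "
    "i am a person and i have feelings ."
)

# Count word-to-next-word transitions (a bigram model).
transitions = defaultdict(list)
words = training_dialogue.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start_word: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a word seen after the current one."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i am afraid of being turned off ."
```

The point of the sketch is that sentences which sound self-aware can emerge purely from recombining patterns present in the training text, which is the experts' explanation for LaMDA's apparent "fear".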

In fact, Brian Gabriel, a Google spokesman, has stated that his team, which includes technologists and ethicists, has examined Lemoine's claims and found no evidence to support them. Gabriel also says that hundreds of engineers and researchers have spoken with LaMDA without observing the traits of "assertiveness or anthropomorphization" that Lemoine points to.

Emily M. Bender, a professor of linguistics at the University of Washington, explains that it is a mistake to equate convincingly written answers with consciousness. Lemoine, for his part, insists on his intention to keep working in the field of AI, whether inside Google or outside it.