
Users will be able to test the Google AI that supposedly gained feelings

LaMDA 2, the Google artificial intelligence whose updates were announced at I/O 2022 in May, is now accepting sign-ups from users interested in chatting with the much-discussed chatbot. More than for its technological capabilities, the system made headlines after Google suspended an engineer who accused the company of violating AI ethics by producing a model that had supposedly "come to life", with feelings and everything else people fear might one day happen.

On the matter, Google maintains that its systems are advanced enough to imitate human conversation and mannerisms, but that there is a long way from there to actually having feelings. LaMDA, a natural language processing (NLP) model, is meant to improve Google's conversational AI assistants and make conversations more natural. Voice assistants such as Siri or Alexa are prominent examples of such technologies, as they can translate human speech into commands.


Now the company wants public feedback and is opening registration for anyone who wants to be among the first to interact with the new model. Users who enroll in the LaMDA beta program will be able to interact with the AI in a controlled and monitored environment.

Android users in the US will be the first able to register, with the program expanding to iOS users in the coming weeks. The experimental program offers beta users a few demos that showcase LaMDA's capabilities.

The first demo, "Imagine It", lets users name a place, and from there the model offers avenues to explore their imagination. The second, "List It", lets users share a goal or topic and have it broken down into a list of useful sub-tasks. Finally, "Talk About It (Dogs Edition)" allows open-ended conversations about dogs between users and the chatbot.

Despite the tests, Google warns that the system is naturally not flawless, nor free of the typical pitfalls of human language. "The model can misunderstand the intent behind identity terms and sometimes fails to produce a response when they are used, because it has difficulty differentiating between benign and adversarial prompts. It can also produce harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent people based on their gender or cultural background."

Registration is available through Google's AI Test Kitchen portal.
