Like all the major players in the tech sector, Google is investing time and money in artificial intelligence, an effort that takes shape with Gemini, woven throughout the Mountain View giant's software ecosystem, and with the AI Overviews/AI Mode pairing in the world of Search.

Although AI can surprise us positively in some respects, there are many other cases that are rather "comical" and show how much development work remains before we can genuinely call it intelligent.

AI Overviews shows its limits in Google Search

At the end of March, Google brought AI Overviews to Italy, a feature integrated into Google Search and, of course, based on artificial intelligence, designed to give users more complete and articulated answers than plain search results (which are still shown below them).

At present, taking everything suggested by AI Overviews as gospel could be a big mistake, especially considering how easy it is to fool the artificial "intelligence" behind the feature.

In fact, some users have typed completely made-up proverbs into the search box and asked for their meaning. The AI did not hesitate or check in advance whether these expressions actually exist; it simply invented a meaning out of thin air.

The hallucinations of artificial intelligence

The Bluesky user @gregjenner asked for the meaning of the saying "You can't lick a badger twice". AI Overviews explained that this saying means you cannot deceive or cheat someone a second time after they have already been fooled once.

The colleagues at Engadget, who made many other attempts and always received similarly imaginative answers, describe these as hallucinations of artificial intelligence.

Anyone who relies on an AI's answers without verifying their truthfulness (taking everything for granted) could end up in trouble: one such case already occurred in 2023, when two lawyers were fined $5,000 for having used ChatGPT in a client's case, relying on a chatbot response that cited non-existent precedents.