
Google I/O 2022: the news on services, from Maps to Search, from Assistant to Translate

After long days of waiting, previews and rumors, Google I/O officially kicked off with the usual opening keynote, which was also an opportunity to see the company's latest hardware and software news.

At events like these, it is hard to attach the greatest expectations to topics like the search engine or Google Maps, yet they are very important pieces of the Mountain View giant's offering. That is true both for the sheer number of people who use them and for their ability to generate, directly or indirectly, a good portion of the company's turnover, since they host a large share of the ads purchased by advertisers.

Google I/O 2022 gave us the opportunity to take a look at the news coming over the next few months: let's go through it in detail, starting with the updates to the search engine.

INCREASINGLY INTEGRATED SEARCH

For Google, search will be less and less tied to the classic typing of a text string into the appropriate field, and more and more the result of a mix of interactions between the tools developed in-house and the world around us. The goal is to make every operation feel natural, offering more complete answers that match users' expectations.

Google Lens, for example, has been available for some time and moves in this direction. The app has reached 8 billion searches per month: perhaps not a gigantic number in absolute terms, but triple that of 12 months ago, a sign that it is growing.

The next steps will come from the interaction between the different tools available. With Multisearch, announced a few weeks ago, it will be possible to combine image and text search. You will be able to photograph a dress, a pair of shoes or any other object and specify some extra details, such as the color or size you want, and then get more specific results. Another use case could be photographing a plant and adding a query about the best way to care for it or the products to keep it healthy.

At Google I/O 2022 it was announced that Multisearch will also work for searching for results near you. By taking a photo of a subject and typing "near me", the results will show us which nearby shops, bars or restaurants sell it. The idea can be extended to almost anything. This feature will be introduced first in English, in the course of 2022, and then extended to other languages.

For this purpose, Google mixes the results of the initial image scan, which allow it to identify the subject, with contributions from users who upload their impressions and photos to reviews and business listings.

The other novelty of today's event is Scene Exploration, which extends the idea of Google Lens to entire scenes. By photographing a bookshop shelf or a supermarket counter, the whole scene will be analyzed and information on multiple products will be shown at the same time. These features will arrive in the near future, though for now there are no firm dates.

MAPS HELPS US SAVE FUEL

As for Google Maps, the company explained that recent and future progress mainly involves the integration of increasingly advanced artificial intelligence systems. In particular, the original purpose of Maps, getting us from one place to another, has over time been joined by a growing number of features that let us learn about our surroundings.

One addition is Immersive View, a new system that combines Street View and aerial shots to let you explore what a certain place looks like, including restaurants, points of interest and so on. It is not yet clear how detailed the information and imagery will be but, considering that it will be released gradually, starting with Los Angeles, London, New York and Tokyo, it is likely a rather large undertaking and therefore an interesting addition.

On the sustainability front, a new option will let you choose the route that, by car or motorcycle, is optimized not so much to take less time as to consume less fuel and produce fewer emissions. This novelty is already being tested in the USA and Canada, but will also arrive in Europe.
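Google has not detailed how eco-routing weighs its factors, but the core idea, ranking candidate routes by estimated consumption rather than by duration, can be shown with a toy Kotlin sketch; the routes, figures and field names below are entirely hypothetical.

```kotlin
// Toy illustration, not Google's routing algorithm: ranking candidate
// routes by estimated fuel use instead of travel time.
data class Route(val name: String, val minutes: Int, val liters: Double)

fun main() {
    val candidates = listOf(
        Route("Highway", 42, 4.8),
        Route("Arterial", 47, 3.9)  // five minutes slower, ~20% less fuel
    )
    val fastest = candidates.minByOrNull { it.minutes }!!
    val greenest = candidates.minByOrNull { it.liters }!!
    println("Fastest route: ${fastest.name} (${fastest.minutes} min)")
    println("Eco route: ${greenest.name} (${greenest.liters} l)")
}
```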

Finally, an enhancement to Live View was announced. The augmented-reality system, which shows on the smartphone screen which direction to take to reach a destination, will become a little more useful thanks to the release of a set of dedicated APIs. Bike-sharing companies, for example, can use it to guide us to a bicycle to rent, and inside a shopping center it can lead us to the shop or restaurant we are looking for. DOCOMO and Curiosity, meanwhile, are developing a video game that promises to let users fight monsters in Tokyo's most iconic locations.
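The APIs in question center on ARCore's Geospatial API, which lets an app anchor virtual content to real-world latitude and longitude using the same localization that powers Live View. Below is a minimal Kotlin sketch of how a bike-sharing app might pin an anchor to a dock's position; the coordinates and function names are hypothetical, and error handling is reduced to early returns.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Enable the Geospatial API on an existing ARCore session.
fun enableGeospatial(session: Session) {
    val config = session.config
    config.geospatialMode = Config.GeospatialMode.ENABLED
    session.configure(config)
}

// Place an anchor at a real-world position, e.g. a rental bike's dock.
// The coordinates below are hypothetical.
fun anchorAtBikeDock(session: Session): Anchor? {
    val earth = session.earth ?: return null
    if (earth.trackingState != TrackingState.TRACKING) return null

    val dockLat = 35.6595   // hypothetical: a dock in Shibuya, Tokyo
    val dockLng = 139.7005
    // Reuse the device's current altitude for a ground-level marker.
    val altitude = earth.cameraGeospatialPose.altitude

    // Identity quaternion: orientation is irrelevant for a simple marker.
    return earth.createAnchor(dockLat, dockLng, altitude, 0f, 0f, 0f, 1f)
}
```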

24 NEW LANGUAGES

Google Translate will add support for 24 new languages, bringing the total to 133. Languages like Assamese, Dogri, Ewe and Krio may mean nothing to many of us (this writer included, until a few minutes ago), but this new package of additions, focused mainly on northern India and the African continent, potentially serves over 300 million people. The most widely spoken of the new languages is Lingala, used by 45 million people in Africa, while the least common is Sanskrit, still used by around 20,000 people in India.

AN EYE ON ASSISTANT

According to Google, over 700 million people use Assistant every day, and one of the goals the company has set itself is to make interactions more natural, starting with the way we summon it. In addition to saying "Hey Google" or pressing the appropriate button, Nest Hub Max users in the US can now simply look at the device and ask the question they want answered. This is an addition that had already been talked about a few weeks ago.

The function activates only once the Nest Hub Max has verified a match between the voice and the face of the registered user, and video processing takes place entirely on the device, so as to avoid privacy issues. Google said it uses six machine learning models to understand whether you are actually addressing the device, considering aspects such as body and head orientation, proximity, and lip and eye movements.
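Google has not published the internals of those six models, but the overall gating logic it describes, hard identity checks followed by a fused engagement score, can be sketched in a few lines of Kotlin. Everything below, from the signal names to the weights and thresholds, is a hypothetical illustration, not Google's implementation.

```kotlin
// Purely illustrative: a toy gate that fuses per-signal confidence scores
// (stand-ins for the outputs of Google's six models, whose details are
// unpublished) into a single "user is addressing the device" decision.
data class EngagementSignals(
    val faceMatch: Float,        // similarity to the enrolled user's face
    val voiceMatch: Float,       // similarity to the enrolled user's voice
    val gazeOnDevice: Float,     // probability the eyes are on the screen
    val headTowardDevice: Float, // head orientation relative to the device
    val proximity: Float,        // normalized closeness to the device
    val lipsMoving: Float        // probability the user is speaking
)

fun isAddressingDevice(s: EngagementSignals): Boolean {
    // Identity checks are hard gates: per Google's description, the
    // feature only activates for the registered user.
    if (s.faceMatch < 0.9f || s.voiceMatch < 0.9f) return false

    // The remaining signals are combined with hypothetical weights.
    val score = 0.35f * s.gazeOnDevice +
                0.25f * s.headTowardDevice +
                0.20f * s.proximity +
                0.20f * s.lipsMoving
    return score > 0.6f
}

fun main() {
    val looking = EngagementSignals(0.97f, 0.95f, 0.9f, 0.8f, 0.7f, 0.85f)
    println(isAddressingDevice(looking))  // true: all signals agree
}
```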

It is not a headline-grabbing addition in itself, but Google's stated goal is to continue down the path of better interpreting people's speech, understanding the interruptions, mistakes and other imperfections that, mostly unconsciously, characterize the way each of us expresses ourselves.
