Artificial intelligence will be integrated into Google Search, enabling it to provide versatile answers to a wide range of questions.
After Microsoft’s impressive presentation detailing the use of artificial intelligence as a co-pilot in Bing’s search chat, Google has focused more on improving existing features such as Lens and Translate. Hints that the first details of its AI-based chatbot, Bard, would appear today largely failed to materialize.
- the update covers “multisearch” – a tool that lets you start a search using an image combined with a few words of text.
For example, a user might search Google for a picture of a shirt they like and then ask Lens where to find the same print on a different type of clothing, such as a skirt or socks, says BlueScreen.
Or you can point your smartphone camera at a broken part on a bike and type “how to fix” into Google Search. Combining words and images in this way helps Google process and understand the query better.
- Bard will also be integrated into Search: the neural network will collect information in response to a user’s request and give a detailed answer. For questions with no one right answer (which Google calls NORA), Search will offer several points of view, along with links where more details can be found. In addition, users will be able to ask follow-up questions from a list offered under the answers.
- Google Maps’ Immersive View feature combines a 3D view of an area with live information such as traffic and weather. It will launch in five cities: London, Los Angeles, New York, San Francisco and Tokyo.
- a number of new features for electric-vehicle owners, including a filter for in-depth searches for charging stations with many additional refinement options.
- AR translation in Google Lens blends translated text into the image it came from. It will be available in all countries.
NIXsolutions notes the following features, which have not yet been given a specific release date:
- Android users will be able to call up the Assistant while watching a video and search for a building or person in the frame without closing the app;
- if you use Google Maps on your smartphone, Google will be able to show the arrival time and the next turn directly on the lock screen while you drive;
- the “multisearch nearby” function allows you to search for a specific dish or place near you. It is currently available only in the US.
- Indoor Live View, which overlays AR directions to help you navigate complex buildings and spaces, will be expanded to more than 1,000 new airports, train stations and shopping malls.
- search with Live View lets you learn more about specific places by viewing them through your phone’s camera. It has been added in Barcelona, Dublin and Madrid.
- Google Translate will be able to provide and understand additional context for certain words and phrases. Initially, it will be available only for English, French, German, Japanese and Spanish.