How to do a visual search with AI
AI (Artificial Intelligence) is being used in more and more Google products. In the coming days we will explain a number of these applications.
Since AI came to search, the focus has mainly been on language and text queries. Searching by image and video has now become easier as well: Google Lens, which lets you search with photos from the search bar, is used over 10 billion times a month.
What is new is that you can now also search using an image on your phone. That means you can search for photos or videos you receive, or for images you see on a website.
Say someone sends you a video from Paris and you want to know more about the building you see in the background. Press the power button or the home button on your Android phone to call up the Google Assistant, then tap 'search screen'. Lens then recognizes the building as the Palais Luxembourg, and you can tap through for more information about it.
Even more extensive is 'multisearch', which lets you search with an image and text at the same time. Multisearch is available in every language and country where Lens is available. In English in the US, it is currently also possible to search locally: for example, you can take a photo and add "near me" if you need something quickly.
You can also use multisearch on any image that appears on the search results page. Say you search for 'modern living room ideas' and see a coffee table you like, but would prefer in a different shape, for example rectangular instead of round. You can then add the text "rectangular" in multisearch to find a rectangular model.