Google may soon combine text and image search results
Google has started testing a new multisearch feature with a small group of beta users. As per the tech giant, the new multisearch feature in Lens will let you go beyond the search box and ask questions about what you see. With the new feature, you can ask a question about an object in front of you or refine your search by color, brand or another visual attribute. Once the feature is enabled, tap the Lens camera icon in the Google app, search with any image from your gallery or camera, then swipe up and tap the “+ Add to your search” button to add text to the query.
Adding text to the search lets you refine the results. For example, you can screenshot a stylish orange dress and add the query “green” to find it in another color, or snap a photo of your dining set and add the query “coffee table” to find a matching table.
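Google has not disclosed how multisearch fuses the image with the text refinement. The general idea can nonetheless be sketched with an openly available image-text model such as CLIP: embed the screenshot and the refinement text in a shared vector space, combine the two, and rank candidate photos by similarity. The Python sketch below is an illustration of that idea, not Google's method; the choice of CLIP, the fusion-by-averaging step and the file names are all assumptions made for the example.

# Rough sketch of image-plus-text retrieval in the spirit of multisearch.
# Google has not published its implementation; the open CLIP checkpoint
# stands in for it here, and all file names are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(path):
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def embed_text(text):
    inputs = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Fuse the screenshot of the orange dress with the text refinement "green"
# by averaging the two normalized embeddings (one simple fusion strategy).
query = embed_image("orange_dress.png") + embed_text("green dress")
query = query / query.norm(dim=-1, keepdim=True)

# Rank a small catalogue of candidate product photos by cosine similarity.
candidates = ["dress_a.jpg", "dress_b.jpg", "dress_c.jpg"]
scores = torch.cat([query @ embed_image(p).T for p in candidates])
print("Closest match:", candidates[int(scores.argmax())])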
As Google explained in a blog post, the feature uses the company’s latest advancements in artificial intelligence (AI), which make it easier to understand the world around you in more natural and intuitive ways. The company further revealed that it is exploring ways in which this feature might be enhanced by its Multitask Unified Model (MUM).
For those who do not know, the tech giant announced MUM at Google I/O in May last year. The technology uses the T5 text-to-text framework and is trained across 75 different languages, allowing it to develop a more comprehensive understanding of information. And as MUM is multimodal, it understands information across text and images. The company announced plans to add this new artificial intelligence to Lens and Search in October last year.
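The T5 framework that MUM builds on casts every language task as turning one string into another. MUM itself is not publicly available, but the framing can be illustrated with the openly released t5-small checkpoint, a far smaller public model that shares the same text-to-text design:

# Minimal text-to-text example with the public t5-small checkpoint.
# This is not MUM; it only demonstrates the T5 framing the article mentions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# In the text-to-text framework, every task, here translation, is phrased
# as mapping an input string to an output string.
inputs = tokenizer(
    "translate English to German: The table matches the dining set.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))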