
Now, search Google with pictures… rather than words; read on…

You like the way an outfit looks, but you'd rather have it in green. You want those shoes, but you prefer flats over heels. What if you could get curtains with the same pattern as your favorite notebook? I wouldn't know how to Google for any of these things, but Google Search product manager Belinda Zeng showed me real-world examples of each earlier this week, and the answer was always the same: snap a photo, then type a single word into Google Lens.

Today, Google is launching a US-only test of the Google Lens multisearch feature it revealed at its Search On event last September, and while I've only seen a rough demo so far, you shouldn't have to wait long to try it for yourself: it's rolling out through the Google app on iOS and Android.

While it's initially geared mostly toward shopping (one of the most common requests), Google's Zeng and the company's search director Lou Wang suggest it could do much more. 'Imagine you have something broken in front of you and you don't have the words to describe it, but you want to fix it… you can just type how to fix,' Wang explains.

Zeng adds that it already works with some broken bicycles. She also says she learned how to style her nails by screenshotting photos of gorgeous nails on Instagram, then adding the word 'tutorial' to get video results that weren't immediately surfacing on social media. You might also be able to snap a photo of, say, a rosemary plant and get care instructions.

'We want to help people understand questions naturally,' Wang adds, explaining how multisearch will expand to include more videos, images in general, and even the kinds of answers you'd find in a typical Google text search. The intent also seems to be to keep everyone on a level playing field: rather than partnering with specific businesses, or even limiting video results to Google-owned YouTube, Wang says the company will surface results from 'whatever platform we're able to index from the open web.'

But it won't work with everything, just as your voice assistant doesn't work with everything, because there are an infinite number of possible requests and Google is still figuring out intent. Should the system give more weight to the picture or to your text search if the two seem to contradict each other? Good question. For now, you do have one extra bit of control: if you want to match a pattern, like the leafy notebook, get up close to it so Lens can't tell it's a notebook. Because, as noted above, Google Lens is trying to make sense of your image: if it thinks you want more notebooks, you may have to tell it that you don't.

Google is betting that AI models will usher in a new era of search, but there are many unanswered questions about whether context, rather than text alone, can take it there. This experiment seems limited enough (it doesn't even use Google's latest MUM AI models) that it probably won't give us the answer. Still, it does seem like a cool trick that could go a long way if it became a standard Google Search feature.
