This week Google announced a search feature that combines images and text to generate more precise search queries. The approach uses the smartphone camera together with artificial intelligence to refine and expand search results directly.
At this week's search event, Google shared details on how it uses a technology called the Multitask Unified Model (MUM), which is designed to intelligently understand what a user is searching for by combining images of items with text, giving users more ways to search.
Although Google has not announced a specific date, its blog post states that the feature should arrive "in the coming months". Users will be able to point the phone's camera at something, tap the Google Lens icon, and ask Google about what they're looking at. The blog post describes scenarios such as taking a picture of a bike part you don't know the name of and asking Google how to fix it, or photographing a pattern and searching for socks with the same pattern.
Users should also be able to use MUM to take a picture of a piece of equipment or clothing and ask whether it is suitable for climbing Mt. Fuji. MUM should also be able to draw on information from sources in languages other than the user's preferred language.
Google uses artificial intelligence to make information more useful