Most people are well aware of how to perform a normal search on a computer: think of something, type it into a search engine, and check the results. It’s become second nature for an entire generation that has grown up with it. That familiarity has allowed people to overlook some of the platform’s limitations. One of the biggest is the simple fact that search engines are text-based. Even those that accept voice input are simply transcribing spoken words into a written phrase, and usually doing a fairly poor job of it.
Things are changing, though. People keeping track of the latest advances in computer technology may have heard of something called visual search. Visual search is both very simple and very complex at the same time. This might seem contradictory at first, but it’s only complex in terms of the actual computing going on behind the scenes. The user experience can be just as simple as a text-based search engine. To understand how this can be the case, one first needs to understand how visual search and image recognition work.
One of the most difficult factors comes from the nature of how people see the world. Much of what happens when we look at something occurs without our conscious awareness. A large portion of the brain is devoted solely to processing visual information. Our eyes capture a series of partial images, which are fed into specific regions of the brain. The brain then scans these images for patterns in order to fill in the blanks. When we look at something, we’re quite literally seeing only part of the picture, but our brains devote an incredible amount of processing power to extrapolating more information than we actually see. This is then linked to our memories and higher brain function. That complexity is why it has taken so long for computers to recognize visual images: there’s a lot more going on than we’re consciously aware of. But some companies have managed to simulate that process in software.
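The "fill in the blanks" idea can be made concrete with a toy sketch. This is not how any production visual search system actually works; it's just a minimal illustration, with made-up pattern data, of recognizing a partial image by scoring it against stored reference patterns and picking the best explanation.

```python
# Toy pattern recognizer: match a partially observed "image" against a
# library of known patterns, ignoring the unobserved pixels. This is a
# simplified illustration, not any company's real algorithm.

def similarity(partial, reference):
    """Fraction of the observed pixels that agree with the reference.
    Unobserved pixels in the partial view are marked None and skipped."""
    matches = sum(1 for p, r in zip(partial, reference) if p is not None and p == r)
    known = sum(1 for p in partial if p is not None)
    return matches / known if known else 0.0

def recognize(partial, library):
    """Return the label of the reference pattern that best explains the view."""
    return max(library, key=lambda label: similarity(partial, library[label]))

# 3x3 binary "images" flattened into 9 pixels.
library = {
    "cross":  [0, 1, 0,  1, 1, 1,  0, 1, 0],
    "square": [1, 1, 1,  1, 0, 1,  1, 1, 1],
}

# A partial view: several pixels unobserved, the rest consistent with "cross".
partial = [0, 1, None,  1, None, 1,  None, 1, 0]
print(recognize(partial, library))  # cross
```

Even with a third of the pixels missing, the observed ones agree perfectly with one stored pattern and only partially with the other, so the matcher "extrapolates" the right answer from incomplete input.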
One company, Slyce, provides one of the best examples of this state-of-the-art technology. They’ve managed to create a visual search system that can be plugged into existing applications, or used to build entirely new ones. The end result is software that can recognize the world around us. If someone sees an object, they can simply point an app on their phone at it, tie into the Slyce services, and identify it. What happens next is up to the app, but one of the most popular uses is to turn that recognition into promotions for sales and services.
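The overall app flow described above can be sketched in a few lines. To be clear, none of the names below come from Slyce's actual API: `recognize_image` is a hypothetical stand-in for whatever call the real service exposes, with a hard-coded result so the sketch runs on its own, and the catalog is invented.

```python
# Hypothetical app flow: photo -> recognition service -> app-specific action.
# recognize_image is a stand-in for a real network call; here it just
# returns a canned result so the example is self-contained.

def recognize_image(image_bytes):
    # In a real app this would send the photo to a visual search service.
    return {"label": "coffee maker", "confidence": 0.92}

def handle_photo(image_bytes, catalog):
    """What happens after recognition is up to the app; this one looks up
    the recognized product and suggests a purchase."""
    result = recognize_image(image_bytes)
    if result["confidence"] < 0.5:
        return "Sorry, we couldn't identify that."
    product = catalog.get(result["label"])
    if product is None:
        return f"Recognized a {result['label']}, but it's not in our catalog."
    return f"Looks like a {result['label']}: available for ${product['price']}."

catalog = {"coffee maker": {"price": 79.99}}
print(handle_photo(b"...", catalog))  # Looks like a coffee maker: available for $79.99.
```

The point of the sketch is the division of labor: the recognition service only answers "what is this?", while everything commercial, such as the catalog lookup and the sales pitch, lives in the app.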