How Google Lens Offers a Snapshot of the Future of Augmented Reality

Google Lens offers a snapshot of the future of technology in AI and Augmented Reality

With the launch of Google Lens, it is clear that the future of technology includes the likes of augmented reality, virtual reality, artificial intelligence, IoT and more.

We’re taking tentative steps into the future, and it seems the next few years will be very exciting for tech enthusiasts, especially in the fields of augmented reality and AI. The recent launch of Google Lens makes this very clear.

Whenever there is this kind of paradigm shift, the technology behind it matters most. The underlying breakthroughs are what drive the innovations, and it is these innovations that ultimately end up changing our lives. Keeping an ear to the ground and looking out for examples of new technology can thus help us better understand what might be around the corner.

This is certainly the case with the recently unveiled Google Lens. It provides us with some very big hints about the future of Google, and perhaps of technology as a whole. Powered by advanced computer vision, Google Lens enables things such as augmented reality, certain forms of AI and even the ‘inside-out’ motion tracking used in virtual reality.

Google Lens, in fact, encapsulates a number of recent technological advances and is the perfect example of Google’s new direction as an ‘AI first’ company. It may just provide a snapshot of the future.

What is Google Lens?

Google Lens is a tool that essentially brings search into the real world. The idea is simple: you point your phone towards something that you want more information on and Lens will provide that information.

So yes, it sounds a lot like Google Goggles. It might also sound familiar to anyone who has tried Bixby on a Galaxy S8. Only it’s much better than either of those things. It is supposedly so good, in fact, that it can identify the species of any flower you point it at. It can also perform OCR tricks (optical character recognition, i.e. reading text) and a whole lot more.

Google Translate has been doing OCR for a while.

At I/O 2017, Google stated that we are at an inflection point with vision. In other words, it is now clearly possible for a computer to look at a scene, dig out the details and understand what’s going on. Hence the name: Google Lens.

This improvement comes from machine learning, which allows companies like Google to acquire huge amounts of data and then create systems that use that data in much more useful ways. This is the same technology that underpins voice assistants and, to a lesser extent, even your recommendations on Spotify.
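The core idea can be sketched in toy form: given enough labelled examples, a system can label a new input by comparing it with what it has already seen. The feature vectors and labels below are invented purely for illustration, standing in for the features a real vision model would learn from millions of images.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Toy "training data": vectors that stand in for learned image features,
# each labelled with what the image shows. These numbers are invented.
examples = [
    ((0.9, 0.1, 0.2), "rose"),
    ((0.8, 0.2, 0.1), "tulip"),
    ((0.1, 0.9, 0.8), "cat"),
    ((0.2, 0.8, 0.9), "dog"),
]

def classify(features):
    """Return the label of the closest training example (1-nearest-neighbour)."""
    best = min(examples, key=lambda ex: dist(ex[0], features))
    return best[1]
```

Real systems are vastly more sophisticated, but the principle is the same: more labelled data means finer-grained distinctions, which is why Google’s data advantage matters so much here.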

More technologies that use computer vision

The computer vision used by Google Lens will play a big role in many aspects of future tech. Computer vision is instrumental in VR, for example. VR headsets let the user actually walk around and explore the virtual world they are in. To do this, they need to be able to ‘see’ either the user or the world around the user, and then use that information to tell whether the user is walking forward or leaning sideways.
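The principle behind this ‘inside-out’ tracking can be illustrated with a toy sketch: a headset watches fixed points in the room and infers its own motion from how those points move between camera frames. The heuristics and coordinates below are invented for illustration and are far simpler than any real tracking system.

```python
# Toy inside-out tracking: infer headset motion from matched 2D feature
# points seen in two consecutive camera frames (invented heuristics).

def mean(values):
    return sum(values) / len(values)

def infer_motion(frame_a, frame_b):
    """Guess motion from matched feature points (x, y) in two frames.

    Points spreading out from the image centre -> moving forward.
    Points shifting together in one direction  -> moving sideways.
    """
    # Average distance of the points from the image centre in each frame.
    spread_a = mean([abs(x) + abs(y) for x, y in frame_a])
    spread_b = mean([abs(x) + abs(y) for x, y in frame_b])
    # Average horizontal shift of the matched points between frames.
    shift = mean([xb - xa for (xa, _), (xb, _) in zip(frame_a, frame_b)])
    if spread_b > spread_a * 1.05:
        return "forward"
    if abs(shift) > 0.05:
        return "sideways"
    return "still"
```

A real system solves for full 6-degree-of-freedom pose with depth data, but the input is the same: what the cameras see, with no external sensors.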

All of this also becomes important for high-quality augmented reality. For instance, for Pokémon Go to place a character into the camera image in a realistic manner, it needs to understand where the ground is and how the user is moving. Pokémon Go’s AR is actually incredibly rudimentary; the filters seen in Snapchat are surprisingly more advanced.

This is something Google is also working on with its Project Tango. This initiative brings advanced computer vision to handsets through a standardized set of sensors that provide depth perception and more. The Lenovo Phab 2 Pro and Asus ZenFone AR are two Tango-ready phones that are already commercially available.

Although Google started its journey as a search engine, computer vision has proved really useful for the company in this regard. For example, if you search Google Images for ‘books’, you’ll be presented with a series of images from websites that use the word ‘books’. This means that Google isn’t really searching the images at all; it is searching the surrounding text and showing the ‘relevant’ images. With advanced computer vision, it will be able to search the actual content of the images.

In short, Google Lens is an impressive example of rapidly progressing technology, one that is opening a whole floodgate of new possibilities for apps and hardware. And with its huge bank of data available, there is really no company better poised to make this happen than Google.

Google as an AI first company

But where does AI fit into all this? Is it a coincidence that the same conference brought the news that the company would be using ‘neural nets to build better neural nets’? Or the quote from Sundar Pichai about a shift from ‘mobile first’ to ‘AI first’?

What does ‘AI first’ mean? Isn’t Google primarily still a search company?

Well yes, but AI is the natural evolution of search. Traditionally, when you searched for something on Google, it brought up responses by looking for exact matches in the content. For example, if you typed ‘fitness tips’, that became a ‘keyword’, and Google surfaced content with repeated uses of that phrase. Better still, you could see it highlighted in the text.

But is this an ideal scenario? The ideal scenario would be for Google to actually understand what you’re saying before providing the results. That way, it could offer other relevant information. It could then suggest other useful things and become an even more indispensable part of your life. Wouldn’t that be good for both Google and for Google’s advertisers?

This is the very reason Google has been pushing forward with its algorithm updates, changing the way it searches. Internet marketers and search engine optimizers now know that they need to use synonyms and related terms for Google to show their websites; it’s no longer good enough to just repeat the same words. Meanwhile, ‘latent semantic indexing’ allows Google to understand context and gain a deeper knowledge of what is being said.
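The difference between old-style keyword matching and synonym-aware matching can be sketched in a few lines. The synonym table below is a made-up stand-in for the semantic relationships Google actually learns at scale.

```python
# Toy contrast between exact keyword matching and synonym-aware matching.
# The synonym table is invented for illustration.

SYNONYMS = {
    "fitness": {"fitness", "exercise", "workout"},
    "tips": {"tips", "advice", "hints"},
}

def keyword_score(query, document):
    """Old-style scoring: count exact occurrences of each query word."""
    words = document.lower().split()
    return sum(words.count(term) for term in query.lower().split())

def semantic_score(query, document):
    """Synonym-aware scoring: count occurrences of any related term."""
    words = document.lower().split()
    score = 0
    for term in query.lower().split():
        for related in SYNONYMS.get(term, {term}):
            score += words.count(related)
    return score

doc = "simple workout advice for beginners"
# keyword_score("fitness tips", doc) -> 0, semantic_score("fitness tips", doc) -> 2
```

A page that never repeats the exact phrase ‘fitness tips’ scores zero under the old model but still matches once synonyms are understood, which is exactly why repetitious keyword stuffing stopped working.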

This aligns perfectly with other initiatives the company has been pushing recently. It is natural language interpretation, for instance, that allows something like Google Assistant to exist.

For example, when you ask a virtual assistant for information, you say: “When was Hugh Jackman born?”

You do not say, “Hugh Jackman birth date”.

We talk differently from how we write. This is where Google starts to work more like an AI. Other initiatives, like ‘structured markup’, ask publishers to highlight key information in their content, such as the ingredients in a recipe or the dates of events. This makes life very easy for Google Assistant when you ask it ‘When is Star Trek coming out?’.
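As a rough illustration, structured markup of this kind is often embedded in pages as schema.org JSON-LD. The event name, date and venue below are invented stand-ins, not real release data.

```python
import json

# Sketch of schema.org-style structured markup for an event, of the kind
# publishers embed so assistants can answer "when is X coming out?".
# The film, date and venue are invented for illustration.
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Example Film Premiere",
    "startDate": "2017-09-22",
    "location": {"@type": "Place", "name": "Example Cinema"},
}

# Publishers drop this into the page as a JSON-LD script block.
markup = '<script type="application/ld+json">%s</script>' % json.dumps(event)
```

Because the date is labelled explicitly rather than buried in prose, an assistant does not have to parse natural language at all to answer the question.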

Google has been largely dependent on publishers and webmasters to create their content with all this in mind, even if it hasn’t always been transparent about its motivations (internet marketers are a sensitive bunch). In this way, they’re actually helping to make the entire web more ‘AI friendly’: one that is ready for Google Assistant, Siri and Alexa to step in at any time.

Now with advancements in computer vision, this advanced ‘AI search’ can further enhance Google’s ability to search the real world around you. This would help provide even more useful information and responses as a result. Imagine being able to say ‘Okay Google, what’s that?’.

And imagine combining this with location awareness and depth perception. What would happen when you combine this with AR or VR? Google Lens can reportedly even show you reviews of a restaurant when you point your phone at it. This is as much an example of AR as it is of AI. All these technologies are coming together in fantastically interesting ways. They are even starting to blur the line between the physical and digital worlds.

As Pichai put it:

Google was built because they had started understanding text and web pages. So the fact that computers can understand images and videos has profound implications for their core mission and vision.

Final Thoughts

Technology has been moving in this direction for a while. Bixby technically beat Google Lens to launch, though it lost points for not working quite as advertised. No doubt many more companies will be getting involved soon enough.

But Google Lens sends out a clear statement from Google: a commitment to AI, to computer vision and to machine learning. It is a very clear indication of the direction the company will take in the coming years, and likely the direction of technology in general.

 The singularity, brought to you by Google!



Avani Lalka is an experienced marketer and a writer in the field of new-age marketing and career development. She writes from her heart to make a difference in the lives of the people who follow her. Currently she heads Account Based Marketing (ABM) at a popular Pune-based startup. Feel free to connect with her, as she welcomes new ideas and opportunities. Email: