Yes, the Iron Man suit. That’s what I imagine Augmented Reality to be.
Mainstream adoption of augmented reality headsets like the HoloLens is still far off. However, we already carry high-resolution screens and powerful cameras in our smartphones. Using phones to augment reality will bring this technology to the masses.
When you point your camera at an object using Google Lens, you are essentially capturing an image that is run through a neural network for detection and interpretation. Once the image is interpreted, related information pops up on your phone screen. As Wired puts it, “Lens is essentially image search in reverse: you take a picture, Google figures out what’s in it.”
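The capture-then-lookup loop described above can be sketched as a toy pipeline. Everything below is illustrative: the labels, the knowledge base, and the stubbed detector are my own stand-ins for Google's real neural network and knowledge graph, not anything from Lens itself.

```python
# Toy sketch of a Lens-style "image search in reverse" pipeline.
# The detector is a stub: a real system would run the pixels
# through a trained neural network to predict the label.

# Hypothetical knowledge base mapping recognised labels to facts
# (a stand-in for Google's Knowledge Graph).
KNOWLEDGE_BASE = {
    "orange": {"kind": "fruit", "calories_per_100g": 47},
    "eiffel tower": {"kind": "landmark", "city": "Paris"},
}

def detect(image):
    """Stub for the neural-network step: here the 'image' already
    carries its label; a real detector would infer it from pixels."""
    return image["label"]

def lens_lookup(image):
    """Detect the object in the image, then pull up related facts."""
    label = detect(image)
    facts = KNOWLEDGE_BASE.get(label, {})
    return label, facts

if __name__ == "__main__":
    label, facts = lens_lookup({"label": "orange"})
    print(label, facts)
```

The point of the sketch is the two-step shape of the system: recognition first, then a lookup keyed on what was recognised. Swap the stub for a real image classifier and the dictionary for a knowledge graph, and you have the rough architecture Wired is describing.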
For example, you can point Google Lens at a building and it will name the building and instantly provide access to important facts about it. If you point it at an orange, you may be able to pull up its nutritional information. If you point it at a restaurant, Google Lens should provide you with reviews and a way to book a table. Along with object recognition, Google Lens will also detect Wi-Fi passwords from pictures, or read business cards and convert them to contacts. All in all, it is one of the most impressive applications of machine learning and AI that will be open for public use.
Google Lens will make it to Google Assistant on both Android and iOS, and it will also be integrated into Google Photos. With easy one-tap access on an Android device, Google Lens will let users make choices and learn about any object just by pointing at it. Google Lens is not yet available, but I will pounce on the opportunity to use it.
Follow Technonerds on Facebook and Twitter
P.S. Take that, Bixby!