MOUNTAIN VIEW, CA: Google loves information, but some of its latest updates revealed at the firm’s developer conference, I/O, recognise that we need more help curating it. With Lens, Google’s AI-AR platform, we could be on the verge of a step change in how we use our devices to search and shop.

Lens has for some time been the most likely candidate for a searchable AR future. The visual search platform, which Android users will be able to access directly from the camera from the end of May, leverages not only computer vision technology but also the company’s extensive work in natural language processing.

According to The Verge, one of Lens’ most important advantages for Google’s overall AI strategy, compared with competitors such as Amazon and Facebook whose equivalent technology is largely kept in the back end, is its instant accessibility and literal visibility to consumers. If Google succeeds in becoming to visual search what it already is to text-based search, it can establish itself early in a growing field.

Unlike the original release of Lens, the update works in real time, parsing both images and text. Notably for brands, it is designed to surface products with shoppable links directly from the user’s environment. In addition, Google has built in a feature it calls Style Match, which recognises style themes to help users create outfits and design their homes.

Part of augmented reality’s direction of travel is that devices will interpret the world proactively. Clay Bavor, VP of virtual and augmented reality, explained: “instead of having Lens work where you have to take a photo to get an answer, we’re using Lens Real-Time, where you hold up your phone and Lens starts looking at the scene.” A flutter of AR dots on the screen indicates that Lens is scanning, before a button appears.

But Google’s AI chops aren’t limited to its own applications; the company wants to thread AI through the entire experience, even the parts built by third-party developers. One of the nerdier announcements concerned ML Kit, a software development kit that lets app developers (on both iOS and Android) integrate off-the-peg machine learning models into their apps.

These include image labelling, text recognition, face and landmark detection, barcode scanning, and a smart reply feature. Such a move is crucial to lowering the barriers to entry for integrating machine learning, and suggests that we will soon come to see these advanced features as standard.
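To give a sense of how low that barrier is, the following is a minimal sketch of on-device image labelling with ML Kit in an Android app. The class and method names (InputImage, ImageLabeling) follow the current standalone ML Kit SDK rather than the Firebase-hosted version announced at I/O, and labelPhoto is a hypothetical helper introduced here purely for illustration.

import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Label the contents of a photo on-device with ML Kit's default image-labelling model.
fun labelPhoto(bitmap: Bitmap) {
    // Wrap the photo or camera frame in ML Kit's common image type.
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)

    // Obtain a labeler backed by the bundled, on-device model.
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Each result carries a human-readable description and a confidence score.
            for (label in labels) {
                println("${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e ->
            println("Labelling failed: $e")
        }
}

Everything here runs on-device against ML Kit’s bundled model; the developer never trains or hosts anything, which is precisely the point of shipping off-the-peg models in an SDK.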

Meanwhile, further applications of Google’s AI concern not vision but voice, as the company debuted Google Duplex by having the assistant call a hair salon and book an appointment. The video, if real, shows the assistant all but passing the Turing test, with a nonchalant ‘mmmhmmm’ thrown in for added authenticity.

Sourced from The Verge, Wired, Google; additional content by WARC staff