Google Brings Augmented Reality to the Mainstream by Adding AR Content to Search

May 7, 2019 05:40 PM (updated May 7, 2019 07:09 PM)

If Google hadn't already demonstrated that it is serious about augmented reality, it made that abundantly clear at the Google I/O keynote on Tuesday.

After opening remarks from Google CEO Sundar Pichai, which included a preview of the AR navigation built into the Google I/O app for Android and iOS, Aparna Chennapragada, vice president and head of product for augmented reality, virtual reality, and Lens, took the stage for updates on augmented reality.

And the first update was a big one: Google is adding augmented reality to Google Search, the cornerstone of its entire business, later this month.

Once the feature is live, users will be able to search for general knowledge terms, such as human anatomy and animals, and not only view 3D models within the Knowledge Panel, but also examine the content in their physical environment.

"We're also working with partners like NASA, New Balance, Samsung, Target, Visible Body, Volvo, Wayfair and more to surface their own content in Search. So whether you're studying human anatomy in school or shopping for a pair of sneakers, you'll be able to interact with 3D models and put them into the real world, right from Search," said Chennapragada in a blog post.

In addition, Chennapragada reviewed powerful new features coming to Google Lens. The computer vision-based search tool is gaining the ability to surface more context about the items it sees. For instance, when Lens recognizes a restaurant menu, it can highlight popular dishes or pull in photos and reviews from Google Maps.

"To pull this off, Lens first has to identify all the dishes on the menu, looking for things like the font, style, size and color to differentiate dishes from descriptions. Next, it matches the dish names with the relevant photos and reviews for that restaurant in Google Maps," said Chennapragada.

Another neat trick coming to Google Lens is the ability to integrate video content into the camera view of paintings, menus, and other real-world items. Early examples include adding stories to paintings at the de Young Museum in San Francisco and embedding tutorial videos into recipes in Bon Appétit magazine.

In addition, Lens will be able to recognize when it is looking at a receipt and provide users with a tip calculator. Lens is also gaining translation capabilities, including automatic detection of foreign languages and overlay of translated text onto the camera view.
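Once the receipt text has been recognized, the remaining work is ordinary parsing and arithmetic. The snippet below is a hypothetical sketch of that step; the regex and sample receipt are invented for illustration and say nothing about how Lens parses receipts in practice.

```python
import re
from typing import Optional

# Rough sketch of a tip-calculator step after OCR: find the total on the
# receipt text and compute common tip amounts. Purely illustrative.

def extract_total(receipt_text: str) -> Optional[float]:
    match = re.search(r"total[:\s]*\$?(\d+\.\d{2})", receipt_text, re.IGNORECASE)
    return float(match.group(1)) if match else None

def tip_options(total: float, rates=(0.15, 0.18, 0.20)) -> dict:
    return {f"{int(r * 100)}%": round(total * r, 2) for r in rates}

receipt = "Burger 12.50\nFries 4.00\nTOTAL: $16.50"
total = extract_total(receipt)
if total is not None:
    # Prints the detected total and the suggested tip amounts.
    print(total, tip_options(total))
```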

Finally, Chennapragada introduced new camera capabilities coming to Google Go, its lightweight search app for entry-level devices. The forthcoming update for the app will enable it to read signs out loud to assist those with reading difficulties. In addition, the camera can translate signs and display the translation over the camera view of the sign (like Lens) as well as read the translated text out loud.
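The flow described for Google Go amounts to a recognize-translate-speak pipeline. The sketch below lays out that sequence with placeholder functions; ocr_image, translate_text, and speak are hypothetical stand-ins for whatever on-device or cloud services the real app uses.

```python
from typing import Optional

# Schematic sketch of the Google Go camera flow described above:
# recognize the text in a sign, optionally translate it, then read it aloud.
# All three helpers are placeholders, not real Google APIs.

def ocr_image(image_bytes: bytes) -> str:
    """Placeholder: return the text recognized in the image."""
    raise NotImplementedError

def translate_text(text: str, target_lang: str) -> str:
    """Placeholder: return the text translated into target_lang."""
    raise NotImplementedError

def speak(text: str) -> None:
    """Placeholder: synthesize and play the text as speech."""
    raise NotImplementedError

def read_sign_aloud(image_bytes: bytes, target_lang: Optional[str] = None) -> str:
    """Recognize a sign's text, translate it if requested, and read it aloud."""
    text = ocr_image(image_bytes)
    if target_lang:
        text = translate_text(text, target_lang)  # overlaid on the camera view
    speak(text)
    return text
```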

With ARCore, Playground, Google Maps AR Navigation, and other apps and services (not to mention earlier experiments like Google Glass and ARCore's predecessor, Tango), Google has long placed as much emphasis on AR as any other tech giant, if not more.

But the addition of AR to Search is probably Google's biggest shot yet at making augmented reality a mainstream technology. Search is still Google's most lucrative and popular product, so it can introduce AR to more consumers than any of the company's other offerings. It may sound like hyperbole, but this could be a landmark event in AR history.

Cover image via Google/YouTube
