Google AR & VR is the new name for Google VR as its immersive tech initiatives turn toward Augmented Reality. We expected to see new developments at the annual I/O Developer Conference, but some of what Google showed off on stage today was jaw-dropping.
Facebook talked a lot about the camera as the first AR device a year ago. But what we saw in Google’s live AR demos staked out Mountain View’s claim on our augmented future.
The only thing missing was the AR device. And watching the demos, something inside of us just screamed, No, no, no, not through our smartphones. But there’s no reasonably priced option now. So we’ll deal with it.
And wait for someone to come along with inexpensive AR glasses. Yes, Apple, you could turn this entire budding AR ecosystem into a full-fledged spatial computing platform with your rumored hardware. But do it quickly; otherwise, Google will finish licking its wounds from Google Glass and do it for you.
The new Google AR features will come not only to current applications but also to new ones later this spring. The most striking demo was an AR Street View mode on your phone integrated with Google Maps. Instead of following the blue dot locator (which is not always accurate in high-density urban environments), you get step-by-step directions through your smartphone's camera. It was a wild demonstration of where Google's Computer Vision technology is headed, and it got a round of applause from the developer-centric audience.
More importantly, it revealed how AR will be adopted – through apps we already rely on. Immersive technology will be most real when it disappears into the fabric of our everyday lives. The same holds for VR – which is why we love our VR headsets but can’t wait until they shrink down to a pair of goggles.
There’s no word on when Google’s AR Maps feature will arrive – assuming it is more than just a proof of concept. As TechCrunch pointed out,
There are a lot of moving parts here too, naturally. In order to sync up to a display like this, the map is going to have to get things just right — and anyone who’s ever walked through the city streets on Maps knows how often that can misfire. That’s likely a big part of the reason Google wasn’t really willing to share specifics with regards to timing. For now, we just have to assume this is a sort of proof of concept — along with the fun little fox walking guy the company trotted out that had shades of a certain Johnny Cash-voiced coyote.
But if this is what trying to find my way in a new city looks like, sign me up.
Google demoed a host of other AR features that will be integrated into Google Lens, which up to now has been longer on promise than reality. From VRScout,
Google also announced several updates to its Google Lens, introducing AR features to 10 different Android-powered phones as well as direct integration with the camera app (as opposed to the current method which involves Google Photos). With new tech comes new modes such as “style match,” a new function which scans and identifies various real-world items, providing additional information as well as other relevant recommendations in real-time. Updates to Lens also includes smart text detection, a new attribute that allows you to actually copy and paste text captured in digital images.
That last feature – copying text from real-world objects and images – was remarkable. Along with object recognition, it felt like we were watching the digital devour the world. Up to now, Google Lens has worked through Google Photos, and we hardly ever used it (outside of a quick example here and there). But once it works through the camera, Google Lens blows open the AR space.
Watch what happens when developers get their hands on it.
Natural conversation developments
After watching Microsoft’s Build conference (running through May 9th) last night and Google I/O today, we were struck by the speed of progress in AI, Machine Learning, and Computer Vision. There was a stunning demo of Google Duplex, which will be able to make calls on your behalf using natural conversation. You can listen to sound bites of the Google Assistant calls on Google’s AI Blog.
And while you’re at it, wonder at whatever happened to the Turing test.
Google AR and VR
Much of the technology behind the new AR features will also drive developments in Virtual Reality. As a media form, VR hijacks our senses. Add in natural conversations with virtual avatars and we enter an entirely new realm. Our current VR experiences will end up looking like websites from the early days of the Web – static and simplistic.
We still have two more days of the I/O conference – there’s a lot more to come. Watching the AR Maps and Google Duplex demos today, we felt like we were stepping off a cliff into the abyss. This is our future, which is at once immersive, spatial, and intelligent in its computing processes.
Let’s see what tomorrow brings.
Emory Craig is a writer, speaker, and consultant specializing in virtual reality (VR) and artificial intelligence (AI) with a rich background in art, new media, and higher education. A sought-after speaker at international conferences, he shares his unique insights on innovation and collaborates with universities, nonprofits, businesses, and international organizations to develop transformative initiatives in XR, AI, and digital ethics. Passionate about harnessing the potential of cutting-edge technologies, he explores the ethical ramifications of blending the real with the virtual, sparking meaningful conversations about the future of human experience in an increasingly interconnected world.