It’s been a busy summer and we’ve been meaning to take a look at Apple’s ARKit. There’s been no shortage of news on the Augmented Reality front. Google has been working on Tango for over three years, with more Tango-enabled devices coming to market. HoloLens has been around for 15 months, and Microsoft is now looking to make its tablets AR-enabled. Facebook has no device but recently turned the tables on us, announcing that the first AR platform is the camera in our smartphones.
Mobile AR apps have existed for almost a decade, giving us a hazy glimpse of what our augmented future will look like. But it was Snap and Pokémon Go last summer that made filters and digital layers fun. The world suddenly found that AR could be everywhere.
And then out of the blue a few days ago, Google released an update to its discontinued Google Glass, my first AR device. Gathering a little too much dust on a shelf, Glass suddenly sprang to life, whispering . . . remember me?
The momentum for Augmented Reality has been intensifying, and then along comes Apple’s ARKit. Fashionably late to the party but, as usual, upstaging the entire crowd. Tim Cook has hinted again and again at his love of AR, and ARKit makes it clear: Apple will play a major role in our AR future.
Apple’s ARKit for iOS was announced in June at the WWDC developers conference. It’s an easy-to-use mobile platform for developing augmented reality experiences for iPhone and iPad. At the conference, Apple engineers showcased several examples from retail, games, and entertainment. By releasing the developer kit, Apple is empowering thousands of developers to design for the millions of iOS devices with an A9 processor or later, starting with the iPhone 6S.
So what does Apple’s ARKit provide? Apple breaks it down into three layers. The first is tracking, the core functionality of ARKit: the ability to track your device in real time. It combines camera images and motion data from the iOS device to get a precise view of the device’s location and orientation. Most importantly, there’s no need for additional sensors or preexisting knowledge of your environment.
The next layer is scene understanding, the ability to determine attributes and properties of the environment around your device — detecting surfaces such as the floor or a table so you can place virtual objects in the physical world. It also includes ambient light estimation, which allows you to properly light digital objects in the physical world.
The last layer is rendering, which provides easy integration into any renderer “. . . with a constant stream of camera images and tracking information.”
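To make the three layers concrete, here is a minimal Swift sketch of how a developer might wire them up with ARKit’s session API (the class and controller names here are illustrative; the `ARSession`, `ARWorldTrackingConfiguration`, and `ARFrame` types are part of Apple’s shipping framework):

```swift
import ARKit

// Hypothetical view controller illustrating ARKit's three layers.
class ARDemoViewController: UIViewController, ARSessionDelegate {
    let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        session.delegate = self

        // Tracking + scene understanding: world tracking with
        // horizontal plane detection (floors, tables).
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        session.run(configuration)
    }

    // Rendering: each ARFrame delivers the camera image, the tracked
    // device pose, and a light estimate for shading virtual objects.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let devicePose = frame.camera.transform        // position & orientation
        if let light = frame.lightEstimate {
            _ = light.ambientIntensity                 // match real-world lighting
        }
        _ = devicePose
    }
}
```

In practice most developers won’t touch this layer directly at all — `ARSCNView` (SceneKit) and the Unity and Unreal plugins manage the session and consume these frames for you.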
And essential to the entire process, both the Unity and Unreal engines support the full range of ARKit features.
This is the framework Apple has provided to developers to create AR experiences. Over the past few weeks, developers have already started sharing a number of simple AR experiences. The ease of development is critical. As The Verge put it:
Apple’s AR will immediately reach millions of people who already have the requisite hardware. And while it looks to be functionally as flexible and capable as Google’s Tango . . . its broader audience makes it much more enticing for serious developers to invest their time and money into. Google’s Tango is about the future whereas Apple’s ARKit is about the present.
Check out some of the examples below.
Apple Is All In on AR
Apple has been acquiring AR companies for some time but Apple’s ARKit signals that it’s Game On for developers. Judging from the Apple App Store popularity and what we’ve seen in only three weeks, we expect Apple to play a major role in the Augmented Reality space.
But things do not end here. Home, Health, Gym, and ARKit will eventually be driven by AI. Apple’s goal is to connect all your devices and bring us closer to an increasingly intelligent Internet of Things. In time, we’ll lose the phone and AR will move to our glasses. For now, wait patiently for that iPhone 8 this fall. With ARKit already in the hands of developers, it’s going to be our first fully AR smartphone.
And the beginning of a whole new world.
Maya Georgieva is an EdTech and XR strategist, futurist, and speaker with more than 15 years of experience in higher education and global education policy. Her most recent work focuses on innovation, VR/AR and immersive storytelling, design, and digital strategy. Maya actively writes and speaks on the topics of innovation, immersive storytelling, and the future of education, and consults organizations and startups in this space.