For all the potential of virtual reality to place us in new environments, accessibility in VR remains a major challenge. We’ve already looked at Samsung’s Relúmĭno, a rough-around-the-edges product from their in-house technology initiative, C-Labs. But two developments over the past months are worth watching – Google’s work on distance perception using spatial-audio cues and the recent Microsoft “Canetroller” project.
Neither of these projects is ready for public use. But both address accessibility issues in VR that have been sadly neglected in the excitement over the birth of a new technology.
First, let’s look at Google’s work and then the Microsoft device.
Google’s spatial-audio solution
Using a process similar to alternative text on the Web and TalkBack (which adds spoken and haptic feedback to Android devices), Google built a prototype spatial-audio navigation system for VR last fall. According to Google,
Using an HTC Vive, we built a prototype of a 1:1 scale virtual room, recorded the name of every object in the room, and linked these audio labels to the individual objects—including the floor, walls and other features. Then, we made the user’s field of vision entirely black to simulate complete blindness. To enable navigation in the pitch-black room, we created a 3D audio laser system that includes a laser pointer extending from the Vive controller to select and play the audio labels, and an audio location control (touchpad click) to provide distance and direction to the last object aimed at by the laser pointer.
When a person aims the laser pointer at a virtual object and selects the audio location control, the VR system plays a short impulse response tone at the location of the controller. Then the sound is played a few more times as it quickly progresses to the location of the virtual object.
The sound is processed so that a user can perceive both the distance and the relative direction of objects in a virtual environment. It’s a simpler solution than Microsoft’s, but it lacks the haptic feedback that cane users rely on.
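Google hasn’t published the implementation, but the audio location control it describes — a tone played at the controller, then repeated at points stepping toward the selected object, getting quieter with distance — can be sketched roughly as below. The function name, the number of steps, and the inverse-distance attenuation model are all assumptions for illustration, not details from Google’s prototype.

```python
import math

def audio_ping_path(controller_pos, object_pos, steps=4):
    """Interpolate ping positions from the controller toward the target object.

    Returns (position, gain) pairs. Gain falls off with distance from the
    controller using a simple inverse-distance model (an assumption here),
    so pings sound louder near the hand and quieter near the object.
    """
    pings = []
    for i in range(steps + 1):
        t = i / steps
        # Linear interpolation between the controller and the object
        pos = tuple(c + t * (o - c) for c, o in zip(controller_pos, object_pos))
        dist = math.dist(controller_pos, pos)
        gain = 1.0 / (1.0 + dist)
        pings.append((pos, gain))
    return pings

# Example: an object two metres in front of the controller
for pos, gain in audio_ping_path((0.0, 1.2, 0.0), (0.0, 1.2, -2.0)):
    print(f"play ping at {pos} with gain {gain:.2f}")
```

A real system would hand each ping to a spatial-audio engine (HRTF rendering) rather than print it, which is what lets the listener judge direction as well as distance.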
Accessibility in VR with “Canetroller”
Unlike Google’s software experiment, Microsoft’s fascinating Canetroller is a research project for the upcoming CHI 2018 conference in Montreal. That doesn’t mean it will get to market any sooner, but it seems like a project with greater potential for the visually impaired.
The Canetroller is a peripheral device for the HTC Vive headset that lets the user explore a virtual reality environment through three types of feedback:
(1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions. (Paper Abstract)
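The abstract above describes one cane contact fanning out to three feedback channels at once. A minimal sketch of that routing logic might look like the following — the class, function, and command names are placeholders invented here, not Microsoft’s code; the real device drives a brake mechanism, a vibrotactile actuator, and a 3D audio engine.

```python
from dataclasses import dataclass

@dataclass
class CaneContact:
    """A virtual cane hitting or dragging across a virtual surface (illustrative)."""
    position: tuple   # 3D contact point, for spatialized audio
    surface: str      # e.g. "concrete", "grass"
    is_impact: bool   # True for a tap, False for dragging across a surface

def route_feedback(contact):
    """Fan one cane contact out to the three channels the paper lists.

    Returns (channel, command) pairs; a real implementation would send
    these to hardware drivers instead.
    """
    commands = []
    if contact.is_impact:
        commands.append(("brake", "engage"))  # physical resistance on impact
        commands.append(("vibration", f"tap:{contact.surface}"))
    else:
        commands.append(("vibration", f"drag:{contact.surface}"))
    # Spatial 3D audio plays in both cases, at the contact point
    commands.append(("audio", (contact.surface, contact.position)))
    return commands

print(route_feedback(CaneContact((0.3, 0.0, -0.8), "concrete", True)))
```

The point of the design is that the brake only engages on impact, while vibration and audio also vary with the surface being dragged — which is how the device distinguishes tapping a wall from sweeping across grass.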
If you’re interested in the details, here’s a four-minute video describing how the device works.
The Canetroller is a major step forward; however, it still can’t replicate real-life experience. As The Next Web notes, spatial-audio navigation is far more complex than feeding sound cues through a device. They quote one user,
I didn’t have a good sense of direction where I was at [in the real world]. I can hear roughly where the wall is at, by the way it blocks off sound in the real world. I didn’t have that in the VR world.
We need real solutions
While both of these projects are a start, we need real-world solutions. At this point, there’s simply no excuse for the lack of accessibility in VR – the technology is mature enough. Educational institutions, in particular, will find it challenging to use VR when it may not be accessible to all students. But NGOs and businesses will also think twice about large-scale rollouts when some users are excluded.
Accessibility sounds like an obvious goal, but the tech industry needs to have its feet held to the fire. For virtual reality to be successful, we need to be truly inclusive.