The latest news on Facebook’s wrist devices is fascinating – it might just provide the UI solution we need for AR glasses and haptics in XR devices. The company has already revolutionized how we communicate. It may now be on the verge of transforming human-computer interaction.
Of course, all of this assumes you can accept the privacy implications of Facebook moving into the wearable space. The social media behemoth already seems to know everything about us – adding a wearable that reads our neuromotor signals takes it to a whole new level.
We’re busy this week with SXSW 2021, but here are the details of Facebook’s latest research (they’ll be releasing more information over the next few weeks).
Facebook’s Wrist Devices
The project has its roots in Facebook’s 2019 acquisition of CTRL-labs, a startup developing a wrist-worn neural interface. Essentially, it’s an EMG sensor that reads the motor nerve signals your brain sends to your hand and interprets what they mean. Ultimately, such a device may even be able to detect the mere intention to move your finger (whether you actually move it or not).
Facebook has been working on this for years – and it may take another five to ten years of R&D to pull it off. But the result would be a new interface for AR glasses and possibly other XR gear. If you’ve ever tried pecking at a virtual keyboard in your VR headset, you know that’s not how we’re going to interact with XR devices. Merely intending to click a button – and having it click – may sound like an alien superpower, but it’s where we’re headed.
Here is Wired’s description of the solution, spearheaded by Andrew Bosworth, Facebook’s vice president of augmented and virtual reality:
It’s an electromyography device, which means it translates electrical motor nerve signals into digital commands. When it’s on your wrist, you can just flick your fingers in space to control virtual inputs, whether you’re wearing a VR headset or interacting with the real world. You can also “train” it to sense the intention of your fingers, so that actions happen even when your hands are totally still.
This wrist wearable doesn’t have a name. It’s just a concept, and there are different versions of it, some of which include haptic feedback. Bosworth says it could be five to 10 years before the technology becomes widely available.
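To make that concrete, here’s a rough sketch of the kind of pipeline an EMG wearable implies – raw muscle signals sampled at the wrist, reduced to features, and classified into discrete commands. The function names, thresholds, and gesture labels below are purely illustrative assumptions; Facebook hasn’t published its actual software.

```python
import numpy as np

# Purely illustrative: a toy "EMG signals in, digital commands out" pipeline.
# Assume a wrist band streams multi-channel EMG samples at roughly 1 kHz.

SAMPLE_RATE_HZ = 1000
WINDOW_MS = 50  # classify on short windows so input feels instantaneous

def rms_features(window: np.ndarray) -> np.ndarray:
    """Root-mean-square energy per electrode channel – a classic EMG feature."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def classify_gesture(features: np.ndarray) -> str:
    """Stand-in for a trained model that maps muscle activity to an intent.
    Real systems would train a per-user model; this just thresholds energy."""
    if features.max() < 0.05:
        return "rest"            # no meaningful muscle activation
    return "pinch_click"         # e.g. a slight index-finger movement -> 'click'

def process_stream(emg_stream):
    """Turn a stream of raw EMG windows into discrete digital commands."""
    samples_per_window = SAMPLE_RATE_HZ * WINDOW_MS // 1000
    for window in emg_stream:                   # each window: (samples, channels)
        assert window.shape[0] == samples_per_window
        command = classify_gesture(rms_features(window))
        if command != "rest":
            yield command                       # hand off to the AR interface

# Example: one simulated 50 ms window of low-level noise plus a brief burst.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = rng.normal(0, 0.01, size=(50, 8))  # 50 ms x 8 electrodes
    window[20:30, 3] += 0.3                     # a burst of activity on one channel
    print(list(process_stream([window])))       # -> ['pinch_click']
```

The interesting part isn’t the signal processing – it’s the training step. Because the classifier can be fit to each user, detecting the intention to move, before any visible movement, becomes a plausible extension of the same idea.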
It’s doubtful that the results of this research will appear in Facebook’s first AR glasses (Project Aria), which should arrive in 2022. But if you’re placing bets, you’ll probably see Facebook’s wrist devices long before we get to anything like Elon Musk’s Neuralink, which is an actual wireless implant in your skull. Reading neural signals non-invasively is the easier and more acceptable alternative, but it has always faced the challenge of placing sensors on your head. Not only are we vain when it comes to our headgear, but a full head of hair gets in the way. A wearable that picks up motor nerve signals at the wrist sidesteps both problems. Down the road, it could even become a fashion accessory, much as the Apple Watch has.
The Goal of a New HCI Paradigm
Facebook’s own video explanation makes it a little clearer:
Ultimately, this is about more than simply moving objects directly in front of you. It’s a human-computer interface with the goal of connecting people remotely – which is the basis of Facebook’s massive revenue stream. As Andrew Bosworth describes it on Facebook’s Research Blog,
Imagine being able to teleport anywhere in the world to have shared experiences with the people who matter most in your life — no matter where they happen to be . . . That’s the promise of AR glasses. It’s a fusion of the real world and the virtual world in a way that fundamentally enhances daily life for the better.
Facebook’s wrist devices aren’t the only solution in the pipeline – they just happen to be the one that could transform your social interactions. Take a look at the Mudra Band for your Apple Watch – it already lets you use a small range of finger gestures to control music, manage calls, and dismiss notifications on the Watch interface. It doesn’t replace interacting with the surface of Apple’s wearable, but it’s headed in that direction.
A Solution for XR Haptics?
We’ve seen plenty of wrist and hand devices for XR haptics, and the solutions are getting progressively better. But while they work for high-end XR experiences in education and workforce training, they’re not what you’d be comfortable wearing all day long with a pair of AR glasses.
A wrist-based wearable doesn’t sound ideal for haptic feedback in your fingers, but Facebook is leveraging the potential of a cognitive phenomenon called sensory substitution. The sensations are delivered to your wrist, but you feel them in your fingers.
Facebook’s Research Science Manager Nicholas Colonnese describes it in the following way:
We have tried tons of virtual interactions in both VR and AR with both the vibration and squeeze haptic capabilities of Tasbi. This includes simple things like push, turn, and pull buttons, feeling textures and moving virtual objects in space. We’ve also tried more exotic examples – things like climbing a ladder or using a bow & arrow. We tested to see if the vibration and squeeze feedback of Tasbi could make it feel like you’re naturally interacting with these objects. Amazingly, due to sensory substitution, where your body combines the various multisensory information, the answer can be yes.
In short, your brain merges what you see and what you feel into a single sensation.
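As a thought experiment, here’s what that mapping might look like in code – a virtual fingertip contact translated into vibration and squeeze commands at the wrist. Tasbi’s actual control stack isn’t public; the structure, parameter names, and scaling below are invented purely to illustrate the idea.

```python
from dataclasses import dataclass

# Illustrative only: an invented mapping from virtual finger contact to wrist haptics.
# The actuator model and scaling factors are assumptions, not Tasbi's real API.

@dataclass
class WristHapticCommand:
    vibration_hz: float      # vibrotactile frequency at the wrist band
    vibration_amp: float     # drive level, 0.0 to 1.0
    squeeze_n: float         # radial squeeze force in newtons

def map_contact_to_wrist(contact_force_n: float, surface_roughness: float) -> WristHapticCommand:
    """Sensory substitution in miniature: a fingertip event is rendered at the wrist,
    and the brain fuses the visual contact with the wrist cue into one percept."""
    # Firmer virtual contact -> stronger squeeze; rougher surface -> higher-frequency buzz.
    squeeze = min(5.0, 2.0 * contact_force_n)                 # clamp to a safe maximum
    vibration_hz = 80.0 + 170.0 * min(1.0, surface_roughness)
    vibration_amp = min(1.0, 0.2 + 0.8 * contact_force_n)
    return WristHapticCommand(vibration_hz, vibration_amp, squeeze)

# Example: lightly brushing a rough virtual surface vs. firmly pressing a smooth button.
print(map_contact_to_wrist(contact_force_n=0.3, surface_roughness=0.9))
print(map_contact_to_wrist(contact_force_n=1.5, surface_roughness=0.1))
```

Notice that nothing here touches the fingertip at all – yet paired with what your eyes see, the wrist cues can read as contact.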
The Privacy Implications of Facebook’s Wrist Devices
Would you strap a wearable on your wrist from Facebook? What about your students? Clients? Coworkers? Your public audience? What if it were the only sensible way to use AR glasses?
The questions pile up quickly, and there’s no ignoring the fact that Facebook’s track record on privacy is a major obstacle. We’ve been wary of the ethical issues in brain-computer interfaces for some time, but AR devices will put them front and center in our lives.
Facebook likes to point out the ethical and privacy challenges of voice commands in XR devices, and indeed, they do make your human-computer interactions public and even susceptible to inadvertent activation. But a wrist wearable opens an entirely new realm of personal data in your digital life, especially when you integrate contextually-aware AI into AR glasses. Facebook puts it this way,
Two of the most critical elements are contextually-aware AI that understands your commands and actions as well as the context and environment around you, and technology to let you communicate with the system effortlessly — an approach we call ultra-low-friction input. The AI will make deep inferences about what information you might need or things you might want to do in various contexts, based on an understanding of you and your surroundings, and will present you with a tailored set of choices. The input will make selecting a choice effortless — using it will be as easy as clicking a virtual, always-available button through a slight movement of your finger.
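Strip away the marketing language and the loop Facebook is describing is simple to sketch. The code below is a schematic illustration only – the types and functions are invented stand-ins, not Facebook’s system – but it shows how the two pieces fit together: context in, a tailored menu of choices out, and a barely visible finger movement to select one.

```python
from dataclasses import dataclass
from typing import List

# Schematic only: invented stand-ins for the "contextually-aware AI +
# ultra-low-friction input" loop described above, not Facebook's system.

@dataclass
class Context:
    location: str            # e.g. "kitchen", "office"
    activity: str            # what the glasses infer you are doing
    time_of_day: str

def infer_choices(ctx: Context) -> List[str]:
    """Stand-in for the 'deep inferences' step: predict a small, tailored set of actions."""
    if ctx.location == "kitchen" and ctx.time_of_day == "morning":
        return ["start coffee timer", "show recipe", "play news briefing"]
    return ["open messages", "take a photo"]

def read_micro_gesture() -> int:
    """Stand-in for ultra-low-friction input: the EMG band reports which
    'virtual button' a slight finger movement selected."""
    return 0  # pretend the user flicked an index finger at the first choice

def interaction_loop(ctx: Context) -> str:
    choices = infer_choices(ctx)        # the AI proposes a tailored set of choices
    selected = read_micro_gesture()     # a barely visible finger movement picks one
    return choices[selected]

print(interaction_loop(Context("kitchen", "making breakfast", "morning")))
# -> "start coffee timer"
```

Used this way, the wrist device is less a controller than a confirmation button for whatever the AI has already predicted.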
However, as the Electronic Frontier Foundation (EFF) noted,
If smartglasses become as common as smartphones, we risk losing even more of the privacy of crowds. Far more thorough records of our sensitive public actions, including going to a political rally or protest, or even going to a church or a doctor’s office, can go down on your permanent record.
The privacy challenges expand exponentially when our devices are contextually aware and can record our actions. Tracking may no longer be limited to public locations and actions but could extend to our interactions with our devices and, eventually, even our intentions. Who owns those “deep inferences” that AI is making? Where are they stored? What do they say about you if they get out into the world?
Regarding privacy and trust, Andrew Bosworth said,
You say what you do, you set expectations, and you deliver on those expectations over time . . . Trust arrives on foot and leaves on horseback.
What he doesn’t add is that when trust leaves, your personal data flees with it – and quite possibly right into the waiting arms of one of the world’s largest corporations.
The research behind Facebook’s wrist devices is a fascinating development with profound implications for the future of human-computer interaction. But how we handle the ethical issues will be one of the defining conversations of our century.
Emory Craig is a writer, speaker, and consultant specializing in virtual reality (VR) and generative AI. With a rich background in art, new media, and higher education, he is a sought-after speaker at international conferences. Emory shares unique insights on innovation and collaborates with universities, nonprofits, businesses, and international organizations to develop transformative initiatives in XR, GenAI, and digital ethics. Passionate about harnessing the potential of cutting-edge technologies, he explores the ethical ramifications of blending the real with the virtual, sparking meaningful conversations about the future of human experience in an increasingly interconnected world.