For all the progress in virtual reality, realistic avatars and social VR remain a fundamental challenge. It comes as no surprise that Facebook’s lifelike avatars have made the most progress in this area. The social network didn’t acquire Oculus to isolate its userbase in VR headsets. Its long-term goal has always been about making VR social, about defying distance.
Facebook’s Lifelike Avatars
A new post on Facebook’s blog reveals the company’s latest research into what it refers to as Codec Avatars. Road to VR quotes Facebook Reality Labs’ (FRL) director of research Yaser Sheikh,
It’s not just about cutting-edge graphics or advanced motion tracking. It’s about making it as natural and effortless to interact with people in virtual reality as it is with someone right in front of you. The challenge lies in creating authentic interactions in artificial environments.
Facebook has made remarkable progress over the past two years with the simple avatars in Facebook Spaces and their more recent expressive avatars. But the current work in developing Codec Avatars is in another realm altogether. They would bring full social presence into the virtual world.
The project calls to mind Magic Leap’s Mica, which we’ve experienced on many occasions. Even though Mica has a long way to go in development (she still doesn’t talk), the encounter stops you in your tracks.
But Mica is an AI-driven avatar. Facebook’s project takes a different approach. It faithfully recreates the real person inside the VR headset. And it is equally remarkable.
A research participant and a Facebook employee meet in VR and (through their avatars) have a conversation about hot yoga. The Pittsburgh team has made substantial progress, but they are working to further refine their work by adding detail to the avatar’s mouth, improving expression quality, and ensuring realistic eye contact between avatars.
Posted by Facebook Engineering on Monday, March 11, 2019
Yaser Sheikh describes how the team at FRL is trying to pass “the ego test and the mother test.”
You have to love your avatar and your mother has to love your avatar before the two of you feel comfortable interacting like you would in real life. That’s a really high bar.
The Technology Behind an Avatar
To pull this off, the team at FRL has developed two high-end capture studios, both “large and impractical” for use outside of a research institution. One studio uses over 1,700 microphones to capture genuinely immersive audio.
And the data produced is not something you’re going to store on your laptop. According to FRL Research Scientist Shoou-I Yu in the Facebook post,
To put this into perspective, a laptop with 512 GB disk space will survive for three seconds of recording before running out of space. And our captures last around 15 minutes. The large number of cameras really pushes the limits of our capture hardware, but pushing these limits lets us collect the best possible data to create one of the most photorealistic avatars in existence.
In other words, the technology is years away from widespread use. But the long-term goal is to make it practical enough that users could develop their own avatars at home. That would require Head Mounted Capture Systems (HMCs) which are far beyond the limits of what consumers would accept (and pay for) today.
The Pittsburgh team had to stretch beyond the capture solutions available today — largely focused on a subject’s head and hands — and invent a series of prototype HMCs equipped with cameras, accelerometers, gyroscopes, magnetometers, and microphones to capture the full range of human expression. These HMCs are what animate the Codec Avatars while users talk to each other in virtual environments.
It will be a while longer before we can beam ourselves to other physical locations like the characters on Star Trek. But at the current pace of development, we may well achieve virtual teleportation within another decade. And yes, it will pass the “Mother Test.”
Facebook’s Research Series
The recent news on Codec Avatars marks the beginning of a year-long series of posts covering Facebook’s work in VR and AR. Subsequent articles will cover research on headset displays, computer vision, audio, haptic feedback, brain/computer interfaces, and eye/hand/face/body tracking.
In announcing the series, Michael Abrash, Chief Scientist at Facebook Reality Labs, wrote,
I expect these blog posts to be markers on the journey to the AR/VR future for a couple of reasons. First, there are many people who are skeptical about AR and VR, and we hope to change that perception by sharing why we believe these platforms are in fact on the path to changing the world. Second, given the world-changing impact we expect AR and VR to have, we’d love to see a broad discussion of how we as a society want to incorporate the power of these technologies into our daily lives, and by being open about what we see coming before too long, we hope to help jump-start that conversation.
Abrash is right. It’s essential that we have this discussion on the power of immersive technologies in our daily lives. And like it or not, Facebook will have a significant role given how rapidly they are pushing VR and AR forward. The real question is – will other voices be heard in this debate about our future? There may be a “Mother Test” but there also has to be an “Ethics Test.” And no vendor should have a monopoly on how that gets resolved.
Facebook’s lifelike avatars are a signpost on our remarkable journey toward realistic virtual environments. Full social presence in VR will be a game-changer, upending the way we currently live our educational, professional, and social lives. And it’s fraught with ethical complications that we are only now beginning to address. The issue here is not so much about technology as it is about the fabric of our society in the future.
As Jeremy Bailenson of Stanford has often said, VR is like nuclear power. It can enhance our lives or destroy the world. Facebook’s avatar research brings us closer to that chain reaction. Let’s make sure we have some control rods in place.
Emory Craig is a VR consultant, writer, and speaker with years of experience in art, new media, and higher education. He is actively engaged in innovative developments for AR and VR at the intersection of learning, games, and immersive storytelling. He is fascinated by virtual worlds, AI-driven avatars, and the ethical implications of blurring the boundaries between the real and the virtual.