Sensorium’s AI-driven avatars are a new generation of virtual beings that bring us another step closer to the metaverse. The company was founded by Mikhail Prokhorov, the multibillionaire former owner of the Brooklyn Nets, and currently focuses on creating custom-built avatars. You may have already seen some of their virtual DJ avatars performing in live concerts.
While the virtual DJs respond to the audience, Sensorium’s AI-driven avatars are much more personal, carrying out one-on-one unscripted conversations.
Magic Leap was one of the first to combine immersive experiences and AI-driven avatars in 2018 with their deeply compelling Mica. But Mica was silent, reacting only to your gaze and body movements. Sensorium’s approach is to develop virtual beings that respond to what you say instead of your actions. The AI conversations may still be a little awkward, but they’re a signpost to where we are headed.
Sensorium’s AI-driven Avatars
Sensorium’s AI-driven avatars are the first in a new generation of virtual beings. They are capable of supporting complex, unscripted conversations through the prism of a unique personality, built upon each avatar’s “biography” and interactions with users. Overall, they mark a new milestone in the advancement of conversational AI, raising the bar from chatbots and traditional non-player characters to ever-evolving, human-like conversations.
Katherine, one of Sensorium’s first virtual beings, gets her AI brain from OpenAI’s GPT-3, a large language model that generates natural-language responses to whatever it is asked. She recognizes speech, processes the query, and comes up with a response. Her programming is little more than a short bio – a set of words strung together to describe who she is. Everything she says in response to questions is unscripted and unpredictable.
And that’s the whole point, as Sensorium wants to enable AI characters that you can talk to for hours inside a virtual world. Each avatar has a unique biography to draw from, which creates natural and evolving interactions.
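To make the idea concrete, here is a rough sketch of how a biography-conditioned GPT-3 conversation loop might work. The persona text, prompt format, and parameter values are illustrative assumptions on my part, not Sensorium’s actual implementation; it simply uses OpenAI’s Completion API as it existed in the GPT-3 era.

```python
# A rough sketch of a biography-conditioned GPT-3 conversation loop.
# The persona text, prompt format, and parameter values below are
# illustrative assumptions -- not Sensorium's actual implementation.
import openai

openai.api_key = "YOUR_API_KEY"

# The avatar's "programming" is essentially a short bio in plain text.
KATHERINE_BIO = (
    "Katherine is a friendly virtual being who lives in Sensorium Galaxy. "
    "She is curious about visitors and loves talking about music and art."
)

history = []  # running transcript keeps the conversation evolving


def ask_katherine(user_message: str) -> str:
    """Generate Katherine's next reply from her bio plus the transcript."""
    history.append(f"Visitor: {user_message}")
    prompt = KATHERINE_BIO + "\n\n" + "\n".join(history) + "\nKatherine:"
    response = openai.Completion.create(
        engine="davinci",        # GPT-3 base model available at the time
        prompt=prompt,
        max_tokens=60,
        temperature=0.8,         # some randomness keeps replies unscripted
        stop=["\nVisitor:"],
    )
    reply = response.choices[0].text.strip()
    history.append(f"Katherine: {reply}")
    return reply


print(ask_katherine("Hi Katherine, what do you do here?"))
```

In a full pipeline, speech recognition would supply the visitor’s words as text, and the generated reply would drive the avatar’s voice and facial animation.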
Here is a short video of the conversation:
There are some awkward moments in the demo, though that’s partly because Dean Takahashi was engaging Katherine and another avatar, Thomas Elon, a professional boxer, over Zoom. The movement of the avatars also cuts into the realism, as they sway back and forth – something that never happened with Magic Leap’s Mica. But that said, the conversations are remarkably realistic and will only become more so as the technology progresses.
Sensorium Galaxy
Sensorium’s AI-driven avatars will be part of the company’s forthcoming Sensorium Galaxy, a multiuser virtual world and performance space. You can sign up for early access here and check out the schedule of upcoming events.
The company’s first digital concert environment will be launched later this year – and not a moment too soon, given the devastating impact the global pandemic has had on the music industry. Sensorium’s approach is to do far more than put an artist in a CGI environment. They use advanced 3D scanning to recreate the performer as a digital human, capturing both their physical and behavioral characteristics, including the artist’s facial expressions.
All of this will mature as we get better VR headsets and faster bandwidth. Add in increasingly advanced AI, and deeply realistic virtual beings will become part of our everyday lives.
The Possibilities and Ethical Issues
We have long predicted that the convergence of VR and AI will be among the most revolutionary developments in the years ahead. It will transform our virtual experiences and be the catalyst that pushes XR technologies forward. The use of virtual beings as personalized tutors will expand learning opportunities for students and open new possibilities for workforce training. And they’ll play a role in healthcare, freeing doctors and nurses to focus on the more critical aspects of patient care.
But Sensorium’s AI-driven avatars also point toward fundamental challenges. At what point does a virtual being become so believable that it begins to impact our real-life behavior? How will we respond when we can enter virtual environments that include avatars of real people alongside purely synthetic, AI-driven ones? Will we need digital markers to help us discern whether a specific avatar actually represents a real person? If we mistreat virtual humans in the future, how will that affect our interactions with real human beings?
All of this is uncharted territory, shadows on a cave wall that are coming to life. Plato was concerned with how the appearances of objects would ultimately be mistaken for reality. At this stage of their development, Sensorium’s AI-driven avatars are hardly at risk of deceiving us. But they put us another step closer to the metaverse, where the differences between real and synthetic humans will eventually become impossible to discern.
Emory Craig is a writer, speaker, and consultant specializing in virtual reality (VR) and generative AI. With a rich background in art, new media, and higher education, he is a sought-after speaker at international conferences. Emory shares unique insights on innovation and collaborates with universities, nonprofits, businesses, and international organizations to develop transformative initiatives in XR, GenAI, and digital ethics. Passionate about harnessing the potential of cutting-edge technologies, he explores the ethical ramifications of blending the real with the virtual, sparking meaningful conversations about the future of human experience in an increasingly interconnected world.