Dalí would be delighted. Absolutely delighted by the deepfake Salvador Dalí experience. In Dalí Lives, the complicated, unpredictable Surrealist artist is brought to life by the Dalí Museum in St. Petersburg, Florida. It’s a remarkable experience, not only for what it is but for what it portends. Deepfake Salvador Dalí is a fleeting glimpse of our future in a world of AI-driven avatars in virtual environments.
As GQ aptly put it, it’s both “convincing and terrifying.” Convincing in that it’s more than a straightforward deepfake video. It’s interactive. You don’t just watch Dalí, you engage with him. Terrifying in that it’s nearly irresistible. He seems to know more about you and your responses than you do about him. Of course, he doesn’t, but it’s the appearance that counts. And the technology – especially the AI underlying the experience – is only going to get better.
Here’s the short video preview if you missed it.
Remember, this is still a video on a flat screen – albeit life-size. But it won’t be long – five to ten years – before we find ourselves in virtual environments populated with AI-driven avatars. What happens when the master of Surrealism is virtually sitting next to you in your living room? Or shows up at breakfast in the morning? Or interrupts your conversations with a friend?
And we worry about our smartphones distracting us. We have no idea what’s about to hit us.
The Convergence of VR and AI
With little hint of exaggeration, Brett Leonard at last week’s AltspaceVR event referred to AI as the 3,000-foot tsunami bearing down upon us. Especially in our virtual platforms.
It’s a convergence of technology that I spoke about recently at the AI Everything conference in Dubai. As AI is integrated into our virtual environments, it will revolutionize learning and how we work. And it will transform storytelling, freeing the characters from our screens and pushing them into our lived environments. Characters will live with us, or perhaps as in Wolves in the Walls, we will become characters in their stories. Once you’ve broken this barrier, it’s easy to turn the tables.
But it will also blur the lines between the authentic and artificial in human experience. Am I interacting with a real person represented by an avatar? Or is the avatar the product of someone’s imagination? There are fascinating creative possibilities here – along with some not so positive ones. And it throws open the floodgates of deception once people of bad intent master the technology.
Today, it’s an AI-generated figure on a flat screen. You might be able to resist a deepfake Salvador Dalí. But what if it’s a deeply realistic avatar in a virtual world? Anyone from the broad sweep of human history or the present? Driven by artificial intelligence to say exactly what you want to hear? As AI becomes easier to use, we’ll be able to create avatars of any type. For any virtual environment.
Here’s the philosopher Michael Madary’s description of what that future might hold.
Any amateur user will be able to generate an avatar that looks like anybody on earth, or anybody who has lived, and then animate that avatar to do whatever they’d like. So I’ll let your listeners use their imagination as to what sorts of wild applications may come out of that.
Wild applications, indeed. And we’re still waking up to the possibilities of deception in our social media. Thinking of our march into this future, sometimes I have visions of sleepwalkers in my head.
The Dalí Museum’s Projects
But let’s dial back a bit to where we are now and the deepfake Salvador Dalí experience. The Dalí Museum has been a leader in using technology to explore the life of the artist and his work. Their VR/360 experience of Dalí’s famous Archaeological Reminiscence of Millet’s “Angelus” makes you feel like you are actually entering the scene of the painting. It’s an excellent introductory experience for someone new to VR and 360 videos.
Now the Museum has taken the next step by incorporating the use of artificial intelligence. As marketing director Beth Bell noted,
Dalí himself was at the forefront of technology and was always experimenting and trying new things. We feel obligated to keep that legacy going. We think he would love these types of things. It’s in the spirit of Dalí himself.
Dalí not only experimented with video but was one of the first artists to experiment with holograms. Who knows what he would have done with Google Tilt Brush in his hands. In a sense, his entire life was an ongoing theatrical display. If you’re curious, take a look at the utterly surreal encounter between him and a young Alice Cooper in April 1973. Only Dalí could turn Alice Cooper into a prop for his own projects.
Creating a Deepfake Salvador Dalí
Here’s the Smithsonian’s description of the Museum’s deepfake Dalí experience.
The most stunning thing about “Dali Lives” is that you’re interacting with a version of the artist himself. It looks like Dalí, it sounds like Dalí, it is Dalí. The museum worked together with the San Francisco advertising agency Goodby Silverstein & Partners to accomplish this, feeding hundreds of news interviews (both written and video), quotes from his autobiography and other written works, and archival video footage into an artificial intelligence system to recreate the artist. That 45 minutes of new footage—with 190,512 possible video combinations—is created from more than 6,000 frames of existing Dalí video and more than 1,000 hours of A.I. learning.
Those 1,000 hours of machine learning trained an AI algorithm that could then superimpose Dalí’s face onto an actor with the same body type. A voice actor then synced the quotes in Dalí’s unique bilingual accent. Alice Cooper said he could only understand every fifth word that Dalí uttered. Here, you have them all in a deeply realistic interactive video experience.
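The face-superimposition technique described above follows the classic deepfake layout: a single shared encoder paired with one decoder per identity. Below is a deliberately toy sketch of that layout – linear layers on synthetic “face” vectors, with all data, dimensions, and learning rates invented for illustration. Real systems like the one Goodby Silverstein & Partners built use deep convolutional networks trained on aligned face crops, not anything this simple.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face crops: two "identities", each a cluster
# of 64-dimensional vectors with a distinct mean. Entirely synthetic.
D, H = 64, 8
faces_a = rng.normal(2.0, 1.0, (200, D))   # "the actor"
faces_b = rng.normal(-2.0, 1.0, (200, D))  # "Dalí"
mu_a, mu_b = faces_a.mean(0), faces_b.mean(0)
Xa, Xb = faces_a - mu_a, faces_b - mu_b    # normalize per identity

# The classic deepfake layout: one shared encoder, one decoder per identity.
We = rng.normal(0, 0.1, (D, H))   # shared encoder
Wa = rng.normal(0, 0.1, (H, D))   # decoder for identity A
Wb = rng.normal(0, 0.1, (H, D))   # decoder for identity B

def mse(X, We, Wd):
    """Mean squared reconstruction error for one identity."""
    return np.mean((X @ We @ Wd - X) ** 2)

def step(X, We, Wd, lr=5e-3):
    """One gradient-descent step on the reconstruction error."""
    Z = X @ We                        # encode
    err = Z @ Wd - X                  # decode and compare
    gWd = Z.T @ err / len(X)
    gWe = X.T @ (err @ Wd.T) / len(X)
    return We - lr * gWe, Wd - lr * gWd

loss_before = mse(Xa, We, Wa)
for _ in range(1000):
    We, Wa = step(Xa, We, Wa)   # both identities train the same encoder
    We, Wb = step(Xb, We, Wb)
loss_after = mse(Xa, We, Wa)

# The swap: encode identity A with the shared encoder, decode with B's
# decoder, then shift into B's space.
swapped = (Xa @ We) @ Wb + mu_b
```

Training both decoders against the same encoder is what pushes the shared latent space toward pose and expression rather than identity; swapping decoders at inference time is what puts Dalí’s face on the actor’s performance.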
AI and the Alexas of our Future
Dalí Lives is a groundbreaking example of a museum using AI to expand our understanding of an artist. And as The Verge points out, it’s genuinely engaging.
With 45 minutes of newly created footage and thousands of combinations, each visitor gets a different experience. There are scenes that open with him reading the newspaper, with an overlay of the current front page of The New York Times; if it’s raining, he’ll comment on the weather. He’s almost like an Alexa device.
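The Verge’s description suggests a simple selection scheme: the experience is divided into slots, each with several interchangeable takes, and live context (like the weather) can override which take plays. Here’s a minimal sketch of that idea – the slot names, clip names, and counts are all invented for illustration and don’t reflect the museum’s actual footage.

```python
import random

# Hypothetical clip library: one slot per segment of the experience,
# several interchangeable takes per slot. All names are invented.
CLIPS = {
    "opening":  ["reads_newspaper", "greets_visitors", "comments_on_rain"],
    "anecdote": ["on_painting", "on_dreams", "on_fame", "on_gala"],
    "closing":  ["offers_selfie", "farewell"],
}

def pick_sequence(context, rng):
    """Choose one take per slot; live context can override a slot."""
    sequence = []
    for slot, takes in CLIPS.items():
        if slot == "opening" and context.get("raining"):
            sequence.append("comments_on_rain")  # weather-aware override
        else:
            sequence.append(rng.choice(takes))
    return sequence

def total_combinations(clips):
    """Distinct play-throughs: the product of takes per slot."""
    total = 1
    for takes in clips.values():
        total *= len(takes)
    return total
```

With three, four, and two takes per slot, this toy library yields 24 distinct play-throughs; the museum’s 190,512 combinations imply a far larger clip library, but the arithmetic is the same.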
As our virtual environments expand, avatars like Dalí will be the Alexas of our future. That sterile black cylinder that we talk to now is just a piece of hardware. A stylish tin can that struggles to understand a limited range of human questions and commands. It’s not something we’ll ever truly engage with. An AI-driven avatar will make it feel as primitive as a 19th-century telegraph.
Always the provocateur, Dalí lived by the mantra,
What is important is to spread confusion, not eliminate it.
Dalí’s mantra may be essential for the life of a renegade artist. But if we’re not careful, the convergence of VR and AI could do the same for our lives – and much worse. It’s more important than ever that programmers work with people from non-technical backgrounds. As technology moves from its early role as an information tool to the space for human experience, we’re doing much more than writing code. We’re writing the context and even the substance of our lives.
That tension between astonishing potential and profound risk is what makes the developments in VR and AI so fascinating. And their convergence only heightens the possibilities.
Emory Craig is a writer, speaker, and VR consultant with extensive experience in art, new media, and higher education. He speaks at global conferences on innovation, education, and ethical technology in the future. He has published widely and worked with the US Agency for International Development, the United Nations, and the Organization for Economic Co-operation and Development (OECD). Living at the intersection of learning, games, and immersive storytelling, he is fascinated by AI-based avatars, digital twins, and the ethical implications of blurring the boundaries between the real and the virtual.