The developments on display at SIGGRAPH 2019 made for a fascinating week in VR and AR. Easily one of the most important events in the industry, the conference appeals to researchers, artists, and media professionals alike. Not surprisingly, the past decade has seen the rapid expansion of VR and AR at the event. This year’s conference brought over 19,000 media professionals and 180 exhibitors together for a week of talks, workshops, demonstrations, and networking.
One major point to note about SIGGRAPH 2019 is its commitment to diversity and inclusion, which was evident both in the programming committee and throughout the conference as a whole.
SIGGRAPH 2019 Highlights
SIGGRAPH 2019 is one of the few events we’ve been to where you did not feel like you were packed into an urban transit system. There was room to walk, to sit, and, most importantly, to network with others. We know how expensive conference space is, but setting aside open space is as important as having exhibitor booths.
Here’s a quick summary of the highlights of the conference, including developments in AI, VR, and haptic feedback.
The Increasing Use of AI in Media and Graphics
We’ve already seen AI used to manipulate media through deepfake videos. But AI is also being put to positive use as a rendering engine. Sam Nicholson, ASC, CEO of Stargate Studios (and a visual effects artist on Star Trek: The Motion Picture), demoed new features of the Unreal Engine graphics platform that incorporate photorealistic CGI backgrounds to complete a scene. Because the technique no longer requires a green screen, the AI-driven output is rendered in real time rather than in post-production. Advances in Real-Time Rendering was one of the most popular workshops this year:
Modern video games employ a variety of sophisticated algorithms to produce groundbreaking 3D rendering pushing the visual boundaries and interactive experience of rich environments. This course brings state-of-the-art and production-proven rendering techniques for fast, interactive rendering of complex and engaging virtual worlds of video games.
This year the course includes speakers from several innovative game companies, such as Rockstar, Ubisoft, EA | Frostbite, NVIDIA, Electric Square, Sony Santa Monica, and Unity Technologies. The course will cover a variety of topics such as atmospheric rendering in games, multi-resolution ocean rendering, practical multi-scattering physically-based materials for games, real-time ray tracing with hybrid engine pipelines, rendering strand-based hair in real-time in production settings, art-directable wind and vegetation in games, and improvements for geometry processing with mesh shaders. (SIGGRAPH 2019 Course Description)
We also met an old AI avatar friend of ours, Magic Leap’s Mica. We had encountered Mica several times at the Sundance Film Festival, but SIGGRAPH 2019 treated us to a new experience of creating a virtual collage with her. Collaborating with an avatar is unsettling once you realize that you’re responding to an AI program that only appears human-like. Mica is still the most compelling AI avatar we’ve encountered, with far-reaching potential for education, training, and entertainment. We should add that, in the wrong hands, she also opens up a Pandora’s box of ethical issues.
VR at SIGGRAPH 2019
This year’s conference had the largest number of VR and interactive experiences since the event began back in 1974. In total, there were 48 virtual experiences distributed between SIGGRAPH’s Immersive Pavilion (which had 33 experiences) and the VR Theatre (with 15 experiences).
As VRScout noted, four of the VR experiences were world premieres.
Among these 48 offerings are four never-before-seen projects from some of the most recognizable names in entertainment and immersive technology. Experiences making their world-debut include A Kite’s Tale by Walt Disney Animation Studios (VR Theatre), Undersea by Magic Leap (Immersive Pavilion), Il Divino: Michelangelo’s Sistine Ceiling in VR by Epic Games, and Mary and the Monster: Chapter One by Parallux and New York University’s Future Reality Lab (Immersive Pavilion).
One of the most compelling displays was not a VR experience but NVIDIA’s work on foveated rendering, which has the potential to transform all of our immersive displays. As VentureBeat described the project,
A separate demo showed off Nvidia’s work on a Foveated AR Display, which the company suggests will use gaze tracking to enable multi-layer depth in AR images. In the image below, you can see how a specific small gaze area tracked by the headset becomes sharper to your eye as the background becomes softer and less detailed.
Nvidia is touting the Foveated AR Display as a ‘dynamic fusion of foveal and peripheral display,’ and releasing a research paper to accompany the project. It’s unclear when the technology will actually appear in a shipping product, but it’s interesting to see Nvidia diving deeper into the AR world at this stage.
Foveated rendering is a critical development as it would dramatically lower the amount of data we need to transmit to our wearable AR and VR headsets in the future.
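To make the bandwidth argument concrete, here is a minimal back-of-the-envelope sketch (not NVIDIA’s actual method; the function name, the 5° foveal region, and the peripheral resolution scale are all illustrative assumptions) estimating what fraction of a frame’s pixels a foveated renderer would need to produce at full quality:

```python
import math

def foveated_pixel_fraction(width, height, fovea_deg, fov_deg, periphery_scale=0.25):
    """Estimate the fraction of full-resolution pixels a foveated frame needs.

    width, height   -- display resolution in pixels (per eye)
    fovea_deg       -- diameter of the full-resolution gaze region, in degrees
    fov_deg         -- horizontal field of view of the display, in degrees
    periphery_scale -- linear resolution scale outside the fovea (assumed value)
    """
    pixels_per_deg = width / fov_deg
    fovea_radius_px = (fovea_deg / 2) * pixels_per_deg
    # Full-resolution circular region around the tracked gaze point
    fovea_px = math.pi * fovea_radius_px ** 2
    total_px = width * height
    # Periphery rendered at reduced resolution (area scales quadratically)
    periphery_px = (total_px - fovea_px) * periphery_scale ** 2
    return (fovea_px + periphery_px) / total_px

# Example: a hypothetical 2160x2160-per-eye display with a 110-degree FOV
# and a 5-degree full-resolution gaze region
frac = foveated_pixel_fraction(2160, 2160, fovea_deg=5, fov_deg=110)
print(f"Foveated frame needs roughly {frac:.1%} of the full-resolution pixels")
```

Under these assumptions the foveated frame needs well under a tenth of the pixels of a fully rendered one, which is the intuition behind the claim that gaze tracking can dramatically cut the data sent to a headset.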
Next year’s conference will be in Washington, DC, making it easier for those of us on the East Coast. If you didn’t get to SIGGRAPH 2019, definitely try to make the 2020 conference. Whatever your media interests – film, video, 360 video, CGI, VR, AR, motion capture – you’ll have the opportunity to see the latest developments and interact with some of the brightest minds in the industry.
We hope to see you there!
Emory Craig is a VR consultant, writer, and speaker with years of experience in art, new media, and higher education. He is actively engaged in innovative developments for AR and VR at the intersection of learning, games, and immersive storytelling. He is fascinated by virtual worlds, AI-driven avatars, and the ethical implications of blurring the boundaries between the real and the virtual.