SIGGRAPH ASIA 2023


June Kim, Conference Chair of 2023.

SIGGRAPH Asia 2023 was a huge success, thanks to the SIGGRAPH team led by June Kim. In Sydney this year, the conference welcomed more than 5,690 attendees from more than 40 countries. The event featured over 900 Technical Papers, unveiling emerging technologies from Biomechanically Accurate 3D Digital Humans to Meshes with Spherical Faces, from Adaptive Shells for Efficient Neural Radiance Field Rendering to Fluid Simulation on Neural Flow Maps.

There was a range of other innovations and highlights, such as the DemoScene (which was particularly strong this year), the Electronic Theatre, and great keynotes, especially by multi-Oscar winner Joe Letteri, Senior Visual Effects Supervisor at Wētā FX. Joe gave a detailed, in-depth breakdown of the visual effects of Avatar: The Way of Water and the various innovations Wētā FX had to invent to match the complex and challenging vision of Jim Cameron.

Featured Speakers 

We were lucky enough at fxguide to be invited to help with SIGGRAPH Asia. Fxguide’s Mike Seymour was the Featured Speakers’ Chair for SIGGRAPH Asia ’23 in Sydney. Here is a summary of the great week of talks, and in particular some of the points raised in the two extensive panel sessions.

Digital Humans Panel: How We View Digital Humans

Paul Debevec, Eyeline Studios Powered by Netflix; Mark Sagar, CEO and co-founder of Soul Machines; and Christophe Hery, Director, Science Research, Meta; moderated by fxguide’s Mike Seymour.

The panel discussed the nature of human perception of human faces and how the problem with digital humans has moved from one of producing realism to one of producing an authentic digital human that provides both expression and plausible cognitive behavior in interactions and performances. Christophe explained that Meta is doing advanced work on interactive avatars, and the point was made that to see into the future of what is coming, one need only look in the labs of today: tech in experimental form today becomes the consumer tech of tomorrow. Mark Sagar raised the issue of ethics and how producing perfectly rendered digital humans may not be as ethical in some circumstances as producing a digital human that the audience immediately recognizes as digital.

Far from digital humans being a solved problem, Paul Debevec pointed out that time and time again in computer graphics people have declared CGI ‘solved’, when in reality the art and craft of producing realistic digital humans, and indeed most CG, remains an enormous unsolved problem.

The panel addressed the issues of AI, specifically Machine Learning, and how this has led to a rapid rise in realism in certain use cases of Digital Humans. The advances in ML address not only the appearance of digital humans but also their ability to communicate and interact, which in turn has opened up a range of applications beyond just media and entertainment.


Animation Panel: The Craft of Character Story Telling in an Age of High-tech Tools 

Rob Coleman, Creative Director, Industrial Light & Magic; Emily Dean, Director, Blur Studios & Nexus Studios; and Raqi Syed, Senior Lecturer, Victoria University of Wellington; moderated by fxguide’s Mike Seymour.

The panel discussed what it is to provide a performance for a character, from issues related to motion capture to the challenge of directing animated characters to have thoughtful subtext. Rob Coleman discussed how hard he and his team fought in the days of Star Wars to earn a non-dialogue closeup. Rather than being a nothing shot, such an earned closeup allows an animator to provide subtext nonverbally in a way that is both powerful and rewarding for an animation team.

Emily Dean discussed her Love, Death & Robots short film and showed clips of herself pulling logs on sandy beaches and throwing herself over beds during COVID to get great lines of action as animation reference. She shared that reality was just a jumping-off point: her film is set on a distant moon, where the hero character struggles to pull a downed astronaut across the surface. In reality, the moon’s low gravity would make this relatively easy, but the story demanded the struggle, and no one ever stops to question it, as it serves the plot and story so well.

Amongst her many other talents and credits, Raqi Syed has done impressive VR storytelling, and she outlined the complexity of losing classical blocking and framing when imagining a VR narrative. A short section of her work was shown at the start, when all three panelists illustrated their perspectives on crafting an animated performance. Between the three panelists, there were examples from indie to tentpole productions, VR to traditional CGI. Rob Coleman in particular discussed the differences between CGI characters and VFX characters from his work at ILM. But in all cases, the panel leaned into the notion of collaboration and crafting a combined contribution. Rob Coleman summed up the role of an animation supervisor as that of a conductor, who knows the strengths of the various sections of the orchestra and knows when to call upon each section to provide the performance that only they can.

Rob Bredow, SVP & Chief Creative Officer, Industrial Light & Magic, The Intersection of Art and Technology: Creative Design at ILM

Rob gave one of the most engaging talks this year. While he illustrated his talk with some of the finest VFX sequences from ILM’s films, he did not present a traditional making-of talk. With a world of advanced possibilities, he asked, “How can design drive someone’s creative opportunities?” He addressed the theme of driving creative and tech advancement no matter where one sits in the process. His talk extended from the VFX of films like Solo: A Star Wars Story to examples from Disney theme park design, and he gave examples where limitations were the source of inspiration rather than the limiting factor of projects.

Robin Hollander, VFX Supervisor, Wētā FX, Blood, Guts and Fur: The Visual Effects Behind Cocaine Bear

Robin led off the week’s featured sessions with one of the most entertaining talks, about the making of Cocaine Bear. The work done on the film ranged from digital bears and cubs to set extensions and some rather nasty deaths. As Robin explained, the film was based on the ‘fact-is-stranger-than-fiction’ true story of a dead 175-pound black bear found among 40 opened plastic containers with traces of cocaine in the mountains of Georgia. The Wētā FX visual effects team produced 468 shots over a period of two years for director Elizabeth Banks (who, suffering under a strict COVID lockdown, apparently lusted after Wētā’s meeting-room donuts).

Mark Pesce, Honorary Associate, Media and Communications, University of Sydney, The Next Tools.

Mark is a highly accomplished public speaker with a long history on television and in podcasts, and he has also published extensively. Mark highlighted two key technology papers and framed them in the current discourse, providing context and explanation.

The two papers were recent works from Australian researchers that collectively point toward a new generation of tools that harness artificial intelligence. The first was 3D-GPT [Sun et al., 2023], a paper that outlines an agent-based approach to a deeper and more complex 3D toolchain. 3D-GPT uses chatbot agents and Blender to break down complex modeling tasks into smaller components that are handled by AI agents. The task dispatch agent knows all available functions within the procedural generation system and assigns them according to the prompt; this information is then passed on to the other agents.
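The dispatch pattern described above can be sketched in a few lines. This is an illustrative sketch only, not 3D-GPT’s actual code: the function names and the keyword table standing in for the LLM’s reasoning are assumptions for the example.

```python
# Illustrative sketch (not the 3D-GPT implementation): a task-dispatch agent
# that maps a text prompt onto registered procedural-generation functions,
# which downstream agents would then parameterize and execute (e.g. through
# Blender's Python API).

def add_terrain(prompt): return f"terrain for: {prompt}"
def add_trees(prompt):   return f"trees for: {prompt}"
def add_sky(prompt):     return f"sky for: {prompt}"

# The dispatch agent "knows all available functions"; here a keyword table
# stands in for an LLM deciding which functions a prompt requires.
REGISTRY = {
    "mountain": add_terrain,
    "forest": add_trees,
    "sunset": add_sky,
}

def dispatch(prompt):
    """Select and run the subtasks that match this prompt."""
    selected = [fn for key, fn in REGISTRY.items() if key in prompt.lower()]
    return [fn(prompt) for fn in selected]

print(dispatch("A forest at sunset"))
```

In the real system, each selected function’s output would be handed to further agents that fill in parameters and drive the procedural generator.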

The second paper, CICADA [Ibarrola et al., 2022], shows how drawing can be redesigned to provide ‘space’ for both humans and artificial intelligences to co-occupy and co-create in a manner that enhances collaboration. CICADA stands for Collaborative, Interactive Context-Aware Drawing Agent. CICADA uses a vector-based ‘synthesis-by-optimisation’ method to take a partial sketch and develop it towards a goal by adding and/or sensibly modifying traces. CICADA can produce sketches of a quality comparable to human users. It offers diversity, and most importantly, it can cope with change continually and in a flexible manner.

Paul Debevec, Chief Research Officer, Eyeline Studios Powered by Netflix, From Virtual Cinematography to Virtual Production.

Paul Debevec highlighted a career spent contributing to the advancement of visual effects and faithful digital reproduction, leading to a brilliant demonstration of the issues that must be addressed to faithfully reproduce accurate colors and contrast on an LED volume.

In addition to his role at Eyeline, Paul is Governor of the Visual Effects Branch of the Academy of Motion Picture Arts and Sciences as well as a co-chair of the Academy’s Sci-Tech Council. Paul’s work has been recognized with two Academy Awards for Scientific and Technical Achievement, the Progress Medal from the Society of Motion Picture and Television Engineers, and, in 2022, the Charles F. Jenkins Lifetime Achievement Emmy Award. His 2022 Emmy award clip was also shared with the audience in Sydney.


Sabine Laimer, Sequence Supervisor: Wētā FX, Unraveling The VFX Behind The Marvels

Sabine presented previously unseen material showing the making of the latest Disney Marvel film, The Marvels. The film had a strong, nearly all-female creative and leadership team, and Wētā FX delivered a complex array of original superhero effects, including the highly complex third-act fight sequence. Her presentation was both personal and technical and resonated with many in the audience.

Real-Time Live

Other great events at SIGGRAPH Asia included Real-Time Live, a highlight of which was the RADiCAL platform. Gavan Gravesen, Founder & CEO, and Matteo Giuberti, Founder & CTO, of RADiCAL Solutions presented their AI-driven animation platform. The system can capture multiple people’s motion and allows collaboration over the internet. The computer vision technology detects and reconstructs 3D human motion from 2D content or cameras and then enables ML-powered 3D motion capture collaboration. By sending only MoCap data, the system operates impressively fast, with extremely low latency.
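The bandwidth argument behind that last point is easy to see with rough numbers. The figures below are assumptions for illustration (skeleton size, quaternion-per-joint encoding, raw frame dimensions), not RADiCAL’s actual protocol.

```python
# Back-of-the-envelope sketch (assumed numbers, not RADiCAL's figures):
# streaming MoCap data instead of video shrinks per-frame payloads by
# orders of magnitude, which is what makes very low latency feasible.
import struct

NUM_JOINTS = 52          # assumed skeleton size
# one quaternion (4 float32 values) of rotation per joint

def pack_pose(rotations):
    """Serialize one frame of joint rotations as little-endian float32."""
    flat = [c for quat in rotations for c in quat]
    return struct.pack(f"<{len(flat)}f", *flat)

pose = [(0.0, 0.0, 0.0, 1.0)] * NUM_JOINTS
payload = pack_pose(pose)          # 52 joints * 4 floats * 4 bytes

video_frame = 1280 * 720 * 3       # one raw 720p RGB frame, in bytes
print(len(payload), video_frame // len(payload))
```

Even before video compression enters the comparison, the pose payload is thousands of times smaller than a raw frame, so the network stops being the latency bottleneck.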

Sally and Emma

The winner of Real-Time Live was Sally Coleman, an independent artist and researcher. Sally presented a virtual band in a virtual world. Sally wasn’t even on the stage at the convention center, but up the road at UTS (the University of Technology Sydney). Her demo showcased performance capture, crowd interaction, and remote avatar animation.

To the right is Sally Coleman (L) with her Jury and Audience Choice awards for Big Sand’s live show, along with Emma Smith (R), who acted as her technical director for the show.


SIGGRAPH SUNSET SAIL

And for a bunch of the speakers – there was even a SIGGRAPH Sunset Sail.

Next stop:

Leaving its mark on Sydney, SIGGRAPH Asia moves to Tokyo in December 2024. See you there!