Level up: Free Guy

Free Guy tells the story of a non-player character (NPC) bank teller who becomes self-aware and discovers he is actually a character in an open-world video game. Ryan Reynolds stars as the blue shirt guy who decides to become the hero of his own story by connecting with real-world player Molotov Girl/Millie, played by Jodie Comer. The film is directed by Shawn Levy and the visual effects were supervised by Swen Gillberg. Digital Domain, ILM, and Scanline, amongst others, handled the visual effects.

The film’s DOP was George Richmond. He shot the film in two different formats for the two different worlds: large format for the game world, and Super35 anamorphic for the real world. The game world was also much more colourful and saturated, often shot in a sort of eternal sunset lighting. “The blocking was definitely more structured in the game world,” comments Richmond. “Whereas in the real world we tried to make it much more off the cuff; people were allowed to move and shake a little bit more. We shot the real world, for example, much more handheld.” The film shot heavily on location, even for complex action sequences. “I did enjoy the robot arms we used to film in actual downtown Boston,” he adds. “Normally when we do these things, we do them in incredibly controlled environments, because they’re very difficult to work with. But we tried it on a Sunday morning in downtown Boston with complicated action cars and a robot arm swinging around Ryan within inches of his head. It was pretty cool.”

Digital Domain used Houdini for the effects work, including when the game is frozen and reset.

Swen Gillberg

Swen Gillberg was the studio Visual Effects Supervisor on Free Guy. He is an Academy Award nominee for his work on Real Steel, and most of his career has been focused on the integration of live elements and CGI. He was the visual effects supervisor on Captain America: Civil War and more recently was an additional visual effects supervisor on Avengers: Infinity War and Avengers: Endgame. For this film he had to supervise VFX combined with live-action, fully animated game graphics, and a game viewed both from the outside and from within. Gillberg worked with three main VFX companies to cover the vast range of visual effects.

For Free Guy, “Scanline VFX did the street squeeze, what we called the ‘bridge run’, the Island and all the collapsing blue tile stuff,” recounts Gillberg. “Digital Domain did all of the gameplay, and that is where we use the Light Stage scans of the actors (at USC-ICT), and also for some pretty heavy digital double closeup work during the construction site chase.” ILM did fights, background gags, and environment work, including key sequences at the end of the movie. “And Lola VFX did some amazing face replacement,” Gillberg adds. One of the great aspects of the film is how layered it is, often with funny little action VFX sequences happening in the background.

Cast between takes

Digital Domain

Nikos Kalaitzidis was Digital Domain’s VFX supervisor. His team began work on the film over two years ago, even though it has only just been released.

Neural Rendering with Charlatan.

Ageing like Beckham (fxguide Sept 2020)

Digital Domain has been working with neural rendering and AI machine learning for some time, having done projects such as the ageing of David Beckham and similar special projects. However, Free Guy represents the first time Digital Domain has used its Charlatan program on a feature film.

In the film’s third act, the character BadAss, played by Channing Tatum, gives a speech that is vital to the plot. After seeing the finished results, the filmmakers decided that they needed to change and add to the dialogue for the scene to have the necessary impact. Initially, rather than arrange a reshoot with the actor, the team decided to animate BadAss with a traditional 3D approach. Various methods of animating the digital character were employed, with none producing a realistic and acceptable result. As this was a late decision, the team had not shot or planned for this type of detailed lip sync work.

Channing Tatum and Ryan Reynolds

“We weren’t prepared,” recalls Kalaitzidis. “It was in the ninth hour. We went around the block trying every single technique to try to make them look like Channing. And it just wasn’t cutting it.”  Knowing the great work the R&D team had done in several special projects, such as recreating Dr. Martin Luther King Jr., “I went over to our Charlatan team and said, ‘Hey, you guys, can you take a look at this and see what you have?’”

Tatum recorded new dialogue in ADR, and based on this performance the Digital Domain artists created a new facial model of BadAss by hand. This produced a result that was not quite a perfect likeness. Kalaitzidis decided to try Charlatan and combine a neural render with the original 3D performance. Once the neural network was able to link the two and replace the original animation, the result was a more realistic digital avatar that much more accurately mimicked the actor’s facial mannerisms and movements.

The main problem the team faced was the lack of training material. For the normal Charlatan approach, the team requires approximately 30 minutes of footage of the source actor for the machine learning to train with. During that structured filming, the Charlatan team would film an actor saying a series of key visemes. Visemes are the visual equivalent of phonemes. “Visemes are saying the least amount of words with a maximum amount of facial expression, and we get the actor normally to say those couple of sentences over and over again, with different expressions, different moods – sadness, happiness, anger, or stoic, etc – but with the same words,” Kalaitzidis comments. His favourite viseme expression is “the odd toy cow eats green oat cheese.” It is not meant to make sense, so even experienced senior actors find it very odd to do. But since each viseme, and thus phoneme, is anatomically tied to a specific set of jaw and lower facial muscular movements, there are strong physiological relationships between the specific sounds and the range of motion the AI engine needs to train on to infer a plausible final solution.
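The phoneme-to-viseme idea can be sketched in code. The grouping below is purely illustrative and is not Digital Domain's actual data: a reduced set of viseme classes, each mapped to the phonemes that share roughly the same jaw and lip shape, used to check which mouth shapes a capture line such as Kalaitzidis's favourite actually exercises.

```python
# Illustrative only: a reduced viseme set loosely based on common
# lip-sync groupings. Each viseme class maps to phonemes (ARPAbet-style,
# lowercase) that produce roughly the same mouth shape.
VISEME_CLASSES = {
    "bilabial":    {"p", "b", "m"},          # lips pressed together
    "labiodental": {"f", "v"},               # lower lip to upper teeth
    "rounded":     {"ow", "uw", "w"},        # rounded lips ("oat", "cow")
    "open":        {"aa", "ae", "ah"},       # jaw open ("odd")
    "spread":      {"iy", "ey", "s", "z"},   # lips spread ("eats", "cheese")
    "dental":      {"th", "dh"},             # tongue between teeth ("the")
    "alveolar":    {"t", "d", "n", "l"},     # tongue tip behind teeth
    "velar":       {"k", "g", "ng"},         # back of tongue ("green")
}

def covered_visemes(phonemes):
    """Return the set of viseme classes exercised by a phoneme sequence."""
    present = set(phonemes)
    return {name for name, members in VISEME_CLASSES.items()
            if members & present}

# A hand-transcribed, approximate phoneme sequence for
# "the odd toy cow eats green oat cheese":
line = ["dh", "ah", "aa", "d", "t", "oy", "k", "aw",
        "iy", "t", "s", "g", "r", "iy", "n", "ow", "t", "ch", "iy", "z"]
print(sorted(covered_visemes(line)))
```

Even with this toy mapping, the single sentence hits most of the major mouth shapes, which is why a handful of such lines, repeated in different moods, can cover the range of motion the training needs.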

At Digital Domain, the Charlatan team would normally film an actor with at least five different cameras shooting simultaneously, at high 4K resolution, for the machine learning. But as this was an unplanned sequence, Digital Domain did not have anything but footage from the film. “We had shots of Tatum in a lot of surrounding shots with the same lighting, but that was all, so we fed it into the machine learning engine,” Kalaitzidis explains. To help, Digital Domain requested any additional takes and over-length clips of Tatum from the production. The GPUs trained on this footage for about five days. “And then when it started coming out of comp, the results were instantaneous. The comps are very quick to turn around, and we presented it to Shawn. And he said, ‘that’s it!’ Within one or two revisions, the team had a final approved shot.”

Ryan Reynolds and Shawn Levy on set

Future Charlatan Implications for Animators

As Free Guy was shot and finished some time ago, the technology has already moved on somewhat. When Digital Domain was working on BadAss, the rule of thumb for most productions was that neural rendering worked best when the actor faced the camera. As the head turned, the process, depending on the company and their approach, would normally hold up until the actor’s far-side ear was no longer visible. From there until the actor’s far-side cheek was a ‘grey area’, and full side-on (90-degree) shots were deemed highly difficult to achieve. Digital Domain is now working on extending that range on future projects, and is also experimenting with shooting with more than five cameras.

For animators, there is an important aspect to neural face transfers. Contrary to what one might expect, this hybrid approach of blending neural renders on top of CG is remarkably flexible. It is not a requirement to get the underlying CG looking close and then use the Charlatan technology to complete the last 5%. The process works well on faces that are only approximately like the final face in terms of texture and colour. This means that Digital Domain is engineering a system where animators may soon be able to have their playblasts up-resed to photo-real quality while they animate.

“When animators are doing facial animation, they usually do it on a puppet, and you immediately see the grey-shaded version coming out of a playblast,” says Kalaitzidis. “But what our team has been trying to do, once you have crunched the training data in Charlatan, is quickly show the actual face. In our tests, an animator is animating a face and then right next to them is the same face, but with the photography neural rendered on top.” For Kalaitzidis this is the next step for the technology, allowing CG animators to see, in real time, a photo-real face while they animate. He hopes the technology will be viable within two years.

Plate photography
Final shot VFX rendered in V-Ray and composited in NUKE.

Russian Dolls: two movies in one

For Kalaitzidis it was like creating multiple movies nested one inside another, Russian Doll style, each requiring different techniques. “We worked closely with Shawn Levy and the filmmakers to essentially create Free City twice, once by augmenting live-action footage with digital additions, and again as a fully CG environment right out of a video game,” explains Kalaitzidis.

To populate the digital version of Free City, Digital Domain created digital versions of many of the characters, including Guy. To create the digital protagonist, Reynolds went to Digital Domain’s motion capture stages, where his movements were recorded and added to a library. He also participated in a series of facial scans, all of which were combined to create a game version of Guy that was similar to Reynolds, but slimmed down and stylized.

Plate Photography
Final shot

Thus a digital Reynolds existed at two levels, which Kalaitzidis and the team referred to as the ‘VFX’ Guy and the ‘game’ Guy. While initially the production wanted the game Guy to match the latest in video game CG, it turned out that the quality was too good, and it became confusing which version of Guy was being displayed. Rather than aim for the highest AAA digital human quality, a slightly less real version was defined, thought of as a mix of Grand Theft Auto and Fortnite. The entire process got somewhat ‘meta’ when Reynolds’ character enters the player’s room and sees his character on the display screen high up in the room. Here, in the game, the NPC VFX Guy (actually photographed Ryan Reynolds) looks at the in-game Guy in the same shot. Of course, all of this is notionally being viewed from outside the whole game by Jodie Comer’s Millie character.

With Guy’s legend growing, the two worlds blur more and more, leading to a confrontation at a construction site. As two players sent by the developers – one dressed as a pink bunny, the other as a cop – chase Guy into an unfinished skyscraper, Digital Domain created both the interior and exterior of the building. The team also did the previs and postvis for the construction site. The actors performed in front of green screens and physical set pieces, allowing the filmmakers to create a superhuman chase.

Joe Keery, director Shawn Levy, and Utkarsh Ambudkar on set for the construction chase sequence.

As the action progresses upward, Guy jumps from floor to floor with inhuman ability, dodging weapons fire throughout. To allow Guy to move in unnatural ways, Digital Domain introduced a digidouble of Reynolds, while also adding additional props and environmental damage to fill out the scene. The environment then begins to morph thanks to the machinations of the game’s programmers. Digital Domain ensured the shifting building retained the proper scale against the actors, while artists continually adjusted the lighting and textures. Guy eventually reaches the top of the building, where drone and helicopter footage was used to convey the sense of movement. It also gave a foundation for the live-action world outside, which was altered to reflect the fictional city’s skyline.

Plate shot
Bubble suit

After falling from the building, Guy selects a “bubble suit,” which allows him to bounce safely – if awkwardly – toward the ground. To create the scene, the filmmakers used a stunt performer on wires positioned against a green screen for close shots, and a digidouble version of Reynolds for the wide shots and the more extreme bounces. Reynolds himself was then filmed in a prop bubble to show Guy after reaching the ground. The footage was then combined with recordings of a fast descent captured by a drone.

Game versions of Blake Lively and Ryan Reynolds scoring Brutality Bonus Points.

The production had a family atmosphere on set, due to all the cameos and cast family members who also made appearances, including Blake Lively, Ryan Reynolds’ wife. “There was a version where they wanted Blake on one of the screens to be a character kicking Ryan/Guy in the balls,” comments Kalaitzidis. Audiences can spot this shot as gameplay on one of the monitors. There was another shot where Guy saves a little girl from being run over by a truck. “And that little girl is Ryan’s youngest daughter,” explains Kalaitzidis. “She came into the studio, we took a lot of photographs of her, and we made a gameplay version of her. That was the nice thing about working with Shawn on set, which I noticed immediately; he’s very family-oriented.”

Shawn Levy himself plays a hot nuts seller. Interestingly, the director was not initially happy with how his on-screen character ran to escape a giant blimp crashing down. “So Shawn recorded himself on his iPhone in his backyard running the way that he wanted to run. We then mimicked that in animation,” recalls Kalaitzidis, laughing. “We imported his iPhone footage, looked at it, and said ‘Yeah, let’s do that!’”

Most of the Digital Domain work was done in Vancouver and in their then newly opened Montreal office. “I remember being there having the ribbon cutting with the Mayor while, in the background, we were trying to deliver shots in Montreal, all as we were building the studio and hiring people,” says Kalaitzidis. When they started Free Guy, the Montreal team numbered only about 20 artists. Both the VFX shots and the lower-fidelity in-game shots were rendered in V-Ray, and the project was finished at 2K resolution. Digital Domain built 89 digital environments, with the team delivering 347 VFX shots plus 87 gameplay shots.

Scanline VFX

Scanline completed 404 shots across 15 sequences for Free Guy. The VFX supervisor at Scanline VFX was Bryan Grill. “I have such high respect for Scanline and Bryan,” comments Kalaitzidis. “Once upon a time, Bryan, myself, and Swen Gillberg all used to work on the same shows together. So it was quite special to work together on Free Guy, with Swen as the visual effects supervisor on the client side, Bryan as visual effects supervisor at Scanline, and me at DD. When we were all on set, we just had really great camaraderie together, coming up with ideas and then how to execute them… it was really great.”

Their largest sequences included the first key fight inside BadAss’s mansion, the destruction of Soonami’s server room and the effects as Free City starts glitching away, the street squeeze as Molotov Girl and Guy drive through the streets of the city trying to escape, and the transformation of Hitman’s Tower into the bridge that leads to Eden.

Free City

The primary look and layout of Free City were based on Boston, where principal photography took place. Scanline’s assets team created a library of 140 individual buildings, 86 vehicle models, over 140 individual tree and plant species, and 284 individual props, which were dressed throughout the city environment. All assets were built so the team could simulate any type of destruction required. The city build spanned 1.5 miles of high-resolution downtown, starting from the Free City Bank location, through 12 blocks of city streets, all the way to Hitman’s Beach and the 1,260ft glass building centerpiece that towered over the surrounding skyline. Around this, an extra 3 miles of mid-res and 6.2 miles of low-res environment were built, resulting in a total of 10.6 usable miles.

Glitch Effects

The key principles of the glitching setup involved fracturing objects into small pieces, clustering and grouping the pieces into chunks, creating an RBD simulation, rotating the clusters inwards about their centers, and letting them shrink away. Additionally, Scanline created masks based on the activation to blend in wireframes and other layers. They used layered Alembic files for the first time, which made it possible to save static geometry, transformations, and other attributes completely separately from one another.
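As a rough illustration of the per-chunk transform described above, here is a minimal NumPy sketch of a single frame of the shrink-and-rotate step, outside Houdini, where the real setup would live in VEX/SOPs on the simulated RBD chunks. The function name, the linear falloff, and the single global activation value `t` are assumptions for clarity; in production the activation would be a per-chunk attribute driven by the glitch wavefront, with eased falloff curves.

```python
import numpy as np

def glitch_step(points, cluster_ids, t, spin_rate=2.0):
    """One frame of the 'glitch away' look: each cluster of fractured
    pieces rotates about its own centre while shrinking toward it.
    t in [0, 1] is the activation; at t=1 the cluster has shrunk away.
    points: (N, 3) float array; cluster_ids: (N,) int array."""
    out = points.copy()
    scale = max(0.0, 1.0 - t)      # linear shrink (eased in production)
    angle = spin_rate * t          # spin the piece inwards as it dies
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],  # rotation about the Z axis,
                    [s,  c, 0.0],  # applied in each cluster's local frame
                    [0.0, 0.0, 1.0]])
    for cid in np.unique(cluster_ids):
        mask = cluster_ids == cid
        centre = points[mask].mean(axis=0)       # cluster pivot
        local = points[mask] - centre            # move into local frame
        out[mask] = centre + scale * (local @ rot.T)
    return out
```

At `t=0` the geometry is untouched, and at `t=1` every chunk has collapsed onto its own centre; the activation mask mentioned in the text would additionally drive the wireframe and layer blends in the composite.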