Blur Studio is well known for its dynamic, intricate and engaging cinematics work. But a recent spot to promote Ubisoft’s new Assassin’s Creed: Unity release pushed the boundaries of Blur’s already impressive capabilities. For the project, Blur was called on to produce a 60-second trailer showcasing a 1780s Paris setting and hordes of assassins, and then also to deliver scenes that incorporated community-generated character designs for an interactive version of the trailer. Added to that was a ‘Gigapan’ image delivery that allowed users to pinpoint assassins amongst the crowd of creations. We find out from Blur CG supervisor Darren Butler how this work was completed, step by step.
Watch the interactive trailer at http://acunity-unite.com
“The concept was a flowing river of assassins,” says Butler. “Part of the game is the multi-player ability so what we were trying to do with the trailer was to show this gigantic river, this flowing massive crowd of assassins and all the different kinds of variations you can create with them.” These variations came from contest winners after more than 1,400 online community members submitted their own character designs for the assassins (more on how Blur implemented these designs below).
Working with director Joe Kosinski and armed with some storyboards, Blur launched into an animatics phase for the trailer. This informed the shots and action that would be required – with much of the assassin and other character movements and stunts achieved via motion capture. Stunt performers and mocap artists carried out the capture at Just Cause in Marina del Rey, with Butler noting there was a significant amount of parkour-type moves and heavy interaction with on-screen objects. “On stage they actually built these objects based off the dimensions of our 3D props. We were able to get the 3D assets in 3ds Max, create dimensions based off of those which went into the stage.”
Blur leveraged character and set designs based on previous work by Digic Pictures, which had produced several Assassin’s Creed cinematic pieces. “Their pipeline is very different from ours,” notes Butler, since Digic typically renders in Arnold while Blur uses V-Ray, “but we were able to get a lot of their hi-res assets. We had to go in and up-res where needed and re-work shaders and rigs. We even used some different head scan models to get a specific Blur look.”
Also new to Blur’s pipeline was head scanning at Light Stage. “This means the character modelers have a much more solid base to start with and it makes it easier for them to get a more realistic model right off the bat,” says Butler. “What we would normally be doing if we’re sculpting a character is we’d be sculpting all the pores and wrinkles, but with the Light Stage data most of the detail’s already there. So what we get to do next is spend the time and get the dirty quality. The crying child for example – we can refine all the dirt and streaks and the tears running down his face.”
The distinctive Paris lighting was achieved with the aid of art-directed HDRIs to provide a ‘summertime warm dusty feeling’ to the shots. “We would take two or three HDRIs in Photoshop and mash them together with the parts that we like,” explains Butler.
For one particularly challenging shot following an assassin crashing out of a high window, artists had to deal with both complicated mocap and inside-outside exposure. “For that window shot,” says Butler, “they built a makeshift wall about 10 feet up off the ground on the mocap stage. And then we had a handheld motion camera running along behind him that’s following the stuntman running down the room, flipping out the window and landing on the mat 10 feet down.”
“After that we get our data into 3ds Max. We just used an extremely sunny warm HDRI and a little bit of a keylight – a V-Ray spherical light way off in the distance just to give us a little bit more of a key bump. A lot of it was mainly comp tricks. In the comp we were able to simulate it: while we’re in the dark room we can overexpose the outside, but then as we run and flare out the window, and as we start panning and craning down, we shift the exposure and grab all the details down below.”
Since the trailer was intended to show multiple assassins in the scenes, Blur had originally looked to create multiple resolution versions to render depending on the shot. But because so much of the action showed the characters up-close, just about every digital assassin was a hi-res version. “We’ve got upwards of 80-100 characters,” explains Butler, “all with full sub-surface scattering, full displacement, full Ornatrix hair. We still had to break the crowd out into many render sessions. Some passes were taking upwards of 120 gigs just to calculate and render. But V-Ray was doing a kick-ass job to take all that data and get some great renders.”
Blur does not use any particular AI/crowd software. Instead, an internal point cache planter tool allows artists to animate crowds in separate files and construct cycles from the motion capture and keyframe animation. That proved tougher on this project, however, since the ‘crowds’ required significant interactive behavior, such as a scene in which an assassin slices the throat of a soldier and the crowd reacts. A shot like that required specific mocap data.
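As a rough illustration of the planter idea described above (Blur’s actual tool is internal, so every name and parameter here is an assumption), a crowd-scattering step might assign each planted point a pre-baked animation cycle and a random time offset so the crowd doesn’t move in lockstep:

```python
import random

# Hypothetical sketch of a point-cache "planter": scatter crowd agents and
# give each one a pre-baked mocap/keyframe cycle plus a time offset.
# This is illustrative only, not Blur's internal tool.
def plant_crowd(points, cycles, seed=0):
    rng = random.Random(seed)  # deterministic for repeatable renders
    return [
        {"pos": p, "cycle": rng.choice(cycles), "offset": rng.uniform(0.0, 1.0)}
        for p in points
    ]

# Three planted points, each picking from a small library of cycles
agents = plant_crowd([(0, 0), (1, 0), (2, 0)], ["run_a", "run_b", "idle"])
```

The per-agent offset is what keeps identical cycles from reading as clones; interactive beats like the crowd-reaction shot would still need bespoke mocap, as Butler notes.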
Blur created the four variants of each assassin as one master asset, as Butler explains further: “We would input the ID number of the ‘John Smith’ design – which was say ‘145’ – then in our XSI animation pipeline we would animate the entire character, all four versions of the character, which were all part of the same rig. We animated the hero assassin with all four rigs attached to it, and then when they exported that geometry to import into Max, it would export the proper belts, upper body, hood and pants. But then we needed another customized tool that would take our base shader, which was all based on a bitmap number and assassin name. Our base was 01, and our tool would swap 01 for whatever color it was. So if it was 05 and the red hood, it would swap 01 for 05.”
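The bitmap-swapping step Butler describes could be sketched like this; the filename scheme, function name and two-digit ID padding are illustrative assumptions, not Blur’s actual tool:

```python
import re

def swap_design_id(texture_path, design_id):
    """Replace the base '01' bitmap index at the end of a texture filename
    with a contest winner's design ID (hypothetical naming scheme)."""
    # e.g. 'assassin_hood_01.png' -> 'assassin_hood_05.png'
    return re.sub(r'_01(\.\w+)$', f'_{design_id:02d}\\1', texture_path)

# Remap every texture on the base shader to design 05 (the red hood example)
base_textures = ['assassin_hood_01.png', 'assassin_belt_01.png']
swapped = [swap_design_id(t, 5) for t in base_textures]
```

Keying the swap off a single design ID means one master shader can service every community variation without hand-editing materials per character.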
Since the interactive trailer would be shown online, Blur provided Ubisoft with an uncompressed Quicktime that they were able to scrub through. “Then,” adds Butler, “for each shot we provided another JSON file which was a bounding box – a 2D bounding box JSON file that was assassin 145 with his bounding box on screenspace as x and y. That allowed people to stop the video, click anywhere in that bounding box and then it would bring up this particular content winner – John Smith from so and so…and you could see the turntable of the character.”
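A minimal sketch of what such a per-shot bounding-box file and its click hit-test might look like; the field names and pixel values here are hypothetical, and only the core idea (a design ID paired with a 2D screen-space box) comes from Butler’s description:

```python
import json

# Hypothetical per-shot metadata: each entry maps a contest winner's design ID
# to a 2D screen-space bounding box (x, y, width, height in pixels).
shot_data = {
    "shot": "sh010",
    "assassins": [
        {"id": 145, "name": "John Smith",
         "bbox": {"x": 412, "y": 230, "w": 96, "h": 210}},
    ],
}

def hit_test(data, px, py):
    """Return the design ID under a click point, or None if nothing was hit."""
    for a in data["assassins"]:
        b = a["bbox"]
        if b["x"] <= px <= b["x"] + b["w"] and b["y"] <= py <= b["y"] + b["h"]:
            return a["id"]
    return None

payload = json.dumps(shot_data)  # what a player page might load per shot
```

On the page, a hit like this would then pull up the winner’s turntable, as described above.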
“We also ended up picking three shots where it had interactive depth of field,” says Butler. “We had to deliver anywhere from three passes to 12 passes. For those three particular shots, we rendered with no depth of field and no motion blur but with a very strong z-depth pass, and we used the Frischluft depth of field plugin for some interactive depth of field in Digital Fusion. In Digital Fusion we had to use that z-depth to kick out a foreground, a mid-ground, a background and in some cases a mid-ground A, B and C and background A, B, C. Depending on how many assassins were in the shot, we had to deliver many, many different outputs, which was just a string of uncompressed PNGs. So at 60 fps, you’d be playing the Quicktime and as soon as you stopped and clicked on the bounding box for character A, it would then swap to that particular depth of field pass.”
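The depth-layer splits Butler describes can be sketched as partitioning pixels by their z-depth values against a set of thresholds; this toy version works on flat lists and stands in for, rather than reproduces, the Digital Fusion workflow:

```python
# Illustrative sketch: split a render into depth bands using its z-depth pass,
# in the spirit of the foreground/mid-ground/background splits described above.
def split_by_depth(pixels, zdepth, cuts):
    """Partition pixels into len(cuts)+1 layers by z value.
    pixels and zdepth are flat, same-length lists; cuts are increasing
    z thresholds (near to far)."""
    layers = [[] for _ in range(len(cuts) + 1)]
    for px, z in zip(pixels, zdepth):
        band = sum(z >= c for c in cuts)  # index of the band z falls into
        layers[band].append(px)
    return layers

# Two cuts -> three layers: foreground, mid-ground, background
fg, mid, bg = split_by_depth(['a', 'b', 'c', 'd'], [0.1, 0.5, 0.6, 0.9], [0.4, 0.8])
```

Adding more cuts gives the mid-ground/background A, B, C sub-layers mentioned in the quote; each band would then be written out as its own pass.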
The Gigapan view of Paris was another deliverable for the interactive trailer. Here, the view widens out, allowing users to stop the video, explore the scene and again click on assassins.
“That was a 24,000 x 12,000 image straight out of Max and V-Ray,” says Butler. “That was one of the things we were definitely most nervous about. To get these assassins into the trailer, that was one thing – there’s about 240 in the trailer. But in the Gigapan image we ended up with 1430 exactly.”
It was initially thought that artists would need to render out multiple cameras that could be stitched together to form the Gigapan image, but instead Blur successfully rendered the 24,000 pixel-wide image in V-Ray.
“We had to break out some layers,” says Butler, “but our longest render time was only 32 hours for that image. So we were really impressed with that. To get it to project properly, we rendered a full 360/180 spherical camera out of V-Ray. It gives you the entire 360, but to crop it in we used the blow-up option in Max, which uses the co-ordinates we created on-screen.”
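The mapping from a view direction to a pixel in a 360/180 equirectangular render, which underlies that kind of crop, can be sketched as follows (illustrative math only, not Blur’s exact camera setup):

```python
def latlong_to_pixel(yaw_deg, pitch_deg, width=24000, height=12000):
    """Map a view direction (yaw: -180..180, pitch: -90..90 degrees) to a
    pixel in a 360/180 equirectangular image of the given resolution.
    Illustrative only; the Gigapan dimensions come from the article."""
    u = (yaw_deg + 180.0) / 360.0   # horizontal: full 360 of yaw
    v = (90.0 - pitch_deg) / 180.0  # vertical: 180 of pitch, top = +90
    return int(u * (width - 1)), int(v * (height - 1))

# Center of the panorama: looking straight ahead at the horizon
cx, cy = latlong_to_pixel(0.0, 0.0)
```

Cropping a region of the full sphere then reduces to a rectangle of these pixel co-ordinates, which is in effect what Max’s blow-up option computes from an on-screen region.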
“When we found out about this project,” says Butler, reflecting on the six months of work, “we were told we were doing thousands of characters with thousands of variations with a potential of 10 million variations. We realized just how big and grandiose this would be and we didn’t know if we could do it, but we absolutely pulled it off.”