House of Moves: in-house transmedia production

For a long time House of Moves (HOM) in LA has been the go-to place for high end motion capture, but with spots like a new video game-inspired commercial for Levoreact, the studio is now completing projects entirely in-house. HOM has only been doing full in-house production of TVCs for the last two years, but it is a growing and strategic market for the studio. While HOM is not turning its back on an A-list of game and feature clients, it is expanding this side of its business, which makes it an extremely interesting production company: one tackling fully animated TVCs such as this spot with motion capture and very high poly count renders using gaming technology as only HOM can do!

The net result is one of the most adaptable modern ‘transmedia’ pipelines, able to render TVCs, press imagery, game cinematics, animated features or TV series work.

As an example of a recent project, HOM created a fully CG world in a video game-inspired :30 television commercial for Levoreact. The spot drops the viewer into a lush meadow filled with hundreds of unique and sinister flowers. The bucolic setting suddenly becomes a man-versus-nature shootout, and the spot leverages stylized CG visuals to deliver both the look and feel of an intense first-person shooter game.


fxguide spoke to Brian Rausch, vice president of production, and Peter Krygowski, producer/editor.

HOM used its proprietary animation pipeline for the spot, which uses the real-time rendering capabilities of a game engine to produce broadcast-quality imagery. The game engine used on the Levoreact TVC was the Unreal Engine (UE3).

Previs was carried out first, using data from a preliminary mocap shoot along with rough environments, flower animation, lighting and particle effects. The game engine was then used to move a virtual camera through the previs’d environment, allowing director Jim Sonzero to work out camera angles and revise the spot.

This spot was filmed on HOM’s smaller stage:

  • 80 Vicon T160 16-megapixel visible red cameras
  • 70’ x 40’ capture volume
  • able to accommodate a large projection screen for real-time on-set previsualization
  • equipped with stunt / wire work capabilities and a large selection of props and set pieces
  • as with the other stage it is outfitted with a digital video acquisition system to supply immediate video reference feedback of the performances captured on set
  • Stage 2, by comparison, has 200 Vicon cameras

“The really cool thing about our pipeline is that we were able to work in a fully textured environment from the beginning, with dynamic sunlight and swaying trees and everything,” explained Krygowski. “Having so much of the spot fleshed out so early was pretty mind-blowing for the client, and it was great for us because when we transitioned into the production phase it was just a matter of upgrading and finessing those environments to make them high quality, hi-res renders. We were way ahead of the curve.”
One of the HOM motion capture stages.

After previs was completed, the HOM team conducted a full motion capture shoot, modeling elements in ZBrush and working in Autodesk MotionBuilder to animate the final CG character. HOM artists spent over two months creating the final CG environments – designing and animating the flowers to “fire” like guns, determining final object placement within the environment, experimenting with depth, and more. Since Unreal leverages GPU power, frames could be rendered on only six workstations with an average render time of 3-4 minutes, which proved invaluable during production as changes could be made and re-rendered in hi-res within the same day.
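
To put that render speed in context, here is a back-of-envelope throughput check. The six workstations and the 3-4 minute average render time are from HOM; the frame count of a :30 spot at 24 fps is our own assumption, used purely for illustration.

    # Back-of-envelope throughput check. The six workstations and 3-4 minute average
    # render time come from HOM; the ~720-frame length of a :30 spot at 24 fps is an
    # assumption for illustration only.
    frames_in_spot = 30 * 24           # roughly 720 frames for a :30 TVC at 24 fps
    avg_minutes_per_frame = 3.5        # midpoint of the quoted 3-4 minute average
    workstations = 6

    frames_per_machine = frames_in_spot / workstations
    hours_to_rerender_spot = frames_per_machine * avg_minutes_per_frame / 60.0

    print(f"{frames_per_machine:.0f} frames per machine, "
          f"about {hours_to_rerender_spot:.1f} hours to re-render the whole spot")
    # ~120 frames per machine and ~7 hours of rendering: consistent with the claim
    # that hi-res revisions could turn around within the same day.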

“The power and speed of working this way gives us unparalleled flexibility on any type of project and allows us to deliver things that others cannot,” said Krygowski. “Leveraging game engine technology in our pipeline allows us to work with a relatively small team and without a render farm or any expensive hardware, which helps us stay under budget and avoid an infrastructure overhaul. And its reliable speed allows our team to focus on creative output instead of wasting time on endless renders.

“For instance, in the Levoreact spot, the last shot of the character walking away alone had a 12-billion polygon count. Normally we would have needed a matte painter to help with the background and it would have taken days to render, but with this pipeline we could handle it ourselves, and in just 18 minutes per frame of render time. Unreal was really a great choice to incorporate into our commercial pipeline: it handled everything we threw at it, and its performance has completely exceeded our expectations.”

At fxguide we asked: why such a high poly count? House of Moves said:

We worked that way for a number of reasons:

  • The client made several revisions on this shot all the way up to delivery: placement and types of flowers and trees, sunlight, depth fog, the strength of the wind in the trees, the height of the hills, the color of the grass and the amount of cloud in the sky. Working in the game engine space, we could make the changes with a quick turnaround, keep the whole scene live and pass revisions to our clients very quickly. And, just as important, although the scene was bulky, because it was in engine it was relatively easy to iterate as well as to view in Unreal Matinee.
  • We rendered the shot and its subsequent passes (alphas, depth pass, separate elements, etc.) on a single machine. The average render time was about 15-18 minutes a frame.
  • As a side note: the average render time per frame for the other shots was around 3 minutes. Also, the engine side of the pipeline was created and rendered on two machines, using Nvidia GTX 580 cards.

Interestingly, at one point HOM looked at doing a matte painting for the background of this shot, but it worked out to be cheaper to do it in 3D with the HOM pipeline than to pay for a matte painting, and with the flexibility to change anything if the client required it.

The game engine pipeline is so fast and flexible that, while normally the flowers in the background would be poly-decimated, the team did not use any geo-branching: every flower was full resolution, and everything could be kept in the one asset pipeline.

By game standards the engine ran slowly: roughly one frame every 15 minutes is vastly slower than real-time game rendering, but by TVC standards 15-18 minutes a frame is completely acceptable, fast even. “We knew that matte painting a background like that would be roughly $3500, so we rendered out a few frames, did some calculations and worked out that the render and modeling time came to well under $3500, and we could have everything moving in the background, so that every pixel of every frame is alive. We just went ahead and made that choice. It improved the quality and saved us money.”
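
A rough version of that calculation is sketched below. Only the ~$3500 matte painting figure and the 15-18 minutes-per-frame render time come from HOM; the shot length, machine-hour cost and artist rate are placeholder assumptions to show the shape of the comparison.

    # Rough "render in 3D vs. buy a matte painting" comparison. Only the ~$3500
    # matte painting figure and the 15-18 min/frame render time are from HOM; the
    # shot length and the hourly rates below are placeholder assumptions.
    matte_painting_cost = 3500.0      # quoted ballpark for an equivalent matte painting
    shot_frames = 5 * 24              # assume a ~5 second closing shot at 24 fps
    minutes_per_frame = 18.0          # worst-case render time for this heavy shot
    machine_hour_rate = 5.0           # assumed cost of one GPU workstation hour
    extra_modeling_hours = 16.0       # assumed additional environment modeling time
    artist_hour_rate = 75.0           # assumed artist rate

    render_cost = shot_frames * minutes_per_frame / 60.0 * machine_hour_rate
    modeling_cost = extra_modeling_hours * artist_hour_rate
    total_3d_cost = render_cost + modeling_cost

    print(f"3D route: ${total_3d_cost:,.0f}  vs  matte painting: ${matte_painting_cost:,.0f}")
    # With numbers anywhere in this range the fully 3D background comes in well under
    # the painting, and every pixel of the background stays live for client changes.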

The team rendered all the final plates out of the UE3 game engine, including alpha, depth matte, various fog and isolated elements. While the spot was designed to look “gamey plus”, the engines, especially the newer UE4, have improved shaders. HOM selected Unreal as the first game engine to work with because “Unreal owns the vast majority of the licensed engine market and most other engines are internal engines…it came down to a business decision of what engine is used the most, and the answer is Unreal, purely business, and it held up wonderfully well in this project. We have been really happy with Epic (makers of UE3).”
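
As the pipeline list below notes, the final composite was done in Nuke. A minimal sketch of how engine-rendered passes might be recombined there is shown here; this is not HOM’s actual comp script, and the file paths, frame range and node choices are illustrative only.

    # Minimal Nuke Python sketch of recombining engine-rendered passes.
    # Not HOM's comp script: paths, frame range and node choices are illustrative.
    import nuke

    beauty = nuke.nodes.Read(file="renders/beauty.%04d.exr", first=1, last=120)
    fog    = nuke.nodes.Read(file="renders/fog.%04d.exr",    first=1, last=120)
    hero   = nuke.nodes.Read(file="renders/hero.%04d.exr",   first=1, last=120)

    # Layer the isolated hero element over the beauty pass using its alpha,
    # then lay the separately rendered fog pass over the result.
    hero_over = nuke.nodes.Merge2(operation="over", inputs=[beauty, hero])
    fog_over  = nuke.nodes.Merge2(operation="over", inputs=[hero_over, fog])
    graded    = nuke.nodes.Grade(inputs=[fog_over])      # final colour correct

    out = nuke.nodes.Write(file="comp/levoreact_comp.%04d.exr", inputs=[graded])
    nuke.execute(out, 1, 120)                             # render frames 1-120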

Another part of the decision was that there is a large amount of documentation and a very strong community around Unreal. But using a game engine is not completely straightforward. For example, game engines run as fast as they can, so there is no natural guarantee that different passes will render the same number of frames. HOM produced code in-house to clock the game engine so that it always rendered each layer at exactly the standard frame rate required.
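
HOM’s clocking code ran inside the engine and has not been published, but the general idea can be sketched in a few lines of Python: decouple simulation time from wall-clock time so that every rendered frame advances the scene by exactly one fixed step, and every pass therefore produces exactly the same frames. The Scene class below is a stand-in, not engine code.

    # General idea behind "clocking" a game engine for offline rendering. HOM's
    # in-house code ran inside UE3 and is not public; Scene here is a stand-in.
    class Scene:
        """Toy stand-in for the engine scene state."""
        def reset(self):
            self.time = 0.0                       # identical start state for every pass
        def tick(self, delta):
            self.time += delta                    # advance wind, particles, animation
        def write_frame(self, pass_name, frame):
            print(f"{pass_name} frame {frame:04d} at t={self.time:.4f}s")

    FPS = 24                                      # delivery frame rate for the TVC
    FIXED_DELTA = 1.0 / FPS                       # simulation time per rendered frame

    def render_pass(scene, pass_name, num_frames):
        """Step the scene deterministically, writing one image per fixed time step."""
        scene.reset()
        for frame in range(num_frames):
            scene.tick(FIXED_DELTA)               # never tied to wall-clock time
            scene.write_frame(pass_name, frame)

    scene = Scene()
    for layer in ("beauty", "alpha", "depth", "fog"):
        render_pass(scene, layer, num_frames=5)   # every layer lines up frame for frame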

fxguide asked the obvious question: could a HOM full engine pipeline provide high end photoreal rendering solutions?

HOM is less concerned with competing in the full high-end photoreal animation market; instead, they feel they have built “the vehicle for transmedia storytelling.” “We have been working on a children’s animated feature, for example, and as that has been built in the game space we can legitimately use exactly the same assets in its game, and then again in the iPad/mobile market and even through to the TV space.”

Working in the “dark space”

Which raises the question: why not use a standard renderer like V-Ray?

“Time is one reason, and, to stay in the game space a little, one of the reasons we set out to build this pipeline was with directors in mind. It does not matter if it is games, film, television, whatever: it was the first time we could absolutely close the gap between on-set work and the final image. People are always on set trying to imagine what it will look like. We wanted people to have to imagine less and create more,” said Rausch.

“When we work in MotionBuilder right now – which is what I call working in the darkness – we take all the assets out of Maya or Max, dump them into MotionBuilder, rebuild all the constraints, rebuild all the lighting and shaders, pump motion data into it and then dump it back into Maya, Max and XSI. That’s why I call it working in the darkness: all that work in MotionBuilder is really throwaway work. You are really not showing anything in MotionBuilder, so all that lighting, constraint and shader work is not against the final deliverable image. What we have worked on in closing that gap is that any work you do – any environment, any shader, any audio cues – to get these things into the engine space is all working directly towards the final image. Plus the director can come in, walk the virtual camera around in his engine space, and he is looking at final images.”

“The director can work with his actors in that final space and see how it is going to really look,” adds Krygowski. “Lit, texture, camera, actor – all in.”

HOM has a close relationship with the Unreal Engine makers but, still, “when we told the guys at Unreal they were pretty blown away,” joked Krygowski.


Pipeline

The pipeline was as follows:

  1.  The initial models were sculpted in ZBrush, then exported to Maya for rigging
  2.  They roughed out the whole piece in the game engine (the environment, the cameras and temp animation)
  3.  HOM exported the environment and cameras to MotionBuilder
  4.  HOM did motion capture on one of its sound stages to get the elevations, paths, jumps, etc.
  5.  The character was motion captured (the hero flowers were keyframed)
  6.  Motion capture was initially cleaned up with House of Moves’ proprietary software
  7.  Retargeting and the bulk of the character animation were done in Autodesk MotionBuilder
  8.  The hero flower animation was done in Maya
  9.  The non-hero flowers, trees and grass were all procedurally generated in the engine
  10.  Animation, along with some tweaks to the camera, was exported out of MotionBuilder or Maya into the Unreal Engine (a sketch of how this export step might be scripted follows the list)
  11.  Graphics were done in After Effects.
  12.  The particle animation, some tracking, the final composite and the final color correction were done in Nuke.
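
As a concrete illustration of step 10, the Maya side of the export might be scripted along the lines below. HOM’s actual tooling is not described here, so the node name, frame range and paths are placeholders; only the general Maya-to-FBX-to-Unreal flow is taken from the pipeline above.

    # Sketch of how the Maya side of step 10 might be scripted (Maya Python).
    # Asset names, frame range and paths are placeholders, not HOM's actual tooling.
    import maya.cmds as cmds
    import maya.mel as mel

    cmds.loadPlugin("fbxmaya", quiet=True)        # FBX exporter plug-in ships with Maya

    def export_animation_for_unreal(root_node, start, end, fbx_path):
        """Bake and export one animated hierarchy as FBX for import into Unreal."""
        cmds.select(root_node, hierarchy=True)
        # Bake constraints and simulation onto plain keys so the engine only
        # ever sees simple animation curves.
        cmds.bakeResults(cmds.ls(selection=True), time=(start, end), simulation=True)
        mel.eval('FBXExportBakeComplexAnimation -v true')
        mel.eval('FBXExport -f "{0}" -s'.format(fbx_path))   # -s = export selection

    export_animation_for_unreal("heroFlower_rig", 1, 120, "export/heroFlower_anim.fbx")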