This is part one of a new exclusive three-part series about projects that step away from traditional live action and instead use animation as their primary focus. These projects range from traditional pencil animation to stop frame, CGI and even origami, and we examine how they are approached in the digital age. We start by looking at the Promax award winner, Nickelodeon's "Misfit Monsters". You can also download a comprehensive "making of" QuickTime.

Misfit Monsters

Nickelodeon's "Misfit Monsters" was an interstitial package created for Nickelodeon's 2004 Halloween weekend.
The project was directed by Adam Mortimer, with Scott Broock as Visual Effects Producer. The team had six weeks to complete the project, and they were working on a total of four spots. The piece was built from hundreds of digital stills shot stop-motion style, which were then combined, stabilised, and composited with 3D. It was done "old school", without the benefit of motion control or complex motorised rigs.

movielink(misfits/, Click Here for a 62M quicktime Making of Misfits Monsters)

movielink(misfits/Misfits.m4v, Click Here for 44M video iPod version)

The hand-made sets

fxguide spoke to Emery Wells, the lead compositor, about this award-winning project: "I received a call from Scott shortly after the ceremony ended telling me we won. It was a great feeling, considering all the hard work by the team, including Shahar Levavi (CG Lead), Jasmine Katatikarn (Texturing and Rendering), and of course Katy Berry, who worked on the miniatures with me. Shahar is now working on The Chronicles of Narnia: The Lion, the Witch and the Wardrobe at Rhythm and Hues, and Jasmine is working for New York based FX facility Rhino FX. Adam, Scott, and Katy continue to work at Nickelodeon, and I continue my freelancing."

fxg: What was the post process, from the digital stills to final? In particular, which apps did you use, and why?

The still sequence was brought directly into After Effects as an image sequence. I set the composition for 29.97 NTSC, which was our output format. I only used After Effects and Shake for the post, and the 3D team used Maya exclusively, with textures painted in Photoshop. The main reason I used After Effects was that at the time (October 2004) I was more comfortable with it, and I had only recently started using Shake in production. Since then I generally do most of my work in Shake, as I much prefer the node-based approach to compositing. However, I still find the After Effects timeline more intuitive, and it always manages to find its way into any job I'm working on.

The first step after bringing the sequence in was to resize the material for NTSC and analyze the stop motion. Nickelodeon didn't have any 3D tracking software in house, so after deciding we were happy with the final stop motion camera move, I brought the sequence to an associate's studio to see if it was even possible to get a 3D track. We were hoping we would get something usable, but after some quick tests with boujou and PFTrack, we decided it would make more sense to do a simple 2D track. It wasn't a perfect solution, but there really wasn't very much perspective shift, so we were confident we could get away with it. The sequence was then retimed in After Effects using AE's time remap tool. I used a scratch voice-over track recorded by the director as a guide for hitting certain marks. From there I proceeded to composite all the individual still scenes in Shake and assembled them together in After Effects.
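Conceptually, time remapping of a still sequence like this boils down to keyframes mapping output time to a source frame index, with interpolation in between. A minimal sketch (not Nickelodeon's actual pipeline; the keyframe values here are made up for illustration):

```python
def remap_time(t, keys):
    """Linearly interpolate a source frame index for output time t (seconds).

    keys: list of (output_time, source_frame) pairs, sorted by time.
    This mimics what a time-remap tool does before picking the nearest still.
    """
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, f0), (t1, f1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return f0 + (f1 - f0) * (t - t0) / (t1 - t0)
    return keys[-1][1]

FPS = 29.97                          # NTSC output rate, as in the interview
keys = [(0.0, 0), (1.0, 15), (3.0, 90)]  # hypothetical: slow hold, then speed up

# Which still should appear on output frame 45 (about 1.5 s in)?
frame_45_source = round(remap_time(45 / FPS, keys))
```

In practice the scratch voice-over would drive where those keyframes sit, so specific beats land on specific stills.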

fxg: Can you discuss what was done in Shake vs. AE, and why?

I used Shake for basically all the heavy compositing. All the shots with CG characters in them were comped in Shake. The entire forest was comped together in Shake, and then that was brought into After Effects and tracked into its awaiting box. As you can tell from the look of the raw footage, we originally intended to build a whole miniature forest, but faced with budgetary and time constraints we 86ed that idea in favor of a digitally composited forest. I think it was definitely the right choice, as we wouldn't have been able to make the forest look as good with the resources and time available. Yes, it probably would have made more sense to do the whole thing in Shake, but because I wasn't entirely comfortable with the package, I didn't want to waste production time on figuring things out. I used the strengths of both packages, namely the node-based compositing aspects of Shake and the timeline-friendly aspects of After Effects. I also really enjoy Color Finesse, a color correction plug-in included in the After Effects Production Bundle. I used Color Finesse for the final look treatment, and I matched that look in Shake for the shots completed there.

The Shake tree for one scene

fxg: Can you walk us through what you were doing in this screen grab from Shake?

The picture of the Shake tree (left) is only for the forest. Our destination format was SD NTSC, but the forest was done at 1536x1024 to facilitate the scaling of the stop motion camera. The camera came in quite tight, which meant the forest and everything inside it had to be bigger in order to pan around with the rest of the sequence. Of course, this also meant our little shark had to be rendered out much larger as well, which greatly increased render times.
This spot was only one of the four we were working to complete in the six-week period. We had approximately seven machines rendering all the various elements for the package, but it still wasn't enough, and we had to farm out some of the rendering to a render farm.

But back to the Shake tree... though I've subsequently built some larger trees (and oh, are they fun to show your friends), the forest is still relatively large, mainly due to the large number of input nodes. Each input node was shot against blue screen, so each one had an associated matte that needed to be created. Keying was done using the included keyer, Keylight, and I did some custom keying as well, based on a variant of the color difference algorithm. There was also a lot of digital "relighting", achieved with a combination of color corrections and masks. The goal was to give the feeling of a stormy night, with a little moonlight shining down on the small clearing where the shark lies. The model trees I shot against blue screen were lit fairly flat to facilitate the relighting.

Other than that, it's a fairly straightforward composite: a lot of Over nodes, blurs, and color nodes. My favorite thing about working in Shake is that every node is essentially a "pre-comp" (if you're talking After Effects lingo). The output of every node is available as a whole, to be altered or layered in somewhere else. I kind of despise the whole pre-comp thing in After Effects, so Shake is a nice departure from that model.
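The color difference algorithm Wells mentions has a well-known basic form: for a blue screen, the matte is driven by how much blue exceeds the other channels. A minimal NumPy sketch of that idea (the `gain` and `lift` controls are illustrative, not taken from his actual setups):

```python
import numpy as np

def color_difference_key(rgb, gain=1.0, lift=0.0):
    """Basic bluescreen color-difference matte.

    rgb: float array (..., 3) in 0..1. Returns alpha, 1 = foreground.
    The screen signal is blue minus the max of red and green, which is
    large on the blue screen and near zero on neutral foreground.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    screen = b - np.maximum(r, g)
    return 1.0 - np.clip(screen * gain - lift, 0.0, 1.0)

def despill_blue(rgb):
    """Clamp blue to max(r, g) to suppress blue spill on the foreground."""
    out = np.array(rgb, dtype=float, copy=True)
    out[..., 2] = np.minimum(out[..., 2], np.maximum(out[..., 0], out[..., 1]))
    return out

blue_px = np.array([[[0.1, 0.1, 0.9]]])   # screen pixel
gray_px = np.array([[[0.5, 0.5, 0.5]]])   # foreground pixel
alpha_screen = color_difference_key(blue_px, gain=2.0)
alpha_fg = color_difference_key(gray_px, gain=2.0)
```

A production variant would add edge softening and per-channel weighting, but this is the core of the matte.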

fxg: Could you discuss stabilising and tracking the stop motion? Stabilisation is often a big issue with manual camera-move stop motion.

As I touched on earlier, the tracking was a simple one-point track. I had hoped that we could get a 3D solution, but the solvers had some difficulty with the extreme jumpiness of the sequence. Getting solid tracking points was not the difficult part; we were able to do that just fine. Perhaps, if we'd had more time, we could have gotten the solvers to behave, but with time not on our side, we had to make the choice and move ahead. In the end it was a combination of the After Effects tracker and some manual tracking. I would like to note that if I were doing this again today, I would definitely use Shake. Version 4 has a new tracker and it's considerably improved from the previous version.
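The one-point stabilization he describes amounts to translating each frame so the tracked feature stays where it was on a reference frame. A crude integer-pixel sketch (a real tool would use subpixel interpolation and pad or crop rather than wrap; the arrays here are illustrative):

```python
import numpy as np

def stabilize(frames, track, ref_index=0):
    """One-point 2D stabilization.

    frames: list of (H, W) or (H, W, C) arrays.
    track:  list of (x, y) tracked-feature positions, one per frame.
    Each frame is shifted so its feature lands on the reference position.
    np.roll wraps edge pixels around, which a production comp would crop.
    """
    ref = np.asarray(track[ref_index], dtype=float)
    out = []
    for frame, pt in zip(frames, track):
        dx, dy = np.round(ref - np.asarray(pt, dtype=float)).astype(int)
        out.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return out

# Toy example: the feature drifts one pixel right between frames.
f0 = np.zeros((3, 3)); f0[1, 1] = 1.0
f1 = np.zeros((3, 3)); f1[1, 2] = 1.0
steady = stabilize([f0, f1], track=[(1, 1), (2, 1)])
```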

Note the dolly markings on the ground

fxg: Can you discuss the 'dolly' system, how you marked it up, and how you improved it across your four attempts?

Ah, the dolly system was fun. In the video I depicted what we intended to do, but we were ultimately unable to get it into the building. So instead I simply created a grid on the floor that let me see visually how far, and from where, I was moving the tripod. I snapped a photo, slid the tripod forward, sideways, or up (or a combination of all three), and then repeated the process. The first problem I ran into was that the tripod legs did not slide easily against the carpet, so it was difficult to do small, precise translations. In the later attempts I used small pieces of cardboard under the legs, which gave me more precise movement, but that didn't turn out to be the final solution either. I realized that the entire tripod needed to be on one surface that slid easily across the floor. This was implemented with a larger piece of cardboard and the addition of guide rails, which you can see in the picture. The guide rails (which were taped securely to the floor) finally enabled a more precise, guided escort up to the front of the shelves.

fxg: Can you discuss both the on set lighting and the relighting you did in post ?

First, let me say that those mini KinoFlos are great. They are very slim and they produce a very nice light. The KinoFlos were used for the practical overhead light in the Headless Janitor diorama. We covered them with a piece of thin, painted, cracked plastic to replicate a fluorescent overhead lighting fixture, and I think they turned out great. However, the studio lights we were using would very easily overpower them. I don't recall the wattage at the moment, but we were using two Lowel floods, and compared to the mini Kinos they were extremely bright. After stopping them down a bit by pointing them at some show card and wrapping a little diffusion on them, I was able to get the dark and spooky feel we were aiming for.

The Headless Janitor was lit primarily by the KinoFlos alone. The Grim Sleeper was also lit primarily by the practical bedroom lamp, and we had a Kino shining through the open window, which would later be replaced with a stormy moonlit night. The Shark, however, had no practical lights, so we used an additional spotlight to get it in the same range as the other two boxes. After it was all lit, it was still on the dark side, but because it was stop motion we could get away with it by using longer exposure times; we obviously didn't have to worry about any motion blur. The key to our one-pass approach was just making sure we didn't overpower the miniature practical lights, which were providing the entire mood for the scene.

For the digital look treatment we really wanted to convey the sense of looking through old optical glass. I used a series of animated blurs and aberration filters with various color grades and slight lens distortion. To complete the look, I created a flickering vignette over the top of the whole thing.
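A flickering vignette of the kind described can be sketched as a radial falloff mask whose strength jitters slightly per frame. This is an illustrative stand-in, not the actual plug-in chain used; the `strength` and `flicker` parameters are invented:

```python
import numpy as np

def vignette(h, w, strength=0.6, flicker=0.0, rng=None):
    """Radial vignette mask with optional per-frame flicker.

    Returns an (h, w) float mask, 1.0 at center falling toward the corners;
    multiply it into the frame. flicker randomly jitters the darkening gain,
    so calling this once per frame produces the flickering look.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # Normalized radius: 0 at center, 1 at the corners.
    r = np.hypot((y - cy) / cy, (x - cx) / cx) / np.sqrt(2)
    gain = 1.0 + flicker * (rng.random() - 0.5)
    return np.clip(1.0 - strength * r * gain, 0.0, 1.0)

mask = vignette(101, 101)  # steady version: flicker=0.0
```

Animated blurs and grade curves would be layered on the same way, each with a small per-frame jitter.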

fxg: Why didn't you use an animation package for the stop frame (like iStopMotion), or even direct camera control for the shooting? If you did it again, would you?

Frankly, it was my first time doing stop motion and I learned a lot. If I were to do it again, I would have captured directly to a laptop and perhaps used iStopMotion, as you've suggested. However, I believe most still cameras do not provide a live output of the image sensor, and I believe that live output is the main reason for using an application such as iStopMotion. By viewing the live output of your image sensor with an onion-skinned version of your previous frames, you're able to see where you want to go next. There is certainly some functionality to exploit from this software even if you do not have live output from your camera, and I'd definitely research it more if faced with another stop motion job.
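Onion skinning itself is just an opacity blend of recent frames under the live view, with older frames ghosted more faintly. A minimal sketch of that blend (the opacity values and decay are illustrative, not from any particular application):

```python
import numpy as np

def onion_skin(live, prev_frames, base_opacity=0.4):
    """Blend previous captured frames under the live view.

    live:        float array for the current live image.
    prev_frames: list of earlier frames, oldest first; the most recent
                 ghost is strongest, each older one half as opaque.
    """
    out = np.asarray(live, dtype=float).copy()
    for i, prev in enumerate(reversed(prev_frames)):  # most recent first
        op = base_opacity * (0.5 ** i)
        out = (1 - op) * out + op * np.asarray(prev, dtype=float)
    return out

live = np.ones((2, 2))            # toy "live view"
ghosts = [np.zeros((2, 2))]       # one previous captured frame
preview = onion_skin(live, ghosts)
```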

fxg: What was the exposure info on the stills?

Camera: Canon EOS Digital Rebel (not XT)
Resolution: 3072x2048
File Format: JPEG
Exposure: 0.4 at f/5.6
ISO Speed Rating: 100
Focal Length: 55mm x1.6 multiplier = 88mm

I should mention that the reason I used a longer lens is that I was trying not to get too close and potentially block or disrupt the lighting. As far as f-stop is concerned, I would have liked to use a higher f-stop, but because I was exposing for the practical lights of the dioramas, the increased f-stop (and increased exposure time) would have slowed down the entire process. It might sound silly to talk about seconds adding to your production time, but when you multiply that by thousands of photos, it can turn into a few hours. We were shooting in an active office; we managed to persuade everyone to stay away on Friday evening and into Saturday morning, but beyond that, all bets were off.
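The trade-off is easy to quantify: each full stop halves the light, so matching exposure after closing down doubles the shutter time per stop. A back-of-the-envelope check, using the 0.4 s at f/5.6 from the spec list and a hypothetical photo count:

```python
base_exposure = 0.4      # seconds at f/5.6, from the exposure info above
stops_closed = 2         # closing down f/5.6 -> f/8 -> f/11, for example
new_exposure = base_exposure * 2 ** stops_closed  # each stop doubles the time

frames = 3000            # hypothetical total still count for the package
extra_seconds = frames * (new_exposure - base_exposure)
extra_hours = extra_seconds / 3600  # pure added shutter-open time
```

At two stops down, 3000 stills pick up a full extra hour of exposure time alone, before counting any extra handling between frames.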
