This week MTV announced its 2012 Video Music Award nominations, including those for best visual effects. We talk to some of the nominees and also look at other recent promos that have caught our eye.
UPDATE: The 2012 VMA for visual effects was won by Skrillex – First of the Year (Equinox) with work by the Deka Brothers and Tony ‘Truand’ Datis. See our interview below.
David Guetta – Turn Me On ft. Nicki Minaj
Earlier this year, fxguide’s own Mike Seymour sat down with COPA co-founder and VFX supervisor Alex Frisch to discuss ‘Turn Me On,’ a music video for David Guetta and Nicki Minaj. Check out the interview here for info on the great CG and compositing work, and how COPA’s virtual studio works. The promo is below.
Katy Perry – Wide Awake
For ‘Wide Awake’, a 3D stereo video promoting Katy Perry’s song, Ingenuity Engine worked with director Tony T. Datis and production company DNA to produce 140 visual effects shots. Read our interview with Grant Miller, Creative Director at Ingenuity Engine, for more info on how they pulled it off.
fxg: The mirror scene is great – how was that filmed? What techniques did you use to crack and destroy the floor?
Grant Miller: The mirror scene was filmed in two passes: one shot of Katy walking down the hall with a greenscreen on the mirror, then a matching reverse of her reflection with the little girl. We extended the rather short practical hallway to approximately three times its length, which allowed more room for the crumbling effect and greater depth in the scene. With the slow camera move, we normally would have been able to use some simple geometry and a projected matte painting to achieve the set extension, but because the video was shot in 3D our approach needed to be more complete. We took photos of the hallway on set and projected them onto geometry for the walls and mirror frames. This gave us the parallax and depth we needed to sell the shot, and was still much faster than building the hallway entirely in 3D.
For the floor we painted out the lighting in the set photography and used the resulting photographs as a basis for the texture. We pre-fractured the geometry with Rayfire in 3dsMax and began experimenting with a variety of simulation techniques for the crumbling. We ultimately settled on a procedural approach that allowed for more stable results and increased directability. We hand animated several pieces crumbling, then used Key Transfer to copy that animation to the rest of the fragments, randomize it, and delay it from front to back based on a spline path. Our animator then froze certain sections to create interesting shapes as the floor collapsed. While it seems like a lot of hand work, this method allowed us to have the floor crumble exactly when, where, and how we wanted.
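The copy-randomize-delay scheme Miller describes can be sketched outside of 3dsMax. The snippet below is a minimal illustration, not Ingenuity's actual Key Transfer setup: fragment names, the two-second front-to-back sweep and the jitter range are all hypothetical.

```python
import random

def crumble_schedule(fragments, path_length, base_keys, max_jitter=0.15):
    """Delay a shared hand-animated 'crumble' per fragment, front to back.

    fragments: list of (name, distance_along_path) pairs, where distance is
    the fragment's projection onto a guiding spline (0..path_length).
    base_keys: keyframe times (seconds) of the hand animation to copy.
    Returns {fragment_name: [delayed, jittered key times]}.
    """
    schedule = {}
    for name, dist in fragments:
        delay = dist / path_length * 2.0            # 2 s front-to-back sweep
        jitter = random.uniform(-max_jitter, max_jitter)
        schedule[name] = [t + delay + jitter for t in base_keys]
    return schedule
```

Freezing sections, as the interview mentions, would then just be a matter of overriding the computed delays for chosen fragments.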
To sweeten the effect, additional rocks and debris were created on the interior faces of the floor fragments that would pop loose as the pieces fell. We also repurposed this particle system to drive a Fume simulation for extra dust. The floor, particles, and Fume were rendered all together with VRay and sent to Nuke with mattes and our standard set of passes for final compositing. We normally enhance our 3D work with additional filmed elements in compositing, but when doing stereo work we find an all 3D approach yields greater depth and realism.
fxg: How were the butterflies accomplished – what tools did you use?
Miller: For the dress scene the butterflies were modeled and textured based on reference photography of Katy’s dress, then rigged and skinned in 3dsMax. The extra whip in the wings as they flap was added procedurally using the Flex modifier. We hand animated the hero butterflies and used a particle system to drive the rest in the background. Hero butterflies were rotomated to match the movement of Katy’s dress before they take off, allowing us to achieve a smooth transition between live action and CG in the composite.
The dressing room butterfly required quite a bit of concept art before diving into modeling. Special attention in look development was given to the butterfly’s wings to achieve the required lightness and translucency. We created a rough model of the dressing room and textured it using the plate and additional photography from set so the wings would realistically reflect and refract the environment.
fxg: Can you talk about the crane up shot revealing the maze – how did you tie the filmed and CG/matte elements together?
Miller: Production had built and dressed the first few turns of the stone maze on set, which served as a great reference and physical anchor for our set extension. We began conceptualizing the labyrinth during the shoot, so by the time we received the footage we were well on our way toward the final look. We closely tracked the camera in stereo, then created rather detailed 3D geometry to match our labyrinth matte painting. The matte painting was saved out in layers and reprojected onto the geometry to allow for parallax and stereoscopic depth. The practical maze walls were then rotomated for both eyes to facilitate the transition to the rendered set. We used Nuke’s 3D system to add additional smoke, particulate, and god rays in the composite.
fxg: Did you complete elements in stereo? What were some of the challenges here?
Miller: All of the visuals we created were shot and posted in stereo. The hallway sequence was especially difficult in stereo, as the set extension was an afterthought and thus not allowed for in the depth budget during the shoot. We needed to reduce the in-camera interocular to a comfortable level, then create a hallway that seamlessly transitioned from and extended the adjusted plate, all before we could even begin crumbling the floor.
Ingenuity has a long history in music videos so the three week post schedule didn’t come as a surprise, but finishing in stereo on an already tight schedule definitely presented its own set of challenges. We typically make great use of our library of filmed elements to achieve and enhance effects in compositing. Working in stereo, we needed to spend more time creating those elements in 3D to achieve proper stereoscopic depth. It felt like someone had taken half of our tools and still expected us to build a house in the same amount of time!
Rihanna – Where Have You Been
Baked FX is nominated for its 120+ visual effects in ‘Where Have You Been’, a feat it achieved in only 10 days. “Honestly the hardest part about making music videos these days is the time they give you to do the work,” says Baked FX creative director and founder George Loucas. “It required immense planning and foresight for us even to have a chance.” The promo was directed by Dave Meyers and the production company was @radical.media.
“The most prominent VFX,” notes Loucas, “are the desert matte painting shots. They were created in Nuke and relied heavily on a ton of individual camera projections. Once we had an approved style frame, we broke it all into layers based on depth and started roughing out models for our foreground and mid-ground objects and features. We then created hero comps for each of the different coverage groups: a hero comp for close-ups, one for mediums, one for wides, and so on.”
“While fighting the clock,” he adds, “we really only had one chance to get through it and could not have discovered later that we had continuity or orientation issues between the setups. We also made sure that all of our tracks were consistently scaled and oriented to make setups go as quickly as possible. The more flashy effects were really two weeks of experimenting in After Effects working closely with director Dave Meyers.”
Linkin Park – Burn It Down
Ghost Town Media were nominated in 2011 for an MTV VMA for their work on Linkin Park’s ‘Waiting For The End’, and once again the studio’s visual effects have received a nod, this time for the same band’s ‘Burn It Down’, directed by Joe Hahn. Check out the fiery video below, and also a behind the scenes clip of Linkin Park members having their bodies scanned at Gentle Giant Studios. Ghost Town’s Matt Primm also breaks down the work.
The ‘Burn It Down’ project ended up being a very different animal compared to last year’s ‘Waiting for the End’ video. The design-driven work for WFTE gave way to a much more practical style of VFX, where set extensions and environmental effects were needed on a large scale to fill out the frame in almost every shot. The practical set was really great, and the DP we work with most often on the LP projects, Damian Acevedo, really nailed the cinematography, but the nature of the set and camera work left us with a large amount of half-done footage from a VFX perspective. The actual set only went to about 8ft in height, which meant that for every wide and medium shot we had to extend all of the vertical elements by ~50% to reach the edge of frame. The moving camera and six band members constantly crossing in front of the vertical set pieces certainly complicated things as well, requiring us to do a huge amount of roto and tracking just to prep the shots for VFX work.
For all of the wide shots, we were tasked with building a huge digital rotunda to complete the top of the performance space, and this element also required a hefty amount of tracking/matchmoving and roto to be placed properly in each shot. Finally, after solving all of the technical issues for these shots, we then had to composite all of the elements into footage that was double-lensed: it was shot on what is essentially a modified Lensbaby, with a plethora of in-camera flares. In total, we had approximately 70 shots that required full-body roto for three or more people, plus 3D camera tracks.
When it came to the fire effects and the digital glitching on the band members’ bodies, we relied heavily on the full body 3D scans that were done at Gentle Giant. All of the blue digital layers moving over the band’s bodies and faces are actual pieces of their 3D bodies that were object tracked and re-rigged to simulate pieces of digital armor that would appear briefly on their body, essentially showing pieces of their digital 3D models moving over their actual bodies. A similar process was used for the entire frozen moment fire scene at the end of the video. In addition to the tracking and roto required to solo out each band member for this sequence, we re-rigged and posed each 3D model to match the actual footage and then used those 3D models as the source points for the fire simulation. The fire was created in Fume in 3dsmax.
Skrillex – First of the Year (Equinox)
The Deka Brothers (Julien and Ben) are behind this Tony Truand-directed promo for Skrillex, featuring some forceful atmospherics and creature augmentation. Check out the video below and then read our detailed Q&A.
fxg: How were the atmospherics, smoke and dust effects and ‘forces’ achieved?
Deka Brothers: As far as the atmospherics go, the crew used practical smoke on set, so it was a matter of enhancing the shots rather than creating a whole smoke environment. Practical smoke is really tricky to keep consistent through an entire shoot, especially on a limited budget. It was really about keeping the best take for the edit and making everything look consistent for post-FX.
Our approach was to bring subtle moving smoke textures to the shots without getting into something too hard to achieve, since we had a limited amount of time. It was a 2D compositing approach, all done in After Effects CS5. We basically used stock footage from Video Copilot’s Action Essentials pack, which we blended on top of the original plates using different fusion modes and transparency. We picked the smoke samples according to the specifics of each shot (perspective, framing). It was a pretty straightforward workflow; actually the trickiest part was matching the crazy one-frame cuts from the director, so that the smoke elements feel like they were always there and were edited with the original image, when in reality they were put in after the fact.
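The core of this 2D approach is just a blend mode plus an opacity scale. As a rough illustration (not the Dekas' actual AE setup), a screen blend, which only brightens and so suits wispy smoke elements over a plate, can be written as:

```python
import numpy as np

def screen_blend(plate, smoke, opacity=0.5):
    """Screen-blend a smoke element over a plate (float RGB arrays in [0, 1]).

    Screen mode computes 1 - (1 - a)(1 - b), so black areas of the smoke
    element leave the plate untouched; opacity scales the element first,
    mimicking layer transparency in a compositing app.
    """
    smoke = smoke * opacity
    return 1.0 - (1.0 - plate) * (1.0 - smoke)
```

Other fusion modes (add, overlay) follow the same per-pixel pattern with different formulas.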
We also added some dust on impacts, when the man hits the wall for example, and once again we used 2D stock footage. In this case it was a 3D comp, though: the shot was locked off and captured in multiple passes. The talent hit the wall, then the FX crew set off some smoke explosions on the other side of the wall. In the end, we discussed it with the director and decided it would look better with a little dolly move. So we decomposed this part of the set into several layers, put them into a 3D comp, and added a lot of dust elements to give it more impact. The moments when the man is in his dream world were also locked-off shots, and we created a dolly-in in post using camera projection techniques.
The forces: after a first pass on the atmospherics, we came back to some specific shots that required “force field power” coming out of the little girl. Once again we used smoke sample elements, but this time we used their luminance information and turned them into displacement maps, which drove the animated distortion of the environment. The smoke samples defined the path of the distortion effects and blurs: everything underneath those adjustment layers was affected by the distortion. Because it had to be specific to each shot, it was just a matter of keyframing the animation to match the practical plate.
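Using a layer's luminance as a displacement map reduces to: compute luma, scale it into pixel offsets, and re-index the frame. A minimal NumPy sketch (horizontal displacement only; Rec.709 luma weights and the strength value are our own assumptions, standing in for AE's displacement-map effect):

```python
import numpy as np

def displace_by_luma(image, smoke, strength=8.0):
    """Shift image pixels horizontally by the smoke layer's luminance.

    image, smoke: H x W x 3 float RGB arrays in [0, 1]. Luma 0.5 means no
    shift; brighter/darker smoke pushes pixels right/left, up to
    strength/2 pixels either way.
    """
    luma = smoke @ np.array([0.2126, 0.7152, 0.0722])      # H x W
    offsets = ((luma - 0.5) * strength).round().astype(int)
    h, w = luma.shape
    cols = np.clip(np.arange(w)[None, :] + offsets, 0, w - 1)
    return image[np.arange(h)[:, None], cols]
```

A real displacement effect would also displace vertically and interpolate sub-pixel offsets, but the principle is the same.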
The force was also a matter of wire stunts – which meant wire removal (and a trampoline removal). We had a lot of work cleaning the plates, especially the chest of the man when he flies away for the first time: there was a harness there, and a wire partially covering his left shoulder. The cleaning was all done the traditional way: 2D compositing of fabric textures picked up from other shots, a lot of patches, a lot of rotoscoping work, and a lot of hand tracking.
For the fingers of the little girl, we used Red Giant’s Trapcode Particular plug-in, a particle world generator that lets you be really specific about defining the emitter and the evolution of the particles through time. We linked the particle emitter to a bunch of 3D lights (10 total, one per finger) and set Particular to use the positions of those lights as emitter points. Then it was really convenient to track the fingers and apply those positions to the 3D lights, so that Particular’s emitter points exactly followed the girl’s fingertips.
With a bit of tweaking of the atmospheric and velocity settings, we defined the type of texture and shape we wanted those particles to create. On top of that, we used some distortion effects integrated via luma mattes to have those “smoky flames” blend better into the whole scene.
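The idea of binding an emitter to a tracked point is tool-agnostic. As a toy sketch in the spirit of driving Particular's emitter from a tracked 3D light (the class, rates, and velocity ranges are all illustrative, not the actual AE setup):

```python
import random

class FingerEmitter:
    """Minimal particle emitter bound to a per-frame tracked position.

    tracks: {frame_number: (x, y)} screen positions from a 2D track of
    one fingertip; the emitter always spawns at the tracked point.
    """
    def __init__(self, tracks, rate=5, life=12):
        self.tracks = tracks
        self.rate = rate              # particles born per frame
        self.life = life              # particle lifetime in frames
        self.particles = []           # tuples: (birth_frame, x, y, vx, vy)

    def step(self, frame):
        # Cull expired particles, then emit new ones at the tracked position.
        self.particles = [p for p in self.particles
                          if frame - p[0] < self.life]
        x, y = self.tracks[frame]
        for _ in range(self.rate):
            vx = random.uniform(-1.0, 1.0)
            vy = random.uniform(-2.0, 0.0)     # drift upward, smoke-like
            self.particles.append((frame, x, y, vx, vy))
```

Ten such emitters, one per finger, would reproduce the one-light-per-finger rig described above.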
fxg: Can you talk about creating or augmenting the creature effects?
Deka Brothers: The creature was shot practically: black paint on the body of the talent, and a bondage latex mask with the eyes filled in with white material for reference. We had to clean up the mask a bit, as it was covered with black tape hiding details the director didn’t want seen. Then we added a little to the character, enhancing the eyes by creating a kind of gloomy luminescence. Here we did a bit of roto, added some light burst effects, keyframed the settings a bit, and that did the trick: even though it was all composited in 2D, it felt like a 3D light was emanating from the character’s eyes. We asked ourselves whether we had to do more effects on the character, but it was working so well that we ended up leaving it just like that. The director himself did the FX of the creature rising behind the girl.
fxg: What approach did you take to grading the video?
Deka Brothers: Tony, the director, cut the video in Final Cut on ProRes, so the first step was to conform everything to get back to the original R3D files (it was shot on a RED ONE MX). He gave us an XML that we imported into Premiere and re-opened directly in After Effects. There, we were able to relink the footage to the original R3D shots. Some speed effects didn’t translate that well, so we spent some time making sure the edit was right in AE (comparing it with a ProRes render of the Final Cut edit). This actually happened before the VFX.
We were now able to work on a 4K picture and, more importantly, to access the raw data. We tried not to make it too green (because that’s usually how we like our images), but that color was there anyway from the location and the DP’s work. The grading was a mix of adjusting the raw settings and using Color Finesse, as well as all kinds of other tools in AE.
AE is not grading software, and it takes a bit of time to work there, but it is much more powerful in the sense that you can do many more things with it, especially when it comes to power windows (masks). Working with raw allowed us to play with what we called “double exposure”. The video was obviously not shot in HDR, but you can adapt the raw settings of one layer for the shadows, say, then duplicate that layer in the bin and “develop” it for the highlights, and combine the two in the comp window. You can do that with any type of footage, obviously, but with the R3D files you can do it at the raw level and obtain a much cleaner result.
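The "double exposure" combine is essentially a luma-matted mix of two differently developed copies of the same frame. A simplified NumPy sketch (the gain values and the plain multiplicative "development" stand in for the actual raw settings, which are far richer):

```python
import numpy as np

def double_develop(raw, shadow_gain=2.0, highlight_gain=0.6):
    """Blend a shadows-development and a highlights-development of one frame.

    raw: H x W x 3 float RGB in [0, 1], standing in for a linearized R3D
    decode. Dark pixels take the brightened 'shadows' copy, bright pixels
    the protected 'highlights' copy, weighted by the frame's own luma.
    """
    shadows = np.clip(raw * shadow_gain, 0.0, 1.0)       # lift the darks
    highlights = np.clip(raw * highlight_gain, 0.0, 1.0) # hold the brights
    luma = raw @ np.array([0.2126, 0.7152, 0.0722])
    matte = luma[..., None]          # luma matte: 0 = shadows, 1 = highlights
    return shadows * (1.0 - matte) + highlights * matte
```

Doing this at the raw level, as the Dekas note, avoids compounding the quantization of an already-developed image.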
The shot that required the most work was the very dark shot of the man walking into the warehouse. The director wanted his silhouette to be black but to keep the highlights on his glasses. Sylvain Sechet (the DP) did a good job and the intention was already there, but we still had to do some roto work and use fusion blending modes to get it right. We wanted a very comic-book look (almost cartoonish), while finding the right balance so it felt natural at the same time – like it could have been shot that way. Looking for the best of both worlds, in a way. After a few tweaks, we found the right balance to make it work.
Featured music videos
Justice – New Lands
Paying homage to films like Tron, Escape from New York and Rollerball, this futuristic sports-infused promo for Justice’s song New Lands includes visual effects from Glassworks Barcelona. The team there worked with director duo CANADA on filling out the stadium, adding holograms and creating anime inspired scenes. See below for our Q&A.
fxg: Can you talk about how the shoot was planned out in terms of filming and what effects would be added later? What was the director’s overall aim in terms of the look and feel of the promo?
Glassworks: CANADA had prepared a document which detailed everything they wanted to achieve, all the way from the grade to the look of the stadium to the matte painting. Weeks before the shoot, we had a few meetings with them to clarify the shoot and post process. Luis Cerveró (Director) worked closely with David Gómez (Lead 2D) and Martín Contel (Lead 3D) to discuss the look and feel of every detail. We went through each shot with the shooting boards, meticulously analysing everything to anticipate potential problems that might occur during both filming and post.
The sheer scale of the film and its short deadline led us to create an interactive post-production plan, with the help of Xavi Tribó (Head of Technology), so that our 13-person crew could keep on top of the 202 shots. This gave us total control of each shot and was very useful for Joan Amat (Head of Production) in managing every step of the project.
With regards to the stadium, we knew that we wouldn’t have enough extras to fill it, so we started to work on some tests of crowd systems and various rotoscoping techniques. We visited the velodrome a few days before the shoot to get some references to help with the CGI.
For the overall look, Luis was fighting until the very last minute to shoot it on 35mm to help achieve his intended ’80s look and feel.
fxg: Where were the game sequences filmed? Did you have a chance on location to acquire survey data, lens details etc?
Glassworks: The game sequences were shot in Barcelona at the Velòdrom d’Horta. We were able to get lens information and details of the distance between camera and action in some of the shots, but not all. Anamorphic lenses were chosen by CANADA, which proved difficult due to the lens aberration problems they presented for tracking and compositing. Many of the shots had to be tracked manually due to the rain and the lack of lens information.
fxg: How did you approach the stadium wide shots and any crowd additions?
Glassworks: We knew that we wouldn’t have enough extras and that there would be lots of empty seats. So we decided to build a 3D crowd system for the wide and near stadium shots, mixing real people with CGI people, which worked pretty well. Then we added some stadium elements, such as flags, that helped integrate the CGI better. The biggest difficulty was placing the crowd into the rain shots while preserving the rain and original flares. The practical rain FX during the shoot were sometimes not enough, and we had to add CG water and water footage to match each shot correctly. There were also shots that didn’t belong in the rain sequence that we needed to remove the rain from.
- Above: watch a behind the scenes featurette on the video clip.
fxg: Can you talk about how you achieved the holograms and the ‘game board’ views of the players? How did you do the head-up helmet displays?
Glassworks: The Game Board views were done by the Partizan team. The helmet interface was designed by Actop and integrated in Flame at Glassworks Barcelona. The POV of the referee was designed by Xavi’s Lab, our Special Projects division at Glassworks Barcelona. With the use of openFrameworks and algorithms programmed by Xavi, along with the use of GPU at real time, we got a special “polygon look” that delivered what Luis wanted.
fxg: Can you take me through the glowing ball and particle effects for the football scenes?
Glassworks: Luis showed us references from the film Solarbabies; he wanted to make the ball as similar as possible to this. During the filming, they used no ball apart from the rugby ball inside the field of play. We had to generate it entirely in CGI, as well as rotoscoping hands, arms and doing ball replacement in some shots. The particles were built in XSI using turbulence physics.
fxg: How were the ‘graphical’, almost Japanese anime, scenes achieved? How were they designed and filmed?
Glassworks: Xavier and Gaspart gave CANADA some references from the Cobra anime, and Luis also showed us some others, like Dragon Ball and Captain Tsubasa. With all this information, David started working on some tests with the help of Guillem Hernández (illustrator and matte painter). It was a meticulous process with lots of different development stages, in order to paint and animate as close as possible to this particular look. The palette was inspired by the manga titles mentioned above.
Coldplay – Princess Of China ft. Rihanna
QuietMan and COPA Network collaborated on the effects for Coldplay and Rihanna’s ‘Princess of China’, in which directors Adria Petty and Alan Bibby sought diverse imagery, from ninja fights to desert landscapes. Much of the piece was filmed against greenscreen and then involved matte paintings, CG work and re-projections.
Raveyards – Remember
Check out the unique CG work in this promo from director Charles De Meyer and his Chuck Eklectric outfit. We’ll hopefully be sharing some info on how the VFX were completed soon.
Pajama Club – TNT for Two
For this interactive Pajama Club video clip (see: www.pajamaclubmusic.com/3d), Custom Logic captured performers, environments and props with an Xbox Kinect, using the point cloud data in a specific way to give the viewer a whole new experience. See the non-interactive video below.
Explaining their work, Custom Logic says, “We chose to use the camera in quite a different way. We wanted to use it the same way that you might use a film camera. We wanted to take it out of the game room and into the field to record footage and then play it back later. To do this, we developed software to record what the Kinect detects with its infrared camera, as well as what it sees with its color camera. We used this setup to capture the band’s performance as well as a narrative storyline.”
“We then encoded the video in a specific way that doesn’t make much sense to the human eye, but that our software can decode back into 3D points in space on the other end. We then stream it over the web to a browser just like a normal video. The data is decoded in the user’s web browser using a new technology called WebGL. Clever code on our website re-interprets the encoded data and displays it three-dimensionally, in a way that the person experiencing it can manipulate on the fly, in real time.”
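One simple way to smuggle depth through an ordinary video stream is to pack each depth value into color channels. The sketch below is purely illustrative of that idea; Custom Logic's actual encoding scheme is not documented here, and the channel layout is our own assumption:

```python
def encode_depth(depth_mm, max_mm=4095):
    """Pack a Kinect-style 12-bit depth value (millimetres) into two
    8-bit video channels, e.g. red = high bits, green = low bits."""
    d = min(max(depth_mm, 0), max_mm)
    return d >> 8, d & 0xFF          # (high byte, low byte)

def decode_depth(high, low):
    """Rebuild the depth value on the decode side (e.g. in a WebGL shader)."""
    return (high << 8) | low
```

In practice, raw bit-packing like this is fragile under lossy video compression (chroma subsampling corrupts the low byte), which is part of why such pipelines need a carefully designed encoding.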
Check out more information on how the video was made at Custom Logic’s blog.