The fractal nature of Guardians of the Galaxy Vol. 2

Set to the all-new sonic backdrop of Awesome Mixtape #2, Marvel Studios’ Guardians of the Galaxy Vol. 2 continues the team’s adventures as they traverse the outer reaches of the cosmos. The Guardians must fight to keep their newfound family together as they unravel the mystery of Peter Quill’s true parentage.

Guardians of the Galaxy Vol. 2 is written and directed by James Gunn and stars Chris Pratt, Zoe Saldana and Dave Bautista, featuring Vin Diesel as Baby Groot and Bradley Cooper as Rocket, with Michael Rooker, Sean Gunn and Karen Gillan.

In the first part of our coverage we looked at EGO. Here now is a look at the fractal nature of his planet, followed by the character animation of the film. Finally, a golden glimpse ‘forward’ thanks to Luma Pictures.


The fractal nature of Ego

Animal Logic, Method Studios and Weta Digital all played key roles in creating Ego’s planet. While the companies did other work on the film, with regards to the planet, Animal Logic built the majority of Ego’s actual interior home/cathedral, Method handled the arrival, and Weta handled the third act fight inside the planet, with the exception of the delightful NOT THAT BUTTON Baby Groot sequence, which was animated by Method Studios.

Fractal world: Animal Logic

The interior of Ego’s home was vastly complex and based on fractal mathematics. While several other films have gone down a fractal path to take advantage of the beautiful complexity and detail it can provide, working with fractals is extremely difficult. For example, fxguide covered the use of fractals in Suicide Squad by Sony Pictures Imageworks (SPI). In that film, simulation, higher-level fractals and vortical flows powered the visuals of the Enchantress’ machine in the final act, producing forms of remarkable complexity which appear to have logic, or at least plausible structure. Unlike Suicide Squad, Animal Logic needed static structures, and ones that could be art directed for the framing and action required. Unfortunately, as beautiful as fractals are, they just don’t respond well to art direction.

As fractals are generated from key mathematical formulas, slight variations in the underlying maths massively alter the forms. Thus, while fractal maths is used very effectively to produce out-of-this-world visuals in sequences such as the Multi-verse in Doctor Strange, it is rarely if ever used as the basis of architecture.
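That sensitivity is easy to demonstrate with a few lines of plain Python (an illustration, not anyone’s production code): nudging the exponent of the classic z → z² + c iteration by just five percent flips many grid cells between ‘bounded’ and ‘escaped’, and with them the whole silhouette of the form.

```python
def bounded_points(power, grid_n=40, max_iter=30, bailout=2.0):
    """Return the set of grid cells whose orbit of z -> z**power + c
    stays bounded after max_iter steps."""
    inside = set()
    for i in range(grid_n):
        for j in range(grid_n):
            c = complex(-2.0 + 3.0 * i / grid_n, -1.5 + 3.0 * j / grid_n)
            z = 0j
            for _ in range(max_iter):
                z = z ** power + c
                if abs(z) > bailout:
                    break
            else:
                inside.add((i, j))
    return inside

a = bounded_points(2.0)   # the classic Mandelbrot iteration
b = bounded_points(2.1)   # the same maths, exponent nudged by 5%
changed = len(a ^ b)      # grid cells that flipped in or out of the set
print(len(a), len(b), changed)
```

Every cell counted in `changed` is a piece of the structure that appeared or vanished from a change an art director would consider trivial.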

Animal Logic came up with a brilliant solution to the problem of managing the largely static fractals while still allowing their team to art direct shots. The team built fractal forms in Houdini and then cleverly incorporated them into traditional geometry. In the video below there is first a visualisation of fractals based on a wide range of maths that has been posted and experimented with by maths-animation specialists all over the internet.

Animal did over 150 shots, with 147 in the final cut. They were originally brought on to solve the story vignettes that were used to explain the backstory. While these started as a version of an oil painting, they evolved through falling sand to the final plastic sculptures. As part of this exploration the team looked at fractals.

Paul Butterworth, Animal Logic’s visual effects supervisor, says the starting point for the palace was a concept design based on the fractal work of Hal Tenny. His work is visually impressive, and the production even spoke to Tenny as part of the development of Ego’s Palace design exploration. The artist gave Animal Logic some of the fractal data he’d used, but given the team’s detailed requirements it was not directly translatable into something they could use. The concept art team had used Tenny’s work as a source for exploration, digitally cutting up various parts of his fine art work to build composite images with the very impressive feel that they wanted.

“It was very elaborate… essentially from there we went looking at fractals, trying to work out how we could create various fractal forms,” explains Butterworth. The problem was that while Animal Logic had experience with some similar mathematical forms when they created a visualisation of the internet for Avengers: Age of Ultron, most fractals are just not in a form that can be manipulated to construct complex architecture. Matt Ebb, FX lead, explains, “The thing with fractals is that it’s chaos – and that’s the point – you have a few little numbers that control how it works… you can deploy them and get something you could never have thought of yourself, but you can’t easily control them – they do what they are going to do.”

An example of the work of artist Hal Tenny: Le Grand Helm

The problem was solved not unlike how the concept art itself was created. Animal Logic built a system that allowed artists to find fractal forms from dozens of maths visualisations that Ebb produced. These forms could then be copied into more traditional geometry, and instanced to produce vast arrays of tamed chaos, with the maths used to isolate bounding boxes of fractal interest.

To use these in the more repeated, structured nature of Ego’s cathedral-like Palace, the team allowed their artists to go ‘shopping’ for fractal bounding boxes of interest. In a sense, the Animal team could carve out blocks of interesting fractal forms. These blocks are then mathematically frozen in terms of their co-ordinate space, but able to be positioned and placed inside more traditional geometry. Below is a screen recording of hunting out and finding a cube of interesting form that could then be placed in a building column.
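A toy version of that ‘shopping’ workflow might look like the following Python sketch. The field and function names here are our own inventions, not Animal Logic’s tools: sample an implicit fractal once, keep the points that fall inside a chosen box, store them in the box’s local co-ordinates, and then instance the frozen block anywhere with ordinary transforms.

```python
import math, random

def fractal_density(x, y, z):
    """Stand-in fractal field: a few octaves of a cheap trigonometric
    function. Any implicit fractal evaluator could go here."""
    v = 0.0
    for octave in range(1, 5):
        v += math.sin(x * octave * 3.1) * math.cos(y * octave * 2.7) \
             * math.sin(z * octave * 1.9) / octave
    return v

def carve_box(lo, hi, samples=5000, threshold=0.5, seed=1):
    """Sample the field inside box [lo, hi]; keep the 'solid' points,
    stored relative to the box corner so the block is frozen."""
    rng = random.Random(seed)
    block = []
    for _ in range(samples):
        p = tuple(rng.uniform(lo[k], hi[k]) for k in range(3))
        if fractal_density(*p) > threshold:
            block.append(tuple(p[k] - lo[k] for k in range(3)))  # local coords
    return block

def instance(block, offset):
    """Place the frozen block at a new position in the scene."""
    return [tuple(p[k] + offset[k] for k in range(3)) for p in block]

block = carve_box((0, 0, 0), (1, 1, 1))
column_a = instance(block, (10, 0, 0))   # the same carved form,
column_b = instance(block, (20, 0, 0))   # repeated in two columns
print(len(block), len(column_a) == len(column_b) == len(block))
```

Because the block’s contents are frozen in local space, every instance shows identical fractal detail, which is what makes the chaos repeatable enough for architecture.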

The problem is that the element is still not a traditional polygon piece of geometry that can be textured, lit and rendered normally.

Mandelbulb 3D for the Mac.

There are programs such as Mandelbulb 3D, which is a free software application created for 3D fractal imaging, developed by a group of Fractal Forums contributors. However, these specialist programs were never written to export geometry in a VFX sense.

Even the imagery that is generated is not really high enough resolution to use in a camera projection solution, certainly not to the level Animal Logic wanted to deliver. The program is able to produce stunning images, but these are clips not geometry. One can ‘see’ great animations but to produce the shots the film required, the team needed complex geometry that they could light, control and manipulate.

Matt Ebb was tasked by Paul Butterworth to “play for a while and see what he could come up with.” Ebb built a fractal visualizer inside Houdini, “which let us build things out of fractal formulas we’d write in VEX.”

For much of this time, Ebb studied random postings on various fractal forums and searched out code anywhere he could find it. “One of my proudest moments at this point was when I got fractal teapots working in Houdini with Vex!” he laughingly recalls.

Animal Logic now had a set of images formed up as effectively point clouds in Houdini, but this was a long way from normal geometry. At this point they were ray marching, or sphere tracing, through the fractal formula: rays were shot at the implicit fractal surface, as defined by the base fractal maths.

The output that was produced was a giant point cloud. “So from the camera frustum we’d generate a grid of pixels of whatever density we wanted, and then we would project that into the scene, testing the fractal formula every step of the way, and when that hits the surface you leave that point at that place in 3D,” outlined Ebb. The net effect of this approach is something akin to a deep image matte, projected from the camera and ending up in 3D space. Whenever one moves the camera it re-projects. “This was a really important step,” explains Richard Sutherland, CG supervisor. “We could now easily look for forms we wanted.”
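The loop Ebb describes can be sketched in a few lines of Python, with a unit-sphere distance field standing in for the fractal formula, since the marching logic is identical (this is our reconstruction, not the Houdini/VEX tool): shoot a grid of rays from the camera, step each ray through the implicit field, and deposit a 3D point where it first hits the surface.

```python
import math

def sdf(p):
    """Signed distance to a unit sphere at the origin (a stand-in for
    the fractal distance estimator)."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def trace_point_cloud(res=32, cam_z=-3.0, max_steps=64, eps=1e-3):
    """Project a res x res pixel grid from the camera into the scene,
    sphere-tracing each ray; return the hit points as a 3D point cloud."""
    cloud = []
    for iy in range(res):
        for ix in range(res):
            # Ray direction through this pixel of the camera frustum.
            dx = (ix + 0.5) / res - 0.5
            dy = (iy + 0.5) / res - 0.5
            norm = math.sqrt(dx * dx + dy * dy + 1.0)
            d = (dx / norm, dy / norm, 1.0 / norm)
            t = 0.0
            for _ in range(max_steps):
                p = (d[0] * t, d[1] * t, cam_z + d[2] * t)
                dist = sdf(p)
                if dist < eps:          # hit: leave a point here in 3D
                    cloud.append(p)
                    break
                t += dist               # sphere-trace step
    return cloud

cloud = trace_point_cloud()
print(len(cloud))  # one point per ray that reached the surface
```

Move the camera and re-run the loop, and a fresh cloud is projected from the new position, which is exactly why the result behaves like a deep image matte rather than fixed geometry.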

The camera-matching fractal tool, raw in Houdini

To design surfaces that are part of a complex set and that were “part of a place that seemed as if a human-like intelligence had manufactured it, with floors and walls, etc,… then we needed tools, and there were not many ways to get from a pure fractal form to something that you could art direct… but now, (with this approach of bounding boxes) we could set someone loose to explore a fractal landscape to find bits and pieces that we could fit inside a framework that we model as architecture,” Sutherland explained.

The second part of their ingenious solution was to find a way to represent these forms in Animal Logic’s render pipeline.

“Our initial thought was we would get from this point cloud (previewed in the viewport) to normal polygonal geometry,” Ebb adds. Various approaches for ‘skinning’ the point cloud were tossed around, such as implicit surfaces, level set meshing, or volume sampling. “We tried all these things, but at some point they were not looking complex enough, and we had problems fitting them into memory,” adds Sutherland. The memory problem was made that much worse as these fractal forms were not just surfaces but solid webs, and thus created a lot of internal geometry, “which you never even ended up seeing,” pointed out Ebb.

At some point Sutherland noticed the point cloud renderings that Ebb was doing in the viewport and it occurred to the team: “maybe we just make a dense enough point cloud and it will look as if the internal shapes are solid.” The team adjusted the surface normals and added a tweak to their in-house renderer, and suddenly the point clouds appeared as if made from solid geometry. “That was the really clever thing,” comments Butterworth.
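One half of that trick can be sketched as follows (our guess at the essence, not Animal Logic’s renderer code): give each cloud point a normal taken from the gradient of the implicit field, and it shades like the continuous surface it samples.

```python
import math

def sdf(p):
    """Distance field of a unit sphere (stand-in for the fractal field)."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def field_normal(p, h=1e-4):
    """Unit normal from central differences of the distance field."""
    g = []
    for axis in range(3):
        lo, hi = list(p), list(p)
        lo[axis] -= h
        hi[axis] += h
        g.append((sdf(hi) - sdf(lo)) / (2 * h))
    length = math.sqrt(sum(c * c for c in g))
    return tuple(c / length for c in g)

# A cloud point sitting on the sphere, and a direction towards the light:
point = (0.6, 0.0, -0.8)
light = (0.0, 0.0, -1.0)
n = field_normal(point)
lambert = max(0.0, sum(n[i] * light[i] for i in range(3)))
print(round(lambert, 4))   # matches the analytic sphere shading: 0.8
```

With a normal per point, a cloud dense enough that each pixel is covered receives the same Lambert shading as solid geometry would, which is why the trick holds up.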

The final renders had the camera re-calculating the point cloud from its position per frame, in almost the same way as the earlier fast viewport renders. In effect, the point cloud geometry is rebuilt every frame as the object spins on the turntable (below), but placed inside traditional geometry. The inlays are fractal point clouds and the columns are traditional geo.

The team assumed the shadows would be wrong and give away the trick, but in reality it is very hard to tell any of the shadows are perhaps not mathematically correct.

This was not without its challenges. For example, a point cloud with a massive number of points acts like a solid with a very high 3D roughness setting, and thus all of the fractals looked ‘dusty’, with very dispersed spec highlight lobes. Using their in-house renderer Glimpse, the team worked out ways to compensate.

Luckily, Animal Logic, thanks to its Lego movies, has gotten very good at vast instancing of geometry. Their Glimpse renderer in the final shots is dealing with the Palace as pieces of instanced geometry with point cloud ‘containers’ inside them, which in turn have special render views onto the fractal point clouds.

Weta world

Weta Digital, meanwhile, was facing some of the same issues. They were doing similar tests with Mandelbulb 3D, as they needed to take over from Animal at a certain point in the story and continue with the third act beneath the planet’s surface.

“It was quite elegant what they did,” laughed Butterworth, commenting on Weta’s solution to the same problem his Animal Logic team had faced. “They approached it from a completely different perspective.”

Guy Williams was the Visual Effects Supervisor at Weta Digital in New Zealand on the film. While Weta mainly worked on the extensive interior end battle, their work started from the moment Rocket’s ship crashes into Ego’s palace. From that point on, the Ego and Palace shots were handled by Weta. (Check out our previous story for the Ego reformation scene that takes place at this point in the story.) This meant Weta needed to seamlessly pick up the Palace and continue on. As is common with such projects, the teams shared geometry and assets.

When it came to working with the Animal Logic models, Weta got somewhat of a surprise. Williams recalls, “Fractals aren’t very friendly for art direction; they produce really cool looking things but… they are a very unique thing. So it was very interesting how Animal Logic solved the problem. They did some really, really elegant stuff with their shots… but when we first received the models, we expected to get a traditional model: rigid surfaces or sub-division surfaces – maybe polygons… what we got was a point cloud.” At one point Weta even thought that they might have been sent LIDAR data.

It took Weta a bit of time to work out what Animal Logic had done. Because the model is a specific data set dependent on the camera, and Animal Logic was not using the same camera positions as Weta, they were seeing point clouds that were too sparse. “At first we thought it was a stylistic choice – perhaps it was meant to look like it was made out of sequins,” Williams recalls. Once Weta spoke to Animal Logic they understood how Animal had solved the problem, “and we could match what they had done, but we did have to do some work to break apart areas as the ship smashes into the palace.” Luckily, Weta itself uses a special program that does something similar with objects as point clouds, so conceptually they understood the approach.

Weta came up with their own brilliant solution to dealing with fractals. Weta used the Sierpinski gasket fractal as their starting point. “It is a tower-like structure… the maths behind it is that it is a sphere with a bunch of spheres inside it, cutting themselves out… recursively, and as you push in you start finding these beautiful forms that are between the spheres,” Williams explains. These forms were geometric but highly detailed. Weta felt it was very close to what the production wanted but, once again, art directing the fractals was nearly impossible. “You can’t just get, say, a long thin structure out, as the maths doesn’t do that, but we knew at some point we’d have to choreograph it to control it,” Williams adds. “So we needed an approach that defined forms and at that point mapped the fractals back onto them, so in essence the fractal is a multi-phase function… we’d skip the first phase and start with hand-built geometry, so every fractal phase after that would start with the right base shape and then do its thing – the way we wanted it to.” This generated something that Williams liked, but it was still very complex. A good way to think of the Weta approach is that they were bending fractals to happen along or inside a bounding shape, and importantly, a bounding shape that Weta could define.

Sierpinski gasket

With just three evolutions, or iterations, of the fractal logic that cuts spheres out of the shape, Weta got something beautifully complex.
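The flavour of that recursion, seeded by hand-built geometry as Williams describes, can be shown with a Sierpinski-style subdivision in Python (a simplified stand-in for Weta’s sphere-cutting setup, not their actual code): each pass replaces a tetrahedron with four half-scale corner copies, cutting away the centre, so any art-directed base shape grows fractal detail in just a few passes.

```python
def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(tetra):
    """One fractal phase: keep four half-scale corner tetrahedra,
    cutting away the central (octahedral) region."""
    a, b, c, d = tetra
    return [
        (a, midpoint(a, b), midpoint(a, c), midpoint(a, d)),
        (midpoint(a, b), b, midpoint(b, c), midpoint(b, d)),
        (midpoint(a, c), midpoint(b, c), c, midpoint(c, d)),
        (midpoint(a, d), midpoint(b, d), midpoint(c, d), d),
    ]

# 'Hand-built' base geometry: one long, thin tetrahedron -- the kind of
# shape the raw fractal maths would never hand you on its own.
base = [((0, 0, 0), (8, 0, 0), (8, 1, 0), (8, 0, 1))]

phase = base
for _ in range(3):                     # three evolutions, as in the text
    phase = [t for tetra in phase for t in subdivide(tetra)]

print(len(phase))  # 4**3 = 64 pieces carved from the art-directed base
```

Because the recursion starts from the chosen base shape rather than the canonical gasket, every later phase inherits the art-directed silhouette while still producing fractal detail.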

Weta’s Genius Moment

Weta also wanted to use the Mandelbulb 3D software to lift out some true Sierpinski gaskets. But just as Animal Logic had found, the output from the program was not geometry they could use. There is nothing in the program approximating something like an FBX export. While the images looked great on screen, there was no path from the pretty images the program produced to the geometry needed for Weta’s modelling and rendering pipeline.

Days went by with the team trying to solve how to mesh the geometry and still maintain the fine detail. “Even with our best machines, after it had been meshed it came out about two orders of magnitude too low in terms of detail and resolution,” Williams recalls. The edges were curved but not crisp or razor sharp as they appeared in the Mandelbulb 3D renderings… the exported and meshed geo looked more like a melted wax version.

One day Williams was stunned to see a perfect test from one of his team, Pascal Rambuit. The artist had done something by himself and suddenly everything was perfect. The models had jumped dramatically in quality and detail and were now perfectly in the Weta pipeline.

“You forget sometimes the people you work with are brilliant in their own right,” says Williams. “We had tried for weeks with the biggest machines in the Effects department, and Pascal, on one of the normal modelling computers, had completely solved it – with less RAM and fewer cores! I asked him, ‘How did you get something that now has more detail than we need?’ Well, it turns out Pascal had rendered a turntable of the form he wanted. He just set up a 200 frame render inside the Mandelbulb 3D software, and then he handed it off to… the photogrammetry software! Pascal used two tools that were never designed to work together.” Put simply, no one had ever thought to use photogrammetry on fractal animation software. “Photogrammetry is for working with live action set photography, but all of a sudden we had these high resolution models we could use – it was just such an out of the box solution and as soon as we heard it we were all like, ‘Of course! Why not!’ Kudos to Pascal for thinking of it,” Williams commented proudly.

To sell the final image Weta Digital added some art directed degradation via a fine noise function, so it was not perfect, and they broke up small parts of the form to end up with what you’d imagine of a living planet that is tens of thousands of years old. The team went in and further removed some break-away sections of the geometry, but as the fractals were now in the more standard Weta pipeline this was easy to do.

The scenes were lit with area lights and complex atmospherics. The live action was shot with a lot of space lights from above to provide good ambient levels. Weta added “light features” in the back of the 8km-across inner chamber seen in the film. “One of the biggest lighting challenges we had was the action was inside a giant self-contained sphere; you are effectively inside a ball with no ‘sun’. How do you light a dark room when there are no lights in the room?” pointed out Williams. “We hid lights in the walls effectively, so that we could have a medium for putting light into the scene, but none are as bright as the sun; none of those lights alone would be enough to light the entire chamber… so typically you light from back to front, but we carried it a bit more with the use of the atmosphere.”

Williams commented that Weta Digital did “600 shots in house, and of those 530 were in the film, about 80% were underground; the majority of our work was the third act”.

Celestial catch: inside then out

Animal Logic had other challenges in the film, apart from the Palace set.

The sequence where Ego and Peter play ball, or ‘celestial catch’, was initially shot as an interior sequence. Only later did the sequence change to an exterior space, and the production shot additional footage to extend it. Unfortunately, as the new footage was shot for an exterior set, the lighting had a much stronger, late afternoon-style directional side light.

It fell to Animal Logic to rework the less contrasty interior plate photography of the actors, without the strong side light, and merge it with the yellow-golden magic hour lighting of the pickup shoot. This was done with extensive compositing and crafted regrading.

The team worked to produce fractal plants that could be a part of the garden sequence and bridge the live action world with the brilliant work Method Studios was doing on the wide planet arrival sequence. At this stage the planet was primarily red.

Paul Butterworth was naturally concerned to make sure the new composited sequence looked realistic as well as consistent as he solved the lighting issue, so he decided to start the garden visualisations with green, not red, plants. His logic was that if they could first get the garden approved as real with normally coloured plants, then they would have a base to work from as they stepped into recolouring all the fractal plants red.

What he did not count on was that the production would fall in love with the green plants. Suddenly the red Ego planet was actually green and red! Which meant everyone now needed to incorporate green into their shots.

Animal built a range of fractal forms it could believably use on the planet, especially in the wider shots.

Fractal world: Method

Method Studios was tasked with designing the lush, brilliant and impressive exterior of Ego’s planet, which introduces the audience to this fractally constructed yet lived-in ancient world.

Method’s beautiful green/red planet-wide vistas.

A small hit squad of four, led by Method Art Director Ming Pan, set the look, expanding the palette to incorporate vibrant green, teal, and yellow in the final shots, which brought the Method work into line with the new Animal Logic ‘ball throwing’ green garden look.

Method had done most of the fractal heavy lifting on the ‘Magical Mystery Tour’ sequence in Doctor Strange, having previously contributed to the sub-atomic end sequence in Ant-Man. Their LA pipeline in particular was very well versed in getting organic shapes out of mathematically based forms.

Multi-verse in Doctor Strange

The cave environment around Baby Groot and Rocket was designed to be highly geometric, comprising Mandelbulb and Apollonian gasket fractals, which also feature heavily on the surface of the red planet. Method received early creative from the art department for the vibrant red planet palace and surrounding environment, then extended those designs using mathematics as a foundation.

If one looks closely at the floor and walls, the Apollonian gasket fractals are mixed in with more traditional rock materials.
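Apollonian gaskets like these are generated by repeatedly filling the gap between three mutually tangent circles (or spheres) with a fourth. Descartes’ Circle Theorem gives the new curvature directly; a minimal sketch, independent of any studio’s tools:

```python
import math

def descartes(k1, k2, k3):
    """Curvatures (1/radius) of the two circles tangent to three
    mutually tangent circles; a negative curvature means the circle
    encloses the other three."""
    s = k1 + k2 + k3
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root

# Three mutually tangent circles of curvature 2, 2 and 3:
inner, outer = descartes(2, 2, 3)
print(inner, outer)  # 15.0 and -1.0: a small gap-filling circle and the
                     # enclosing circle of the classic (-1,2,2,3) gasket
```

Applying the same rule to every new triple of tangent circles, recursively, yields the gasket pattern mixed into the cave floor and walls.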


Rocket and giving a Groot!

Character work: Framestore

The original Rocket was animated by Framestore for Guardians of the Galaxy (Vol. 1). Framestore returned with both Rocket and Groot for the second film, but the scheduling of the film required that the assets be shared by several vendors. In this film Rocket had major animation not only by Framestore, but also Method Studios, Weta Digital and Trixter Film.

Framestore collaborated with VFX Supervisor Christopher Townsend to deliver over 620 shots, ranging from creature work and spaceships to ‘the best opening sequence in the world’ (as dictated in the script).

“We went back and looked at face shapes, the construction of his skeleton, all the time knowing that we needed to end up with essentially the same Rocket. Updating everything but keeping the character that is so loved,” commented Jonathan Fawkner, VFX Supervisor.

Once again the team used Bradley Cooper’s performance as reference, studying his lip movements to achieve a seemingly natural dialogue delivery. “We wanted to create a really strong and consistent performance,” explains Animation Supervisor Arslan Elver.

Framestore once again took the lead and, according to animation supervisor Arslan Elver, Rocket got a complete makeover for the second film. He was rebuilt from the ground up, with an updated fur simulation using Framestore’s proprietary tool fcHairFilters to create a photo-realistic finish.

Framestore had built the original Rocket for the first film

All of Rocket’s facial shapes were redone, including new phonemes, and Rocket inherited a new eye rig. In this film “the eye rig (from Framestore’s Rocket in Guardians Vol. 1) was replaced by the rig done for Fantastic Beasts and Where to Find Them,” explains Elver. “There was a character called Gnarlack, and we took the eye rig from that character and adapted it (here), which gave us a nicer motion around the eyeball when the eye shifts – it made the motion of the eye more organic and nicer, especially around the eyelids.” Elver worked on The Force Awakens at ILM in London, where he animated and worked on the character Maz, and he says he learnt a lot from the experience. “It was fully keyframed and based on the performance captured on set, and one thing I learnt a lot from that experience was that eyelids are crucial for emotion. Especially in closeups, subtle angles can change all the emotion of a performance.”

The entire process was keyframe animated. To get the consistency of performance the team wanted, they matched the first pass of Rocket’s lines to Bradley Cooper’s expressions, and then did a second pass to adjust the animation for the character’s long muzzle, “to make it look even better”.

Framestore and the tail-chair cheat

One of the complexities for any animator is getting the right expressions during the animation phase, when the performance will need to read underneath the fur groom. “We dealt with this quite simply and effectively, actually,” explains Elver. For example, “imagine you are pushing the upper lip; without fur you might see a lot of teeth or gum that you wouldn’t when he has fur. So we had a QC system that baked the animation each night and did a cheap overnight lighting render with fur; it was noisy, but we could clearly see the lip sync in detail, so we would not commit to a full render until we knew.” This was not only an issue of fur: the final lighting would sometimes cause the muzzle to cast harsh shadows, and Rocket’s eyes were similarly affected by his bushy eyebrows with their soft fur over his deep eye sockets.

Several of the vendors commented that Director James Gunn was particularly focused on making sure Rocket always had good eye light. Throughout the film the lighting on Rocket is very much the lighting of a movie star.

Framestore expanded and updated the complex fur approach on this film

Rocket and Groot are also very short characters, but Elver points to the director and the DOP as having done a “great job at previzing and framing” so that the plate photography would work for the final blocking and shot composition. “It doesn’t happen often… but here the director really knew what he wanted, and the edits were quite tight as a result.” Elver also commented on the quality of the notes the team got from the director. Instead of picky comments, Gunn provided acting direction to the animators: “He would say things like, ‘No, Rocket is internalising that line here’ – just really great direction for animation.”

Elver was lead animator on the previous Guardians film, and this time he was the Animation Supervisor; the transition of roles provided him with a great perspective on getting the best from the characters and the team. “Casting animation is quite tricky internally… some people are better on performance, some on action shots, so I do cast animators based on shots. One thing that is important is consistency of performance… for example, reference is key. We always started with the same reference; if you just let people shoot their own reference then quite often people can come back with a cartoony performance or not be consistent.”

The director always wanted Rocket to have a strong eye light

Rocket also has a tail, which is an issue not only for animation but also for how the character sits. Elver confessed that when Rocket is piloting the ship, “you won’t notice it, but the way he is sitting, his tail is actually just next to him – it was a cheat. The connection to his body would have to go through the chair, but you just don’t notice it. But I really like Rocket – in some scenes he is a jerk, he’s a bad arse, but deep inside he is a softie… he is a great character to animate.”

I am (Baby) Groot

For Baby Groot, the team redesigned the character, which was somewhat stressful as Framestore needed to get a few Rocket shots out very early in the schedule for Comic-Con. One of the most beloved sequences in the whole film was Framestore’s opening fight sequence, which follows Baby Groot while the rest of the team fight an inter-dimensional beast. The sequence is a tour de force of character animation and virtual cinematography.

Baby Groot was a challenge for the Framestore team, who had to question how young Groot would actually appear and how he would integrate with the rest of the Guardians. The process was lengthy. “We went back and looked at Groot in the first movie, at his personality and character,” explains Elver. Baby Groot at the end of Vol. 1 was even younger and had been animated by MPC. In Guardians Vol. 2, Baby Groot is animated with a basic blend shape driven FACS rig, where the team had certain additional constraints, given the character is made of wood. “He has very particular shapes and aspects of the rig, for example the way the eyebrows work, both when he frowns and then relaxes – but it is based on human anatomy,” says Elver.

One of the real challenges for Groot was how much subtle animation was required. As written, the character does not have many lines, but he is in a lot of shots, and the animator’s skill is shown in how well they can provide secondary or background animation that keeps the character engaged and alive yet does not distract from the main dialogue of other characters. The animators needed to feed the subtext of what the scene is meant to be about, but with such highly visual characters as Rocket and Groot, they could easily be distracting. “One of the hardest things in animation is not the dialogue; the more difficult thing is when he doesn’t talk and you still have to sell emotion. It can be just so difficult,” explains Elver.


Framestore’s impressive opening scene

Groot is a complex character. While there is a lot of focus on him being child-like, the animation was actually more complex than just playing his character as a baby. Framestore’s first tests were more baby-like, and the director felt he was too human. Elver sees the film version of Baby Groot as being on the autism spectrum. “I think he is almost autistic; he is not stupid, he is naive, but if his focus is on one thing, then he just doesn’t hear anything else.”

Framestore’s Baby Groot

Framestore did Baby Groot and the adolescent Groot at the end of the film. At one stage, the end credit scene was to be animated with Groot leaning against the wall, one foot back on the wall, looking down playing his video game. The sequence was done very quickly, based on on-set reference provided by actor Sean Gunn, and yet the animators managed to deliver a tight ‘screw you’ version of the line ‘I am Groot’.

Character Work: Method

Method’s perfect match of Rocket

Method VFX Supervisor Nordin Rahhali led a team of more than 250 artists in creating more than 500 shots.

“Guardians of the Galaxy Vol. 2 was the most challenging project I’ve been a part of, and the end result was also the most rewarding. It’s not often that a body of work includes such a fun mixture of elements to create; from fantastic worlds filled with alien scenery and plant life, to epic space ship crashes and giant energy tentacles, to believable performances from Rocket and Baby Groot that draw on your heartstrings and emotions. With such a wide range of work to tackle, James and Chris were very open to our ideas, which made for a strong creative collaboration,” said Rahhali.

One of the most complex sequences Method created was the film’s final scene, which featured not only a Ravager fleet, fireworks, and holographic lasers integrated with live action plates, but also a close up of Rocket shedding a tear.

The end ‘fireworks’ and holographic lasers in zero gravity.

To make the CG raccoon performance convincing, Method artists used in-house footage of Animation Supervisor Keith Roberts performing the scene for reference, studying the macro facial movements like minor eye darts or blinks, in addition to what was filmed on set and in the sound booth by actor Bradley Cooper, who provides Rocket’s voice.

Artists then translated the performance to Rocket using keyframe animation, making adjustments once his fur was added. As with the other sequences by the other vendors, there was no motion capture shoot data.

“Rocket is almost having a spiritual moment and it’s such a weighty shot so we were extra mindful to nail it without overdoing it. We still have to fight against the uncanny valley and when the timing is off on even the most subtle facial movements, the viewer’s eye immediately detects it as phony – especially when it’s a raccoon crying. It’s a very brave and interesting shot, and we had a consistent feedback loop to really hone into the emotion of it,” Roberts explained.

Method’s brilliant bomb animation scene

For Rocket, Method created its own muscle and skin simulation rigs using assets converted from production. While the character aesthetic needed to match the first film for consistency, Rocket is thrown into new scenarios in Guardians of the Galaxy Vol. 2 that required new creative approaches. Additionally, shooting scenes with Rocket was somewhat fluid and partially improvised, with actor Sean Gunn serving as the on-set stand-in. For the full CG sequence in which Rocket attempts to instruct Baby Groot on how to detonate a bomb, Method artists really got into the act, staging performances acting increasingly annoyed, breaking down moments like an awkward cough or an arm scratch, and trying out different combinations to apply to the CG characters in the scene.

Method’s Berhert planet and animation work

For the shot of the Guardians’ Milano spaceship crashing onto Berhert, some aerial photography was used, but the majority of the shot was built from scratch. Shoots were conducted on a forest sound stage in Atlanta and in a lush state park north of Portland, which was selected for an abundance of moss-wrapped tree branches that provided an otherworldly feel. Method created three highly detailed versions of the Milano depicting various levels of damage.

Ultimately, the ship plows into the ground toward the camera, uprooting vegetation as its wings are torn off. Using layered simulations, artists were able to art direct the foreground elements and balance the scene. The sky is a matte painting based on footage from Atlanta, but given a green tint to differentiate the planet from the familiar blue hue of Earth.

Method also created the exterior and partial interior of Ego’s alabaster, egg-shaped spaceship. To convey the scale of the ship, artists added fine subsurface details and etching. Inlaying gold architectural sculptural elements into the ship’s surface, along with refraction blur, enhanced the depth and helped unify the overall modern aesthetic. On the interior, artists extended the environment, replacing a wall with animated, depth-cued fractals and smoothing any seams.


Character Work: Weta Digital

Weta Digital provided not only technical accuracy but also some great emotional performances with Rocket.

Weta Digital provided a third team working on the hero animated characters in the film. Four facilities in total provided the animated characters, and it is a testament to overall visual effects supervisor Christopher Townsend that the work appears so consistent and the performances integrate so well. Townsend managed to build on the character performances of the first film and extend the characters’ acting range, while supervising a huge team of animators across four companies.

Dave Clayton was Weta Digital’s animation supervisor. Rocket was the most challenging task, not just in balancing Weta’s work with the other three vendors but in “finding the character as we had not worked on the first film,” he explains. Weta handled not only action sequences but also emotional Rocket scenes, such as his goodbye to Yondu (Michael Rooker).

From a technical perspective, Weta built on pipeline developments from its outstanding series of Planet of the Apes films, which gave the team quite a bit of experience in lighting and rendering fur-clad characters. Clayton said that the Framestore Rocket asset integrated quite easily into the Weta pipeline, but unlike the Apes characters, Rocket has quite a bit of fur on the centre of his face, so Weta’s animators, like those at the other vendors, had to learn to animate the facial expressions and lip sync under the fur. “Which is just all a part of the learning curve,” Clayton points out.

As Weta’s animation evolved, Clayton points out, they went back over the shots and did a second phase of animation “in which we adapted all our animation on Rocket to be a bit more ‘Rocket-ish’. We hunched him more, overall made him a bit angrier, and put a lot more asymmetry into his poses… we wanted stronger, more dynamic poses… it took a few sweeps through the animation to get Rocket right.”

Weta also did some digi-double work for the main characters, such as Nebula jumping on sliding rock outcrops inside the Ego planet. Clayton had Weta’s motion capture team perform some of the stunts that he felt were not playing correctly in the main plate photography, due to the limitations of wire work. Weta has such depth in motion capture that it was relatively easy to get the additional data the team needed. “You can keyframe some of that to a point, but I always go to the mocap stage to get a really good starting point with all the great nuances; it just speaks to the audience as real, hopefully,” he explains.

To place Rocket in the scene, Weta added “dirt and dust to him, and then added little wind sims blowing through his fur – to make him look a bit different,” explained Clayton.

Weta’s primary work was the third act battle inside the Ego Planet



Trixter Film was the fourth vendor to dovetail character work into the film. Trixter animated Rocket in the forest attack scene, where Rocket sets off a series of booby traps – as only Rocket can. The VFX supervisors were Alessandro Cioffi and Adrian Corsei. Trixter also animated the dimensional jumps that produced the warped, photoreal digital doubles of Yondu (Michael Rooker), Kraglin (Sean Gunn) and Rocket.

Luma’s Sovereign world end credit sequence

(Yes it is Adam Warlock’s giant cocoon)

Luma returned to the Guardians’ galaxy with the prestigious and (almost) perfect Sovereign characters. Luma completed visual effects work for the film’s end tag, featuring the entrancing Sovereign world led by the ruthless high priestess Ayesha.

The team created a stunning full CG environment, featuring hundreds of animated humans in blue and luminous gold pods. “The overall look of the set was roughly designed in a couple of concept images and previs sent to us by Marvel. Our approach was to first rough out everything in terms of layout and then move on to adding all the modeling and lookdev details,” says VFX Supervisor Jared Simeth.

Luma also created the giant technological cocoon, which is now confirmed to be Adam Warlock’s incubator. The character, initially somewhat of a pacifistic savior, has been heavily linked in the comic books to Thanos, with plot lines around the Infinity Stones/Infinity Gauntlet and the Multiverse. Warlock was originally going to be an integral part of the movie, but was removed because director James Gunn felt there were too many characters. It is uncertain whether he will feature in the Avengers: Infinity War films or make a strong appearance in Guardians Vol. 3; rumours abound for both.

After receiving 3D concept art from Marvel, it was up to Luma to bring this enormous cocoon to life: the team added to the original concept with details such as the long cables ejecting from the cocoon and FX elements around the head. Luma was responsible for lighting the reflective chamber to create a dramatic and lustrous environment fit for the Sovereign world. The gold chamber and the extravagant crystal chandeliers hanging from the ceiling created reflective surfaces, which posed a significant technical challenge for the team in getting clean renders. “It came down to every trick in the book! From splitting out different elements of the chamber and rendering them separately with pre-rendered HDRIs, to compositing tricks such as frame blending and projections,” says Simeth.

The team also touched up the live action characters to perfect their lustrous gold skin and achieve the exotic humanoid alien look, including the high priestess played by Australian actress Elizabeth Debicki (The Great Gatsby, 2013; The Night Manager, 2016). These live action characters then had to be composited over the full CG environment created by Luma. Apart from Guardians of the Galaxy Vol. 1 and Vol. 2, Luma has notably worked on other major Marvel films including Doctor Strange, Deadpool, and Captain America: Civil War.