Doctor Strange has been a huge film for Marvel. To create its sections of the film, Luma Pictures developed a set of new tools, some of which it is even sharing with the community.
Luma Pictures worked on several key sequences, including the opening London sequence, and also bookended the film with the Dormammu sequence and the Dark Dimension. We discussed the tools the studio developed especially for the film; below is an exclusive fxinsider podcast with Luma Pictures’ visual effects supervisor Vince Cirelli.
London
For the London sequence, Luma developed a new fractal tool for the volumetric meshing and transforming of the buildings. This involved true 3D fractals, but as you can hear in our exclusive fxinsider podcast with Luma visual effects supervisor Vince Cirelli, controlling true fractals was extremely problematic.
“With what we needed to do we needed to art direct the speed, the movement and look at all of these ‘fractals’,” says Cirelli. “We created tools that allowed us to visualise the scenes inside the viewport of Maya. We developed tools for volume so we could sculpt what volumes the fractals would take as the building stretched and turned.”

Luma Pictures literally choreographed the fractals, “which is not an easy task.” The 3D fractals that Luma based its work on were Mandelboxes, a box-like class of fractal discovered by Tom Lowe in 2010.
The Mandelbox is different from the Mandelbulbs used in the film Suicide Squad. It is defined as a series of continuous Julia sets and, unlike the 2D Mandelbrot set, it can be defined in any number of dimensions or powers. It is an example of a multifractal system, but it is not a fractal sweep in the way a quaternion Julia set or a Douady rabbit is.
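For readers who want to see the underlying math, below is a minimal Python sketch of the published Mandelbox iteration (Tom Lowe’s boxFold/sphereFold formulation). This is the generic fractal definition, not Luma’s production tool:

```python
import numpy as np

def box_fold(v, limit=1.0):
    # Fold each component back across the boundary planes at +/- limit:
    # values inside [-limit, limit] are unchanged, values outside are reflected.
    return np.clip(v, -limit, limit) * 2.0 - v

def sphere_fold(v, min_r2=0.25, fixed_r2=1.0):
    # Sphere inversion: points inside the inner radius are scaled up,
    # points between the two radii are inverted through the unit sphere.
    r2 = np.dot(v, v)
    if r2 < min_r2:
        return v * (fixed_r2 / min_r2)
    if r2 < fixed_r2:
        return v * (fixed_r2 / r2)
    return v

def in_mandelbox(c, scale=2.0, max_iter=50, bailout=8.0):
    # c belongs to the Mandelbox if iterating
    # v -> scale * sphere_fold(box_fold(v)) + c never escapes the bailout radius.
    v = np.zeros(3)
    for _ in range(max_iter):
        v = scale * sphere_fold(box_fold(v)) + c
        if np.dot(v, v) > bailout * bailout:
            return False
    return True

print(in_mandelbox(np.array([0.1, 0.2, 0.0])))
```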
“The Mandelbox that we used allowed us to do all the arranging, mirroring and manipulation of the frequency of the volume of the London buildings, along with all the slicing and dicing,” explained Cirelli.
Fractals are extremely hard to visualise from their mathematical equations, and even when someone does get something close, it is nearly impossible to creatively ‘tweak’ the base fractal. Luma therefore developed artist-friendly approaches.
As the team was slicing through a volume, one problem they faced early on was how to get the surface properties of, say, a piece of concrete onto the right fractal surface with the proper shading. The problem was solved at render time. In the viewport, “what we saw was essentially little bits of Lego on top of the building. Those Lego pieces represented where we wanted the fractalization to be happening,” Cirelli adds.
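As a purely hypothetical illustration of that proxy-tagging pattern (lightweight viewport stand-ins carrying an attribute that a render-time procedural can read), here is a Maya Python sketch. The attribute and node names are invented for the example and are not Luma’s pipeline:

```python
# Hypothetical Maya sketch: tag proxy "Lego" blocks with an attribute
# that a render-time procedural could read to decide where to fractalize.
# 'fractalAmount' and the node naming scheme are illustrative only.
import maya.cmds as cmds

def tag_fractal_proxy(proxy, amount=1.0):
    # Add a custom attribute for the render-time procedural to query.
    if not cmds.attributeQuery('fractalAmount', node=proxy, exists=True):
        cmds.addAttr(proxy, longName='fractalAmount',
                     attributeType='float', minValue=0.0, maxValue=1.0)
    cmds.setAttr(proxy + '.fractalAmount', amount)

# Tag every proxy block under a hypothetical building group.
for proxy in cmds.ls('building_grp|*_proxy', type='transform') or []:
    tag_fractal_proxy(proxy, amount=0.8)
```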
The system used three levels.
First was the base animation: “the bounding boxes if you will,” Cirelli explains. This level contained the simple building blocks of the scene, all the elements that needed to be choreographed, and these elements were timed to work with the actors’ performances.
Once the timing was solved and the main scene was blocked, the second step involved adding in the kaleidoscoping, using instancing with a new rigging tool the team developed in-house. This allowed the Luma animators to control how the objects kaleidoscoped and how things were instanced.
The third layer was the actual fractals. “This was the icing if you like, the fractals sit on all the instancing, and it is the fractals that are timed and animated to match what is happening on the base layer of the animation,” says Cirelli. “So by the time you get to the third layer everything is timed to the first layer of animation, and so if we needed to make a change or movement we would make it on the first layer and that would propagate and ripple through to the final layer of fractals.”
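A toy sketch of that layered dependency might look like the following, where the fractal layer is always evaluated from the instancing layer, which is in turn evaluated from the base animation, so an edit to layer one ripples through automatically. The functions are illustrative only, not Luma’s rigging tool:

```python
# Toy sketch of the three-layer dependency: because each layer is
# evaluated from the one below it, changing the base animation
# automatically propagates to the instancing and fractal layers.

def base_animation(t):
    # Layer 1: blocked-in bounding-box transforms, timed to the actors.
    return [{'position': (i * 10.0, 0.0, t * 0.5), 'block': i} for i in range(4)]

def kaleidoscope(blocks, copies=6):
    # Layer 2: mirror/instance each block around its own axis.
    return [{'source': b, 'rotation': 360.0 * k / copies}
            for b in blocks for k in range(copies)]

def fractalize(instances, t):
    # Layer 3: fractal parameters timed against the base layer's clock.
    return [{'instance': i, 'fractal_phase': t * 0.1} for i in instances]

frame = 42
final_layer = fractalize(kaleidoscope(base_animation(frame)), frame)
print(len(final_layer), 'fractalized instances at frame', frame)
```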
“I was not sure if we would need to figure out some edge solver for the penetration of the intersecting volumes, but …you don’t want it, because through that complexity you get the visual interest”
Some of the beauty of the London shots came from the way the streets folded down to reveal the action with the characters on the street, the mood of the collapsing buildings mirroring the emotion of the confrontation. The buildings, while still appearing to be made of blocks of solid stone, folded in on themselves not unlike the scales of a creature.
Ironically, intersecting volumes, normally the curse of 3D CGI, were exactly what the team used. “Yes, that gave us some of the most interesting visuals, with the flowering etc.” Cirelli points out. “I recall when we first started I was not sure if we would need to figure out some type of edge solver for the penetration of the intersecting volumes, but by the time we got into it – there was no need for it, you don’t want it, because through that complexity you get the visual interest.”
The scene was rendered in Arnold without major problems, even with its vast render complexity. Ray marching through the volumes and entire scenes of fractals was highly complex and “very intense multi-partition rendering,” comments Cirelli.
Dark Dimension
The end-sequence showdown with Dormammu was inspired by an original with a limited colour palette: a Doctor Strange blacklight poster from 1971. The question the team faced was how to ‘bridge’ from a hand-drawn 2D poster to a complex living 3D environment. The team referenced as much flora and fauna as possible.
“The renders did not look like anything you could use or understand, but they gave the comp team an incredible amount of latitude with value ranges that allowed us to move anything around that we wanted to.”
To generate the complex colours without the benefit of the special inks or blacklight the 1970s poster relied on, the team needed an entirely different technical approach. From a pipeline and toolset perspective, the team took the unusual decision to render out 3D passes and data and effectively light and colour in comp. What came out of 3D was vast amounts of data and 3D forms; it was the compositing team who brought this data to life and added the colour as one sees it in the scene.
This was a new pipeline set up at Luma just for the Dark Dimension. It was more than just an ACES-style wide-gamut pipeline; it was a way to get vast amounts of passes and data from the Arnold renders into the Nuke composites. “A lot of the colouring and values choices could be decided in Nuke… our textures are more control textures and are very much procedural – apart from hand painting to MULT control between the procedural textures,” explains Cirelli. Because most textures were procedural, and thus not limited by capture colourspaces or camera sampling technology, the team could work very effectively in Nuke. Had the textures come from stills cameras, even as RAW files, they would not have had the latitude the procedural textures provided.
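As a rough illustration of colouring from data passes in comp, here is a minimal Nuke Python sketch using standard nodes. The file path and AOV layer name are hypothetical, and this is not Luma’s actual setup:

```python
# Minimal Nuke sketch: read a multichannel EXR of data passes and
# derive the final colour in comp rather than in 3D. The path and
# the 'control_mask' layer name are hypothetical.
import nuke

read = nuke.nodes.Read(file='/shots/dormammu/data_passes.%04d.exr')

# Pull one control/data AOV up into rgba so it can drive the grade.
shuffle = nuke.nodes.Shuffle(inputs=[read])
shuffle['in'].setValue('control_mask')  # hypothetical AOV name

# Grade the pass into a palette chosen in comp, e.g. a blacklight tint.
grade = nuke.nodes.Grade(inputs=[shuffle])
grade['white'].setValue([1.2, 0.4, 1.8, 1.0])
```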
Interestingly, the film actually had multiple final grades. One of these was a full extended-colour-range grade, meaning that for those cinemas able to project extended dynamic range (EDR), the team provided a wide-gamut source file. Steve Scott was the master colourist on the film and oversaw all the grading of the final versions. “He does incredibly beautiful work, and the sequences going through his hands have the level of finesse you want,” compliments Cirelli.
To handle the complexity of the data for the end sequence, the Luma dev team used a tool for Maya named Multiverse, which helped extend Maya’s capabilities for loading large amounts of data. The team felt it was worth including in this article because it is a tool others might find useful, and it is available to the public.
The Multiverse for Maya tool can be found here. It is provided by J Cube, the R&D company of Polygon Pictures, based in Tokyo. “Multiverse is developed by us (J Cube), and Luma purchased a site license. Together we improved certain features to handle even more complexity in Maya,” explained J Cube’s CTO, Paolo Berto.
J Cube has released Multiverse for Maya v3.0.5 and Multiverse for KATANA (as a public beta), with USD writing and USD reading functionality respectively, on all supported Maya and KATANA versions and on all platforms (Windows/Linux/macOS).
This translates to a solution allowing users to write USD files from any Maya version (2015, 2016, 2016.5/Ext2, 2017 – all platforms) and read them back in KATANA (v2.5) on both KATANA platforms (Windows/Linux).
This is on top of all Alembic I/O features which are already available in these packages.
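Multiverse’s own API is not documented here, but for context, a USD write/read round trip looks like this with Pixar’s open-source Python bindings (pxr). This is a generic sketch, not Multiverse code:

```python
# Generic USD round trip using Pixar's open-source pxr bindings,
# shown only for context; Multiverse exposes its own tools for this.
from pxr import Usd, UsdGeom

# Write: author a simple hierarchy to a USD file.
stage = Usd.Stage.CreateNew('asset.usda')
UsdGeom.Xform.Define(stage, '/asset')
UsdGeom.Mesh.Define(stage, '/asset/body')
stage.GetRootLayer().Save()

# Read: reopen the file and walk the preserved hierarchy.
stage = Usd.Stage.Open('asset.usda')
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```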
“It is now possible to visualize very large scenes – scenes that usually don’t fit in the GPU and that are hard to manage in the viewport or in the renderer – while preserving full hierarchy, granting material assignments, mesh/area lights, attribute overrides and full deformation 3D motion blur,” explains Berto.

Multiverse for Maya is sold under a software-as-a-service (SaaS) scheme: there is no initial license fee, just a yearly service cost, and there is one month free for anyone who would like to try it.
The team claims that it “vastly outperforms” Maya’s built-in and third-party Alembic tools in all aspects (writing, reading, disk size, memory use, viewport rendering). In particular: “lightning fast reading (speed in the order of 10x faster or more), lower RAM footprint, fast OpenGL Viewport 2.0 drawing. Capable of handling & preserving very complex asset hierarchies with tons of objects, whether static, animated or deforming.”
“There is no tool in the market capable of handling Alembic and USD in Maya in the proper way – not to mention circumventing all the Maya limitations,” explains Berto. Multiverse answers this need. It does not load the data into Maya; instead it streams it to the GPU. “We do it as fast as we can and with as little RAM usage as possible. We preserve the hierarchy, and allow users to unpack/pack assets at their pleasure. We render the data procedurally so the Maya scene is always slim.” As a result, reading times to open very complex scenes are very fast compared to Maya geometry and references.