Barbershop at Weta: Sci-tech winner explained

Some of the most stunning recent digital hair and fur work on film – think Caesar, Maurice and hundreds of apes in the new Planet of the Apes films, Middle-earth creatures from the Hobbit trilogy, Snowy in Tintin – has come out of Weta Digital, owing in part to the studio’s Barbershop hair groom system. The toolset is receiving a Sci-tech Award, and fxguide spoke to the award recipients about how Barbershop is used and was developed at Weta Digital.

The most remarkable aspect of Barbershop is not that the rendered images look great, although without a doubt they do. It is that the software written by the team at Weta Digital does exactly what the artist wants: the hair an artist sculpts and designs on a character is the hair that appears on screen. This may seem like something that would not have caught the Academy’s attention, but for all the apparent simplicity of this basic concept, the program has to do some very clever work so that an artist can directly manipulate literally every single wet hair on Caesar, and so that when he is standing looking down on San Francisco, Caesar looks the same down to the individual hair.


The basis of hair, cloth, fire and water animation in computer graphics is simulation. These categories of things are too complex to animate by hand, so we generally build a system, run it, and see what happens. Think about a cloth simulation: the artist ‘cuts’ cloth to the shape of the shirt, then the simulation runs and the cloth drops, settles and folds over the contours of the digital character. A burning building is simulated breaking apart; the sim data is then cached and, as with water or other sims, rendered. With hair simulation the same logic should apply: hair length, style and color are defined, and then the simulation is run to see how it looks. But it is all very well modeling long flowing locks; if, the second the character actually turns and runs, the simulation lets gravity flatten the hair so it loses all body and bounce, the shot is ruined. What makes Barbershop so powerful is that, among other things, the artist can model exactly how the hair will look, using familiar tools like combs, or cutting the hair to a certain length, and then tools like the clump detection tool pass “groom-aware” information at the strand level so that simulation and animation do not fundamentally change the look of the groom. Barbershop itself does none of the final sim or rendering work; instead it exposes parameters for these functions, but does so in a way that preserves the integrity of the groom.
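Barbershop’s internals are not public, but the idea of “groom-aware” strand data surviving a simulation can be sketched in a toy form. In this hypothetical example (all names and the solver are assumptions, not Weta’s code), each strand carries its authored rest shape, clump id and segment lengths, and the solver re-enforces those lengths after applying forces, so the cut length of the groom is never changed by the sim:

```python
import math

# Toy sketch (NOT Barbershop's actual API): each strand carries its groomed
# rest shape, a clump id, and authored segment lengths. A simple solver
# applies gravity but then restores the authored lengths root-to-tip, so
# the groom's cut length survives the simulation untouched.

class Strand:
    def __init__(self, points, clump_id):
        self.rest = [tuple(p) for p in points]   # groomed rest shape
        self.points = [list(p) for p in points]  # simulated positions
        self.clump_id = clump_id                 # groom info riding through the pipe
        # segment lengths are fixed at groom time ("cut", not simulated)
        self.seg_len = [math.dist(points[i], points[i + 1])
                        for i in range(len(points) - 1)]

def simulate_step(strand, gravity=(0.0, -0.1, 0.0)):
    # Move every free point with gravity...
    for p in strand.points[1:]:
        for axis in range(3):
            p[axis] += gravity[axis]
    # ...then re-enforce the authored segment lengths so total hair
    # length (part of the groom) is preserved exactly.
    for i in range(len(strand.points) - 1):
        a, b = strand.points[i], strand.points[i + 1]
        d = math.dist(a, b)
        if d > 0:
            s = strand.seg_len[i] / d
            for axis in range(3):
                b[axis] = a[axis] + (b[axis] - a[axis]) * s

strand = Strand([(0, 0, 0), (0, 1, 0), (0, 2, 0)], clump_id=7)
simulate_step(strand)
total = sum(math.dist(strand.points[i], strand.points[i + 1])
            for i in range(len(strand.points) - 1))
print(round(total, 6))  # 2.0 - the groomed length is preserved
```

A production solver is of course far more sophisticated (Weta’s fur is described as “elastic rods”), but the division of labor is the point: the groom authors the constraints, the sim respects them.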

With Barbershop there is direct control of the hair down to the individual strand. Unlike systems that use guide hairs which are multiplied up from, say, 10,000 guides to the final 100,000 hairs at render time, Barbershop lets the artist see and work with the final set of all hairs. Adjusting a hair or a clump of hair does not require redefining a procedural parameter; the artist just changes the hair and everything works. The system is not built on a common flow graph or chain of nodes. If the artist wants some clumps of fur on the back of a leg like those on the forearm, they can just use a clone tool and ‘paint’ them on. If the ape needs dirt or dried mud, the program allows this to be added and distributed in the fur. Of course, in these two examples the fur is not painted in the 2D sense of the concept; it is much more like Mudbox, in that the artist has direct control in 3D, and the mud is not just mixed in, it is moved into the hairs of the fur, which may be fully complex digital “elastic rods”. Not that the artist need be worried about any of that: the artist just makes the fur look wet and dirty. Hair cohesion is handled for the artist, who can adjust a hair, a clump or a region, not via guide hairs but via direct manipulation.
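The contrast with guide-hair systems can be made concrete with a simplified, assumed sketch (again, not Barbershop code): in a guide-based groom the final hairs are regenerated procedurally on every evaluation, so an edit to one specific final hair has nowhere to live, whereas in a full-density groom every strand is stored, so restyling one hair is just changing its data:

```python
# Assumed, simplified contrast between the two models - illustrative only.
import random

random.seed(1)

def interpolate_from_guides(guides, count):
    # Guide model: each final hair is a jittered copy of a guide curve,
    # rebuilt on every evaluation. There is no stable "hair #42" to edit.
    return [[(x + random.uniform(-0.01, 0.01), y, z)
             for (x, y, z) in random.choice(guides)]
            for _ in range(count)]

# Full-density model: the groom *is* the final strand set.
full_density = [[(i * 0.1, float(j), 0.0) for j in range(3)] for i in range(5)]
regenerated = interpolate_from_guides(full_density[:2], count=100)

# Directly restyle the tip of strand 2 - the edit persists because the
# strand itself is the stored data, not the output of a procedure.
full_density[2][2] = (0.2, 1.5, 0.3)
print(len(regenerated), full_density[2][2])  # 100 (0.2, 1.5, 0.3)
```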

“What we tried to do with Barbershop,” says Weta Digital’s Marco Revelant, “is keep the lookdev side (grooming, shading etc) separated from the simulation side. In other words we try and break the connection between what is grooming and what is simulation.”

Think about that single clump of wet hair the artist has just manipulated with what amounts to a 3D, Photoshop-like hair tool. Behind the scenes, Barbershop needs to make sure that when this bit of hair is passed to the final hair simulation, the complex algorithms, the forces of gravity, surface tension, contact forces and so on, all produce a clump not roughly but exactly like the one the artist just designed. Furthermore, when the renderer computes the fully plausible physically based lighting solution, Barbershop has to make sure that the final internal refractions, reflections and color properties are all consistent, so the hair looks as thick and as dense in the final Manuka render as the clump did while the artist was adding the mud to it. Interestingly, though, Barbershop knows nothing about dynamics. Unlike many fur systems, Barbershop does not maintain a list of operations necessary to generate a given hair groom. The system treats hair more like Photoshop would treat an image: the hair is put in place using sculpting tools that, once they have done their job, need no longer be present. So in a sense the hair can be treated as its own entity, with no dependence on any operation or surface. Barbershop also doesn’t know anything about shaders or lighting; it provides as much information as it possibly can to the shaders, but the magic of the lighting is done by the shaders.
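The “no operation history” idea can be sketched in a toy, assumed form (the tool names here are illustrative, not Barbershop’s): a procedural system would keep a node chain like comb → cut and re-evaluate it, while a destructive, Photoshop-style system applies each tool to the strand points and then forgets the tool ever existed:

```python
# Hypothetical sketch of destructive, history-free grooming tools.
# Each tool bakes its result straight into the point data and is then gone.

def comb(points, direction, strength=0.5):
    # Push points progressively along a direction, like dragging a comb.
    return [(x + direction[0] * strength * i,
             y + direction[1] * strength * i,
             z + direction[2] * strength * i)
            for i, (x, y, z) in enumerate(points)]

def cut(points, keep):
    # Shorten the strand by dropping tip points.
    return points[:keep]

strand = [(0.0, float(j), 0.0) for j in range(5)]
strand = comb(strand, (1.0, 0.0, 0.0))  # tool runs once, stores no node
strand = cut(strand, 4)                 # ditto

# The strand is now a plain list of points: there is no operation list to
# replay and no dependence on the tools that shaped it.
print(len(strand), strand[-1])  # 4 (1.5, 3.0, 0.0)
```

The trade-off of this design is that nothing can be “re-cooked” later, which is exactly why the groom data itself has to carry everything downstream stages need.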


The beauty of Barbershop is captured in the Latin expression “firmitas, utilitas et venustas”: strength, utility and beauty. The program is direct in its manipulation of hairs, not approximations of them; it is incredibly useful and has been used on nearly all of Weta’s films since Avatar, most especially Tintin and the Planet of the Apes films; and it is beautiful in its simplicity: “What you see in the preview is what you get when it is rendered”. It is so ‘obviously’ what one wants as an artist that it is easy to overlook what the software has to do to achieve that result.

The dwarves in the recent Hobbit films are an excellent example: their hair is complex and braided, and had to be digitally replicated for the digi-double work in the films. This type of complex hair styling would be extremely difficult in many hair packages, as would the wet and matted hair of the apes in the Planet of the Apes films.

The program was conceived inside Weta after King Kong and Avatar by Revelant and Alasdair Coull. Revelant saw that Weta was achieving the same thing in multiple ways, and that the system could be made both more powerful and simpler. Coull was the first to start coding the solution and was very quickly joined by Shane Cooper on the original architecture and engineering design. At the height of its development the project had a team of five working on it, but many Weta artists contributed to the design ideas and the testing of the system.

The code is very solid, with the core library being renderer- and simulator-independent. This means that while the system works as a Maya plugin, and was designed initially to work with RenderMan, it was relatively easy to switch to the new Manuka renderer, since the core code does not depend on any particular renderer.

There are huge pipeline time savings in Barbershop having no procedural base and being simulation-agnostic. If, for some reason, the Weta hair model moved from elastic rods to some new mass-spring or finite element (FEA) model, an artist could still use Barbershop’s brush-based modeling tools to manipulate, visualize and sculpt every strand of hair on the back of the orangutan Maurice, or cut the dreadlock-like clumps of fur on his arms and legs. Furthermore, because Barbershop deals directly with the data, there is no need, from a production point of view, to loop back and re-work the groom once the pelt is in the context of a shot: the groom is the groom, and the pipeline does not lose time going back to re-work it. Of course, the simulation team may need to alter any sim using the fur, but they have full control, independent of Barbershop, to do that. Thus the program is both technically separate and separates functions, allowing for a more productive pipeline. This modular approach even applies to topology: the pelt grooming from one character can be applied to another, and then immediately be varied, tweaked or adjusted. Again, this is just what one would guess an artist would want, but in most hair systems such versatility without some inherent ‘history’ or chain of nodal functions is unusual. Most systems can apply similar rules and work things in a similar way, but they cannot simply move and cut the hair without penalty or overhead.
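One plausible way such topology-independent pelt transfer could work, sketched here as a pure assumption (the `Surface` class and `point_at` lookup are invented for illustration, not Weta’s design), is to store each strand root in surface (u, v) coordinates rather than as an index into one particular mesh, so the same groom can be re-rooted on any character that exposes a surface lookup:

```python
# Hypothetical sketch of groom transfer between characters via surface
# coordinates. Nothing here is Barbershop's actual data model.

class Surface:
    """A toy parameterized skin surface for one character."""
    def __init__(self, origin, scale):
        self.origin, self.scale = origin, scale

    def point_at(self, u, v):
        # Map abstract (u, v) surface coordinates onto this body.
        ox, oy, oz = self.origin
        return (ox + u * self.scale, oy + v * self.scale, oz)

# The groom stores roots as (u, v), not as mesh vertex indices.
groom = [{"uv": (0.2, 0.5), "length": 1.0},
         {"uv": (0.8, 0.1), "length": 0.4}]

def apply_groom(groom, surface):
    # Re-root every strand on the target character's surface.
    return [{"root": surface.point_at(*s["uv"]), "length": s["length"]}
            for s in groom]

chimp = Surface(origin=(0, 0, 0), scale=1.0)
orangutan = Surface(origin=(5, 0, 0), scale=1.5)  # different body, same groom
print(apply_groom(groom, orangutan)[0]["root"])
```

Because the groom never references a specific topology, applying it to a new character is one lookup per strand, after which it can be tweaked like any other groom.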

Artists are able to work with the full set of hairs visible on their screens at interactive speeds, and via Weta’s real-time Gazebo preview renderer they can see the hair in the shot prior to final renders in Weta’s Manuka renderer. In fact, one of the things Manuka has proven very good at is rendering large numbers of complex characters, all with fur or hair; in this respect it seems to outperform almost any other renderer.


The toolset of Barbershop is both based on real world grooming tools such as combs but also on image editing tools one might find in say Photoshop. “The artist at grooming time can access every single strand and access every single point in the strand,” says Revelant. “Through the tool they can do some natural clumping, and all this information can be retained through the pipe.”
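As a guess at what a clump detection pass like the one Revelant mentions might compute (this is a toy assumption, not Weta’s algorithm), one could group strands whose roots fall within a radius of each other and tag every strand with a clump id, so that the clumping information can ride through the rest of the pipeline:

```python
import math

# Illustrative clump detection: greedily assign each strand root to the
# clump of the first earlier root within `radius`, else start a new clump.
# A production tool would be far more groom-aware; this shows the idea only.

def detect_clumps(roots, radius):
    clump_of = {}
    next_id = 0
    for i, r in enumerate(roots):
        for j in range(i):
            if math.dist(r, roots[j]) <= radius:
                clump_of[i] = clump_of[j]  # join an existing clump
                break
        else:
            clump_of[i] = next_id          # found no neighbor: new clump
            next_id += 1
    return clump_of

roots = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0), (1.02, 1.01)]
print(detect_clumps(roots, radius=0.1))  # {0: 0, 1: 0, 2: 1, 3: 1}
```

The per-strand ids are exactly the kind of information that, in Revelant’s words, “can be retained through the pipe.”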

This is not to say the program is 2D or manipulates pixels, but the ease of use and powerful tools of Adobe’s Photoshop inspired the Barbershop team to provide Weta’s artists with direct, simple tools that were easy to understand and intuitive to use.


The program was originally going to be used first on Snowy in Tintin, but was first seen on screen as an underarm hair simulation in the film Gulliver’s Travels. The program is still actively being developed; for example, the wet fur discussed above in the latest Apes film benefited from the wet baboons Weta animated for the second Hunger Games film, Catching Fire, which in turn had built on the first Planet of the Apes reboot. “For the last Apes movie,” adds Revelant, “a lot of the technology came into fruition with Barbershop and the new renderer. I think we reached a new level of realism – everybody here in Weta was really proud of the result.”

Marco Revelant, Alasdair Coull and Shane Cooper will be honored with a Technical Achievement Award for Barbershop’s unique architecture that allows “direct manipulation of full-density hair using an intuitive, interactive and procedural toolset, resulting in greatly enhanced productivity with finer-grained artistic control than is possible with other existing systems” in Los Angeles on February 7th. fxguide will be there covering the event.


APES images © 2013 Twentieth Century Fox Film Corporation. All Rights Reserved.



Separate from Barbershop: Digital hair and the algorithms behind various tools from around the world will be explored in fxphd’s Background Fundamentals this term, along with a host of other background lectures to keep fxphd members up to speed on both the history and algorithmic approaches of a range of technologies. If you want to learn more about the thinking behind the tech of our industry, as well as cutting edge training in Maya, Houdini, Nuke and more, then check out membership at

