Pixar’s RenderMan turns 25 (Exclusive)

Pixar is celebrating 25 years of RenderMan. It was May 1988 when the original RenderMan Interface 3.0 specification was launched, which translates into 25 years of photorealism; actually more than 25 years for the Reyes renderer (1984), and a bit less than 25 years for PRMan (1989).

Steve Jobs, the man who would go on to be instrumental in Pixar becoming part of Disney, had this to say in a 1994 interview conducted at NeXT’s offices: “All the work I have done will be obsolete by the time I am 50. This is a field where one does one’s work and in ten years it’s obsolete, and really will not be usable within ten or twenty years.”

We sat down with Ed Catmull, one of the founders of RenderMan and now President of Walt Disney Animation Studios and Pixar Animation Studios, and asked him how it felt to be celebrating not 10 or 20 years but 25 years of the rendering software. You can hear the full interview in our audio fxpodcast.

Ed Catmull (2010, Photo by Deborah Coleman / Pixar)

Ed Catmull: “Well it is pretty amazing to me. I know software has its own cycle and certain things come and go and other things have a long life, like Unix, but to me, and I have always felt this, the only way something has a long life is if it keeps changing. We as humans do the same thing as we change through our lives. If you stop changing it is because you are dead. RenderMan is 25 years old but that just means it has gone through continual evolution.”

This year RenderMan turns 25. RenderMan is really a word used to describe a host of things: most precisely the specification that makes a renderer RenderMan compliant, and, more loosely, Pixar’s own implementation of that specification, the software known as PRMan or RenderMan Pro Server.


Rob Cook (2010, Photo by Deborah Coleman / Pixar)

From its beginnings with Catmull, Rob Cook and Loren Carpenter, many fine engineers and authors have contributed to RenderMan, something everyone is keen not to lose sight of. Over the years RenderMan has been associated with many great people at Pixar, many of whom remain there. Dana Batali explains: “Reyes (Renders Everything You Ever Saw) is the brainchild of Cook, Carpenter and Catmull. It has been the foundation technology for rendering at Pixar (i.e. before RenderMan).

“The RenderMan Interface (including RSL) is the brainchild of Pat Hanrahan. And if Pat is the father, then Tom Porter is ‘The Godfather’. By the time the RenderMan group and products were produced (1989), neither Cook nor Carpenter was actively involved. Mickey Mantle built an engineering team including Pat and Jim Lawson (co-creators of RSL), Sam Leffler, Bill Reeves, Tony Apodaca and H.B. Siegel. Darwyn Peachey came a bit later, as did Dan McCoy, Mark VandeWettering, Steve Johnson and Rocky Offner.”

The foundation members of the RenderMan team at the celebration of RenderMan’s 25th anniversary. Left to right: Tom Duff, Mickey Mantle, Dana Batali, Rodney Stock, Jim Lawson, Steve Upstill, H.B. Siegel and Mark VandeWettering.

As Catmull points out, one of the key aspects in the early days was also getting a book out to explain and define RenderMan. Key on that front was Steve Upstill, “who wrote the book in close collaboration with Pat, Tony and others,” Batali adds.

Per H Christensen.

Today key RenderMan staff continue in that fine tradition, and senior software developers such as Julian Fong, George Harker, Chris Harvey, Per H Christensen and others are widely known for their exceptional work on RenderMan through their extensive published papers.


A young Dr Ed Catmull.

RenderMan in some senses is even older than 25. Last night at SIGGRAPH 2013 Pixar held its annual RenderMan User Group, and it was here at SIGGRAPH Anaheim in 1987 that the original paper, “The Reyes Image Rendering Architecture,” was delivered by Robert Cook, Loren Carpenter and Edwin Catmull, a year before RenderMan was officially born as a product. RenderMan is a remarkable piece of both specification and code.

A scene from Pixar’s The Blue Umbrella (2013), rendered in RenderMan.

“It lives far longer than we ever expected it to,” says Carpenter, “and it has scaled on the basis of complexity of images by hundreds of thousands to one, based on what we initially did with it.”

To celebrate we spoke to Ed Catmull (listen to our podcast) and to Loren Carpenter. We had previously spoken to Rob Cook, who has now retired from Pixar. Loren Carpenter is still at Pixar as Chief Scientist.

For the full story on the origins of RenderMan one needs to look even further back, before Pixar, to the birth of computer graphics and to Lucasfilm.

Lucasfilm and pre-Pixar

It has become fashionable today to poke fun at George Lucas and the second set of Star Wars films, but this ‘Jar Jar Binks school of humor’ belittles the enormous contribution Lucas made to the industry. It not only trivializes the vast impact of Industrial Light & Magic, but also the early investment in computer graphics embodied in the Lucasfilm team that Catmull headed up.

In the past, Catmull has discussed the importance of how in the early days of Lucasfilm the team was ‘protected’, and allowed to grow. We asked Catmull what he meant by that. “I am very appreciative of the fact that when you are doing something brand new a lot of people don’t understand what is coming and those of us who are working on it don’t know also! What we had was an environment in which you could discover, and it was protected in the sense that at Lucasfilm, George Lucas felt that technical change was important to the field, he didn’t know exactly what it was, but that is what made him all the more important, he was willing to put his resources behind something where he did not know exactly where it was going. And that was not the norm. No one else in Hollywood would do that. And the fact that he funded us and made it happen – enabled us to put together a really extraordinary group of people.”

Dr Alvy Ray Smith (photo by Kathleen King, copyright)

Ed Catmull and Alvy Ray Smith formed the cornerstones of Lucasfilm’s Computer Division in a windowless first floor room at 3210 Kerner Boulevard, part of the Lucasfilm/ILM model shop and visual effects mecca in San Rafael. Catmull and Smith had met at NYIT after the University of Utah. “I was the old man at 31, Ed and David (DiFrancesco) were just barely 30 and 26 respectively,” says Smith. “Thus began the NYIT – Lucasfilm – Pixar computer graphics dynasty, a marriage of the house of Xerox and the house of Utah, pixels and geometry, art and technology. The movie we dreamed of then – completely generated on computers – was shown in November 1995, 20 years later. Toy Story was that movie.”

Ed Catmull was always highly technical, a physicist by trade. He loved film, and at NYIT he had written a 2D animation program, ‘Tween’, first in PDP-11 assembly language (later rewritten in C), which would be the forerunner of the CAPS animation system Pixar would sell to Disney years later.

Ed Catmull was Director of the Computer Division at Lucasfilm and Alvy Ray Smith was Director of the Graphics Branch, but at the time of its formation neither was making films, nor even contributing to any: the computers in use were controlling motion control rigs, and there were no direct graphics required for The Empire Strikes Back, for example. Yet there at Lucasfilm was the most incredible array of talent. George Lucas wanted computerization in editing, audio and video (laserdisc), but not graphics per se. At this time the team included Bill Reeves, Tom Duff, David DiFrancesco, Tom Porter, Loren Carpenter, Rob Cook, and even Jim Blinn briefly. As we commented in our fxguide interview with Dr Alvy Ray Smith: “A dream team if there ever was one.”

Ed Catmull came from the University of Utah at one of the most important times for computer graphics in its brief history. The University’s alumni list reads like a who’s who of the computer graphics industry, from Catmull to Jim Clark, who would form Silicon Graphics, to Alan Kay, a key figure at Apple, to John Warnock, who would go on to form Adobe. Ed Catmull was actually taught and advised by Ivan Sutherland, of Evans & Sutherland. In fact, Sutherland had wanted Catmull to work for a company he was trying to fund called the Electric Picture Company; the world might have been quite different had he succeeded. “I am acutely aware of those little things that could have altered history along the way. If they had gotten funded I would have probably gone with them, and they would have probably failed as a company!” laughs Catmull.

‘Road to Point Reyes’: Pixar, 1983 (copyright).

When RenderMan was first discussed there was no notion of a ‘product’. The team had a charter to computerize Lucasfilm. “I had taken it on myself to produce a rendering system that would produce film quality images,” explains Carpenter. “By that I mean something the computer would generate that we could intercut with live action so that you couldn’t tell who did what, and this is standard now, just go look at any standard movie.” That meant the renderer had to produce images as richly detailed as a live action camera could capture, which was way beyond the best of the state of the art; the most you could hope for then was a few polygons, and they looked like plastic at best. “We had a lot of conversations, Rob Cook, Ed Catmull and I, and worked on the problem for most of a year or more. I had some ideas I brought with me from Boeing, Rob Cook had some ideas from Cornell, and Ed had some ideas from Utah (University), and we worked these ideas around and came up with a synthesis that became the Reyes rendering algorithm. It was the foundation of what became the RenderMan renderer; before that it was the Reyes renderer,” recalls Carpenter.

Loren Carpenter.

Loren Carpenter was an employee at the start of Lucasfilm’s CG division; in fact he deliberately pitched to join the team. In an incredible effort for the time, Carpenter produced a two minute video with a real sense of narrative, to be presented at SIGGRAPH knowing Ed Catmull and others would be there. Carpenter at the time worked for Boeing in Seattle. He had heard about the new computer graphics division that was starting at Lucasfilm in 1979, and he wanted to be a part of it, but he had no computer graphics credentials. As an employee of Boeing, he had stumbled on computer graphics, and in fact SIGGRAPH, almost entirely by accident. While working he was still “going to school at Washington (the University of Washington) at the time,” says Carpenter. “And I would stop by the engineering library on the way to and from work. I was always interested in graphics, due to my science fiction background, and I wanted to see movies that did something interesting instead of the cheesy special effects that were around in those days. So there were papers appearing from Utah and others showing the very earliest attempts at, say, drawing a triangle, and I said ‘I can do that’. I read Jim Blinn’s and Ed Catmull’s theses and I thought ‘I can do that’.”

Around this time Carpenter had been given a project at the University of Washington to write a simple renderer that worked on a small memory computer, “and so I had some experience, and I was interested in making all sorts of pictures, so I took my university software into Boeing and started using it to make pictures.” This was all being done on the side, in his ‘spare’ time, in around 1978.

Boeing’s interest in this was valid; after all, it produced passenger planes, and so Carpenter made computer graphic versions of the company’s planes. Boeing’s real promotional films of its planes from this time always showed them flying over mountains, so Carpenter decided his computer graphics planes needed mountains too. “Because every Boeing publicity photo’s got a mountain behind the plane,” he jokes.

One of the publications from that time was an issue of Scientific American with a review of Mandelbrot’s original 1977 book (based on his original research paper), Fractals: Form, Chance and Dimension. There was a photo in the article of some mountains. “Hey, that might be a way to put mountains behind my pictures (of planes),” says Carpenter, “so I went out and bought the book and read it cover to cover, twice, and nowhere in there did he provide any sort of algorithm or method, or hint of a method, to produce the kind of mountains I wanted!”

Carpenter thought on the problem for months and finally had a breakthrough on how he could make his mountains using fractals. “This of course meant that I had to write a paper to explain how this worked, and also I could see that the solution would animate; in order to prove that I had to make a movie. It was an absolute requirement that I make a movie to show how this stuff worked.” Important to Carpenter even then was aesthetics: “I wanted to make it graceful. We’d go to SIGGRAPH and see some technique, and they were interesting, but they had no aesthetic integrity, they didn’t have a beginning, a middle and an end – most of them had no grace at all – so I wanted to make sure I put that in there too.”
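Carpenter’s published technique generates terrain by recursively subdividing triangles and displacing the new midpoints by random offsets that shrink at each level. The 1D profile version below is our simplified sketch of that idea, not his actual code; the function names and parameters are illustrative only.

```cpp
#include <cstdio>
#include <cmath>
#include <random>
#include <vector>

// Midpoint displacement in 1D: insert the midpoint of every segment, offset
// by noise whose amplitude shrinks (scaled by 'roughness') at each level.
// Carpenter's terrain applies the same idea per edge on subdivided triangles.
std::vector<double> fractalProfile(int levels, double roughness, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(-0.5, 0.5);
    std::vector<double> pts = {0.0, 0.0};        // flat ridge line to start
    double amplitude = 1.0;
    for (int level = 0; level < levels; ++level) {
        std::vector<double> next;
        for (size_t i = 0; i + 1 < pts.size(); ++i) {
            next.push_back(pts[i]);
            double mid = 0.5 * (pts[i] + pts[i + 1]);
            next.push_back(mid + amplitude * u(rng));  // displaced midpoint
        }
        next.push_back(pts.back());
        pts.swap(next);
        amplitude *= std::pow(2.0, -roughness);  // finer detail, smaller bumps
    }
    return pts;
}

int main() {
    for (double h : fractalProfile(8, 1.0, 1980)) std::printf("%f\n", h);
}
```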

The demonstration short was called Vol Libre; excluding credits and titles, it is about 1 minute 45 seconds of original animated fractals, and he showed it to a packed computer animation festival screening at SIGGRAPH 1980. He was hoping to attract the attention of Catmull and Smith, who he knew would be in attendance. They were, and they immediately hired Carpenter.

The tradition of short films would play a key role in Pixar’s corporate culture, and over the years shorts have often been a testing ground for new rendering techniques. This year at SIGGRAPH, Pixar is presenting papers on The Blue Umbrella, the latest in a long line of Pixar shorts, in this case showing the latest physically based lighting and shading with ray tracing in RenderMan. In the early 1980s, however, Carpenter’s mountains would be a key ingredient of a short sequence inside a major Hollywood film, Star Trek II: The Wrath of Khan.

ILM had gotten the special effects contract for Star Trek II, and one of the sequences was the Genesis sequence. “We had to show what this could do; we had a number of examples from our friend Jim Blinn’s work at JPL, so that became the metaphor. So I said ‘I can do the planet’, and Tom Duff said ‘I can do the craters’ and Bill Reeves said ‘I can do the fire’ and Tom Porter did the stars – I helped with the stars.” While much has been written about the Genesis sequence, one fact is rarely discussed: the stars behind the planet. It goes a long way to explaining the mentality of the team at the time, their mix of artists and scientists. The team were not just filmmakers or just nerdy genius scientists; they were a wonderful mix of both. As the planet is reborn with a startling mix of particles and fractals, the planet needed to have stars in the surrounding space.

Carpenter, along with others, wanted the stars appearing behind the planet during the Genesis sequence to be accurate relative to its location in space, yet contain constellations visible from Earth. They chose the star Epsilon Indi from the Yale Bright Star Catalogue – “that was the only catalogue we could get,” explains Carpenter. “At the time it had about 6000 to 7000 stars in it, and there is always a backstory you try and cook up to make things plausible if you have any scientific bent. I looked through the star catalogue to see which stars would be close to the Earth – as the Star Trek universe is relatively close around the Earth – and have a G type star of around roughly the same brightness (as ours), so it might have a planet around it that might support life. I looked for stars no further away than 25 light years and G-type, and one of them was Epsilon Indi, the fifth brightest star in the constellation Indus in the southern hemisphere. I had the star database, which is roughly 3D as we have parallax distance on them from the Earth going around the sun – you can get a fix on how far away a star is by how much it wobbles relative to the background stars.”

“So I had distances for all the stars and their colors,” he continues, “so I could put them up on the screen and move them around. So I put myself at Epsilon Indi looking back at our sun to see what I would see, and sure enough you see the Big Dipper, and sure enough there is an extra star, our sun in the Big Dipper as seen from Epsilon Indi. Sure enough as you fly past and pull back the Big Dipper with an extra star in it is in the background.”
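The parallax method Carpenter describes boils down to one relation: a star’s distance in parsecs is the reciprocal of its annual parallax in arcseconds. As a rough sanity check (the parallax figure below is the modern catalogue value for Epsilon Indi, our addition, not a number from the interview):

```cpp
#include <cstdio>

int main() {
    // Distance from annual parallax: d [parsecs] = 1 / p [arcseconds].
    double parallaxArcsec = 0.275;          // Epsilon Indi, approx. modern value
    double parsecs = 1.0 / parallaxArcsec;  // ~3.6 pc
    double lightYears = parsecs * 3.2616;   // 1 parsec is about 3.2616 light years
    std::printf("~%.1f light years\n", lightYears);  // ~11.9 ly, inside the 25 ly cut
}
```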

The Genesis sequence was not rendered in RenderMan, but its creative and technical success pushed the group forward.

Reyes Rendering Architecture: The RenderMan Spec 

The original specification was not for a product; in fact it does not even require that you render with the Reyes scanline approach or with ray tracing. It is about what you are rendering, not how you are rendering it.

“With the renderer we wanted to start a new approach to rendering,” says Catmull. “This is when Loren Carpenter, Rob Cook and I started thinking about how we would redo rendering, which included motion blur. The first motion blur work was work I had done at NY Tech; I was experimenting with it and the importance of it, and what you’d need to do. Rob Cook made a brilliant breakthrough in how to think about stochastic sampling to solve the problem. But this was all just one train of thought to make the renderer better and better. And then there was a discussion with Jim Clark about how we’d do a real time renderer, but by this time we were in a separate company. Reyes had started back when we were at Lucasfilm, but it wasn’t until we were a separate company that we worked with Silicon Graphics and others on a standard. Once the standard was worked out and agreed on, a date was set to announce it.”

The standard was ‘owned’ by one man, an important point for Catmull in the success of the specification. “We had Pat Hanrahan as the lead architect on the design of RenderMan, and Pat is a remarkable person. I set up the structure so Pat made all the final calls; at the same time we involved as many companies as we could, 19 if I recall, and of those 6 or 7 were really heavy participants. But that being said, we gave the complete authority to make the final choice to a single person. And I think that was part of the success: it has the integrity of architecture that comes from a single person, while listening to everyone else.”

Catmull also recalls that single vision approach being tested. “At the last minute the guy running graphics at SUN tried to run away with it! He wanted control of the standard moving forward to be put under a committee, so the night before we were to announce it, I got a call and he said SUN was not going to support it unless it was put under a committee, and the committee had to have various companies on it. And I said I would not do that, and if that meant they would not come in on the press conference then that was OK, but I was not going to yield on having a single architect.”

And so the next morning Ed Catmull turned up to the 1988 press conference where the standard would be announced, and “what happened was that they (SUN) turned up at the conference – as if nothing had happened – it was the weirdest thing,” recalls Catmull. “At that time SUN was very much an advocate of industry standards, and so it would not have been good for them to not support the standard.”

“We thought of it at the time as an analogue of PostScript,” adds Catmull.

“PostScript was the default standard for high quality desktop printing, and so people had written programs that generated it,” says Carpenter, “and people built printers that would print it. So we had the idea that modelers would generate RenderMan RIB files and renderers would read them, and it did not so much matter what you rendered it with, so long as it conformed with certain rules from the specification.”
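A RIB (RenderMan Interface Bytestream) file is exactly that kind of neutral interchange: a scene description that any compliant renderer can consume. For flavor, a minimal hand-written RIB scene might look like this (an illustrative sketch, not taken from any production file):

```
##RenderMan RIB
Display "simple.tif" "file" "rgba"   # image name, driver and channels
Format 640 480 1                     # resolution and pixel aspect ratio
Projection "perspective" "fov" 35
WorldBegin
  LightSource "distantlight" 1 "intensity" [1.0]
  Translate 0 0 5                    # move the sphere away from the camera
  Color [0.8 0.2 0.2]
  Surface "plastic"                  # the classic RSL surface shader
  Sphere 1 -1 1 360                  # radius, zmin, zmax, sweep in degrees
WorldEnd
```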

The RenderMan Spec includes many key ideas that had been born at Lucasfilm, such as Rob Cook’s shade trees. And while the early days of RenderMan at Pixar are tied up with the Reyes scanline renderer, in fact ray tracing was clearly imagined and expected from that first document; it was just not cost effective.

The original 1984 pool table image by Pixar.

In 1984, for example, Cook, Porter and Carpenter published the distributed ray tracing paper that was key to most of the work that would come in ray tracing. It contained the landmark 1984 pool table still, demonstrating distributed ray tracing effectively and photo-realistically rendering motion blur. That image was done by Tom Porter, still at Pixar today as VP of Production. “He set out to do a picture that would show we could do a number of things all at the same time,” says Carpenter. “One of the more brilliant features of the scheme was that we could do anti-aliasing, motion blur, depth of field, all in the same structure, and you could tailor quality over whichever dimension you wanted. He wanted to do a picture that showed this, so he came up with the pool table example. He did not do it with the Reyes algorithm, but with a ray tracer that he wrote. I was working in the building at the same time as he made that picture and we were all impressed of course.” (See our previous coverage of the pool table shot.)
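The scheme Carpenter describes is simple to sketch: every dimension you want to integrate, the pixel area for anti-aliasing, the lens for depth of field, the shutter interval for motion blur, gets a jittered random sample on each ray, and averaging the rays performs the integration. The loop below is our toy illustration of that idea; traceRay is a dummy stand-in for a real ray cast.

```cpp
#include <cstdio>
#include <random>

struct Color { float r, g, b; };

// Dummy stand-in for a real ray cast: a renderer would build a camera ray
// from the film point, lens point and time, and intersect it with the scene.
static Color traceRay(float px, float py, float lu, float lv, float t) {
    (void)lu; (void)lv;
    return {px * 0.001f, py * 0.001f, t};
}

// Distributed ray tracing in miniature: each sample jitters pixel position
// (anti-aliasing), lens position (depth of field) and time (motion blur)
// at once; the average over n samples integrates all three together.
Color renderPixel(int x, int y, int n, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    Color sum{0, 0, 0};
    for (int s = 0; s < n; ++s) {
        float px = x + u(rng), py = y + u(rng);  // point within the pixel
        float lu = u(rng),     lv = u(rng);      // point on the lens aperture
        float t  = u(rng);                       // time within the shutter
        Color c = traceRay(px, py, lu, lv, t);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return {sum.r / n, sum.g / n, sum.b / n};
}

int main() {
    std::mt19937 rng(1984);
    Color c = renderPixel(320, 240, 64, rng);
    std::printf("%f %f %f\n", c.r, c.g, c.b);
}
```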

The 1984 shot clearly shows ray tracing is not something Pixar came to late in the game. They literally defined most of the game, worked out how it could work, and led the charge on inventing it for quite some time. But Pixar as a company was focused on tech for the purpose of making actual films. “The original motivation to do the research that led to these technological discoveries was to expand the range of possibilities for storytellers,” Ed Catmull once said. As such, RenderMan was not primarily used as a full ray tracing renderer until relatively recently. One reason it was not more popular earlier inside Pixar was memory: the size of scene you could fit into RAM was very much a limiting factor, and RenderMan with the Reyes renderer could render scenes much more complex than a ray tracer could handle for many of the 25 years RenderMan has been used in production. Today, RenderMan has a raytrace hider, which allows RenderMan users to choose between scanline or ray tracing approaches to tackle their productions.

“One of the features that makes it so long lived,” says Carpenter, “is that it was originally conceived to be infinitely scalable. All the rendering systems and algorithms that existed when the Reyes algorithm was invented had memory limits: you basically had to read the world into memory and make a picture, or you had to do an out of order sort, where you read the data into the machine, juggled it, and wrote it back out to disk, and did that a whole lot of times, until eventually you’d have the data on the disk in such an order that you could read it in and make a picture. Nobody had a system that could read in a compact description of a very rich world and then make a picture from it. That is what Reyes can do. You can’t break it – there is no limit to the amount of data you can stuff into RenderMan and it will still be able to make a picture.”
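That scalability falls out of the shape of the Reyes pipeline described in the 1987 paper: primitives are read one at a time, split until small enough, diced into micropolygon grids, shaded, sampled and then discarded, so peak memory tracks the current primitive rather than the whole scene. The toy loop below is our paraphrase of that published architecture, not Pixar’s code; the area-based split test stands in for a real screen-space dicing test.

```cpp
#include <cstdio>
#include <deque>

// Toy stand-ins: a "primitive" is just a screen area; a grid is its diced form.
struct Primitive { double area; };
struct Grid      { double area; };

const double kDiceBudget = 1.0;  // largest primitive we will dice in one go

// The Reyes loop in outline: read one primitive at a time, split it until it
// is small enough, dice into a micropolygon grid, shade, sample, discard.
// Memory use is bounded by the current primitive, not the scene as a whole.
void renderScene(std::deque<Primitive> work) {
    while (!work.empty()) {
        Primitive p = work.front();
        work.pop_front();
        if (p.area > kDiceBudget) {
            work.push_back({p.area * 0.5});  // split: revisit the halves later
            work.push_back({p.area * 0.5});
        } else {
            Grid g{p.area};                  // dice into sub-pixel micropolygons
            // shade(g); sample(g);          // shading and hiding would go here
            std::printf("diced and shaded a grid of area %.3f\n", g.area);
        }                                    // grid is freed before the next one
    }
}

int main() { renderScene({ {8.0} }); }
```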

The original intent was to cover realtime and photorealistic rendering, points out Catmull. “So some of the structure of RenderMan, and the order in which you can make calls, was to enable it to handle realtime.” Silicon Graphics and Jim Clark, who personally knew Ed Catmull from the University of Utah, were early heavy participants in the formulation of RenderMan. “The fact is Silicon Graphics was the lead in real time computer graphics, and Pixar was the lead in photorealistic rendering,” says Catmull. “We were trying to cover both and think far into the future, and at that time we were fully aware of the implications of Moore’s Law and what it meant for software and the future of software. We had to think long term into the future. Now as it turned out, as we came close to coming out with it, Silicon Graphics had to pool all their engineering support into a new product that was coming out, and so what that did is that it pulled away some of their attention, but the design had already been put in place by that time.”

Pixar at SIGGRAPH 1991: image via 3dstreaming.com

The RenderMan Name (and Rendering Hardware)

What is also somewhat forgotten is that when Pixar launched the standard, no renderer or product met it, not even Pixar’s own Reyes renderer. “It took us some years to meet the requirements of the standard,” says Catmull. “It was roughly two years. Along the way, since rendering was so important, we felt we would need to create a hardware renderer to make pictures fast, as CPUs were too slow to make it practical to do a really high quality film image. So we started a project to create a (hardware) renderer. We had already built a laser scanner for film recording and the CHAP-based Pixar Image Computer.”

A Pixar Image Computer. (One now sits at the front of Pixar’s internal renderfarm in Emeryville, CA.)

RenderMan’s name comes from the fact that the team had a number of projects going on in the early 80s, and one of them was to build a small machine that could do image compositing, painting and so forth, says Carpenter. It was a 2D graphics engine, and ideally it would be the brains of a new form of computerized optical printer. “Officially when we started at Lucasfilm we thought we’d finish up with a direct replacement for an optical printer, where you scan in film, crunch on pixels and you scan out film, and you do all those ships in front of blue screen and people in front of green screen. So the idea of our machine was pretty much a pipeline where pictures come in and pictures come out, and this thing in the middle would duplicate an optical printer. At some point having it programmable made a lot of sense, especially as a lot of the hardware in there was common to image processing.”

This would then allow for painting, wire removal, matting etc., all things that required human interaction. “So instead of building a black box with two laser scanners at either end,” says Carpenter, “we built a more general, programmable engine that would attach to a desktop computer and we could control it – that became the Pixar Image Computer.”

In the late 80s, parallel processing held the promise of vast leaps in performance. At that time a CPU had been specifically built for parallel processing, called the Transputer. It was made by a company called Inmos, based in Chipping Sodbury near Bristol in the UK, which made the T414 and the T800, which importantly had floating point support. “We built something that was conceptually similar to the Image Computer – which was an integer machine – but this was floating point, so while the Image Computer was called the CHAP, this was called the FLAP, floating point array processor,” explains Catmull.

“We built several of them and had them running, so when we showed RenderMan running at conferences, it was running on this parallel processor with Transputers in it – 64 Transputers,” says Catmull. “It was built to be part of a pipeline, but as we developed it we realized we were competing with Moore’s Law on CPUs, and we probably couldn’t get far enough ahead of it to justify it, so we actually stopped the hardware effort. At that point it became pure software. But we actually made a bit of a mistake. We started off designing the architecture so it was effectively multi-threaded, but once we got off Transputers we stopped keeping that alive, and going through years of single processors we let most of the multi-threading erode. So when we finally went back to put multi-threading back into it (RenderMan) it was a bit like a restart. Had we kept a multi-processor version going it might have been easier to make the transition to multi-threading, but anyway now we are fully multi-threading again.”

One of the engineers on the Transputer project, Jeff Mock, was working on a smaller version of this first rendering machine. Unlike other CPUs of the day, the Transputer was designed to be connected to and run in parallel with other Transputers over reliable built-in serial links, bi-directional, on all four sides of each chip. It had dedicated links, and it was expected that by running say 4, 8 or 16 Transputers together one could get vastly faster computing, much as multi-core chips work today, except that each chip was a separate piece of silicon.

Mock’s plan was to make this smaller, faster new engine for doing TVC work and shorter projects. Loren Carpenter recalls: “He managed to get 16 of them on a board with some memory, using very innovative packaging techniques. One 1/16th of this board was a module that he was designing very carefully, so he could replicate it 16 times on the bigger board and have a lot of compute power. He finished the prototype with one Transputer on it, about 2 1/2 inches by 5 inches long, and he stuck it in his pocket and walked around the office showing people: ‘Hey, this is my rendering engine here.’ At that time there was the Walkman, the very earliest of the mobile music devices; anyway, he pulled out this module and said this is a RenderMan, as it renders and fits in my pocket… and that’s where the name came from.”

RenderMan’s first film

John Lasseter and Ed Catmull (copyright Jeff Heusser).

The first film to use RenderMan was the short film Tin Toy. In the scene where the tin toy hides from the baby under the couch, everyone at Pixar who could model had to create a toy; the project was very collaborative. “I made the fire hydrant,” says Carpenter. “I had a bunch of these little walking plastic toys on my desk, and the word came out: we need to populate under the bed and there is way too much for us to do, so everyone in the company gets to make something. So we all picked something and made something.” For example, Craig Good created an airplane and Ed Catmull created an elephant.

Tin Toy was the first 3D computer animation to win an Academy Award.  John Lasseter, the film’s director, commented later that, “The most important thing that Tin Toy did was plant the idea in our heads, the idea of toys being alive, and out of that grew Toy Story.”

Today

The latest release is RenderMan Pro Server 18 (RPS 18), which has just shipped. RPS 18 has geometric area lights with the option of ray tracing, plausible shading, energy preserving multi-bounce lighting and complex subsurface scattering, and is able to light vast geometric complexity with ever faster render times.

RPS 19 has already been announced and will soon enter alpha with even more capability, and it is possible Pixar will release a third version inside one calendar year.


We are currently in what could be thought of as a third phase of RenderMan, which really starts with RPS 16 and its introduction of a pure ray tracing mode. There was the initial RenderMan, followed by a period that added great realism through expanding the shaders, and today the trend is to physically based shading and lighting. But the move to realistic global illumination inside Pixar’s RenderMan spans the entire product life.


The 25 years of Pixar developments and films.

The “historical flow” of illumination-related technology inside RenderMan can be approximated as:

1987: RSL is introduced. The plastic shader is described (a minimal version is sketched after this timeline).

1987-2002: RSL evolves to include new techniques: deep shadows, “magic lights”, more elaborate proceduralism.

2002: Ray tracing is added: gather(), transmission() and indirectdiffuse() extend the collection of “built-in integrators” (e.g. diffuse(), specular()).

2002: Point-based subsurface scattering is added.

2002-present: Custom shaders implement area lights with ray traced shadows; these become more affordable due to Moore’s Law.

2005: Since indirectdiffuse was too slow for computers of the era, Pixar introduces point-based GI.

2005-present: A mixture of all these techniques is in play to achieve final frames.

2011: RSL 2: coshaders and shader objects are introduced to help manage incredibly complex shaders and inter-shader communication.

2011: Pure ray traced subsurface scattering is added.

2011: Introduction of the directlighting integrator. The details of arealight sampling are now fully managed by PRMan.

2011: Introduction of the raytrace hider. PRMan can be used in pure ray tracing mode.

2012: Introduction of geo arealights: extending the support for arealight shapes from simple to arbitrary (geolights).

Late 2013: RPS 19 will be released with support for separating the integrator from the material, and a streamlined, fast, pure C++ shading environment augmented by built-in advanced GI integration technology (bidirectional path tracing with vertex merging).
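For a flavor of what that 1987 entry refers to, here is the classic RSL plastic surface shader, essentially as described in the RenderMan Interface documentation of the era (reproduced from memory, so treat the parameter defaults as approximate):

```
surface plastic(float Ka = 1, Kd = 0.5, Ks = 0.5, roughness = 0.1;
                color specularcolor = 1)
{
    normal Nf = faceforward(normalize(N), I); /* shading normal, toward viewer */
    vector V = -normalize(I);                 /* direction back to the eye */
    Oi = Os;                                  /* pass through surface opacity */
    Ci = Os * (Cs * (Ka * ambient() + Kd * diffuse(Nf)) +
               specularcolor * Ks * specular(Nf, V, roughness));
}
```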

Version 19 will have the option of either uni-directional path tracing or bi-directional path tracing with vertex merging (a technique that has appeared only in the last couple of years). It will offer new Bxdf and pattern plugins, and will ship with a foundational set included. It will also offer very fast specular reflections and refractions.


Rendering from RPS 16 by Dylan Sisson, Pixar.

It is a remarkable effort from a remarkable team over a long period of time. What is also remarkable is that at no point in the history of RenderMan did the separate development team get disbanded and completely absorbed into production. The argument would be something like: ‘you are making films that are worth billions, and you have an insanely talented group of technical staff, experts in exactly what you are doing, why not move them into production?’ The pressure must have been enormous, and yet that same concept of protection Ed Catmull felt himself at Lucasfilm is provided to the RenderMan team, who, many people don’t know, were moved away from production and are actually based in Seattle, not Emeryville.

For a long time the RenderMan group was co-located with the animation production team, “but what we found is that production is a major magnet for people,” says Catmull. “And so when Dana wanted to move up to Seattle, the decision was to move the whole group up there, so we are trying to have the best of both worlds: you want to have the integration with the studio which has the real problems, and at the same time you want it set up so that the group has its own protection, so they can solve their own problems without losing everyone to the crisis du jour at the studio – and we have one on every film. So that is how we did it – we provided them with their protection, which comes from distance.”

 

Dana Batali at SIGGRAPH 2013’s RenderMan User Group (with Chris Ford holding a gift for Pat Hanrahan).

Yet the RenderMan team being a part of Pixar Animation Studios means they benefit from the real world fire of production problems. “You need the real problems,” notes Catmull. “How do you get driven by the real production problems without being overtaken by them, and consumed by them? It is a balancing act, and it is one Dana (Batali) has figured out. The way it works is that we have given Dana a free hand in maintaining the product and in how the product develops; it isn’t as if he comes to me and asks is it OK for us to put the following features in – he never asks. The charter is that he is meant to respond to what is needed.”

In many other companies a key component like the rendering engine would be proprietary, and certainly a company would not allow others to buy the full version; perhaps a cut down version, with key features withheld and never released. “There isn’t anybody here to say don’t give away Pixar’s secrets, that discussion doesn’t happen, but it did happen once,” Catmull stops to point out. “There was once a discussion about something to do with hair, it was a particular issue, and at that time a decision was made – well, we don’t want it to go out of Pixar yet – and what happened was that the customers were miffed. It is too small an industry for people to not find out. What it meant was that it would put them in a ‘second seat’ as far as technology, which is unacceptable for them as they need to be at the leading edge. So basically I realized that this was a mistake, and I apologized and said ‘we screwed up’. We set it up so that when we make changes they are in response to what people need, and they never ask me what is to go in there. Everyone is on an equal footing and I think that is a better way of doing it.”

Pixar has always been very generous in sharing its research and the development of RenderMan at SIGGRAPH with papers and talks. “There was no business reason to keep things to ourselves, partly because what we were doing was so out there,” says Carpenter. “There were a handful of people who could understand it or make use of it, so it didn’t bother us if we did something; we took some time to get experienced with it, of course, before we could write it up, so by the time it gets into the hands of other people you are a year down the road. So we figured it was fine to tell people what we were doing so long as we could stay a year ahead – that’s just my personal take on it – to help people. That is why we published the original distributed ray tracing paper: we were tired of seeing dirty pictures with aliasing and things. We were like ‘Hey guys, do this and it will all go away!’”

Over the past 25 years, RenderMan’s pioneering contributions in computer graphics have dramatically transformed the animation and visual effects industry, establishing an unparalleled body of knowledge, expertise and production insight into all aspects of rendering at the highest levels of cinematic realism. Pixar recently pointed out that “RenderMan today is responsible for generating more film pixels than every other solution combined and has become the standard for cinematic photorealism, used in 19 out of the last 21 Academy Award winners for Visual Effects.”


Image courtesy of Dylan Sisson, Pixar. The left image uses image based lighting, with geometric area lights used for the right image. The same shaders are used for both images. Environment lights are used in both images, with one bounce color bleeding. On the right, the neon tubes are emissive geometry.

Continuing to build on the advances embodied in the recent release of RenderMan 18.0, Pixar’s RenderMan team is currently developing new paradigms for rendering high-level visual effects, retaining all of the traditional strengths of RenderMan while establishing new levels of ease of use and accessibility.

Some of the newest features include beam diffusion for subsurface scattering, accelerated re-rendering, enhanced volume rendering, and path tracing with geometric area lights. “Geometry lights are awesome!” said Mitch Prater, Shading System Architect at Laika. “It’s a fantastic achievement with geometric area lights that have tons of great features, better sampling for faster and better results, interactive path tracing, and expanded controls and shading information access. It’s a huge step up from 17.”

“The all-round improvements in RenderMan Pro Server 18, such as memory usage, a new path tracing hider and improved subsurface scattering, have enabled us to greatly extend our physically-based lighting and shading workflow,” said Dan Seddon, Creative Director, Method Studios, New York. “The addition of geometric area lights has provided us with vastly improved sampling whilst simplifying their control with fewer parameters. More than ever, we can now render with the same setups regardless of whether we are rendering hard surfaces, organics, or volumes. All of this while having the familiar tricks and tools we often need to fall back on when the schedule crunch arrives.”

Ed Catmull.

The company remains focused on a blend of art and science. “An integrated culture is one in which you have gotten all these components,” says Catmull, “and they are blended. An integrated culture does not mean you have a technologist working with an artist; it means there are artists who are also technologists and there are technologists who are also artists, and frequently you can’t even draw the line. Drawing a line between the two groups becomes fruitless, and that’s what integration is. So when it comes to how we make pictures, we understood from the beginning: yes, the technology is changing, but we are telling stories, we are making art, we are trying to figure out how to represent things, we are trying to understand human perception, and they are all interesting facets of conveying information and conveying stories to people.”

Monsters University. Rendered in RenderMan by Pixar on their 25,000 core render farm.

