Side Effects Software – 25 years on

This year marks the 25th anniversary of Side Effects Software. The company's Houdini product rides high as the go-to tool for procedural and effects animation. We talk to Kim Davidson, co-founder, chief executive officer and president of Side Effects, and to two of Side Effects' staff honored at this year's Sci-Tech Oscars, Andrew Clinton and Mark Elendt, about the invention and integration of micro-voxels in the Mantra renderer. This work allowed, for the first time, efficient rendering of volumetric effects such as smoke and clouds in a micro-polygon imaging pipeline.

Listen to our interview with Kim Davidson below:


Prisms

Before there was Houdini there was Prisms, which was based on in-house code written for the computer graphics production company Omnibus. In 1987 Omnibus was the largest CG production house in the world, having acquired both Robert Abel and Associates and Digital Productions (which owned a CRAY supercomputer) the year before. But in late 1987 Omnibus got into trouble and fell into bankruptcy. Kim Davidson and Greg Hermanovic bought the rights to the Prisms 3D software source code from the receivers and incorporated Side Effects Software as a company (as you can hear in our exclusive interview with Kim Davidson).

Action 3D: The screen grabs are from PRISMS 7 running on Kim’s own SGI Octane

Prisms, with the Action 3D animation module at its core, was the first of the company's products. By the time the software was being phased out, Prisms was a suite of software products, all referred to as Prisms but really almost separate products in their own right. Interestingly, many of Houdini's features and products can be mapped back to that set of applications, which passed the torch to Houdini around 1996 (although there was a period when both Houdini and Prisms were sold).


To work in Prisms meant then what it still means today: extending the functionality of the products via procedural animation. In Prisms this was controlled and programmed by SAGE, which allowed procedural SOPs to produce, say, complex reflections on logos that were not otherwise directly available. But Prisms also included Mojo, a morphing package; Moca, a motion control interface; and ICE, the Interactive Compositing Environment. All this in addition to the tools around the core Action module, such as the channel editor JIVE and the material editor Lava.

Mojo morphing

From the earliest days, Side Effects threw good parties. As one original user, Harry Magiros (who went on to become an Australian reseller for Side Effects), recalls, “Greg would throw these amazing parties with interactive stuff happening – all live – which back then was unheard of; today of course you can do it with a PS3. But it was such a great family atmosphere at the first user groups. We all had Greg and Kim’s home phone numbers and they never minded us calling in the middle of the night if we had problems. While the other companies were spending a fortune on marketing, [Side Effects] got to know everyone personally and that was a great move at the time.”

ICE editor

Actually, the first ever user group was officially the User Group at SIGGRAPH 1989 in Boston. “It was in our hotel room,” recalls Davidson fondly. “Twelve people in attendance including Greg and I, and it included two people from Omnibus Japan and two from Computer FX.”

Click below for the original notes from that meeting in PDF format (circa 1989).

SIG_PRISMS_UG_1989

At the time of Prisms 1.0, the only people in the entertainment industry who could really afford complex, state-of-the-art 3D were those requiring station IDs and logo flips. Much of the early work of Side Effects was supporting broadcast design. Areas such as feature film visual effects and high-end simulations were not yet developed markets in 1987, when the Side Effects team put the first graphical user interface on their procedural modeling code.

Prisms was, to quote one original user, a considerably more mature ‘scientific’ product than many of the other products around it at that time. In the early days it was Softimage that was perhaps Prisms’ biggest rival.

Today

Today Side Effects, with Houdini, covers all the major areas of 3D production, including:

Modeling – Most geometry entities including polygons, NURBS/Bézier curves, patches and trims, and metaballs

Animation – Keyframed animation and raw channel manipulation (CHOPs), motion capture support

Particles – A huge aspect of any Houdini pipeline

Dynamics – Rigid Body Dynamics, Computational Fluid Dynamics, Wire (Curve) Dynamics, Cloth Simulation

Lighting – Node-based shader authoring, lighting and re-lighting in an IPR viewer

Rendering – Support for a variety of renderers, mainly Mantra, RenderMan, mental ray and various third-party apps

Volumetrics – Generation, population, manipulation and rendering of scalar and vector fields

Compositing – A full compositor for floating-point deep (layered) images

Pyro was initiated by a Dutch intern named Coen Closters. “He was just playing around with some things, it got picked up by the management and suddenly he developed this entire feature for Houdini – under supervision obviously, but still,” commented DNeg’s Nick Van Zutphen, a former Side Effects employee. “With me sort of the same thing happened – I took the internship because I wanted to learn DOPs, and this interest led to me doing a three-and-a-half month research project which resulted in a new FLIP variable fluids solver. It was built upon existing Houdini nodes. The result showed some painful shortcomings of the solver at that time, and resulted in Side Effects totally rebuilding their FLIP solver with support for very nice looking viscosity, which also includes the things that I couldn’t do because I don’t write C++.”

Kim Davidson adds, “The internship Nick spoke of started in 1996; since then we have had over 130 interns at both our LA and Toronto offices. The majority, like Nick, have gone on to work as Houdini FX TDs. All great people.”

Mantra

According to Nick Van Zutphen, who helped us compile this story, in 1988 a guy in a big wool sweater showed up at the Side Effects office, ‘sheepishly’ looking for a job. That person was Mark Elendt, who at the time was working for an insurance company. The insurance company part didn’t really impress Kim Davidson and Greg Hermanovic, but what they did notice were some photographs Elendt showed them, taken from the screen of an Amiga 1000 (with 512KB of RAM). They showed renders of a typical late-80s ray-traced sphere. “He had written a ray-tracer as a hobby,” says Van Zutphen. “This was the prototype of Mantra, which is Houdini’s native renderer.”

Mantra is still to this day the renderer packaged with Side Effects’ Houdini. It is very similar in many ways to Pixar’s RenderMan, a renderer that many Houdini customers also use. And it was Mantra – or more specifically the voxel rendering that it provides – that led to this year’s Sci-Tech Oscar success for Side Effects’ Andrew Clinton and Mark Elendt. They were awarded the Technical Achievement Award (Academy Certificate) for the invention and integration of micro-voxels in the Mantra software. This work allowed, for the first time, unified and efficient rendering of volumetric effects such as smoke and clouds, together with other computer graphics objects, in a micro-polygon imaging pipeline. Mark Elendt has been recognized twice before by the Academy – this is his third personal award; see the timeline below.

Interview with Sci-Tech winners

We spoke to Andrew Clinton (3D Graphics Programmer) and Mark Elendt (Senior Mathematician) about their Sci-Tech Award for the micro-voxels invention and integration.

Andrew Clinton at the Sci-Tech Oscars
Mark Elendt, Senior Mathematician, Side Effects

fxg: Congratulations on being recognized by the Academy. Certainly, this work on micro-voxels has been several years in the making for you guys.

Andrew Clinton: Yes, the work started in 2006 when one of our developers started working on a fluid simulator, and we needed a better way to render and look at the results that were coming out of that.

fxg: Can we go back to the beginning then – when did the renderer Mantra first appear and what is its architectural approach to the problem of rendering?

Mark Elendt: Mantra first appeared in the early 90s in the Prisms software, which was the precursor to Houdini. It’s been re-written probably three or four times since then. The architecture is a hybrid scanline/ray tracing engine with a programmable shading language. It supports various primitives like NURBS surfaces, subdivision surfaces and polygon meshes.

fxg: So how did micro-voxels fit into that?

Clinton: Before we implemented the micro-voxel feature we had the hybrid engine, with the scanline part supporting polygon surfaces and that kind of stuff, and the only way to render volumes was to write a shader to do your own custom ray marching, or to render sprites. So we wanted to bring some of the features of the micro-polygon renderer into the volume renderer, so you get the best of both worlds.

Elendt: So the micro-polygon renderer was almost a standard Reyes architecture, where you take a complex surface and split it up into simpler surfaces until you get something called a micro-polygon. Then you shade each micro-polygon and sample those. That worked fine for surfaces, but there was no way to render volumes using that kind of technology. So we wanted to extend the Reyes architecture to volumes and do more complicated amorphous primitives.
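
To make the split/dice/shade loop Elendt describes concrete, here is a minimal sketch of the Reyes idea in Python. Everything in it – the Patch primitive, the parametric screen-size metric and the toy shader – is our own illustration of the general technique, not Mantra’s actual code.

```python
# Minimal sketch of the Reyes split/dice/shade loop (illustrative only;
# the Patch primitive and its metrics are hypothetical, not Mantra's API).
from dataclasses import dataclass

@dataclass
class Patch:
    """A parametric patch over [u0, u1] x [v0, v1] of some surface."""
    u0: float
    u1: float
    v0: float
    v1: float

    def screen_size(self) -> float:
        # Stand-in metric: parametric area as a proxy for projected size.
        return (self.u1 - self.u0) * (self.v1 - self.v0)

    def split(self) -> list:
        # Split into four sub-patches at the parametric midpoints.
        um, vm = (self.u0 + self.u1) / 2, (self.v0 + self.v1) / 2
        return [Patch(self.u0, um, self.v0, vm), Patch(um, self.u1, self.v0, vm),
                Patch(self.u0, um, vm, self.v1), Patch(um, self.u1, vm, self.v1)]

def shade(u: float, v: float) -> tuple:
    # Toy shader: color derived from the parametric coordinates.
    return (u, v, 1.0 - u * v)

def render(patch: Patch, max_size: float = 0.01, dice_rate: int = 4):
    """Recursively split until small enough, then dice and shade."""
    if patch.screen_size() > max_size:
        for sub in patch.split():
            yield from render(sub, max_size, dice_rate)
        return
    # Dice: a dice_rate x dice_rate grid of shaded micro-polygons.
    du = (patch.u1 - patch.u0) / dice_rate
    dv = (patch.v1 - patch.v0) / dice_rate
    for i in range(dice_rate):
        for j in range(dice_rate):
            u, v = patch.u0 + i * du, patch.v0 + j * dv
            yield (u, v, shade(u, v))  # one shaded micro-polygon corner

micropolys = list(render(Patch(0.0, 1.0, 0.0, 1.0)))
print(f"{len(micropolys)} shaded micro-polygon samples")
```

The key property is that shading happens per micro-polygon after splitting, while image sampling is a separate pass over the shaded results – the de-coupling discussed below.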

Micro-voxel representation

fxg: What are the advantages to going to the micro-voxel approach?

Clinton: One of the biggest advantages of going to the micro-polygon framework is the fact that the shading and the sampling are two different things, and there are two different quality control knobs for each of those. So you can dial up the quality of your shading and you can keep the quality of the image sampling – the motion blur and all that stuff. Or you can say, ‘I want higher quality motion blur’, and you don’t have to pay any extra cost for shading. We can take advantage of the efficiency savings from that, and you get greater control. So you get better or faster motion blur, where you don’t have to trace as many rays per pixel to get good quality motion blur. It’s the same thing with depth of field. Also, we’re able to integrate it into our image processing pipeline, where you can generate deep images and deep shadows.
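
A small Python sketch of the de-coupling Clinton describes: shading cost is paid once at a chosen shading rate, while motion-blur quality is controlled independently by the number of stochastic pixel samples. The knob names and the toy sliding motion are our own stand-ins, not Mantra’s parameters.

```python
# Sketch of de-coupled shading vs. sampling (illustrative; 'shading_rate'
# and 'pixel_samples' are generic knob names, not Mantra's parameters).
import random

def shade_grid(shading_rate: int):
    """Shading cost scales with shading_rate**2, independent of sampling."""
    return [[(u / shading_rate, v / shading_rate, 0.5)
             for u in range(shading_rate)] for v in range(shading_rate)]

def sample_pixel(grid, pixel_samples: int):
    """Blur quality scales with pixel_samples; shading is only looked up."""
    n = len(grid)
    acc = [0.0, 0.0, 0.0]
    for _ in range(pixel_samples):
        t = random.random()                     # stochastic shutter-time sample
        u = (random.random() + 0.1 * t) % 1.0   # toy motion over the shutter
        v = random.random()
        color = grid[min(int(v * n), n - 1)][min(int(u * n), n - 1)]
        for c in range(3):
            acc[c] += color[c]
    return [c / pixel_samples for c in acc]

grid = shade_grid(shading_rate=8)             # pay the shading cost once...
print(sample_pixel(grid, pixel_samples=64))   # ...then dial blur quality freely
```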

fxg: It also extends to more subtle light interactions – in addition to motion blur. You can mix two volumetric primitives. Can you talk about that?

Clinton: With any standard renderer you can have surfaces that overlap, or surfaces that coincide, and the renderer can handle that quite well. In the case of volumes, that was a little difficult in the past, because you might have two ray march shaders that don’t know what the other shader is doing. So if you have two overlapping ray marching shaders but they’re computed independently, then it’s actually impossible to get the correct compositing between those volumetric primitives. What works really well with any kind of Reyes renderer or ray tracer is that all the ordering is done simultaneously for all primitives, so you get the right interleaving with volumetric surfaces.
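
The compositing problem Clinton describes can be shown in a few lines of Python. Two toy volumes each contribute depth-ordered samples along one ray; merging all samples by depth before ‘over’ compositing gives the correct mutual attenuation, while compositing each volume independently and adding the results does not. The sample values here are invented purely for illustration.

```python
# Why integrated volume sampling composites correctly (values invented).
# Each "volume" yields (depth, color, alpha) samples along one ray.

def composite_over(samples):
    """Front-to-back 'over' compositing of depth-sorted (z, color, alpha)."""
    color, transmittance = 0.0, 1.0
    for z, c, a in samples:
        color += transmittance * a * c
        transmittance *= 1.0 - a
    return color

vol_a = [(z, 1.0, 0.1) for z in (1.0, 3.0, 5.0)]   # samples from volume A
vol_b = [(z, 0.2, 0.2) for z in (2.0, 4.0, 6.0)]   # samples from volume B

merged = sorted(vol_a + vol_b, key=lambda s: s[0])    # interleave by depth
print(composite_over(merged))                         # correct: mutual attenuation
print(composite_over(vol_a) + composite_over(vol_b))  # wrong: marched independently
```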

fxg: When you mentioned deep images before – there are different uses of this term by different people. There’s the z-depth concept, and then there’s a more advanced deep compositing approach that means, for example, that you don’t have to re-render when placing other images into a rendered volume cloud image. Which are you talking about?

Andrew (L) and Mark at Side Effects’ office in Toronto.

Elendt: We have multi-plane rendering, but we also have what we call deep camera maps. This is a proprietary solution we’ve had for a couple of years now, where you can store multiple color channels at every depth in the image. You’ve got the full image all the way through the composite, so you can insert something in the middle or do full depth composited images. We’re really looking forward to Weta’s OpenEXR extensions, and hopefully we can get rid of our proprietary solution and move to open source.
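
As a rough illustration of the idea (not Side Effects’ actual deep camera map format), a deep pixel can be thought of as a depth-sorted list of (depth, color, alpha) samples that is only flattened at composite time, which is what makes it possible to insert an object mid-volume without re-rendering:

```python
# Illustrative deep pixel (not Side Effects' deep camera map format):
# multiple (depth, color, alpha) samples per pixel, flattened at composite time.
from bisect import insort

class DeepPixel:
    def __init__(self):
        self.samples = []            # kept sorted by depth

    def add(self, z: float, color: float, alpha: float):
        insort(self.samples, (z, color, alpha))

    def flatten(self) -> float:
        """Collapse to a flat pixel with front-to-back 'over' compositing."""
        out, transmittance = 0.0, 1.0
        for z, c, a in self.samples:
            out += transmittance * a * c
            transmittance *= 1.0 - a
        return out

px = DeepPixel()
for z in (1.0, 2.0, 3.0):   # a volume stored as three depth slices
    px.add(z, 0.8, 0.25)
px.add(1.5, 1.0, 1.0)       # insert an opaque object mid-volume: no re-render
print(px.flatten())
```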

fxg: This of course hits at the strength of Houdini and one of its most popular applications, which is doing volumetric effects animation.

Clinton: Well, one thing is that when you’re doing compositing of one of these deep images, normally the shading doesn’t happen again. So say you move an object forward or backward inside a volume: whatever shadows were computed on that volume would not be updated when you re-do the compositing. So you have to re-run the render on the volume to update the shadows. But it will properly composite the samples, so if you have volume in front of it and volume behind it, it will properly move forward or backward in z-space. If you move it closer to the viewer, there will be less volume between the viewer and, say, the sphere, so that will be properly accounted for when you re-do the composite.

fxg: So is that facilitated by your micro-voxel approach?

Clinton: Well, I would say it’s actually two separate things. You can generate a deep image using any of our rendering engines – one of them is the micro-voxel renderer and the other is an actual ray marcher. You can switch between these depending on what kind of quality or result you need. Often the micro-voxels will be the faster approach, but both of them boil down to the same output from the renderer. You could end up with a deep image, or you might want it flat in the end.

Elendt: The big thing is that, prior to micro-voxels, if you were doing volume shading using a shader, it was really difficult to get that information into deep shadow maps or deep maps at all. There was no facility for you to write samples into the deep maps. But having the volume rendering as an integrated primitive, and part of the rendering pipeline, makes that easy. It’s just like having motion blur or displacement shading on the volumes: because they’re part of the rendering pipeline, the renderer can do a lot more with the volumes than it could if you were going outside the rendering process and doing your volume shading in a shader.
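
For context, this is roughly the kind of custom ray march a volume shader had to perform along each ray before micro-voxels; the density field, step count and extinction coefficient below are toy stand-ins of our own. Note that a marcher like this returns only a final color and transmittance, which is why it could not easily feed per-sample data into deep maps the way an integrated volume primitive can.

```python
# Roughly what a custom volume ray march in a shader looks like
# (density field, step count and extinction coefficient are toy values).
import math

def density(z: float) -> float:
    """Toy smoke density along the ray."""
    return max(0.0, math.sin(z)) * 0.5

def ray_march(z_near=0.0, z_far=6.0, steps=100, sigma=1.0):
    """Accumulate emission and absorption in fixed steps along one ray."""
    dz = (z_far - z_near) / steps
    color, transmittance = 0.0, 1.0
    for i in range(steps):
        z = z_near + (i + 0.5) * dz
        absorb = math.exp(-sigma * density(z) * dz)  # Beer-Lambert attenuation
        color += transmittance * (1.0 - absorb)      # emission ~ local density
        transmittance *= absorb
    return color, transmittance  # only a final color: no per-depth samples kept

print(ray_march())
```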

Clinton: So you can think of the feature that we added as happening at two stages. One of them was integrating the volume primitives as a native part of the renderer, which lets you, say, produce these deep images or have the motion blur work correctly. And the second part is going beyond that and saying we can take the whole micro-polygon framework and make that work with the volume primitives as well. That gives the de-coupled shading advantage, where you get extra control and really fast motion blur.

fxg: Where do you think the renderer, Mantra, sits in the marketplace?

Clinton: One thing that places a renderer in the marketplace is how general it is. And that has to do with the fact that we’re based entirely on a programmable shading language, and also that you can control a lot of things with that language – not just surfaces; we actually have entire algorithms written in the language, like our physically based renderer.

Elendt: It’s like Houdini – it’s very flexible.

Clinton: We’re also striving towards a ‘one button’ approach as well. We want good quality built-in shaders and we want to simplify settings and have good defaults.

Elendt: In fact, micro-voxels is a prime example of this. It used to be very hard to render volumes and smoke and fire. And now it’s a lot easier.

fxg: What’s the development moving forward in terms of micro-voxels, or is that something that is now complete?

Clinton: I’d say the micro-voxels work is fairly complete, and actually has been for a few years. We’ve done a lot of work on interactive rendering – a few years ago we added an IPR, and it’s entirely based on ray tracing. And that’s one of the key things about our renderer – it’s a hybrid. So you can use the ray tracer to get fast previews, but then use the micro-voxels on the farm when you do your final renders, because it’s faster in cache mode. I think bringing some of these concepts back into ray tracing is a possibility for future work. Say, taking some of the good stuff about motion blur, de-coupling the shading, and bringing it back into the world of ray tracing.

fxg: Can you tell me about the physically based rendering – how fast are you going in terms of full GI-type solutions?

Clinton: Well, we have a physically based renderer and it’s mostly based on ray tracing, but you can actually change the imager to be micro-polygon. So there are two versions – a full ray-tracing one, and another which uses the micro-polygon renderer. Once you’ve done the full shading using the micro-polygons, it moves into ray tracing at that point for extra bounces. The way we’ve approached physically based rendering is to have a shader that does this. We have surface shaders which produce a BRDF – the description of how a surface reflects light. Then there’s a separate shader which actually does the entire ray tracing algorithm to do the lighting simulation in the scene.
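
A toy Python sketch of that split: one function plays the role of a surface shader and returns a BRDF (here an ideal Lambertian), while a separate function plays the role of the lighting shader and runs a Monte Carlo estimate against it. Both are our own illustrative stand-ins, not Mantra’s shaders.

```python
# Toy split between a surface shader returning a BRDF and a separate
# lighting shader doing the simulation (both are illustrative stand-ins).
import math
import random

def surface_shader():
    """Surface shader: returns a BRDF, here ideal Lambertian, albedo 0.7."""
    albedo = 0.7
    return lambda: albedo / math.pi       # constant BRDF value

def lighting_shader(brdf, light_radiance=3.0, samples=256):
    """Monte Carlo estimate of reflected radiance under a uniform environment."""
    total = 0.0
    for _ in range(samples):
        cos_theta = random.random()       # toy cosine for a random direction
        # estimator: L * f * cos(theta) / pdf, with uniform-hemisphere pdf 1/(2*pi)
        total += light_radiance * brdf() * cos_theta * 2.0 * math.pi
    return total / samples

# Expected result ~ light_radiance * albedo = 2.1 for a Lambertian surface.
print(lighting_shader(surface_shader()))
```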

fxg: For somebody who’s coming to Houdini and may be used to RenderMan, how easy is it to understand Mantra?

Elendt: Coming from RenderMan, it’s a lot easier to understand than something like mental ray. Mantra shares a lot of concepts with RenderMan. There’s the programmable shading language idea. RenderMan is probably a little more sophisticated with its shading language – they’ve done some fantastic stuff with the modern RSL. But a lot of the concepts are very similar, even to the point where they are both Reyes renderers.

fxg: Well congratulations again and thanks so much for talking with us.


– Above: watch the 2011 Houdini Demo Reel.

Timeline

1987

Side Effects forms to sell software (but does some production just to get up and running)

First to put a GUI on a procedural modeling system

1988

First to incorporate an expression language in the user interface

1989

First to add metaballs
First to have a polygon reduction tool

1992

First to include a particle system
First to have a morphing package (Mojo)

1993

First to have integrated motion capture (Moca)
First to include time frame sampling (Tima)
Houdini development work starts – ICE is the first product of the new approach

1995

First to integrate all components (modeling, animation, rendering, compositing) into one executable
First to support NURBS, polygons, and Béziers as “equal citizens”

1996 Dec

Houdini is released
Houdini wins CGW Innovation Award

1997

Contact is released – one of the first major motion pictures where Houdini is used for almost every effects shot. Digital Domain also uses Prisms widely on Titanic.

First Sci-Tech Oscar: Greg Hermanovic, Kim Davidson, Mark Elendt, Paul Breslin
For the development of the procedural modeling and animation components of the Prisms software package.
Through a procedural building-block process, the Prisms 3D animation software is used to simulate natural phenomena, and create particle effects, complex three-dimensional models, and motion for feature film visual effects.

1998

(Houdini 2.5)
Technical Achievement Award from the Academy, for pioneering procedural animation in Prisms
What Dreams May Come wins the Oscar for Best Visual Effects
First to put a GUI on a procedural particle system
First to introduce hierarchical splines
v2.5 of Houdini released and wins CGW Innovation Award for CHOPs

CHOPs would be the last major piece of code worked on by Hermanovic. He is now president of Derivative.

1999

Houdini ships on Linux – the first major 3D application to do so

2002

Houdini v5.0 released, plus

Houdini Halo, a stand-alone compositing and image editing application

Second Sci-Tech awarded: Mark Elendt, Paul H. Breslin, Greg Hermanovic, Kim Davidson
For their continued development of the procedural modeling and animation components of their Prisms program, as exemplified in the Houdini software package. Through a procedural building-block process, the Houdini software is used to simulate natural phenomena using particle effects and complex three-dimensional models.

2003 May

v6.0 released, with the new Digital Asset feature

2005 Oct.

v8.0 released

2007

v9.0 released, new interface and workflow. Beta released at SIGGRAPH

v9.5 – first Mac version

2009 April

v10.0 released, with the Pyro FX toolset and stereo support


2011

v11.0 released in July with Alembic support
Houdini v12 shown at SIGGRAPH to huge buzz

Key features include a new geometry engine built from the ground up, Pyro FX 2.0 for fast fire and smoke, an integrated Bullet solver, super fast instancing tools and an enhanced viewport experience

2012

Third Sci-Tech Oscar, for micro-voxels in the Mantra software
Houdini development is now focused strongly on performance and GPU improvements
