Autodesk Users Group 2005 Technology Demo

The Sunday before NAB started with two fxguide sessions — one for smoke and the other for flame. Immediately after the flame session, the “first” Autodesk User Group Meeting took place and showcased what had been under development over the last year at discreet. In this article, we’ll take a look at what was shown regarding the effects products…

As past attendees know, not all that was shown during the user group meeting will make it into the product line (remember seeing Mental Ray Action renders?), but it was still quite an interesting look at future technology. All in all, the Users Group meeting was a very strong showing from beginning to end.

[Image] Helie and one of his targets, Carl Bass, Chief Operating Officer of Autodesk

Product Designer Martin Helie, known in some circles as the “Mayor of Discreet”, took the helm to showcase product technology which is being considered for future effects releases. His overview was a humorous hit with the crowd as he poked fun at himself, the company, and Autodesk executives — pushing the comedic envelope. Even on the flame-news mailing list, user Chris Noellert (Frithiof Film to Video AB, Stockholm) commented that “Martin’s presentation on flame was hideously funny.”

It was certainly a nice change of pace, as his presentation was the last of the evening after a long session (and day). There was also a bit of apprehension among users (and maybe in Montreal) regarding the first public event under the new Autodesk name. Helie’s presentation helped show that much of the spirit of the old discreet is still very much alive in Montreal regardless of the new name.

Many of the improvements shown were in batch, which has become a focus of recent releases. Given new life and a breath of fresh air was the much-rumored (if by “rumored” I mean “actually shown as part of flame 9.0 on the Autodesk web site”) paint node. Subject to change, the paint node is slated for the next release.

[Image] The new batch nodes

Also new in that release are a Text node as well as Burn-In Letterbox, Burn-In Timecode, Average, Compound, and Mix nodes. The text node has full compatibility with the desktop module, which helps reduce the number of items that break clip history. The paint node, however, is an entirely new layer-based paint node. Helie showed the ability to feed a foreground and matte into the node as a layer and use a reveal brush to reveal imagery behind the foreground layer in context. The Text node (and module) now also provides options for higher levels of sampling — up to 64 samples. Action also provides the ability to render at 32 and 64 samples — this will especially help when using 3D text and geometry.
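
The 64-sample figure presumably refers to sub-pixel supersampling: render on a finer grid, then box-filter back down. As a rough illustration only (not discreet's renderer), here is a minimal Python sketch where render_fn is a hypothetical rasterizer for text or geometry:

```python
import numpy as np

def supersample(render_fn, width, height, samples=64):
    """Anti-alias by rendering on a finer grid and box-filtering back down.

    Illustrative sketch: 64 samples is read here as an 8x8 sub-pixel grid;
    render_fn is any hypothetical function that rasterizes at a requested size.
    """
    grid = int(np.sqrt(samples))                      # 64 samples -> 8x8 grid
    hires = render_fn(width * grid, height * grid).astype(np.float32)
    # Average each grid x grid block down to one output pixel.
    hires = hires.reshape(height, grid, width, grid, -1)
    return hires.mean(axis=(1, 3))
```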

A lot of work has gone into optimizing processing in the software. Helie showed some incredibly optimized gaussian blurs — a 100 to 200 pixel blur on an HD image was shown with next to no waiting. Granted, the software was running on a 4 x 700MHz Tezro, but even considering the machine it was running on, the blurs were strikingly fast. Also shown were new blur modes including an extremely fast radial blur. A camera defocus-like blur was also mentioned but not shown. The blur tool also had some color correction controls as well as matte controls for comping the blur back over the original image.
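
Helie didn't say how the blurs were optimized, but one classic reason very large gaussian blurs can stay interactive is separability: two cheap 1D passes instead of one expensive 2D convolution. A minimal numpy sketch of that idea, illustrative only and assuming a single-channel image:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    """Build a normalized 1D Gaussian kernel."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float32)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def separable_gaussian_blur(image, sigma):
    """Blur a 2D (single-channel) image with two 1D passes.

    A 2D kernel of width w costs O(w^2) per pixel; two 1D passes cost
    O(2w), which is why even 100-200 pixel radii can remain fast.
    """
    k = gaussian_kernel_1d(sigma)
    # Horizontal pass (each row), then vertical pass (each column).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred
```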

More interesting was the long overdue showing of region of interest processing in batch. This took several forms. The first was a standard rectangular region of interest in the 2D processing pipeline through batch nodes such as the colour warper, keyer, etc. Perhaps even more interesting was the explanation that batch would only render what was seen on screen. So if you were working on a 4K image but were zoomed in and could only see 10% of the image on the screen — only that 10% would be processed. This will make a tremendous difference in comparison to the current generation software which needs to process the entire image before updating the display. Finally, there was an intelligent auto-cropping for processing which used the matte signal to determine what area of the image to process.
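
The demo described the behaviour rather than the implementation, but the core idea is straightforward: derive the visible crop from the pan/zoom state and run the expensive processing only on that crop. A rough Python sketch, with all names purely illustrative:

```python
import numpy as np

def visible_roi(image_shape, view_origin, view_size, zoom):
    """Return the (y0, y1, x0, x1) crop of the source that is on screen.

    view_origin: top-left of the viewport in image coordinates (y, x).
    view_size:   viewport size in screen pixels (h, w).
    zoom:        screen pixels per image pixel.
    """
    h, w = image_shape[:2]
    y0 = max(0, int(view_origin[0]))
    x0 = max(0, int(view_origin[1]))
    y1 = min(h, y0 + int(np.ceil(view_size[0] / zoom)))
    x1 = min(w, x0 + int(np.ceil(view_size[1] / zoom)))
    return y0, y1, x0, x1

def process_visible(image, node, view_origin, view_size, zoom):
    """Run an expensive node on the visible crop only, then paste it back."""
    y0, y1, x0, x1 = visible_roi(image.shape, view_origin, view_size, zoom)
    out = image.copy()
    out[y0:y1, x0:x1] = node(image[y0:y1, x0:x1])
    return out
```

With a 4K source and a 10% zoom window, `node` only ever touches roughly a tenth of the pixels, which is where the interactivity gain comes from.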

Another improvement was in action…where the region of interest automatically and intelligently looked at 3D space and rendered only what would be seen through the camera viewport. So if you had rotated an HD image where only the top 1/4 of the image was visible in the camera view, the bottom 3/4 of the image would not be processed. It seemed quite intelligent in dealing with such situations. Taken a step further, if there is a batch node pipeline feeding this action layer — only the top 1/4 of the image would be processed *throughout* the pipeline.
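
Again speculating on the mechanics rather than describing discreet's code, one simple way to get that behaviour is to project a coarse grid of points on the source image through the camera and keep only the rows whose samples land inside the viewport. A hypothetical sketch, where project() stands in for the camera transform:

```python
import numpy as np

def source_rows_visible(image_h, project, viewport_w, viewport_h, samples=64):
    """Estimate which source rows of a 3D-transformed image reach the camera view.

    `project` is a hypothetical callable mapping a source (u, v) in [0, 1]^2
    to screen pixel coordinates, or None when the point is behind the camera.
    Rows with no visible sample can be skipped by every upstream node.
    """
    visible = np.zeros(samples, dtype=bool)
    for i, v in enumerate(np.linspace(0.0, 1.0, samples)):
        for u in np.linspace(0.0, 1.0, samples):
            p = project(u, v)
            if p is not None and 0 <= p[0] < viewport_w and 0 <= p[1] < viewport_h:
                visible[i] = True
                break
    rows = np.nonzero(visible)[0]
    if rows.size == 0:
        return None  # nothing visible: skip processing entirely
    y0 = int(rows[0] / samples * image_h)
    y1 = int((rows[-1] + 1) / samples * image_h)
    return y0, y1
```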

One problem with processing in batch is that a single-frame process must be re-rendered on each and every frame. This can be quite inefficient for high-resolution imagery and processor-intensive nodes such as sparks. The new text node — where there is a new 64-sample render option and many times the result could be a static single frame — also highlights this inefficiency. For fans of the “More” button, Helie showed the “Less” button, which essentially doesn’t reprocess the node after the last frame of a clip or animation is reached. All of these options will dramatically increase the speed and interactivity in batch if implemented.
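
A crude stand-in for that behaviour is a per-node cache keyed on the node's inputs and settings, so a static result (a rendered text layer, for instance) is computed once and then reused on every subsequent frame. A hypothetical Python sketch, not discreet's implementation:

```python
class StaticNodeCache:
    """Skip re-rendering a node whose inputs and settings did not change.

    Purely illustrative: render_fn is any expensive per-frame render callable,
    frame_inputs is assumed to be a numpy array, settings a plain dict.
    """
    def __init__(self, render_fn):
        self.render_fn = render_fn
        self._key = None
        self._cached = None

    def render(self, frame_inputs, settings):
        # Key on the raw pixel bytes plus the node parameters.
        key = (hash(frame_inputs.tobytes()), tuple(sorted(settings.items())))
        if key != self._key:
            self._cached = self.render_fn(frame_inputs, settings)
            self._key = key
        return self._cached  # unchanged frames come straight from the cache
```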

An implementation of versioning was shown in batch, which allowed for almost instant switching between versions of a batch setup via a popup menu. All versions are saved and track back to a “master” batch setup which can reference these versions. Hopefully there will be tools such as duplicating a version, locking a version, and processing multiple versions at once.

Up in Montreal there has been a continued (or renewed, depending on your point of view) emphasis on image processing technology and development. While a lack of focus on this has been strongly denied by Autodesk over the last several years, there has still been a perception among users that this focus had been lost… and in the end it can be the perception which drives the reality. From what was shown throughout the evening at the meeting, it seems as though that perception is going to change. Among the technology shown which is hitting flame/inferno was optical flow motion estimation. While the Furnace set of plugins has in the past provided similar functionality to what was shown at the users group meeting, it will obviously be an improvement to have the nodes built into the effects products and fully integrated as opposed to running as a spark.

The first application of this technology is expected to be in the next version’s batch timeline, which will include a new motion-estimation timewarp. This is thankfully not simply the placement of the current Motion node into the timeline, but new technology which discreet has been working on. The results are much better than what is seen in the current Motion node, which quite frankly wouldn’t be that hard to surpass. This technology is also making its way to the desktop timewarp module.
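
Details of the new timewarp weren't given, but the usual optical-flow approach is to warp the two neighbouring frames along fractions of their motion vectors and cross-fade the results. A simplified sketch, assuming per-pixel (dx, dy) flow fields are already available from a motion estimation pass:

```python
import numpy as np

def warp_forward(frame, flow, t):
    """Splat each pixel along a fraction t of its motion vector (nearest-neighbour)."""
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    yd = np.clip(np.round(ys + t * flow[..., 1]).astype(int), 0, h - 1)
    xd = np.clip(np.round(xs + t * flow[..., 0]).astype(int), 0, w - 1)
    out[yd, xd] = frame[ys, xs]
    return out

def timewarp(frame_a, frame_b, flow_ab, flow_ba, t):
    """Synthesize an in-between frame at time t in [0, 1] from its two neighbours.

    flow_ab / flow_ba are per-pixel (dx, dy) vectors from optical flow; the two
    warped frames are cross-faded, weighting the nearer source frame more heavily.
    """
    a_warped = warp_forward(frame_a, flow_ab, t)
    b_warped = warp_forward(frame_b, flow_ba, 1.0 - t)
    return (1.0 - t) * a_warped + t * b_warped
```

The quality difference versus a plain dissolve-based timewarp comes entirely from how good the estimated vectors are, which is presumably where the new work lies.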

In addition to timewarps, Helie also showed the ability to use motion estimation technology to derive motion vectors and post-apply motion blur to a scene. The demo scene was of a car driving on a winding road through a cornfield. By applying the motion vectors to the scene, motion blur was added. However, the car also had motion blur applied to it when it should have remained sharp, since the camera was following it. To get around this, the area of the car was sampled with the pen and the motion blur ended up being applied to the areas surrounding the car and generally not to the car itself. Also shown was a film damage node, where an area of the image with dust or a scratch could be selected and the software would interpolate information from the previous/next frames to remove the damage. Finally, there was a new option in the degrain node which uses motion estimation to intelligently remove grain while attempting to keep as much of the image detail as possible.
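
Conceptually, vector-based motion blur averages each pixel along its own motion vector, which is why a near-static subject like the tracked car picks up little blur while the background streaks. A simplified numpy sketch of that idea, not the actual node:

```python
import numpy as np

def vector_motion_blur(frame, flow, samples=8, shutter=0.5):
    """Post-apply motion blur by averaging along each pixel's motion vector.

    flow holds per-pixel (dx, dy) vectors from motion estimation; shutter
    scales how far along the vector the samples reach. Pixels with near-zero
    vectors stay effectively unblurred.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    acc = np.zeros_like(frame, dtype=np.float32)
    for s in np.linspace(-shutter / 2, shutter / 2, samples):
        sy = np.clip(np.round(ys + s * flow[..., 1]).astype(int), 0, h - 1)
        sx = np.clip(np.round(xs + s * flow[..., 0]).astype(int), 0, w - 1)
        acc += frame[sy, sx].astype(np.float32)
    return acc / samples
```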

With 16-bit and 32-bit float processing becoming the standard, Helie briefly showed a couple of batch nodes (Colour Warper and Conversion LUT) which were capable of rendering at 16-bit float. OpenEXR is expected to be supported in the effects products, so it is a natural extension that there would be the beginnings of higher bit depth processing and conversion tools. It was mentioned that this would not be something where a switch was flipped and magically all nodes supported 16-bit; instead, individual nodes would be rolled out little by little over time.
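
The practical payoff of per-node float support is headroom: values pushed above 1.0 by a grade survive a 16-bit float path, where a normalized integer path clips them. A tiny illustration (assuming a normalized 10-bit integer encoding for comparison):

```python
import numpy as np

# Super-white values (here 2.4) survive a 16-bit float pipeline but clip in a
# normalized 10-bit integer one -- the practical reason for per-node float support.
linear = np.array([0.18, 0.9, 2.4], dtype=np.float32)
as_half = linear.astype(np.float16)                    # 16-bit float keeps 2.4
as_10bit = np.clip(np.round(linear * 1023), 0, 1023)   # integer path clips at 1.0
print(as_half, as_10bit / 1023)
```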

The first implementation of a new batch node was shown — let’s call it 2D Particles. Helie showed the Combustion 2D particle generator running in batch and implied that all the presets could make it into the effects products. Thankfully, he also alluded to the fact that they were considering improving the 3D particles in action, which really haven’t gone through any revisions since they were first introduced in flame version 5. It would be great to see new features in this action subset — while not comparable to 3D application particles, they are certainly incredibly useful in many situations.

Speaking of Action, there were also several improvements to that module in the mix. The first shown was a transparent UI similar to fire…a technique first shown as a technology demo of the Strata and Mezzo product (aka Toxik) at NAB 2002. Helie showed that whenever you tapped in a value box to adjust the value, all but that value box would disappear from the screen and you would see only the imagery of the composite. For those working in HD or higher resolutions, this will come as an especially great improvement, since the UI would otherwise cover much of the composite while working. The UI would also disappear when moving items using axes in the schematic.

In addition to the UI improvement, a new type of freeform surface was also shown in action. It allows the artist to place points arbitrarily on an image surface and then move these points to distort the image in 3D space. Consider it almost a freeform deform surface…you are no longer locked to a fixed grid (though the UI may be hiding the fact that underneath it is simply the same old mesh). Hopefully in the final implementation there will also be the ability to lock points on the surface to keep distortions from occurring outside a certain range.
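
One simple way to build such a point-driven warp, purely as an illustration and not discreet's surface, is to blend the control-point displacements with inverse-distance weights and do a backward lookup into the source image:

```python
import numpy as np

def freeform_warp(image, src_pts, dst_pts, power=2.0):
    """Distort an image by moving a handful of arbitrary control points.

    src_pts / dst_pts are lists of (x, y) positions before and after the artist
    drags them. Each output pixel is pulled by an inverse-distance-weighted
    blend of the displacements, then sampled from the source with a backward
    lookup. A stand-in for the surface shown, not the product's method.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    disp_x = np.zeros((h, w), dtype=np.float32)
    disp_y = np.zeros((h, w), dtype=np.float32)
    weight_sum = np.zeros((h, w), dtype=np.float32)
    for (sx, sy), (dx, dy) in zip(src_pts, dst_pts):
        d2 = (xs - dx) ** 2 + (ys - dy) ** 2 + 1e-6
        wgt = 1.0 / d2 ** (power / 2.0)
        disp_x += wgt * (dx - sx)
        disp_y += wgt * (dy - sy)
        weight_sum += wgt
    # Backward map: where in the source does each output pixel come from?
    sample_x = np.clip(np.round(xs - disp_x / weight_sum).astype(int), 0, w - 1)
    sample_y = np.clip(np.round(ys - disp_y / weight_sum).astype(int), 0, h - 1)
    return image[sample_y, sample_x]
```

Locking points, as hoped for above, would simply mean adding control points whose destination equals their source so the surface stays pinned there.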

All in all, the system products effects session was very impressive from a potential new features standpoint. The ROI implementation seemed very well thought out and will provide an immediate speed boost to both processing and interactivity. It was also refreshing to see some new technology start to enter the product in the form of motion estimation vector analysis. It will be interesting to see what sees the light of day in future releases and what ends up in the discarded ideas trash pile.
