Total Chaos for V-Ray – What’s Next?

Today the Total Chaos conference starts in Sofia, Bulgaria, and with it comes the official launch of the new version of V-Ray for 3ds Max, called V-Ray Next. The conference is a gathering of the V-Ray faithful for two days of talks and hands-on workshops. The event covers architectural visualization, automotive design, games and visual effects.

People arrive as the doors open on Total Chaos 2018. Photo credit: Zap Andersson.

V-Ray Next

While many details of the next major version of V-Ray were revealed during the beta, here is a summary of the new release.

V-Ray Next is being released first for 3ds Max, the historical first love of V-Ray, with the Maya and other versions to follow during the year. Currently, the Maya version is about to go into beta, with the expectation that it will launch fully at SIGGRAPH 2018. All the other versions of V-Ray will follow towards the end of the year. "Believe it or not, our second biggest user base, after 3ds Max, is SketchUp," said Lon Grohs, Chief Creative Officer of Chaos Group. Downloads for 3ds Max will be available on Tuesday the 22nd.

One might ask: why V-Ray Next and not V-Ray 4? "The answer is pretty simple: V-Ray 4 for 3ds Max doesn't sound very good," explained Grohs. "And we also thought it was a good chance to highlight that a lot had changed, especially under the hood, to make it the next generation of renderer." For example, the new image-based environment light is 2x to 7x faster, and V-Ray Next also includes new GPU rendering (2x faster). "Our latest R&D has helped boost overall rendering performance by up to 25 percent, giving users a much faster baseline," said Vlado Koylazov, CTO of Chaos Group. "The speed gains from scene intelligence and a new GPU architecture make it even faster."

AI and Machine Learning (sort of)

Chaos' approach to AI is not strictly machine learning. "We don't give the data to multiple machines to learn, but we do give the V-Ray light cache capabilities to learn as much as possible from the scene. The first example of this was in our previous version, when we added adaptive lights and the system could learn where all the lights were in the scene and know which ones would be contributing, thus allowing you to render millions of lights really quickly," explains Grohs. Another example was the introduction of Variance-based Adaptive Sampling, which eliminated the need to set individual subdivisions on materials and lights, or even on camera effects such as depth of field.
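To give a feel for the general idea (this is a minimal Python sketch, not V-Ray's implementation, whose internals are not public), variance-based adaptive sampling keeps adding samples to a pixel until the noise estimate of its running average falls below a target, so the renderer automatically spends more rays on difficult pixels and fewer on easy ones. The function names and thresholds below are hypothetical.

```python
import random

def render_pixel_adaptive(sample_fn, noise_threshold=0.01,
                          min_samples=16, max_samples=4096):
    """Keep sampling one pixel until its estimated noise is low enough.

    sample_fn() stands in for tracing one camera ray and returning a
    luminance value; a real sampler works on whole buckets/passes and
    tracks many more statistics. Only the stopping criterion is shown.
    """
    total, total_sq, n = 0.0, 0.0, 0
    while n < max_samples:
        x = sample_fn()
        total += x
        total_sq += x * x
        n += 1
        if n >= min_samples:
            mean = total / n
            variance = max(total_sq / n - mean * mean, 0.0)
            # Standard error of the mean, relative to brightness, so dark
            # and bright pixels are judged by a comparable visual criterion.
            rel_error = (variance / n) ** 0.5 / (mean + 1e-6)
            if rel_error < noise_threshold:
                break  # this pixel is clean enough; spend rays elsewhere
    return total / n, n

# Toy usage: a noisy constant signal converges after relatively few samples.
value, samples_used = render_pixel_adaptive(lambda: 0.5 + random.uniform(-0.2, 0.2))
```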

Chaos has taken what it learnt from these examples and advanced to the next level with features such as the Adaptive Dome Light (ADL). In the same way that machine learning makes choices based on what is learned about a specific problem, V-Ray has been adopting learning techniques to analyze a scene as it renders. V-Ray Next builds on previous smart features with two new breakthroughs:

  • Adaptive Dome Lights and
  • Scene Intelligence.

Firstly, the new Adaptive Dome Light (ADL) automatically produces cleaner, more accurate image-based environment lighting, and does so much faster. The ADL also removes the need to set up skylight portals at windows and openings, making it especially helpful for architectural interiors. From Chaos' internal evaluations, Grohs comments that "for environment HDR lighting, dome lights are basically in 90% of the scenes that are done in production, so this is a tool we really wanted to optimize". This led to a 2x to 7x environment lighting speedup.

With ADL, there is no longer a need for light portals.

The system is streamlined as well as faster to render: there is no need to adjust light samples, and there are no user controls at all, as V-Ray does it all automatically.

The results are both faster to get right and faster to render, by a factor of 2x. Because the sampling process has been improved, the results are also cleaner and more realistic, although the technical details will only be released later this week at the Total Chaos conference. This feature is most effective on interiors and will be a big hit with architectural users.
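For context, a dome light is normally sampled in proportion to the brightness of its HDR environment map. The Python sketch below shows that baseline luminance-based importance sampling; by Chaos' description, the adaptive part of the ADL goes further and learns from the scene which portions of the dome actually contribute, but those details had not been published at the time, so everything here (names, toy data) is illustrative only.

```python
import bisect, random

def build_env_cdf(luminance_grid):
    """Flatten an HDR environment map's luminance into a discrete CDF so
    bright texels (sun, bright windows) are picked far more often."""
    weights = [max(l, 0.0) for row in luminance_grid for l in row]
    total = sum(weights) or 1.0
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    return cdf, weights, total

def sample_env(cdf, weights, total):
    """Pick a texel index with probability proportional to its luminance and
    return the pdf needed to keep the lighting estimate unbiased."""
    i = min(bisect.bisect_left(cdf, random.random()), len(cdf) - 1)
    pdf = (weights[i] / total) * len(weights)  # relative to uniform sampling
    return i, pdf

# Toy 2x3 "environment": the single bright texel receives most of the samples.
env = [[0.1, 0.1, 25.0],
       [0.1, 0.1, 0.1]]
cdf, w, tot = build_env_cdf(env)
picks = [sample_env(cdf, w, tot)[0] for _ in range(1000)]
```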


Render time comparison: 36 min reduced to 18 min.

Auto Camera settings

The second breakthrough is Scene Intelligence, which comes to the V-Ray Physical Camera via point-and-shoot-style timesavers such as Automatic Exposure, Automatic White Balance and a simplified UI. A correct exposure is now much more likely on the very first render.

The system analyzes the scene in much the same way a smartphone does when taking a picture. While you can still enter values manually, the automatic tools not only predict the right result but also reduce the number of rays needed by focusing compute power where it will actually be seen.
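As a rough illustration of the kind of analysis involved (V-Ray's actual heuristics are not published, so this is only a sketch of the textbook approach a phone camera uses), the Python snippet below derives a gray-world white balance and a log-average exposure from a quick pre-pass of pixel values. The function, its inputs and the 18% middle-gray target are assumptions for illustration.

```python
import math

def auto_camera_settings(pixels, target_gray=0.18):
    """Gray-world white balance plus log-average exposure.

    pixels: iterable of (r, g, b) linear values from a quick pre-pass.
    Returns per-channel white-balance gains and an exposure multiplier.
    """
    pixels = list(pixels)
    n = len(pixels)
    # White balance: scale each channel so its average matches green.
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    wb_gain = [avg[1] / (avg[c] + 1e-6) for c in range(3)]
    # Exposure: map the log-average luminance of the scene to middle gray.
    log_avg = math.exp(sum(math.log(1e-6 + 0.2126 * r + 0.7152 * g + 0.0722 * b)
                           for r, g, b in pixels) / n)
    exposure = target_gray / log_avg
    return wb_gain, exposure

gains, ev = auto_camera_settings([(0.9, 0.5, 0.3), (0.4, 0.2, 0.1), (0.2, 0.1, 0.05)])
```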

 

V-Ray Denoise

In V-Ray 3.x, Chaos introduced its denoising solution, which allows the user to render an image up to a certain point and then let V-Ray denoise it based on what it knows about the scene. This process runs very well on GPUs and takes only a few seconds per frame. But significantly, it is not real time.

Using this previously "learned" data is the basis of the new machine learning (ML) work. In V-Ray, ML can use the data gathered during the light cache pass to help solve a variety of rendering problems using a neural network. By feeding the neural network thousands of different noisy renders along with their clean final versions, it can learn how to solve the noise problem from image data alone and then apply that solution to new cases. This is how NVIDIA created its OptiX AI-accelerated denoiser, which has been added to the V-Ray pipeline: NVIDIA trained a neural network on thousands of images rendered in Iray, and that learning can now be applied to other ray-traced images.
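A minimal PyTorch sketch of that training idea is shown below: a small convolutional network is fit on pairs of noisy and clean images with a simple MSE loss. This is a toy stand-in for illustration only; NVIDIA's production denoiser is far larger, is trained on real rendered data rather than the random tensors used here, and (as the article notes below) also draws on auxiliary render elements.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Deliberately tiny stand-in for an image-denoising network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy):
        # Predict a residual correction rather than the full image.
        return noisy + self.net(noisy)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training loop over (noisy render, clean converged render) pairs;
# random tensors stand in for real image batches here.
for step in range(100):
    clean = torch.rand(4, 3, 64, 64)                # converged "ground truth" renders
    noisy = clean + 0.1 * torch.randn_like(clean)   # few-sample renders
    loss = loss_fn(model(noisy), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```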

Noisy partial render vs. real-time denoised result.

The OptiX solution can denoise a render in real time. The result is less 'perfectly' true than that of V-Ray's own GPU denoiser, but it is completely plausible and useful in almost all situations when rendering a single frame. The one issue users may find is with animation: since OptiX estimates the noise reduction per frame, it may be better to use V-Ray's denoiser with cross-frame denoising to avoid temporal artifacts. OptiX denoising also has the most impact when there is more noise, so it is very good during the early stages of rendering and great for tests and early lookdev. "With the combination of learned data and render elements, the OptiX denoiser can give you a very good prediction of the final image, even with only a few samples. While this type of denoising will work on GPUs or CPUs, the biggest benefit for the user is when working interactively," explained Christopher Nichols, Director of Chaos Group Labs.

Note: OptiX denoising refreshes after each V-Ray pass. V-Ray's progressive rendering is done in passes; in simple scenes a pass can take milliseconds, while in more complex scenes a pass can take minutes.

With the OptiX denoiser you’ll only see the denoised result after each pass.
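A hypothetical driver loop makes this concrete: the renderer accumulates one progressive pass at a time and only pushes a denoised image to the viewport after each pass, which is why heavy scenes with long passes refresh less often. None of the function names below are V-Ray's actual API; this is a Python sketch of the pattern only.

```python
import time

def progressive_render(render_one_pass, denoise, show,
                       max_passes=100, budget_s=60.0):
    """Accumulate samples pass by pass and refresh the (denoised) view
    only between passes, mirroring the behaviour described above."""
    accumulated = None
    start = time.time()
    for _ in range(max_passes):
        accumulated = render_one_pass(accumulated)  # add one more pass of samples
        show(denoise(accumulated))                  # denoised viewport refresh
        if time.time() - start > budget_s:
            break
    return accumulated
```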


Faster GPU Rendering

V-Ray RT is no longer a product; it has been replaced by V-Ray Next GPU. V-Ray RT was the company's interactive rendering engine, which could use both CPU and GPU hardware acceleration to preview shots quickly.

V-Ray Next marks the debut of Chaos Group's fast new V-Ray GPU rendering architecture, which effectively doubles the speed of production rendering and fully replaces V-Ray RT. Through a redesign of its kernel structure, V-Ray GPU offers both high speed and accuracy across interactive and production renders. The redesign has also prepared V-Ray for upcoming improvements in GPU hardware, allowing developers to incorporate new features without impacting performance. For instance, V-Ray GPU already tops previous generations of NVIDIA's Quadro cards, running 47 percent faster on the new Quadro GV100 (Volta GPU architecture, launched six weeks ago at GTC).

This means Chaos Group can harness NVIDIA's RTX technology, which is highly optimized for ray tracing on Volta GPUs. OptiX AI-accelerated denoising leverages RTX technology (a 5x improvement in AI denoising performance). Importantly for the future, the GV100 not only has the GPU functionality relevant to normal high-end graphics, it also has 640 NVIDIA Tensor Cores for dedicated AI work (compared to zero on the Pascal-based GP100 card).

The new V-Ray Next GPU has a new architecture with a dedicated UI, supporting new features such as the Adaptive Dome Light, auto exposure and white balance, environment fog and the new hair shader. "It shares most of the same features with V-Ray Next but it is a completely different render engine… We have a dedicated UI in which all the features and settings are supported and they work straight away in the GPU engine, the way you expect them to work," commented Blagovest Taskov, lead V-Ray GPU developer at Chaos Group. The previous version of RT shared the same UI as the normal V-Ray, and this could be problematic. Both versions of V-Ray ship as standard, and both can work in production rendering or IPR. But significantly, the results between production and IPR will always match, and there is a single set of settings for both.

"We went from a GPU mega kernel to a multi-kernel architecture," explains Grohs. "Now that we have done that, we have a 2x speed increase, but we are also able to introduce new features that won't slow down the code."
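To give a feel for what that change means (conceptually only; V-Ray's real kernels are CUDA and their internals are not public), the Python sketch below contrasts a megakernel-style loop, where one routine does everything per ray, with a wavefront/multi-kernel style, where each stage runs over a whole batch of rays. The latter keeps each launch doing one coherent kind of work, which is the usual reason given for why new features can be added without slowing the rest down.

```python
# Conceptual contrast only; all names here are illustrative.

def megakernel_style(rays, intersect, shade, max_bounces=4):
    """One big per-ray loop that does everything: on a GPU this becomes a
    single huge kernel whose divergent branches waste hardware threads."""
    results = []
    for ray in rays:
        color, r = 0.0, ray
        for _ in range(max_bounces):
            hit = intersect(r)
            if hit is None:
                break
            contribution, r = shade(hit)
            color += contribution
        results.append(color)
    return results

def wavefront_style(rays, intersect, shade, max_bounces=4):
    """Multi-kernel ("wavefront") layout: each stage processes a whole batch,
    so each launch does one uniform kind of work and a new feature becomes an
    extra stage instead of more branches inside one giant kernel."""
    results = [0.0] * len(rays)
    active = list(enumerate(rays))
    for _ in range(max_bounces):
        hits = [(i, intersect(r)) for i, r in active]            # stage 1: intersect batch
        hits = [(i, h) for i, h in hits if h is not None]
        shaded = [(i, shade(h)) for i, h in hits]                # stage 2: shade batch
        for i, (contribution, _) in shaded:
            results[i] += contribution
        active = [(i, nxt) for i, (_, nxt) in shaded]            # stage 3: continue bounced rays
    return results
```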

V-Ray also provides GPU-accelerated support for:

  • Environment fog, and
  • Volumetric effects


Hairy materials

The new Physical Hair Material produces more realistic-looking hair with accurate highlights. Building off a paper produced by Disney Research, Chaos Group has reduced look development down to a few sliders, giving artists full control over glossiness, softness, randomness and more. Easy-to-use melanin controls can quickly dial in any color from blonde to redhead.


Hair dye sliders have also been added, so characters can even "rock a green mohawk" when the brief allows. "It is more accurate in terms of the physiology of the hair. The highlights look better and it's much easier to get the coloration correct because you have melanin controls which you can change with a simple slider," points out Grohs.
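For a sense of why a single melanin slider can cover everything from blonde to black, published hair-fiber models (d'Eon et al. 2011, and the Disney hair paper this feature builds on) map melanin amounts to an RGB absorption coefficient. Whether V-Ray Next uses these exact constants, or this simplified linear slider mapping, is an assumption; the Python sketch below is purely illustrative.

```python
# Commonly cited absorption coefficients per unit pigment concentration (RGB).
EUMELANIN_SIGMA_A = (0.419, 0.697, 1.37)    # brown/black pigment
PHEOMELANIN_SIGMA_A = (0.187, 0.4, 1.05)    # red/yellow pigment

def hair_absorption(melanin, redness=0.0):
    """Map a 0..1 'melanin' slider (plus an optional redness mix) to an RGB
    absorption coefficient: near 0 gives blonde, near 1 gives black, and
    redness biases toward pheomelanin for auburn and redhead tones.
    The linear 0..1 mapping is a simplification for illustration."""
    eu = melanin * (1.0 - redness)
    pheo = melanin * redness
    return tuple(eu * e + pheo * p
                 for e, p in zip(EUMELANIN_SIGMA_A, PHEOMELANIN_SIGMA_A))

blonde = hair_absorption(0.1)
redhead = hair_absorption(0.4, redness=0.8)
black = hair_absorption(1.0)
```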

3ds Max Only

For architecture and interior design users, Chaos Group has implemented a Light Analysis feature of the kind often used by Autodesk's mental ray customers. The integrated lighting analysis tools allow users to map luminance levels throughout the scene, "so one can render things such as heat maps and relative greatest values, which is pretty important for architecture users," comments Grohs.
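As a rough illustration of the heat-map idea (V-Ray's actual color ramps, units and ranges are not described here, so everything below is an assumption), a lighting-analysis style false-color pass simply maps each pixel's luminance onto a color ramp:

```python
def luminance_to_heat_color(lum, lum_min=0.0, lum_max=2000.0):
    """Map a luminance value (e.g. cd/m^2) onto a simple blue->green->red ramp.
    The range and the ramp are illustrative; a real lighting-analysis tool
    lets the user choose the scale, units and legend."""
    t = max(0.0, min(1.0, (lum - lum_min) / (lum_max - lum_min)))
    if t < 0.5:                                   # blue -> green over the lower half
        return (0.0, 2.0 * t, 1.0 - 2.0 * t)
    return (2.0 * (t - 0.5), 1.0 - 2.0 * (t - 0.5), 0.0)  # green -> red

# Toy 1x3 "render" of luminance values turned into a heat-map row.
heat_row = [luminance_to_heat_color(lum) for lum in (50.0, 400.0, 1800.0)]
```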

Total Chaos (or Chaos-Con)

Total Chaos is being held in Sofia, Bulgaria, the home of Chaos Group, so as to allow as many of the company's programmers and staff to attend as possible. By holding it locally, the engineers can join the other 900 attendees. The entire event is being presented in English, and if this works, the company may consider moving it around Europe, starting with Prague next year.

At the event there are three tracks. The first is an Artist track with approximately 600-700 people. There is also a Developer track with up to 300 people. The third is a much smaller track called Craft, which includes the masterclasses. The plan is to post as many videos of the talks online as possible (with the exception of production talks containing studio material).

Phil Miller, formerly of NVIDIA but now at Chaos Group, and company founder Vladimir Koylazov (or Vlado to millions of dedicated V-Ray users).

Martin Enthed, IKEA Digital Lab
Alternative names for the conference!

Total Chaos, 360° spherical image (RICOH THETA).