The future · Autodesk/Discreet › Flame and Smoke forum
This topic has 20 replies, 8 voices, and was last updated 12 years, 5 months ago by rohit.
April 3, 2008 at 4:18 am · #206631 · rohit (Participant)
Because ffi on Linux on PC rocks! It's come a long way. I wonder what new Nvidia hardware / quad-core processors will be supported by September this year? At NAB we get much more OpenEXR support, especially in Action. Hopefully by September we get full-blown EXR support across the board…
April 3, 2008 at 1:32 pm · #206629 · Anonymous (Inactive)
The new machine for all advanced systems from NAB is going to be the HP xw8600 with a Quadro FX 5600G (FX 5600 SDI for Lustre only).
As of this moment my information is that they will still be on dual dual-core, but that may change soon.
BTW, 2008 SP2 is out for those who care…
Cheers
April 5, 2008 at 6:29 am · #206632 · rohit (Participant)
…and OpenEXR support in Action 🙂
May 20, 2008 at 5:18 pm · #206627 · Anonymous (Inactive)
It seems to me that the future of Flame, now that all new releases are on commodity hardware only, needs to be a completely rewritten render engine using the GPU. Martin showed a pretty nice GPU-based version of Action/Batch at the Labs show a few years back, with nice interactive features like real-time depth of field, for example.
To give you an example of the power of GPU-based processing, FilmLight (the home team) showed 37 layers of 4K grades running in real time with no rendering required at NAB. You could tweak the bottom-most primary grade, and all 37 layers of secondaries above it (shapes, keys, six-vectors, etc.) would update instantaneously in 32-bit float. Now, this was with 8 mid-range GPUs and a huge amount of storage, but Flame should have some similar capability in my mind, especially when it comes to float.
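[A rough back-of-envelope sketch of what that demo implies. The frame dimensions and frame rate below are assumptions (the post doesn't state them), but they show why this takes a rack of GPUs rather than a CPU:]

```python
# Assumed numbers, NOT FilmLight's published specs: a 4K film frame at
# 4096 x 3072, RGB, 32-bit float per channel, played at 24 fps.
frame_bytes = 4096 * 3072 * 3 * 4   # bytes per frame (~151 MB)
fps = 24
layers = 37

# Touching every one of the 37 layers once per frame at 24 fps:
throughput_gb_s = frame_bytes * layers * fps / 1e9
print(f"{frame_bytes / 1e6:.0f} MB/frame, "
      f"~{throughput_gb_s:.0f} GB/s across {layers} layers")
```

Roughly 134 GB/s of pixel traffic under these assumptions, which is far beyond any 2008-era CPU memory path but plausible when split across 8 GPUs working in local video memory.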
With the last 2009 and 2008 releases being focused far more on workflow and integration, one has to hope that the next extension of their release scheme is performance.
Best,
Chris
May 20, 2008 at 7:56 pm · #206625 · kuban (Participant)
I also hope that Autodesk now concentrates on GPU-based rendering. Most of Action/Batch can be implemented on the GPU. There is no need to render and then read back to system memory; that is old SGI-architecture logic, where readback was very fast, and for antialiasing IR had a hardware accumulation buffer. Now the bottleneck on Linux is PCIe readback: I haven't seen values higher than 1 GB/s from an Nvidia card, the last time we tested. So Action can render really quickly, but for multisampling it probably relies on the CPU.
If you ask me, multisampling should be implemented on the GPU, and only one resolved buffer should be read back into system memory. Source nodes in Action are also very slow, since they work the same way an Action node works in Batch: render on the GPU, read back to system memory, because the resulting layer (media) then has to be blurred with the CPU, keyed with the CPU, etc. If all the keyers, CC, and blurs were implemented on the GPU, there would be no need for readback within Action. Even in Batch, if the next node could read the result from texture memory, that buffer wouldn't have to be copied back to system memory. Ideally, only optical-flow nodes and current Sparks would be the cases where readback is done.
A mostly-GPU version of Flame might be 100 times faster on current hardware in many cases. Now we dream of a GPU Flame; when do you think this will happen?
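[The ~1 GB/s PCIe readback figure quoted above puts a hard ceiling on any pipeline that round-trips every node. A quick sketch, assuming a 2K half-float RGBA frame (the buffer format is an assumption, not Flame's documented internals):]

```python
# Assumed frame: 2K film res (2048 x 1556), RGBA, 16-bit half float.
frame_bytes = 2048 * 1556 * 4 * 2   # bytes per frame (~25.5 MB)
readback_rate = 1e9                 # ~1 GB/s PCIe readback, as quoted

readback_ms = frame_bytes / readback_rate * 1000  # cost of ONE readback

# If every GPU node in a tree reads back over PCIe, transfers alone
# cap the achievable frame rate, before any actual processing:
for nodes in (1, 5, 10):
    fps_cap = 1000 / (readback_ms * nodes)
    print(f"{nodes:2d} readbacks/frame -> at most {fps_cap:.1f} fps")
```

One readback already costs ~25 ms per 2K frame; ten nodes round-tripping pushes the ceiling below 4 fps, which is the argument for keeping intermediates in texture memory.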
May 21, 2008 at 6:07 pm · #206628 · loops (Participant)
Agreed. Quadro GPUs have a hardware accumulation buffer, so motion blur and DOF could be done in hardware already 🙂
They also have hardware antialiasing similar to the magic “no-hit” antialiasing on the old Onyx graphics. And it’s much easier to render straight into a texture buffer than it used to be.
All this is easier talked about than implemented I’m sure, but one day! 🙂
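[The accumulation-buffer trick loops mentions is easy to model: render the scene several times at jittered sub-frame times and average the results. This toy 1-D sketch does the averaging on the CPU purely for illustration; glAccum-style hardware does the same sum in the accumulation buffer without any readback.]

```python
def render(width, obj_pos):
    """One sub-frame 'render': a white pixel wherever the object sits."""
    return [1.0 if px == obj_pos else 0.0 for px in range(width)]

def motion_blur(width, start, end, samples):
    """Accumulate `samples` sub-frame renders and average them."""
    accum = [0.0] * width
    for i in range(samples):
        t = i / (samples - 1)                    # sub-frame time in [0, 1]
        pos = round(start + t * (end - start))   # object position at time t
        accum = [a + f for a, f in zip(accum, render(width, pos))]
    return [a / samples for a in accum]          # average = blurred frame

blurred = motion_blur(width=8, start=2, end=5, samples=4)
print(blurred)  # -> [0.0, 0.0, 0.25, 0.25, 0.25, 0.25, 0.0, 0.0]
```

The object's energy ends up smeared at quarter strength across the four pixels it crossed; depth of field works the same way with jittered camera positions instead of jittered time.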
