Z. Nagin Cox, NASA JPL Spacecraft Operations Engineer, shared stories of NASA missions as the keynote speaker of this year's SIGGRAPH 2016. She showed a range of projects from JPL, joking that the Caltech JPL scientific community was "not quite as bad as the Caltech scientists depicted on The Big Bang Theory". She went on to show not only the importance of the visualisation work done at NASA over the years, going back to Jim Blinn's work around the time of the birth of SIGGRAPH, but also how new technologies such as VR are being used by NASA today. For example, when Cox returned from her long 4th of July weekend, she put on a NASA VR headset at the start of her shift to view where Curiosity was on Mars, "to see where the robot was" and where it had been since she was last in the office.
"I cannot tell you the impact that CG and VFX have had on what we do at NASA JPL," she explained, pointing out the benefits NASA has gained from the SIGGRAPH and film community. This is not just in terms of tools but also in terms of inspiring a generation: "everyone I work with grew up with either Star Wars or Star Trek as inspiration," she added.
At right is a 'selfie' of NASA's Curiosity Mars rover that Cox discussed at length.
This image shows the vehicle at the "Big Sky" site, where its drill collected the mission's fifth taste of Mount Sharp. The scene combines dozens of images taken during the 1,126th Martian day, or sol, of Curiosity's work on Mars (Oct. 6, 2015, PDT), by the Mars Hand Lens Imager (MAHLI) camera at the end of the rover's robotic arm.
Cox explained that the rover's goals include investigating the Martian climate and geology, and assessing whether the selected field site inside Gale Crater has ever offered environmental conditions favorable for microbial life, including investigating the role of water. In light of current global climate change debates, she reminded the audience that Mars did indeed have water some 3 billion years ago, and that such research is vital in reminding us that planetary climates can change, dramatically.
Massive released Massive for Max, which fully integrates with 3ds Max (including when Max is used for farm rendering), as well as a new version of their Massive Parts.
The new version of Massive released at SIGGRAPH is 8.1, and it is the version to use if you are new to Massive or want to use Parts. The new Parts dramatically reduce the complexity of creating Agents in Massive.
“The Parts Library is perhaps the single most significant improvement in Massive since it was first released,” said Massive creator and CEO Stephen Regelous. “Small studios will find this lowers the barrier to generating sophisticated crowd shots, while larger studios will find the Parts Library useful for creating crowd assets that can be shared across multiple Agents as well as multiple projects.”
Last year, Massive unveiled the Parts Feature for Massive 8.0, which introduced the ability to rapidly assemble Agent brains by dropping in Parts containing set behaviors. While it radically improved the process of creating Agents, creating custom brain parts required considerable time and expertise. Now with the new Parts Library, artists can create intricate behaviors simply by mixing and matching from 40 pre-assembled Parts. These Parts include options like collision avoidance, quadruped terrain adaptation, formations, procedural prop animation and lane following to name just a few. More Parts will be added to the library in the near future at no additional cost.
Massive Parts are entirely customizable without any coding, completely avoiding the pitfalls of the black box approach used by other crowd simulation systems. Artists can select the Parts they want to use from the library, then simply drag and drop them into an Agent’s brain. Parts automatically “know” how to connect to other parts, and are designed to load any additional Parts on which they are dependent. The entire process takes minutes, even seconds.
The Parts Library also allows artists to build Agents in a way that keeps their brains separate from the Agents themselves. Sections of created brains can be shared instantly across multiple Agents; a feature large VFX studios have requested that was not technologically possible until now. Because the parts are saved separately from the Agents, the assembled brains can be reused and shared across multiple characters and multiple projects. The Parts can even be shared across multiple studios as they are not subject to licensing.
The Parts Library is available now to all Massive Prime users who are currently on support and maintenance. Massive Prime 8.1 is priced at USD $16,000, including the first year of upgrades and support.
Birds of a Feather: State of Cloud Rendering for VFX
Troy Brooks, Moderator, VP Technology, DHX Media
Todd Prives, Google (which acquired Zync)
Kevin Baillie, Co-Founder Atomic Fiction & CEO Conductor IO
Phil Peterson, Senior Architect at Shotgun
Brennan Chapman, Lead Pipeline TD at Moonbot Studios
Gerald Tiu, Senior Technical Evangelist – M&E & Strategic Architect at Microsoft
Birds of a Feather sessions are more casual than papers, with a lot of interactivity and discussion between panelists. This session started with what was mentioned as the first area panelists are commonly asked about: security. In the beginning the studios said no; cloud companies then worked hard with the MPAA and the studios to address concerns and get certification.
Barriers to entry: software licensing is one thing companies like Google / Zync address. It is hard to scale up if licenses are not part of the setup. To be efficient and price sensitive, the needed software has to switch to metered licensing: pay for what you use. Users need to pressure vendors to support this approach and price it fairly.
What has been the growth since last year's panel? Microsoft said exponential, hard to keep up with; Microsoft was not even on the panel last year. Google agreed, reporting they are seeing more tier-one major VFX facilities now using cloud rendering regularly. Users span all areas: features, TV, game cinematics and commercials, as well as other industries like oil and gas, and genomics.
What barriers remain on the facility side? Forecasting when and what they will need, and getting initial data up to the cloud. Pipelines can be set up to treat the cloud as a separate system or as part of your network. A path on disk is a dated way to think; this was followed up in the Q&A: how do we escape the "tyranny of paths"? Shotgun uses object storage for all images. Facilities need software vendor help here; again, talk with software vendors about your needs.
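The object-storage idea can be illustrated with a small sketch: a frame is identified by a location-independent key, and each site resolves that key to its own backend. The key scheme, resolver mapping, and all paths below are hypothetical illustrations, not Shotgun's (or any vendor's) actual storage model.

```python
# Sketch of path-independent asset addressing: one logical key,
# resolved differently on premise vs in the cloud. All names and
# paths here are invented for illustration.

def make_key(show, shot, layer, frame):
    """Build a location-independent object key for a rendered frame."""
    return f"{show}/{shot}/{layer}/{frame:04d}.exr"

# Each site maps the same key onto its own storage backend.
RESOLVERS = {
    "on_prem": lambda key: f"/mnt/projects/{key}",
    "cloud":   lambda key: f"https://storage.example.com/renders/{key}",
}

def resolve(key, site):
    """Turn a logical key into a concrete location for the given site."""
    return RESOLVERS[site](key)

key = make_key("mars", "sh010", "beauty", 101)
print(resolve(key, "on_prem"))  # /mnt/projects/mars/sh010/beauty/0101.exr
print(resolve(key, "cloud"))
```

The point is that the pipeline database stores only the key; whether the bytes live on an NFS mount or in an object store becomes a per-site resolution detail rather than something baked into every tool.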
Why use 100 nodes vs 2,000 nodes? It comes down to cost analysis; the extreme would be one node per frame, which could be costly. Scaling is key: it allows facilities to weather slow times and have great capacity when busy.
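The trade-off behind that question can be sketched with back-of-the-envelope arithmetic: billed node-hours are roughly constant, so more nodes mainly buy turnaround time, while per-node overhead (spin-up, licensing) makes the node-per-frame extreme cost more. Every number below (frame count, render time, hourly rate, startup overhead) is a hypothetical assumption, not a figure from the panel.

```python
# Illustrative cost vs turnaround estimate for choosing a cloud node
# count. All inputs are invented for illustration.
import math

def farm_estimate(frames, mins_per_frame, nodes,
                  rate_per_node_hour=0.50, startup_mins=5):
    """Return (wall_clock_hours, total_cost_usd) for a simple render job."""
    frames_per_node = math.ceil(frames / nodes)
    wall_mins = startup_mins + frames_per_node * mins_per_frame
    billed_node_hours = nodes * wall_mins / 60.0
    return wall_mins / 60.0, billed_node_hours * rate_per_node_hour

for nodes in (100, 2000):
    hours, cost = farm_estimate(frames=2000, mins_per_frame=30, nodes=nodes)
    print(f"{nodes:5d} nodes: {hours:6.1f} h wall clock, ${cost:,.2f}")
```

With these made-up numbers, 2,000 nodes (one per frame) finishes in about 35 minutes versus roughly 10 hours on 100 nodes, but costs more because the fixed startup overhead is paid 2,000 times, which is exactly the cost-analysis point the panel raised.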
A big scale-up requires good communication with cloud providers. "Unlimited" may not mean truly unlimited, so planning is key. This led to a comment that in the future a facility may need to access multiple vendors seamlessly. An audience member pointed out that if all of VFX were rendered on Google's cloud, it would be a very small data set compared to what they are used to. The scalability tools are there.
Service providers need to provide tools to help properly estimate and manage costs. An on-premise render farm requires vastly different thinking versus the cloud.
One facility reported they now do two review sessions a day instead of one as they are no longer waiting on renders.
There was a moment of random personal thoughts about using cloud services to let people work from home and stop the nomadic lifestyle that comes from chasing subsidies. [Jeff Heusser note: I don't think this would satisfy subsidy requirements, as most are strict about where the artist is physically located. In fact I have been wondering: if an artist is physically in Vancouver but the computer they are working on is in Culver City, does this break a subsidy requirement?]
There was discussion about the benefits and dangers of unlimited resources. As with any resource, there is a cost so unlimited still requires management and dealing in financial reality. Using the word “unlimited” with a client needs qualification that there are cost implications.
Cloud effectiveness is only as good as your ability to get material in and out. Is using standards like HTTP, UDP, Aspera, etc. the best way, and what is the next thing? Aspera is in wide use, but expensive. A studio in the cloud is the eventual solution: transfer once, possibly even straight from the shoot, and move between other vendors seamlessly and fast. Renders, workstations… all in the cloud.
Where will we be 5 years from now? "Studio in the cloud" was the most consistent answer. "Hope we'll all be talking about none of this stuff, just doing great work with unlimited resources." Of course this led to the question, "What will we complain about?"
ARNOLD for MAYA
- Maya LookDevkit shaders are supported
- Maya Color Management is supported with MtoA
- Extended support for Viewport 2.0
- Support for Maya UV-tile tokens
- Support for more materials (phong, anisotropic, ramp, mountain, water, crater, granite, leather, rock)
- Support for stereo / multicam renders
- Fixed support for import/export AOV overrides
- Fixed a crash in MtoA when using PolyTools
MAYA 2017 NEW FEATURES
Maya 2017 comes with several new features related to rendering. MtoA 1.3.0 adds Arnold support for the following:
- New Interactive sequence render mode (menu Render -> Render Sequence)
- Light Editor
- Scene assembly render settings
- AOV callbacks
- Render Setup Node templates