New Cooke S7/i T2 Lenses

There are many new lenses at NAB, but the ones that caught our eye were from Cooke: the new S7/i primes, bringing the Cooke look to the new large format sensors.

The new Cooke S7/i lenses cover an image circle up to 46.31 mm in diameter. "That’s every format from RED 8K VV 21.60×40.96 mm (46.31 diagonal) to Full Frame 24×36 mm (43.3 mm diagonal) and anything on down," commented Film and Digital Times, reporting on the news.
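
For anyone checking the numbers, those image circle figures are simply the sensor diagonals: the square root of the width squared plus the height squared. A quick back-of-the-envelope check (a throwaway Python snippet of our own, not anything from Cooke or RED):

    import math

    def sensor_diagonal(width_mm, height_mm):
        # The image circle a lens must cover is at least the sensor diagonal.
        return math.hypot(width_mm, height_mm)

    print(round(sensor_diagonal(40.96, 21.60), 2))  # RED 8K VV  -> 46.31 mm
    print(round(sensor_diagonal(36.0, 24.0), 2))    # Full Frame -> 43.27 mm (approx. 43.3)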

Full Frame and Large Format are gathering momentum. These lenses are future-proof because they work equally well on Super35, current Full Frame and VV cameras, and the inevitable new cameras to come. The new Cooke S7/i lenses are extremely fast and focus incredibly close. Maximum aperture is T2. Minimum object distance is around 4 to 6 inches in front of the wider focal lengths. Front diameters are nicely compact at 110 mm. Cooke /i Technology connections are in the expected places. Focus uses the same familiar Cooke cam-drive mechanism that feels as smooth and silky as the image looks. The main difference is large format coverage. Available beginning June 2017 in focal lengths of 18, 25, 32, 40, 50, 75, 100 and 135 mm. Price will be between S4/i and S5 (!!!).

The demo footage below was shot on Cooke Optics S7/i Full Frame prime lenses. The footage was shot on RVZ’s RED Weapon 8K VV (courtesy of Samuel Renollet) by Michael Lindsay and Brendan McGinty.

Lindsay posted this on RedUser:

"Over the last few years we have been testing different lenses as options for larger format filming. We firmly believe that with bigger sensors, and the more effortless presentation of detail that they allow, our preferences in glass will subtly change. We already felt our priorities change in the move from 4k Red1 to 6k Weapon..Because of my hunch/theories about big format optics (and the inevitability of big chip next generation cameras) we annoyed Cooke, amongst others, with irksome questions about where are their large format lenses … Serendipity stepped in and just after the BSC show I got the opportunity to have a look at an initial 1st build set of Cooke s7/i. These lenses cover Vista Vision+ and are obvious optical partners for Vista Red chips either in a DXL or Weapon body. Also we tested them on a Phantom 65 to look at coverage beyond 40.96mm x 21.60.... so I know they go a bigger than Vista… I believe they will also easily cover the Alexa 65 in the smaller 16:9 mode... but not in the Full 2.11 OpenGate."

He added:

"All the lenses are T2 and wide open they are just beautiful. We shot with the full chip at all times but were often desperate to crop to 2.39 as they feel really great in that aspect ratio... also they focus close and since you can switch to S35 mode the 75mm which is very close focusing anyway becomes like a macro. "

 

Light Field Lab

One of the things about NAB is the chance to talk to people early in the product life cycle, and one such small company that really got our attention was Light Field Lab. It is too early to know how, or even if, their products will deliver the goods, but the promise is too good to pass up. The company is new, formed by a group of ex-Lytro engineers in San Jose. It is co-founded by CEO Jon Karafin, former Head of Light Field Video at Lytro. Karafin previously served as VP of Production Technology at RealD and as Director of Production, Technology, and Operations at Digital Domain Media Group. He is starting the company along with CTO Brendan Bevensee, formerly Lead Engineer at Lytro, where he guided the design and build of Lytro Cinema, and VP of Engineering Ed Ibe, who previously worked at Lytro as Lead Hardware Engineer on the Lytro Cinema camera, overseeing mechanical, industrial, and electrical design.

The interesting 'promise' of Light Field Lab (LFL) is that they aim to build a new, independent solution for advancing the display, delivery, and interactivity of light field content. Light Field Lab aspires to bring real-world holographic experiences to market. Already in development, their displays aim to show photo-realistic objects to an audience, as if they are floating in space, all without the aid of eyewear.

Their plan is to make powerful integrated processing and imaging solutions that will allow these immersive visuals to be delivered over commercial network speeds. Future releases of the technology promise to allow users to touch and interact with holographic objects.

Jon Karafin will present a first-look introduction to Light Field Lab at NAB during two light field technology sessions that seem well worth checking out:

  • Saturday, April 22nd; 12:40 PM; RM S222-S223; “Will Light Field Change the Way of Content Production?”
  • Tuesday, April 25th; 10:00 AM; RM S222-S223; “Next Generation Image Making”

It is impossible to say yet if LFL will pull off the dream of 'holographic style', glasses-free displays. It is the Holy Grail of next generation technology, and so often seen in films that it is easy to think such a thing already exists. One good sign is that perhaps the leading company in light field generation seems impressed: OTOY has been advancing light field capture and VR displays for some time (as we have covered here at fxguide before).

"We have arrived at a time where technology breakthroughs bring us closer than ever to what we imagined to be science fiction just years before. One of the main reasons I started OTOY was to help make the Star Trek Holodeck a practical reality in my lifetime. This has been a dream shared by many of us in the industry and around the world. Jon and the team have the expertise, ability and vision to bring this experience to consumers through miniaturization of inexpensive light field display hardware," said Jules Urbach, CEO of OTOY Inc.

If it were not for their track record and the comments of people such as Urbach, LFL could easily be dismissed as hype. But this team has actually brought advanced, novel technical concepts from the academic world into commercially viable products before. As Urbach adds, "If anyone can pull this off, they can."

 

The team believes that, after focusing on holographic capture and display technology development for the last decade, several recent breakthroughs have now allowed them to take something that previously existed only in the realm of science fiction and turn it into reality. As they state on their website:

"There are many holographic variations that aspire to display virtual objects without the use of a head-mounted device or stereoscopic glasses. In general, this is achieved through emitting light from each photosite (pixel) such that the viewer only sees it when certain conditions, such as ray angle or wavelength, are met. This can be achieved with lasers, optics, or other emerging beam-steering methodologies."

The company says their passion is for immersive displays that provide a group social experience, rather than one where everyone sees a single identical view but is blocked off from each other by headsets or other devices. "Just like watching a play in a theater, where everyone sees the same narrative, but views it from his or her own place in the audience."

'When?' now becomes the million dollar question. While this is all very much still in development, the team is targeting engineering samples in 2018 and development kits in 2019. But ask them at NAB and see what they might be willing to say. Definitely one to watch.

 

MaterialX: An Open Standard for Network-Based CG Object Looks

MaterialX is an open standard for the transfer of rich material and look-dev content between applications and renderers. It originated at Lucasfilm in 2012. MaterialX has been used by ILM in feature films such as Star Wars: The Force Awakens and real-time experiences such as Trials on Tatooine. The original specification was made public in July 2016.

MaterialX is supported by both Autodesk and the Foundry, and we are hoping to hear more from one of these two key companies about the adoption of MaterialX, either at NAB or at SIGGRAPH in July.

The official position of the Foundry as of today, just prior to NAB, is that their products are targeting it for future releases. We expect to see movement in both Mari and Katana workflows, but MaterialX is still at an early stage, and it is understood the Foundry would still like to see a little more work done on the standard. As such, there are no official release dates from them yet. "There's long been support here for the project. The earliest reference I've got to the idea comes from a conversation with Jack Greasley almost five years ago, which maybe shows how long it takes for these things to cook," commented Simon Robinson, Co-Founder and Chief Scientist at the Foundry. "But it's great that Doug Smythe has given the project such momentum."

(Keep an eye out: we have heaps more about the Foundry coming out around NAB.)

The project specifications are being led by Doug Smythe and Jonathan Stone at ILM.

You can download the spec here.

They wrote that:

Many Computer Graphics production studios use workflows involving multiple software tools for different parts of the production pipeline. There is also a significant amount of sharing and outsourcing of work across multiple facilities, requiring companies to hand off fully look-developed models to other divisions or studios which may use different software packages and rendering systems. In addition, studio rendering pipelines that previously used monolithic shaders built by expert programmers or technical directors with fixed, predetermined texture-to-shader connections and hard-coded texture color-correction options are moving toward more flexible node graph-based shader networks built up by connecting input texture images and procedural texture generators to various inputs of shaders through a tree of image processing and blending operators.

There are at least four distinct interrelated data relationships needed to specify the complete "look" of a CG object:

  1. Define the texture processing networks of image sources, image processing operators, connections and parameters used to combine and process one or more sources (e.g. textures) to produce the texture images that will eventually be connected to various shader inputs (e.g. "diffuse_albedo" or "bumpmap").
  2. Define geometry-specific information such as associated texture filenames or IDs for various map types.
  3. Define the parameter values and connections to texture processing networks for the inputs of one or more rendering or post-render blending shaders, resulting in a number of materials.
  4. Define the associations between geometries in a model and materials to create a number of looks for the model.

At the moment, there is no common, open standard for transferring all of the above data relationships. Various applications have their own file formats to store this information, but these are either closed, proprietary, inadequately documented or implemented in such a way that using them involves opening or replicating a full application. Thus, there is a need for an open, platform-independent, well-defined standard for specifying the "look" of computer graphics objects built using shader networks so that these looks or sub-components of a look can be passed from one software package to another or between different facilities.

The purpose of their proposal is to define a schema for Computer Graphics material looks with exact operator and connection behavior for all data components, and a standalone file format for reading and writing material content using this schema. Their proposal will not attempt to impose any particular shading models or any interpretation of images or data.
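
To make those four data relationships a little more concrete, here is a minimal, purely illustrative Python sketch of how a look might be resolved from them. Every name and structure below is our own invention for illustration; it is not the actual MaterialX schema or API:

    # Hypothetical, simplified stand-in for the four data relationships above.

    # 1. A texture processing network: an image source plus an operator feeding a shader input.
    texture_network = {
        "diffuse_albedo": [
            {"node": "image", "file": "<GEOM_ID>_color.tif"},
            {"node": "colorcorrect", "gain": 1.1, "input": "image"},
        ],
    }

    # 2. Geometry-specific information, e.g. which texture ID a given mesh uses.
    geometry_info = {"/robot/torso": {"GEOM_ID": "torso_v03"}}

    # 3. Shader parameter values plus connections to the network above -> a material.
    materials = {
        "robot_metal": {
            "shader": "example_surface",  # hypothetical shader name
            "inputs": {
                "diffuse_albedo": texture_network["diffuse_albedo"],
                "roughness": 0.35,
            },
        },
    }

    # 4. Associations between geometry and materials -> a named look.
    looks = {"hero_look": {"/robot/torso": "robot_metal"}}

    def resolve(look_name, geom_path):
        # Return the material and resolved albedo texture file for one piece of geometry.
        material_name = looks[look_name][geom_path]
        material = materials[material_name]
        albedo_file = material["inputs"]["diffuse_albedo"][0]["file"]
        for key, value in geometry_info[geom_path].items():
            albedo_file = albedo_file.replace("<%s>" % key, value)
        return material_name, albedo_file

    print(resolve("hero_look", "/robot/torso"))  # -> ('robot_metal', 'torso_v03_color.tif')

The point of MaterialX is to standardise exactly this kind of information in an open, well-defined file format, rather than leaving it to ad hoc, per-studio structures like the sketch above.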

They want the following requirements to be satisfied:

  • The material schema and file format must be open and well-defined.
  • Texture processing operations and their behavior must be well-defined.
  • Data flow and connections must be robust and unambiguous.
  • The specification must be extensible, and robustly define the processing behavior when an operator type, input or parameter is encountered that is not understood by an implementation.

If you hear any more at NAB, please email us here at fxguide; we think these open source initiatives help everyone, as OpenEXR and Alembic have shown.

 

 



  • Nathan Westveer

    Cooke is legendary in optics, so this is in no way a comment aimed at them. But why are we making sensors even bigger? A heavy DOF effect is more of a portrait photographer's tool. There is some of it in filmmaking, but honestly it's over-hyped. Just count how many shots with noticeable DOF you see in a movie. Filmically, there is often just a slight DOF, and if there is more it's because you're shooting with a longer lens, or it's a shot where you are actually using the effect to rack focus.

    Larger sensors require you to gather more light, something smaller sensors benefit from not needing. Micro 4/3 lenses have such low apertures because, past the nodal point, the light is much more concentrated compared to full frame. Having an f/1.4 lens on M4/3 is not only cheap, it delivers quite an appropriate DOF effect.

    Other than meeting the miniaturization of fitting that resolution on a sensor halfway, why is RED doing this again?