We’ve all seen incredible high impact sports replays, slow motion film scenes and poetic balloon bursts. Now, the team behind the digital high-speed camera responsible for many of those shots – the Phantom, from Vision Research – has been recognized by the Academy of Motion Picture Arts and Sciences at the Scientific and Engineering Academy Awards, which fxguide recently attended.
With the Phantom 65, HD Gold and Flex being Vision Research’s most recent models for the motion picture industry, the cameras have found high profile use on such films as Sherlock Holmes: A Game of Shadows (for a scene of the main actors dodging bullets through a forest), Inception, Green Hornet, Zombieland, Captain America: The First Avenger and TRON: Legacy. Phantom cameras are also prevalent in academic research, medical work, military use and sports broadcasting.
Below, fxguide’s Mike Seymour speaks to Vision Research Chief Scientist Radu Corlan and Chief Technology Officer Andy Jantzen, both Sci-Tech award recipients, walking through the various parts of the camera that make up the Phantom tech. And stay tuned to fxguide for our continued coverage of the Sci-Tech Award winners and an fxguidetv episode from the ceremony itself.
fxg: Firstly, congratulations on being recognized by the Academy.
Andy Jantzen: Thanks, it’s gratifying to be the first digital camera to be recognized by AMPAS.
fxg: We’re not just talking about a digital camera and high speed here, we’re also talking about the camera as a general camera, too, aren’t we?
Jantzen: The Phantom camera product line’s claim to fame is high speed, but the cameras are equally capable and happy shooting 24P as well.
fxg: What does it take to make a high end, high speed camera? Why is a high speed camera hard to make?
Radu Corlan: One of the difficult points is that you have to advance the field of sensors and processing. Most of the development that has been done elsewhere is towards making normal speed cameras. Many of those components, be it sensors or converters, memory systems – don’t really apply to high speed. So you have to re-build every block of the camera, customized to high speed. That’s why you don’t benefit from a lot of other work.
fxg: You’ve got to be able to get frames running quickly and deal with the sensitivity issues, as well?
Corlan: The first and foremost thing is just reading the pixels out of the sensors. You have to start with a sensor that has to be designed specifically, a long and expensive process. In many cases in electronics, if you increase your bandwidth, you increase your noise as well. From one side your sensitivity and noise are more critical, and then on the other side you have to do it at high speed which makes it much more difficult.
fxg: There’s also the rolling shutter.
Corlan: The Phantom HD Gold has a rolling shutter, and the Flex has a synchronous shutter. On a high speed camera, when the shutter rolls it rolls very fast regardless of whether you are shooting at a high speed or a low speed. From that perspective, having a rolling shutter is not that much of a problem.
Jantzen: When you think about it, every motion picture camera has a rolling shutter.
fxg: Yes, even a film camera has a rolling shutter from the blade.
Jantzen: Correct, the top of your frame is exposed at a different time than the bottom of your frame.
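To put a number on Corlan’s point, the top-to-bottom skew of a rolling shutter is fixed by how fast the sensor reads out, not by the frame rate you choose. A minimal sketch of that arithmetic, with illustrative numbers of my own rather than Phantom specs:

```python
# Rolling-shutter skew: the delay between exposing the first and
# last row of a frame is set by the sensor's readout speed, not by
# the chosen frame rate (numbers here are illustrative, not specs).
ROWS = 1080
ROW_READOUT_US = 0.5        # assumed time to read out one row, in microseconds

skew_ms = ROWS * ROW_READOUT_US / 1000
# 0.54 ms top-to-bottom -- the same whether the camera is running
# at 24 fps or 1000 fps, which is why a very fast readout makes
# the rolling shutter far less of a problem
print(f"top-to-bottom skew: {skew_ms} ms")
```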
fxg: Yes it’s just a matter of getting it where it’s acceptable – especially when you’re doing imaging that needs to be accurate. Is the sensor technology in the Gold and the Flex – is it the same sensor tech or different versions?
Jantzen: Actually they’re quite different. The HD Gold and the 65 use a family of sensors – a 2K sensor and a 4K sensor with rolling shutters. We began developing that sensor in 2003. The sensor in the Flex is a global shutter, or synchronous shutter – we produced that in 2008 or 2009. They’re all CMOS sensors, made with different technologies. We try to take advantage of the general state of the art at the time.
fxg: So we’ve got our sensor running fast enough and we’re dealing with the noise, but you’re going to have to put the data somewhere. How have you approached that?
Corlan: This is really the difference between the way a high speed camera is operated and the way a normal camera is operated. Sometimes for the high speed camera you cannot afford to record everything until an event happens, because sometimes an event can be unpredictable or the exact point of it is not known.
So the way most high speed cameras operate – it started with scientific ones – but even on the newer ones, is to record inside RAM inside the camera and just loop over continuously until the thing you want to record happens. When that happens you trigger or stop the camera.
fxg: So in this model you’ve got a bunch of RAM acting as a buffer and you’re looping through the buffer, and at some point I say, ‘That was the thing I wanted,’ and then I can record that off to some other medium.
Corlan: And in some cases, especially in production, you look at it and you may not even like the take. So you can loop it into the buffer, see if you like it, and then you can save it later. Also being RAM inside the camera, it allows the full speed of the sensor to be taken advantage of.
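The loop recording Corlan describes can be sketched as a fixed-size circular buffer that overwrites its oldest frames until a trigger arrives. This is a minimal illustration of the idea, not Vision Research’s firmware; the class name and capacity are made up:

```python
from collections import deque

class PreTriggerBuffer:
    """Hypothetical sketch of loop recording: keep only the most
    recent N frames in RAM until a trigger arrives."""

    def __init__(self, capacity_frames):
        # a deque with maxlen silently discards its oldest entry
        # once capacity is reached -- the continuous "loop"
        self.buffer = deque(maxlen=capacity_frames)
        self.triggered = False

    def push(self, frame):
        if not self.triggered:
            self.buffer.append(frame)

    def trigger(self):
        # stop overwriting; the buffer now holds the event,
        # ready to be reviewed and saved off later
        self.triggered = True
        return list(self.buffer)

# e.g. RAM holding roughly 400 frames at this resolution
buf = PreTriggerBuffer(capacity_frames=400)
for i in range(1000):          # frames streaming off the sensor
    buf.push(i)
take = buf.trigger()           # only the last 400 frames survive
```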
fxg: Right, so because the RAM is so fast – and one of the reasons you can get astronomically higher frame rates is that you go to much smaller frame sizes – it’s all about that bandwidth.
Corlan: RAM is fast, but bandwidth is always a limitation. You need to put a lot of modules on just to keep up. After you have recorded into RAM you will download your images into a computer. That’s where the CineMag, a custom non-volatile recorder that we have designed, is useful.
fxg: In addition to the high speed buffer, you’re going from your RAW format to whatever I want to look at on monitor. How much coding needs to go into the RAW files from the CMOS chips – how much do you have to deal with the files?
Corlan: There is actually very little done, normally just black referencing and calibrating the camera.
fxg: Is there any gain in the pipeline?
Corlan: Going to the RAW file there is no gain. Essentially it’s being black referenced and calibrated.
Jantzen: The files are RAW and they fit into the post-processing environment – it is that real RAW data that everybody wants. It’s never cooked and it’s always RAW.
fxg: One of the good things these days is having access to options for dealing with the files – there are quite a few apps for working with them now. How have you found this?
Corlan: When we create the RAW file we tell people what’s in the file, in terms of metadata and describing the file structures.
fxg: What are some of the performance specs of these Phantom cameras, in terms of frame rates that they are capable of shooting?
Jantzen: On the 65 you can do a full 4K, 2400 pixels high, at about 140 frames per second. The Flex, which is 4MP, has two different modes – a normal mode that will do over 2500 fps and a high quality mode that does 1300-1400 fps at 4MP, which is 2500 by 1600 resolution.
We measure our cameras by throughput: horizontal resolution times vertical resolution times frames per second – and when you think about it, the Flex camera is six billion pixels per second. That’s the speed of it.
fxg: At 1200 ISO.
Jantzen: Yes, and the 6 billion pixels per second is the constant – and you divide different resolutions into that to get your different frame rates.
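Jantzen’s rule of thumb is easy to check. The six-billion-pixel figure and the 4MP resolution come from the interview; the 1080p window is my own example, and treating throughput as a hard constant is a simplification, since real camera modes round differently:

```python
THROUGHPUT = 6_000_000_000  # pixels per second, the Flex figure quoted above

def max_fps(width, height, throughput=THROUGHPUT):
    # divide the frame's pixel count into the throughput constant
    return throughput // (width * height)

print(max_fps(2500, 1600))   # full 4MP frame -> 1500 fps
print(max_fps(1920, 1080))   # a 1080p window -> 2893 fps
```

Halving both dimensions quarters the pixel count and quadruples the attainable frame rate, which is why the very highest rates come at much smaller frame sizes.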
fxg: It must really open up the possibilities in terms of options for filmmakers.
Jantzen: The cinematographers and directors out there are very creative people, and we try to give them a well-engineered tool and let them do their thing. At Vision Research we can be creative with cameras, but when it comes to filmmaking, we’re not.
The Kato fight from Green Hornet, which was partially shot on the Phantom Flex (vfx by CIS Hollywood, now Method Studios)
fxg: What about mounts – what are the main preferences in the filmmaking industry?
Jantzen: Overwhelmingly for that industry it’s the PL mounts. The PL lenses – the people in the industry have the lenses, have the motor drivers for them, have the focus pulls and matte boxes.
fxg: There are different industries too which shoot at even vastly higher frames. Can you talk about these other specialist areas?
Jantzen: Probably the next highest profile past motion pictures is what we do for live sports broadcast, whether it be soccer, NFL, baseball – every day in almost every sports broadcast you will see slow motion replays and they come to you courtesy of the Phantom camera line. We won an Emmy award for it last year. We do car crashes and bombs and bullets for military applications, packaging machinery, for putting pills in bottles. Anything that’s too fast for the eye to see – and there’s a lot of it out there – our cameras are used for. They’re used on microscopes, airplanes, tripods and in buildings.
fxg: I saw a demo once of stills flashes going off – they wanted to see the rise and decay of the flash. You could see enough frames around a single flash.
Jantzen: The flash bulbs being used today – which is actually pretty old technology, going back to the 50s – take about 30 milliseconds to go through their cycle. 30 milliseconds is 30 thousandths of a second, and in that time we can give a customer several hundred thousand pictures of it.
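The arithmetic behind that is simple: frames captured = event duration × frame rate. The 30 ms flash duration is from the interview; the frame rates below are illustrative, not quoted Phantom specs:

```python
def frames_in_window(duration_ms, fps):
    # number of exposures that fit inside an event window
    return duration_ms * fps // 1000

FLASH_MS = 30  # flash duration from the interview, in milliseconds

print(frames_in_window(FLASH_MS, 10_000))     # 300 frames at 10,000 fps
print(frames_in_window(FLASH_MS, 1_000_000))  # 30,000 frames at 1,000,000 fps
```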
fxg: So what’s next from the Phantom cameras and what things still need to be solved?
Jantzen: Resolution, how many pixels, speed, frames per second, and sensitivity – do I have enough light to take pictures at that speed. So we have these three things that we constantly bump into and constantly trade off against. Sometimes you’ll see us going faster, sometimes higher resolution, sometimes you’ll see us going with higher sensitivity. Generally we’re pushing those boundaries.
fxg: You mentioned the development of the chips in 2003 and then 2009. Is there a cyclical timeline to the generation of sensors?
Jantzen: It’s true that our two offerings for the entertainment and broadcast industries were about five years apart – we developed other sensors between 2003 and 2009, and since then, but for industries other than film. There’s a very strong relationship between the film business and our other markets; they feed each other. The entertainment industry is concerned with frame speed but also image quality. Our academic customers are less concerned with image quality and more with features and frame rates.
fxg: With the Sci-Tech award, does it seem like a long road to gain acceptance?
Jantzen: Our first foray into the entertainment industry was 2003, and we first got noticed around the 2005-2006 time frame when the HD first came out. But the market has grown. What digital photography has done for our traditional markets – which, by the way, used to be served by film, by 16mm rotating prism film cameras – what it has done to broaden our academic markets, the same thing is true in the entertainment market.
The Scientific and Engineering Academy Award recipients from Vision Research were:
– Radu Corlan, Chief Scientist, who was responsible for the sensor specification and design, camera architecture, firmware, CineMag and CineStation mass storage devices.
– Petru Pop, Software Architect, who was responsible for software design and tools to realize the image processing pipeline as well as the CineMag and CineStation mass storage devices.
– Andy Jantzen, Chief Technology Officer, who contributed to the sensor specification, camera and workflow requirements, and system integration.
– Richard Toftness, Vice President, Research and Development, who was responsible for the system product realization, production and engineering support, and product fine tuning.