Forum Replies Created

Viewing 10 posts - 31 through 40 (of 40 total)
  • in reply to: Workflow for 4k film post production #213556
    Michael Dalton
    Participant

    Comments inserted:

    > Of course we’ll need something like an iQ 4k, to get some real time, for
    > conformation and color grading. That’s a workstation i have seen running, and
    > it seems pretty fast, but are any other solutions existing ? For color
    > grading/conform/composite ? I don’t know if something like an Inferno 4k exist

    When you say 4k, I assume you're talking about shooting 35mm and scanning on, for example, a Northlight or Arriscan? Think about using the IR channel data from the scanner, and, say, Pixel Farm's PF Clean or MTI's Correct for handling the cleanup of the scans.

    Which brings up the next issue. You're going to need to look at things like SANs, or a rather intense NAS head (think 10GbE or InfiniBand), for moving the data from the scan repository to the grading/conforming machine, and as the workspace for your restoration/retouch software – even if you're going to generate 2k proxies for doing most of the bump-and-grind work. It would be worth looking into, say, Bright Systems or DVS on the SAN side, and, say, Max-T's Sledgehammer for the NAS end, with InfiniBand cards.

    Also, before jumping immediately onto the 4k Pablo bandwagon, it might be wise to investigate other alternatives that are a little more open and a little more modern. We own and operate two iQs, and despite that I'm an extreme fan of our FilmLight Baselight 8 system, which isn't just rendering at 6 fps but doing primary and secondary grading on full-aperture 4k in real time. You also don't pay the same premium as with the iQ with regard to storage, and nothing – NOTHING – will outperform it. It can do the full conform from the DPX files and the grade in full 4k, either against Sony's projector or by using a 2k proxy and a Barco, Christie or other projector.

    Most importantly, Baselight has the Truelight calibration system built in. A DI product is worthless without being able to guarantee that what you see on your projector or monitor is what you're going to get on film.

    What might be smart, instead of upgrading your eQ to a 4k Pablo, is just going up to an iQ4 and a Truelight hardware box on the SAN, with a Baselight 4 or 8 alongside. This would give you all of the mastering tools, such as real-time 4k pan-and-scan to 2k, HD or SD, that you really want to have when mastering a DI show.

    Regardless, Quantel will tell you that the best approach is the "jack of all trades" angle. From experience I can tell you that more machines means greater flexibility, as well as reaping immense benefits from picking "the best tool for the job".

    > But i’m more concerned about the stations.

    That’s the easy part.

    > Do you guys ever work at 4k ? Which software will be the most responsive, if
    > the choice is between Fusion/Shake/Combustion ?
    > And what kind of hardware, quad cpu ? 4gb of ram ? more ? of course it’ll be
    > hard to have real time from hard drives for all compositing stations.

    Quite often. Almost all the software you're going to come across will allow you to work with a proxy. It's pretty much how you'll get through the whole ordeal. It largely comes down to the toolset you like and want to abuse. Personally I like Nuke, and the 4.6 version has Truelight built in.

    > Does working at 2k proxy, then render at 4k is an usual workflow in 4k film
    > post production ?

    No, not so strange – although the bulk of VFX work is still completed at 2k.

    > I’m also concerned about monitoring, are they any DLP projector working at 4k
    > ? for a reasonnable price (i mean not projectors designed for theaters), and
    > does monitoring at 4k is useful, or 2k is enough ?

    Sony. Bear in mind something quite important here, though. After the 4k material is recorded out on the Arri laser to intermediate, you're going to strike a print. The kicker is that you lose an immense amount of resolution once you strike the print (positive). In actuality, your 4k source will yield something closer to a 2k result on the pos – something you'll notice directly when comparing a 4k digital projection to a first-generation (0/A-copy) print from the lab.

    We can also argue the benefits of the different projection technologies, but I would stick with the Barcos and Christies of the projection world for large-format projection, with a lot of light to play with: a minimum of 16 ft-lamberts is really required for matching a film projector, and potentially 20-25 ft-lamberts if the projector has a native white point that isn't 5400K.

    Smaller-format projectors like the Cinevation CGP35 are good choices for 2-4 metre screen sizes, as they will easily put out the 16 ft-lamberts needed.

    > I also have seen that Apple Cinema display can handle 2k anyone tryed them
    > with 4k footage downsized to 2 k?

    For colour fidelity you’re better off with CRTs.

    > If you also have special thoughts about 4k workflow, errors to avoid etc don’t
    > hesitate to share them.

    I'll put you on the road, but I'm not going to walk it for you 😉 Don't cheap out on storage. Buy SAS or FC, and a significant amount of it.

    > Thanks for your answers, and appologies for my poor english
    >
    > Julien

    You did fine, I think.

    Chris

    in reply to: matrix bullet time type shot… kinda, need help #213418
    Michael Dalton
    Participant
    vincelapince wrote:
    hi

    first sorry for my poor english

    I think you can shot with the movement you like, then, at the t time, modelize all your background elements in a 3D program (huugghh…), apply the texture from the t image, and then 3D track the movement and animate your camera in 3D with that, then recompose. But it’s a big work of roto for your main characters , and a big work of modelize.

    I hope i was clear.

    Bye

    Seems to me that the "smartest" way to handle things would be a slight variant of what was suggested. Of course, which path to take fully depends on just how much the camera needs to spin around the frozen people. If you've got big sweeping moves to make, it seems like motion control isn't such a bad idea…

    First I would storyboard/shoot board out the sequence with the DoP/Director and decide which angles, composition and movement would be used for each shot in the scene. It wouldn’t hurt to go ahead and establish things like the final desired lighting as well (High noon, sunset, overcast…). Make animatics, time things, plan plan plan.

    Once all of that is decided, I would go to the location, when the environment can be controlled as well as possible, and shoot all the required plates of the environment with a digital stills camera, scene by scene, according to the planning you completed in the first step. Once all of the environmental elements have been shot, it's into 3d to stitch them back together over relatively simple geometry and cards to build a handy-dandy digital studio for use later on. At this stage it could also be smart to start thinking of simple digital props (other than your extras) that you can throw onto your digital set to add a certain degree of life to it… if it's an outdoor shoot, car projections on simple geometry, matte paintings of clouds and such. One seller could be frozen atmospherics, like a frozen-in-time steam element from the subway, or a puddle being splashed – again using simple geometry.

    It would also be smart to establish the lighting (to later be replicated on the chroma shoot) with the production team. Try to get as good a hold as possible on the conditions you're going to be matching later on – it's the only chance you have of pulling off a believable green/blue-screen comp. Like: where is the sun?

    After that I would go into the studio and shoot chroma of the principals doing their action, using MoCo and matching the lighting you've sketched out on your digital set. Place markers on the floor for where the extras will be standing, and (small) tracking markers for 3d tracking the shot later, in case you can't translate the moves over from the rig to your 3d/2d software. Shoot a few passes of your principal actions, then add in the extras at the marker points and do the same action with them in place. The joy of this approach is that by shooting in several plates you can, already on the set, begin to layer up your final composites in a pretty intelligent manner and save yourself work later on. For example, if your hero is going to be placed just behind five or six extras but in front of, say, 20 other extras, you can shoot the 20 extras just standing still as one pass, and the other five foreground people, again still, as a separate plate.

    Remember that on the rig you can scale the moves, not only in physical scale but also in time (provided you adhere to the physical limits of the rig). So, depending on the desired effect, you can play with the shutter and fps of the move to either get a streaking effect on the extras, or tighten the shutter a bit to reduce the motion blur.
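    The shutter/fps trade-off above follows from the standard rotary-shutter formula: per-frame exposure time is (shutter angle / 360°) / fps. A minimal sketch (the function name is my own, not from any MoCo control package):

    ```python
    def exposure_time(fps, shutter_angle_deg=180.0):
        """Per-frame exposure for a rotary shutter: (angle / 360) / fps."""
        return (shutter_angle_deg / 360.0) / fps

    # 24 fps with a 180-degree shutter exposes each frame for 1/48 s.
    print(exposure_time(24, 180))
    # Tightening the shutter to 90 degrees halves the blur: 1/96 s per frame.
    print(exposure_time(24, 90))
    ```

    So on a scaled move, closing the shutter at a given fps cuts the streaking on the extras, while opening it up (or dropping the fps) exaggerates it.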

    Another thing to remember is that the rig's control software can program the firing of GPI triggers into the move. This can be pretty helpful for, say, popping lights on and off (for a close-range gunshot, for example) across your different plates. It can also slave to the external TC of, say, a Digibeta or SP deck, for things like sound sync. Good things to think about.

    Then it's back to the 3d set: plug in the camera moves for your respective shots (either by translating or tracking them) and start comping. Chances are you could "unwrap" a few frames of the extras plates to use as projections on simple "marshmallow-man" geometry if you needed to. Otherwise, the rest of the job will come down to making things look as believable as possible: good chroma-key work, camera tracking, lighting (both in 3d and by relighting your 2d chroma plates with color correction) and rendering.

    The thing with these shots is that no matter how well you do them, they'll always look like an effect – it'll never look "real" – which can really be a problem when trying to get approval. That doesn't mean it can't look good; it just means that you really have to be at the top of your game, and the idea of what's actually transpiring really needs to be smart.

    Of course, I'm sure there are 1000 other ways to solve variations on the problem as well. On a simple locked-off shot or nodal pan you obviously wouldn't need motion control. Or, if in a particular section the motion needed to have that weaving, handheld kind of feeling, you could slightly overframe the chroma plates by a few percent and add a little shake to the final comp after the fact. In short, there are a ton of tricks to use, but you'll want a few scenes that are real money shots to sell it.

    Regardless best of luck,
    Chris

    in reply to: HolySmokes!! He’s on fire- VFX-question #212209
    Michael Dalton
    Participant

    A few years back, while I was at Chimney, we did this video for Chris Cornell where he's walking around on fire. The shoot was actually very simple. First, Cornell was shot walking around doing whatever he needed to be doing. After developing and TK, the shots that needed fire were pulled and exhaustively checked. Then it was a black-screen studio shoot, with a guy in a black pyro suit shot at 50 (or was it 100?) fps, completely engulfed in flames, trying to move the same way Chris had moved on the shoot the previous day, using the same lenses and camera positions. The pyro shoot was one stop under, to make sure the screen and the guy were as black as possible, if memory serves.

    Almost all the shots were locked off but if you had say a motion control head you could do some pretty cool stuff.

    The rushes were so funny to watch… this pyro guy desperately trying to walk calmly for say 10 seconds, then running off camera chased by 10 guys with fire extinguishers.

    After that, it was just a matter of using extended bicubics in Flame, screening the flames on after a little key tweaking, and warping and tracking them to Chris.

    It's a real brute-force approach, but it worked fantastically. I imagine that in your situation you would additionally want to put a slightly smoldering shirt on the guy, and shoot a few extra smoke elements to make everything more realistic.

    Best,
    Chris

    in reply to: Something I saw #213098
    Michael Dalton
    Participant

    Sounds like a job for 3d tracking and projections onto very simple cards. Pick the frame in the sequence where the matte painting will be at its largest and do the deed. Position a card/image so that it lies on the same plane as the face of the building you want to add the painting to, and parent the projector to the card/image at the frame where you painted the destruction. When you scrub the timeline, if you've lined everything up right and your camera tracking is good, the matte painting should lock to the side of the building nicely.

    It would probably pay off nicely to top it off with some small pyro elements of fire and smoke, again parented to the cards/images. Flame's a good bet, but it would make a pretty sweet Nuke comp as well.

    Interestingly enough, there's an article on the front page of fxguide from ILM – the Alien Smackdown – where they discuss some of these techniques with regard to Flame.

    Good luck,

    Chris

    in reply to: flame masks and boujou data #213232
    Michael Dalton
    Participant
    merv wrote:
    thanks for the link chris…learn something new everyday…:)…cheers…merv

    My pleasure…

    Chris

    in reply to: flame masks and boujou data #213231
    Michael Dalton
    Participant

    A pretty powerful 3d tracker…

    http://www.2d3.com/html/products/boujou3_overview.html

    Best,
    Chris

    in reply to: flame masks and boujou data #213233
    Michael Dalton
    Participant

    Back in the day, the way we did it was to import the action setup that Boujou spits out, then copy the camera's anim info, go into the keyer, and paste the anim data into the key/gmask's camera.

    Worked like a charm for getting info to and from Gmasks.

    Chris

    Michael Dalton
    Participant
    -k wrote:
    eltopo wrote:
    Comments anyone?

    Yip. It just once again shows, that all that talk about the G5 being superiour compared to intel Pentiums, all Steve Jobs nice little comparisons between his fancy G5s and an oh so slow Dell Workstation, etc have probably been complete BS. Not that we did not know that. I just find it amazing that people still think Apple would not bend the truth just like all the other companys do.

    -k

    Or you could see it as: a laptop built from today's Intel microprocessor technology is slightly faster than IBM's chip tech from two years ago… running code optimized for both platforms.

    “Ditch your old dual G5s and buy a laptop” I guess is the message right?

    That’s what the test shows. Nothing else.

    Chris

    in reply to: Best Monitors For Production, CRT, TFT, LCD ? #213314
    Michael Dalton
    Participant
    FLGB wrote:
    I also think that, currently, CRT monitors give a far better image than LCD’s. The biggest problem with LCD monitors is that the deepest black they can display is equivalent to a 25% grey on a CRT.
    The overall brightness of the image is not the same all over the screen, and the small imperfections in the LCD matrix add a sort of grain.
    The gamut is also wider on CRT monitors.
    It’s not really an issue if you want to make 3D, and basic compositing, but if you want to do high-end compositing, color grading, etc CRT is still best.
    I’m using lacie Electron Blue monitors with calibration probe and I never had to complain about their quality.
    Maybe the nex gen OLED screens will be more suitable for the VFX industry.

    Not to really disagree with what you're saying, because you're quite correct, but color accuracy on a monitor is based on a point of measurement and comparison against a specific target. So which display device to purchase depends heavily on the task and the target color gamut.

    For example, let's say the average Sony CRT monitor out of the box is putting out 35 ft-lamberts, has some amazingly high contrast ratio, and you're trying to calibrate it against a film source with a reasonable amount of ambient lighting. Inevitably, what needs to happen is a substantial decrease in brightness and contrast on the Sony, to get an output closer to 20-25 ft-lamberts – allowing for the ambient light conditions and a monitor brightness that will translate nicely to the 16 ft-lamberts and minimal ambient light found in an average cinema.

    If you add the fact that the white point will potentially need to be steered toward 5500/5400 kelvin, then the brightness of the white point changes again. In the end, perhaps your monitor's overall brightness and contrast ratio have been severely reduced, but you will be able to achieve a decent match to a film projector using some of the commercially available monitor calibration utilities.
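    For reference, the foot-lambert figures in this thread translate to SI units (cd/m², "nits") via 1 fL ≈ 3.426 cd/m². A quick sketch of the conversion (helper name is my own):

    ```python
    FL_TO_NITS = 3.4262591  # 1 foot-lambert in candela per square metre (nits)

    def fl_to_nits(fl):
        """Convert a luminance in foot-lamberts to cd/m^2."""
        return fl * FL_TO_NITS

    # The targets discussed above; e.g. 16 fL (film projection) is about 55 nits.
    for fl in (16, 20, 25, 35):
        print(f"{fl} fL = {fl_to_nits(fl):.1f} nits")
    ```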

    The same holds true, to a certain extent, with matching to, say, Rec. 709 or SMPTE 274M. There are also published specs for how the monitor should respond (though they seem to be followed loosely), and those should perhaps act as the basis for gauging a monitor purchase. Can this monitor put out, say, 40 ft-lamberts? Does it have a contrast ratio of bla bla bla…

    Or, conversely, you can approach it from the angle of: "this monitor has extraordinary color fidelity and stability, and a decent enough gamut for being productive without breaking the bank, but a not-so-great contrast ratio and viewing angle."

    Not a single display device you can buy has all the qualities required for being called the perfect match… arguably.

    For the best all-around option for critical colour evaluation for TV, buy a Sony Grade 1. For film, buy a Barco or Christie. For moderate colour evaluation for TV or film, a refurbished Sony 24-inch data monitor will get you far. Otherwise, some of the newer higher-grade monitors from, say, Cine-tal or eCinema are good bets, and the CP35 from Cinevation is putting out 16 ft-lamberts at 5500K for mid-range film work. For the least critical environments, consumer-grade LCDs are alright if you know what to do with them. Shake ships with a basic Truelight license, which can be upgraded to a full working copy of Truelight along with their probe, letting you maximize the output of even lower-end displays for film work. Ironically, the previous batch of Apple 23″ Cinema Displays (not the aluminum ones) had a considerably better contrast ratio and color gamut than the currently shipping screens. Strange.

    At any rate good luck and pray for quick and speedy delivery of Toshiba’s SED stuff,

    Chris

    in reply to: Read Node Functionality #212498
    Michael Dalton
    Participant

    Well… on each seat you could easily use the env array to declare whatever info you need, and then just reference those variables in menu.tcl.

    Under Environment Variables in the computer's properties, set a couple of user variables – for example, using your paths:

    PROJECTPATH C:/
    PROJECTNAME NUKE

    and then in menu.tcl change the line where it sets imagedir to something like:

    set imagedir $env(PROJECTPATH)$env(PROJECTNAME)/footage

    and the workdir to something like:

    set workdir $env(PROJECTPATH)$env(PROJECTNAME)/scripts

    Or some variation thereof. I would venture to say that this is probably the simplest way. With this method, when you click on Images you'll go to PROJECTPATH/PROJECTNAME/footage/, and when you click on the work dir you'll be magically whisked away to PROJECTPATH/PROJECTNAME/scripts/.
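    The same variable expansion can be sketched outside Tcl – here in Python, purely to illustrate how the two variables concatenate into the final paths (PROJECTPATH/PROJECTNAME are the example names from above, not anything Nuke defines itself):

    ```python
    import os

    # Hypothetical values matching the example above.
    os.environ["PROJECTPATH"] = "C:/"
    os.environ["PROJECTNAME"] = "NUKE"

    # Mirrors `set imagedir $env(PROJECTPATH)$env(PROJECTNAME)/footage` in menu.tcl:
    # plain concatenation of the two variables, then the fixed subfolder.
    imagedir = os.environ["PROJECTPATH"] + os.environ["PROJECTNAME"] + "/footage"
    workdir = os.environ["PROJECTPATH"] + os.environ["PROJECTNAME"] + "/scripts"

    print(imagedir)  # C:/NUKE/footage
    print(workdir)   # C:/NUKE/scripts
    ```

    Note there is no separator inserted between the two variables, which is why PROJECTPATH needs its trailing slash.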

    Best,
    Chris
