Leading facial performance capture company Dimensional Imaging (DI4D) is launching its new Head Mounted Camera (HMC) system at the upcoming Game Developers Conference 2019 (GDC). The new DI4D HMC, used in conjunction with the DI4D post-processing pipeline, provides the highest fidelity facial performance data the company has captured with an HMC. Although the DI4D HMC will enhance most third-party facial processing pipelines, it has been designed to be particularly effective with DI4D’s own advanced facial processing solution.
The new system uses a stereo pair of 4MP cameras and synchronised on-board strobed LED illumination to ensure camera focus and even illumination across the face, while minimising the apparent brightness to the subject.
Many of the world’s productions use HMC units built by Technoprops, which has long been the gold standard for high-end HMCs. But with Fox buying Technoprops and founder Glenn Derry becoming VP of Fox Studios’ VFX Lab, coupled with Disney’s deal for Fox Studios, several other companies are now looking to produce HMCs. Some, such as Standard Deviation, already make HMCs, and more will be announced either at GDC or soon after.
While there are excellent solutions to face tracking using iPhone X RGBD face technology (and fxguide will be covering this next week), for high end work, a precise HMC with dedicated hardware is still the preferred model for most AAA studios and VFX houses.
DI4D was formed in 2003 and is based in Glasgow, UK, and Los Angeles. It provides 4D facial performance capture services, systems and software for visual effects in movies, television, video games, VR and advanced research applications. Recent projects to have used DI4D include the Warner Bros. movies Blade Runner 2049 and Fantastic Beasts and Where to Find Them, the new Netflix animated anthology Love, Death & Robots, and several more movie, television, video game and VR projects currently in production.
DI4D provided facial performance capture for four episodes of Love, Death & Robots, working with Axis Animation and Blur Studio on those episodes.
Leading research customers include the University of Glasgow, Imperial College London and Dallas Children’s Hospital. DI4D’s proprietary motion scanning technology captures 4D data comprising video-rate 3D surface scans tracked with a dense fixed-topology mesh. The DI4D solution, which does not require any facial markers, make-up or structured light, provides high fidelity facial motion capture data delivering the subtle nuances of a MoCap actor’s facial performance.
DI4D does extremely high-end work. For Rachel in the VFX Oscar-winning Blade Runner 2049, DI4D tracked the actress’s facial performance using a 7,400-vertex mesh – the densest mesh that DI4D had tracked to date. It has subsequently increased the resolution of its tracked data to approximately 30,000 vertices, resulting in the delivery of highly realistic facial animation.
The DI4D team will also be showing a new tech demo at GDC, featuring a facial performance captured using the new DI4D HMC and processed (offline) through the company’s latest 4D pipeline. The pipeline is based on per-frame 3D scan data derived from passive stereo photogrammetry, followed by mesh tracking using optical flow to create a point cache. This is then “solved” to the rig and rendered in real time in UE4. The new demo is expected to be impressive and is aimed at demonstrating “movie quality” performance-driven facial animation, targeted particularly at high quality games.
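The point cache the pipeline produces can be pictured as per-frame vertex positions for a fixed-topology mesh. As a rough illustration of the data structure (not DI4D’s actual tooling), here is a minimal writer for the widely used PC2 point-cache format, assuming its commonly documented binary layout; the file name and mesh contents are hypothetical.

```python
import struct
import numpy as np

def write_pc2(path, positions, start_frame=0.0, sample_rate=1.0):
    """Write a (num_frames, num_points, 3) float32 array as a PC2 point cache.

    Layout assumed here: 12-byte signature, int32 version, int32 numPoints,
    float32 startFrame, float32 sampleRate, int32 numSamples, then
    little-endian float32 xyz triples for every point of every frame.
    """
    frames, points, _ = positions.shape
    header = struct.pack("<12siiffi", b"POINTCACHE2\0", 1, points,
                         start_frame, sample_rate, frames)
    with open(path, "wb") as f:
        f.write(header)
        f.write(positions.astype("<f4").tobytes())

# Example: a hypothetical 30,000-vertex tracked mesh over 10 frames.
cache = np.zeros((10, 30000, 3), dtype=np.float32)
write_pc2("face_take.pc2", cache)
```

Because the topology is fixed, only positions need to be stored per frame; the mesh connectivity travels separately with the rig.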
The DI4D HMC is designed so that a performance actor can deliver a take and have it reviewed remotely by the director or supervisor, but the actual face processing is not expected to be done live. The unit neither streams full video nor processes it on the electronics the actor is wearing. “The current (initial) model assumes high quality offline processing – but one of the advantages of having our own helmet is the flexibility for future development and customisation it offers us. There is certainly scope in future for having the cameras wired directly to a regular sized, more powerful PC capable of more sophisticated live processing.” DI4D is planning to show more ‘live’ functionality at SIGGRAPH 2019.
The functionality at first release is:
- It is possible to locally view live video streams from a remote recorder.
- It is possible to locally start/stop recording on a remote recorder; recording can be started/stopped on multiple recorders with one button.
- It is possible to locally view and control playback of previous recordings from a remote recorder, including looping, scrubbing and jumping to a specific time or frame. Playback can be initiated across multiple recorders and will be synchronised by timecode.
- It is possible to locally perform a “live” calibration (of a single system) as follows:
  - Press the calibrate button to initiate a special calibration-mode recording.
  - Press stop, at which point the previous recording will automatically be opened and played back at specific frame increments.
  - Circles will be detected and shown on the playback view as they are found, at roughly 1 fps.
  - Calibration will be performed and stored locally, as well as transmitted to the remote recorder for sharing with other clients, for reuse in the event the local software has to restart.
- On set, you can locally build and view a low frame rate 3D model while performing any of the first three functions above, subject to having a valid calibration. You can scrub or pause during playback to view a specific frame in 3D.
- You can view locally, in a web browser, the images that correspond to a live or playback view initiated from another DI4Dlive instance.
Colin Urquhart Interview
We spoke to Dr Colin Urquhart, DI4D’s CEO, about the various design decisions the team made in building the new HMC.
FXG: Is the rig balanced, to avoid the neck strain issues that are not uncommon with some cheaper systems, which tend to be front heavy?
CU: It is designed to be. The brace element that runs over the top of the head is raked back to counterbalance the cameras and lights. The whole helmet is designed to be as light as possible; we are using a carbon fibre boom. Tensioning wires fit the helmet securely and snugly to the head and spread the pressure evenly, avoiding pinch points.
FXG: In addition to the helmet itself, there is a belt-worn recording unit (mini PC) and battery. You decided not to do much processing on the ‘backpack’ – what processing is done on the unit the actor wears?
CU: Full resolution JPEG compressed stereo video is streamed onto the recording unit – but no primary processing per se is carried out on the recording unit. While there is no facial processing there is a low frame rate “raw” 3D reconstruction, which is currently intended for setup and QC. We carry out stabilisation in our offline post-processing, but there is no real-time stabilisation of the captured images.
FXG: So what is sent to the master computer?
CU: A low bandwidth live preview stereo stream is transmitted wirelessly from each HMC to the control PC. Someone can remotely review previous takes that are stored on the recorder. It is possible to remotely build and view a per frame 3D reconstruction using passive stereo photogrammetry of a segment of a take stored on the recorder, and it will be possible to remotely view a low frame rate live 3D reconstruction using passive stereo photogrammetry (e.g. useful for setup and QC).
FXG: You decided not to use infra-red cameras – that is interesting?
CU: The current cameras are greyscale but not infra-red. Our (offline) primary processing pipeline does not produce good results from infra-red images. I believe there is a near infra-red version of the cameras that could possibly be swapped in for use with a third party processing pipeline.
FXG: Did you look at RGBD cameras at all?
CU: We believe that our offline stereo reconstruction does a better job than current depth cameras. We have concentrated on achieving the best stereo 2D image quality possible – which will in turn generate the best results from our stereo reconstruction.
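The trade-off Urquhart describes comes down to the basic depth-from-disparity relation for a rectified stereo pair: Z = f·B/d, so better 2D image quality (and hence more precise sub-pixel disparity) translates directly into depth precision. A quick sketch with hypothetical focal length, baseline and disparity values:

```python
import numpy as np

# Rectified stereo: depth Z = f * B / disparity.
# f in pixels, baseline B in mm -- illustrative numbers, not DI4D specs.
f_px, baseline_mm = 1400.0, 60.0
disparity_px = np.array([70.0, 35.0, 14.0])
depth_mm = f_px * baseline_mm / disparity_px
print(depth_mm)  # [1200. 2400. 6000.]
```

The inverse relationship also shows why depth error grows quadratically with distance: the same disparity uncertainty costs far more depth accuracy on a distant point than on a near one.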
FXG: And the cameras that are standard on the new HMC, what are their technical specs?
CU: They are 4MP cameras, the sensors are 2048 x 2048 pixels. We normally capture 2048 x 1536 and we can capture at up to 60 fps at that resolution. Time code is transmitted wirelessly to the helmet(s), and the cameras run at a multiple of time code, in sync.
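Running the cameras at an integer multiple of timecode makes frame alignment pure integer arithmetic. A small sketch, assuming a non-drop-frame 30 fps timecode and the HMC’s 60 fps capture (the function name and rates are illustrative):

```python
def timecode_to_camera_frame(tc: str, tc_rate: int = 30, cam_fps: int = 60) -> int:
    """Map a non-drop SMPTE timecode 'HH:MM:SS:FF' to a camera frame index,
    assuming the camera runs at an integer multiple of the timecode rate."""
    assert cam_fps % tc_rate == 0, "camera fps must be a multiple of timecode rate"
    h, m, s, f = (int(p) for p in tc.split(":"))
    tc_frames = ((h * 60 + m) * 60 + s) * tc_rate + f
    return tc_frames * (cam_fps // tc_rate)

print(timecode_to_camera_frame("01:00:00:15"))  # 216030
```

With a 2:1 ratio, every timecode frame spans exactly two camera frames, so takes from multiple helmets can be lined up without resampling.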
FXG: Can I buy this outright? How much is the HMC and when does it ship?
CU: Yes it can be purchased. We are expecting to ship first units in July 2019. List price is USD $29,950 with discount for multiple units. (This does not include our post-processing, which is currently offered primarily as a service.)
GDC delegates can see the new HMC at DI4D booth S1060 at GDC 2019.