The Imaginarium Studios bring the world's first Real-Time Digital Character to a Shakespearean Performance with Xsens and UE4

 

For the first time since its debut in 1611, William Shakespeare’s The Tempest will be seen onstage with all the wonder and magic that the author dreamed of, thanks to a new collaboration between the Royal Shakespeare Company (RSC) and Intel, in association with The Imaginarium Studios. The performance includes the first use of a completely digital character in an RSC production, made possible by Xsens’ real-time MVN motion capture technology and The Imaginarium Studios’ live on-stage performance capture, carried out nightly.

The Tempest is Shakespeare’s comedy about a major act of betrayal, ill treatment, the development of magic arts and a plot of revenge.

One of Shakespeare’s final plays, it tells the story of Prospero, an exiled magician who decides to settle old scores with the help of his magical servant Ariel.

Twelve years before the action of the play, Prospero was Duke of Milan. Being "of a bookish disposition", he withdrew more and more into his studies, leaving the management of his state to his brother Antonio. Eventually, with the help of Alonso, King of Naples, and the King's brother Sebastian - inveterate enemies of Prospero - Antonio usurped the dukedom for himself. Prospero and his baby daughter Miranda were put to sea in a rotten boat and eventually landed on a distant island once ruled by the witch Sycorax but now inhabited only by her son, Caliban, and Ariel, a spirit.

For centuries, theatre companies have sought to solve the staging challenges posed by Ariel, who takes on various forms.


Part of what makes Ariel unique is that the character has more stage directions than almost any other in Shakespeare’s canon, making the role one of the most complicated to stage.

"All hail, great master! Grave sir, hail! I come
To answer thy best pleasure. Be 't to fly,
To swim, to dive into the fire, to ride
On the curled clouds, to thy strong bidding task 
Ariel and all his quality." - Ariel, The Tempest: Act 1, Scene 2.

Rehearsals

In this speech, Ariel states that 'your wish is my command' and that he will do whatever is asked; in effect, if you 'want me to fly, to swim, to jump into fire, to ride the clouds in the sky', then Ariel will do it. Hence, for this production to work, Ariel needed to fly, "walk in space and collaborate with other performers in the moment", explained Ben Lumsden of The Imaginarium Studios (the production company founded in 2011 by filmmakers Andy Serkis and Jonathan Cavendish). “Without unrestrictive performance capture technology like MVN, Ariel would have been just another landlocked cast member in a costume.”

During the performance, Ariel morphs from a spirit to a water nymph to a harpy. This transformation is achieved by RSC and The Imaginarium Studios capturing the movements of actor Mark Quartley through Xsens motion capture sensors placed within the actor’s costume. When Ariel transforms into something more than human, the actor’s movements are projected onstage as a digital avatar. The flexibility of the Xsens technology enables the actor to interact directly with cast members in human form, whilst being able to transform live on stage every night during the play’s run.

To achieve this transformation, The Imaginarium Studios and the RSC use an Xsens MVN system to track the actor’s performance. The data is run through Autodesk’s MotionBuilder software, and from there into Epic’s Unreal Engine 4 (UE4). The video output is then sent to d3 media servers with Intel Xeon processors, connected to the RSC lighting desk, which in turn controls 27 projectors located around the stage.
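As a rough illustration of that chain, here is a minimal sketch in Python of how one frame moves through the stages described above. The object and method names are entirely hypothetical; the production pipeline runs on the vendors' own tools (MVN, MotionBuilder, UE4, d3), not on code like this.

```python
# Schematic sketch of the live data path, with hypothetical objects
# (not the production code): suit -> MotionBuilder -> UE4 -> d3 -> projectors.

def run_live_pipeline(mvn_stream, motionbuilder, unreal, d3_servers):
    """Free-running loop: no timecode sync, each stage consumes the newest frame."""
    for body_frame in mvn_stream:                      # ~50 fps from the Xsens suit
        rig_pose = motionbuilder.retarget(body_frame)  # solve onto Ariel's avatar rig
        frame_out = unreal.render(rig_pose)            # UE4 renders the digital Ariel
        for server in d3_servers:                      # d3 servers (Intel Xeon) feed
            server.send(frame_out)                     # the 27 projectors on stage
```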

There are 17 sensors in the Xsens suit. Mark Quartley (Ariel) wears the suit beneath his costume. (Production photo by Topher McGrillis © RSC.)

"The audience can see Mark/Ariel in costume and projected in the performance. There was a lot of discussion around this from the outset in 2014", explained Ben Lumsden Head of Studio at The Imaginarium Studios. "We toyed with the idea of Mark being offstage in a capture volume, driving the avatar projection on-stage. This would have been a more controlled environment and so less risky, but would have lessened the intimacy of the Prospero-Ariel relationship. Greg decided he wanted to peel back the curtain and show the puppeteer and puppet at the same time. Over the course of the rehearsals, Mark and Simon (Prospero) played with when Ariel should call his avatar spirit into life. It could be to illustrate a point, to tell a story, or to express a strong emotion". 

The Imaginarium are using their own facial capture system for this production. "In general, Mark (Ariel) has some looping and randomised animations to drive his facial expressions. However, for the Harpy soliloquy (You are three men of sin…), Mark's facial performance will drive the Harpy's facial performance" explained Colin Davidson, Chief Scientist at The Imaginarium Studios. 
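A minimal sketch of how such a two-mode setup could be structured is below; the scene name, the live_tracker object and the clip format are all assumptions for illustration, not The Imaginarium's actual code.

```python
import itertools
import random

class IdleFacePlayer:
    """Plays pre-made facial animation clips end to end in a shuffled, looping order."""
    def __init__(self, clips):
        clips = list(clips)
        random.shuffle(clips)            # randomised ordering of the looping animations
        self._frames = itertools.cycle(frame for clip in clips for frame in clip)

    def next_frame(self):
        return next(self._frames)

def facial_controls(scene, live_tracker, idle_player):
    """Choose the source of face rig controls for the current frame."""
    if scene == "harpy_soliloquy":
        return live_tracker.latest_controls()   # Mark's tracked face drives the Harpy
    return idle_player.next_frame()             # looping, randomised animation otherwise
```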

"My role in the company is to develop the next generation of performance capture technology. A major challenge of that has been that all of the existing technology is closed" Davidson explains. "So our approach has been to develop our own platform, GRIP, for calibration, tracking and retargeting". A key part of this development has been to make algorithms and components that are real-time capable and which are also scalable. "In this case, we're deploying it to capture Mark's live facial performance and map it onto the Harpy rig right in front of the audience".

"Mark is wearing a head mounted camera (HMC), which in this case is tethered, meaning that we have a cable at the back through which the video stream is sent rather than wireless transmission". The HMC also illuminates Mark's face as the capture environment is fairly dark and the team want the audience to be able to understand what they're seeing. This meant not using an IR approach. "We ingest the video data at 720p50 and feed it into our real-time facial tracker. A nice feature here is that we don't need to apply any special make-up for this step (Mark is wearing his Ariel stage make up). The tracker has been trained on images of Mark captured in several sessions over the last year so it's fairly robust to variations in the image between shows". The team annotated the training images with the position of facial features like specific points on the lips, jaw, nostrils, eyebrows, eyelids and pupils to produce the facial training data set. "We compensate for small variations in the point of view of the helmet by identifying stabilising points at the temples, on the bridge of the nose and where the nose meets the lip that don't move too much. The rest of the tracked points are interpreted as animation of the face" he adds. Those animations are retargeted onto the Harpy rig controls and streamed to the rendering engine (Unreal/UE4) where they get combined with the body controls, rendered and projected. "We have written a blueprint in UE4 that receives the animation data over the network and writes it onto the rig".


The retargeting is therefore learned from hand-tuned rig expressions mapped to facial expressions that Mark performed in a facial range-of-motion training set (a FACS ROM). From there, the team corrected individual key frames to improve the result.
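The article does not detail how GRIP solves this mapping, but one common approach, sketched below purely as an assumption, is a linear least-squares fit from the stabilised tracker offsets to the hand-tuned Harpy control values over the ROM examples, applied per frame at show time.

```python
import numpy as np

def learn_retarget(rom_offsets, rig_controls):
    """rom_offsets:  (n_examples, n_points * 2) stabilised 2D offsets from neutral.
       rig_controls: (n_examples, n_controls)   hand-tuned Harpy control values."""
    w, *_ = np.linalg.lstsq(rom_offsets, rig_controls, rcond=None)
    return w                                        # (n_points * 2, n_controls)

def apply_retarget(w, live_offsets):
    """Per frame: tracked offsets in, Harpy rig controls out (then streamed to UE4)."""
    return np.clip(live_offsets @ w, 0.0, 1.0)      # assuming controls live in [0, 1]
```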

The system does not use timecode for synchronisation. All of the systems - face, body, render, projection - run in real time at around 50fps. Of course there is latency: "more than I'd like, but the objective here is to be as close to live as possible. Normally I'd like to run the face at a higher frame rate, but we're limited by the choice of hardware suited to the HMC form factor," explains Davidson.

The face is primarily a blendshape rig. That makes it easier to be certain of uniformity across the various platforms where it exists (Maya, Unreal, GRIP). "We're looking at Fabric Engine as a potential solution to that going forwards. The animation isn't going to be comparable to a Hollywood blockbuster where it's been perfected by animators in post production. It's enough for now to be opening the door of this new world of real-time performance capture and I'm thrilled that Intel and the RSC have been bold enough to want this and very proud of the work that our team at the Imaginarium has put together to deliver it" he explains.
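That cross-platform uniformity is straightforward because a blendshape deform is just a weighted sum of shape offsets added to a neutral mesh; any package holding the same neutral, deltas and weights produces the same face. A minimal sketch of the evaluation (the general technique, not the production rig):

```python
import numpy as np

def evaluate_blendshapes(neutral, shape_deltas, weights):
    """neutral: (n_verts, 3) rest mesh; shape_deltas: (n_shapes, n_verts, 3) offsets;
       weights: (n_shapes,) animation controls, typically in [0, 1]."""
    return neutral + np.tensordot(weights, shape_deltas, axes=1)
```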

rsc_4

In terms of the suit, “inertial motion capture is changing how far productions can push their craft, bringing high-end digital characters into live shows,” said Hein Beute, product manager at Xsens. “With The Tempest, the RSC is creating a real-time application that is both immediate and novel, something audiences always want to see on their night out.”

Theater is now entering an era where characters and scenes can be presented in ways that are more visually engaging, and in many cases, far beyond what even the authors originally imagined. Or as Sarah Ellis, the RSC’s head of digital development, told The Guardian in September: “To be able to create digital characters in real time on [this] scale in a theatrical environment is a huge achievement.”

 

 

The RSC production of The Tempest runs at the Royal Shakespeare Theatre in Shakespeare’s hometown of Stratford-upon-Avon. Performances begin on November 8, 2016, and continue until January 21, 2017.

