It has been an eventful week for Epic Games. The company has taken a minority position in industry juggernaut SideFX, and it has also acquired Hyprsense, a move that builds on Epic's Digital Human program through both research and expansion. Meanwhile, VFX facilities continue to make their own advances using UE4 for work outside traditional visual effects, such as Digital Domain's move into digital assistants.


Kim Davidson

Epic Games earlier this week made a minority investment in SideFX, the makers of Houdini. The press release from SideFX points to the intent to work alongside Epic as a partner. Sci-Tech Oscar winner Kim Davidson remains the majority owner of SideFX, as well as President and CEO. The company stated that, “he continues his strong, unwavering commitment to SideFX’s staff, customers, and partners”. SideFX and Epic are both committed to SideFX continuing its work with other industry partners – including all other content creation applications and game engines. This new development will have no impact on the Houdini development roadmap, as SideFX will continue to define its own path as an industry-leading procedural 3D platform for the film, TV, advertising, and games sectors.

It does bode well for more real-time simulation work in UE and for a tighter path for bringing assets from Houdini into the Unreal Engine. In recent years Epic has shown a major commitment to real-time simulation work, first with the Chaos high-performance physics and destruction system, and most recently with its strand-based hair and fur system. While Houdini is vastly more than just a simulation tool, it is widely considered to be best in class for 3D VFX simulation. Houdini is the de facto standard for high-end visual effects simulation, used in most (if perhaps not all) major visual effects pipelines. The most impactful collaboration may be in creating simpler data and asset round-tripping between UE and Houdini, which would aid game, VFX, and virtual productions worldwide.


Hyprsense was founded in 2015 by Jihun Yu, Jungwoon Park, and Kenneth Ryu. Hyprsense’s aim is to provide real-time facial motion capture solutions for animation, 3D avatars, and digital humans. The company first developed its facial tracking system for VR HMDs, and then introduced a 2D webcam-based generic facial tracking solution for mobile, PC, and embedded platforms.

Animating convincing digital characters is clearly a key part of Epic’s strategy. Hyprsense’s technology and team join other major players who have become a part of Epic Games, most notably 3Lateral and Cubic Motion.

Epic hopes to empower developers to use digital humans in a range of applications beyond just what one might traditionally call a ‘game’. While Hyprsense has grown over the last five years to become a leading company in the gaming industry, with its technology adopted by world-leading AAA game companies, Epic is also interested in a broader audience. With this acquisition, Epic again furthers its work to make digital human content production more accessible. The acquisition will help accelerate Epic’s path to building new tools and also give Unreal Engine users the ability to deploy and drive advanced character assets on almost any platform.

“We are proud and excited to bring our character animation technology into the Epic Games ecosystem. Joining Epic gives us the opportunity to deliver new solutions and experiences on a massive scale,” said Jihun Yu, co-founder and CEO of Hyprsense. “We are so grateful to the Hyprsense team for their tireless work to get us here, as well as our customers, partners, and supportive investors.”

Kim Libreri

“Bringing on the Hyprsense team enables us to continue pushing digital character innovation even further and approach the goal of giving all creators full control over expressing their vision down to the smallest nuance,” said Kim Libreri, CTO of Epic Games. Since joining Epic Games as CTO six years ago, Libreri has worked closely with founder Tim Sweeney to bring film industry tech to the games community and make Epic’s tools available to a wide audience. He has also been central to Epic’s advances in Digital Human technology across all the company’s many industry segments.

Digital Domain’s Douglas.

Digital Domain, which had already shown Digital Doug as an avatar or digital puppet, is now expanding its work into realistic digital assistants. “Douglas” is at the forefront of a host of new realistic, real-time, autonomous digital humans. Douglas is a real-time digital person, built with the latest UE4.25 software. It is still in development, but the aim is to deploy the digital human agent as a new user interface (UI) for human-computer interaction (HCI). Douglas is different from Digital Doug, as the latter just emulated or matched the facial responses of the actual Doug Roble (Senior Director of Software R&D), as we outlined in our original fxguide fxpodcast about Digital Doug in 2018. The new Douglas has to do a lot more than just look good. This new digital agent has to connect with Natural Language Processing (NLP) to produce conversations that feel natural. It has to both make sense of the question being asked and produce a plausible response. Douglas is also a chameleon-like character, as it has the ability to switch faces, providing future customers with even more flexibility when it hits the market in 2021.

“Everywhere you look you see virtual assistants, chatbots, and other forms of AI-based communication interacting with people,” said Darren Hendler, director of Digital Humans Group at Digital Domain. “As companies decide to expand past voice-only interactions, there’s going to be a real need for photorealistic humans that behave in the ways we expect them to. That’s where Douglas comes in.”

Douglas uses numerous types of machine learning and Digital Domain R&D to reproduce the most common mannerisms people expect to see in a lifelike digital human. By focusing on language processing, expressions, vision tracking, and more, Douglas can learn from conversations and, importantly, remember people. Many current voice-only assistants, such as Siri and Alexa, are designed in such a way as to not visually identify the people interacting with them. Douglas’ actual response rate is currently on par with Alexa and Siri in terms of natural conversational flow, but it demonstrates a memory for its previous interactions and tries to remove the long ‘processing’ pauses that slow down many other autonomous digital humans.

Digital Domain (DD) is also creating and emulating new human voices. Using much of the same underlying advanced software, a new digital human can be built with only 30 minutes of audio or 10 minutes of video. This is dramatically lower than many other approaches, including earlier versions of DD’s own original technology from just two years ago.

DD will be offering the core Douglas technology as a service to companies who need a digital agent to answer questions or help customers with repetitive tasks. The current version is already designed to connect to most other chatbot or assistant systems. What will be key is making sure the realistic face can deliver an emotionally intelligent response during real-time interactions. Starting next year, DD hopes this technology can be deployed online, in meeting platforms, or in kiosks around the world.

Douglas is the latest UE4 digital human built from the likeness of Dr. Doug Roble. The Digital Doug program at DD has already led to several advancements in both real-time digital humans and AI facial capture. By comparing Douglas to the real-life Roble, DD has been able to advance the realism of its design as it prepares the technology for wider use.

To create Douglas, Roble submitted to over a hundred hours of performance capture, including a live book reading that logged his voice and expressions. A neural rendering tool was also trained on photos of Roble taken in a variety of lighting conditions, as we outlined in our fxguide story on DD’s Masquerade. From this data, the tool can now deliver levels of realism that would have been impossible to achieve before with traditional techniques, including replicating the mannerisms of another person with only a few expressions. In recent months, the team has begun swapping out faces.

Digital Douglas joins a range of projects that visual effects companies are leveraging to expand and diversify. Such projects increasingly use the Unreal Engine as the industry trends toward real-time applications.
