Synthesia, the AI video company, has announced a new round of investment: a $12.5M Series A. The funding will be used to fuel both Synthesia’s rapid enterprise user growth and the product development of its AI video generation platform. Synthesia is focused on reducing the friction of video creation and making it possible for anyone to create videos directly from their browser using Neural Rendering technology.

The company was founded in 2017 by technology entrepreneurs Victor Riparbelli and Steffen Tjerrild, together with two of the world’s leading computer vision researchers, Matthias Niessner and Lourdes Agapito. Fxguide test drove the company’s Studio product, which allows someone to produce a digital representative and then drive that digital human with either a simple audio file or just a typed script (see above).

In the test demo above, fxguide’s Mike Seymour introduces two fully digital versions of himself. The videos were authored in our offices, on a laptop, and were based on training data shot a couple of days earlier. For the training clips, Mike was filmed delivering the same short script several times. Any new neural-rendered clip is then inferred from this training data in a couple of minutes. Backgrounds, animation, and even music can be added as part of this process. The final output clip is pre-comped and ready to use immediately.

The company aims to replace cameras with code and scale its AI video generation platform. Its focus is not high-end M&E; rather, it aims to produce vast amounts of video content without needing to film actors. Right now the market is internal training and corporate communication, but in the long term (under ten years), Synthesia’s Danish CEO Victor Riparbelli believes narrative films and pre-production films could all be explored without leaving one’s desk. This long-term plan requires solving the vast problems of producing actual performances, with subtext and emotionally valid interactions, but along that path the company aims to release a series of progressively more realistic digital human tools and SDK platforms.

In this second example above, one of the company’s pre-set and pre-trained presenters is used. Note that the same presenter can ‘speak’ any of a large set of languages, with a variety of accents. Here both the voice and the face are digitally generated.

Four years ago the founders of Synthesia set out to build an application layer that turns code into video, allowing video content to be programmed with computers rather than just recorded with cameras and microphones. Once video production is abstracted away as code, it gains all the benefits of software: infinite scale, close-to-zero marginal cost, and accessibility to everyone. While it may sound impossible to ‘film’ without a camera, Riparbelli points out that not that long ago it seemed equally impossible to “play music without a musical instrument and there was a time when writing required a pen and paper.”

The company is now cash flow positive, and this latest round of funding will enable it to grow further. One of the current investors is Shark Tank’s Mark Cuban, the American billionaire entrepreneur and television personality, who was the first investor in the company.

The founders of Synthesia had only just formed their company and were running it on their personal savings and credit cards, not long after Sony Pictures was hacked and its files were dumped. Founder Steffen Tjerrild happened to be looking at the Sony data dump and saw that it contained Cuban’s then-private email address. With nothing more than that, they emailed him. At the time Matthias Niessner had published his first research, but most of the world was yet to know of GANs, and certainly no one had Synthesia on their planning horizon. Cuban, however, had not only heard of GANs and Niessner, he was actively experimenting with GANs himself. Riparbelli recounts that “we had an email exchange with Mark that lasted 16 hours ’til about 5:00 AM. All this time he would send us questions and we would answer them. And then at 5 AM, he said he wants to invest a million dollars!” There is clearly a reason Cuban is a successful billionaire, Synthesia’s founders believe; they were both extremely impressed with Cuban’s technical understanding and appreciative of his support and input. “The fun thing, which I think people don’t really know about Mark, is that Mark’s actually crazy technology savvy. He knew everything about GANs. He knew everything about this type of technology.”

The UI of Studio on a Mac Laptop

This funding round is led by FirstMark Capital, the NY-based early-stage fund, and also adds two new angel investors: Christian Bach (CEO, Netlify) and Michael Buckley (VP Communications, Twilio). All existing investors are also participating in the current round: LDV Capital, MMC Ventures, Seedcamp, Taavet Hinrikus, Martin Varsavsky, TinyVC, and of course Mark Cuban.

Today the company has offices in Europe and New York. It operates on a COVID-era distributed model, but the team is growing and hiring rapidly. “We launched our SaaS product, Synthesia STUDIO, just 6 months ago,” explains Riparbelli. “It’s still in beta and is the world’s first and largest platform for creating AI videos. Our text-to-video technology allows businesses to produce professional-looking videos in minutes instead of days. We have essentially reduced the entire video production process to a single API call or a few clicks in our web app.”
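To make the “single API call” claim concrete, here is a minimal sketch of what a text-to-video request could look like. The endpoint URL, JSON field names, avatar ID, and background name are illustrative assumptions for this article, not Synthesia’s documented API.

```python
# Hypothetical sketch of a "single API call" that turns a script into a video.
# The URL, JSON fields, and avatar/background IDs are assumptions for
# illustration only, not Synthesia's documented API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential

payload = {
    "input": [
        {
            "scriptText": "Welcome to our quarterly compliance training.",
            "avatar": "presenter_en_01",     # assumed pre-trained presenter ID
            "background": "office_neutral",  # assumed stock background
        }
    ],
    "test": True,  # assumed flag for a watermarked draft render
}

response = requests.post(
    "https://api.example-video-platform.com/v1/videos",  # illustrative endpoint
    json=payload,
    headers={"Authorization": API_KEY},
)
response.raise_for_status()
print("Video queued, id:", response.json().get("id"))
```

The point of the sketch is the shape of the workflow the company describes: a typed script plus a chosen presenter goes in, and a finished, pre-comped clip comes back, with no camera or edit suite involved.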

Synthesia will use the new funding to invest heavily in product and IP development, starting with its personalized video API. The team is soon to release a version of Studio with expressive non-verbal behaviour such as nods and smiles. This new version, due in the next 7-8 weeks, also allows the digital human to return to exactly the same start frame, enabling seamless editing or transitions between clips. This is vital for digital assistants, where the user experiences a visually seamless flow as the assistant responds to a branching narrative.
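As a rough illustration of why a shared start frame matters for branching assistants, the sketch below chains pre-rendered clips that all begin and end on the same neutral pose. The clip paths, intent names, and playlist structure are hypothetical, not part of Synthesia’s product.

```python
# Minimal sketch: clips that all begin and end on the same neutral frame can be
# played back-to-back without a visible jump cut, so a branching assistant can
# pick any clip in response to the user. All names here are hypothetical.
RESPONSE_CLIPS = {
    "greeting": "clips/greeting.mp4",
    "pricing": "clips/pricing.mp4",
    "fallback": "clips/fallback.mp4",
}

def queue_response(intent: str, playlist: list[str]) -> None:
    """Append the clip for a recognised intent, falling back gracefully.
    Because every clip starts and ends on the identical neutral pose,
    consecutive playback appears as one continuous performance."""
    playlist.append(RESPONSE_CLIPS.get(intent, RESPONSE_CLIPS["fallback"]))

playlist: list[str] = []
queue_response("greeting", playlist)
queue_response("pricing", playlist)
print(playlist)  # played in order, the joins are visually seamless
```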

The company is also working with sites such as unsplash.com and pexels.com to provide royalty-free imagery directly inside the next release of Studio’s authoring and editing tools. Users will also be able to time a digital human’s delivery to the timeline of a background animation or a PowerPoint slide deck, making it easier to keep the presenter in sync with the backdrop.

Riparbelli has been impressed with the corporate response to the company’s SaaS offering. “We have been overwhelmed by the response in the last six months since our beta launch: we now have thousands of users and our customers range from small agencies to Fortune 500 companies. They use Synthesia Studio primarily for internal training and corporate communications. But now we are seeing more and more companies starting to use it for external communications, incorporating personalized video into every step of a customer journey.”

While Synthesia has been in existence for four years and has already worked with some of the world’s top celebrities, such as David Beckham, Lionel Messi, and Snoop Dogg, CEO Victor Riparbelli says the company is still early in its journey. “We’re building foundational technology for the next iteration of the internet. CSS enabled beautiful layouts and graphics. Javascript gave us interactive websites. Smartphones made everything accessible at all times and equipped us with sensors to capture the world around us. ML analyzes and personalizes our digital experiences at scale.”

Sample pre-trained actors or digital humans