YouTube Originals has released a new series, The Age of A.I., hosted by Robert Downey Jr. The all-new learning series spans eight episodes and takes a deep dive into the world of transformational artificial intelligence. How is artificial intelligence reshaping the world? Can machine learning enhance human experience? Can artificial intelligence help level the playing field for people with disabilities? Increase people’s physical performance? Or provide creative collaborative partners? These are just a few of the questions explored as The Age of A.I. looks at the technology that will impact our world for years to come.
Robert Downey Jr. seems genuinely enthusiastic and curious to take an immersive look at artificial intelligence and its potential to change the world. The Age of A.I. launched December 18 on YouTube.com/Learning.
The first episode features the work of Dr. Mark Sagar and the team at Soul Machines, both in terms of their BabyX research and a project to make a digital will.i.am. We have covered the work of Dr. Sagar before on fxguide and we again spoke to him in New Zealand to get some background on the incredible research he is explaining in the documentary. Some of this work, such as digital will.i.am, has never before been seen, although the core of that work was completed almost a year ago.
The premiere episode follows co-founder of Soul Machines, Dr. Sagar, a former Weta Digital, Oscar-winning visual effects artist who has created some of the most sophisticated avatars, as he discusses research with BabyX and the team builds an autonomously animated digital version of Grammy-award winner will.i.am. Future episodes will feature prominent figures, including former NFL linebacker Tim Shaw, who is battling ALS, as he works with a team at Google to help restore his ability to communicate, testing the prototype of Project Euphonia for the first time.
Dr. Mark Sagar got involved in the documentary when he met will.i.am at a conference in Japan. “I had just been driving around Tokyo in a Mario Kart-style go-kart (a tourist attraction), I had on a Cookie Monster outfit, and when we first met there was a Japanese version of the Beatles playing and Stella McCartney was also there! Will.i.am was there with another member of the Black Eyed Peas and they were both aware of BabyX, so we just started talking,” Dr. Sagar recounts.
Will.i.am is something of an AI futurist, in addition to being a major creative force in music and the arts, and he knew of Dr. Sagar’s research with BabyX. As will.i.am already had projects involving synthetic voice-based agent versions of himself, the idea of making a real-time, high-fidelity digital will.i.am was born. As will.i.am also happened to know Robert Downey Jr., the producers decided to incorporate will.i.am’s digital journey into the first episode.
Will.i.am (the real one) talks in the documentary about data identity and the idea that “I am my data”. Soul Machines carefully produces such digital versions of real people with a clear eye on protecting personal data, and on making sure that even people who contribute to their extensive training data are suitably compensated. Soul Machines makes both ‘new’ digital people and reproductions of actual people. The digital will.i.am is a simulation or replication of both the artist’s face and voice. The likeness was captured in a photogrammetry session with a portable 10-camera Soul Machines rig. The capture session is fundamentally a FACS-based approach, as Dr. Sagar is a pioneer in applying FACS to animation, dating back to his work at Weta Digital, but the team is constantly improving and refining its pipeline. The team had previously replicated, for example, Cate Blanchett’s voice for a different project. That had required some 20 hours of training material gathered carefully with the actress. For the new project, only two hours of will.i.am were recorded with the artist in the USA.
“BabyX” was created as an autonomously animated psycho-biological model of a virtual infant. The simulation covers not only her appearance but also runs a dynamic cognitive architecture model, sensing the environment and driving her behaviour and emotion in real-time. BabyX is designed as both a stand-alone research project and as an expandable base to feed into more complex industrial or commercial computer agents.
She is rendered in real-time by Soul Machines’ own BL (Brain Language) renderer, and she is fully autonomous, not an avatar. The primary GPU computation is done in the cloud for the general Soul Machines implementation, which means that when it is deployed as a real product it can run on anything from cell phones to high-end desktop computers.
While BabyX has her own biologically based AI backend driving behaviour and learning, the core platform BabyX runs on (HumanOS) can also be used for commercial applications, integrating autonomous animation with other more standard and curated AI components, for example, dialogue systems for conversation.
As the documentary touches on, Soul Machines is a successful company that produces real-world autonomous agents for companies ranging from ANZ Bank to Procter & Gamble. Soul Machines currently has over 120 staff and is looking to hire more. Unlike some start-ups, it is actively producing major paid projects with companies around the world, while simultaneously growing its research unit.
BabyX, by contrast, is a child simulation, aged around 1 to 2. She was designed exclusively for research, and ‘she’ allows Soul Machines not only to explore how to make digital people but also to explore actual child and developmental psychology. In the young in particular, the face reflects brain state, or human ‘thoughts’. Because the behaviour of the face is affected by so many factors (cognitive, emotional, and physiological), BabyX provides a highly detailed, holistic and biologically based approach, going far beyond what has previously been attempted in facial animation simulation.
In the documentary, Dr. Sagar outlines his research agenda to explore human co-operation with machines and digital consciousness. This raises the question: what is normal consciousness, and do we even understand it well enough to set a target to aim for in this sort of AI work?
Dr. Sagar says there are many different definitions of consciousness, depending on how the word is contextualized, and he cherry-picks his thinking from leading thinkers. “For example, a really useful conceptual model of key aspects of consciousness has been developed by a Princeton neuroscientist, Dr. Michael Graziano. It basically posits a limited version of consciousness in informational terms as a description of attention. Because you can describe what you’re aware of, therefore, at some level it has to exist as information – because you can access the description and express it.” One function of this definition is to attribute awareness to others, to compute that person Y is aware of thing X. In Graziano’s theory, the machinery that attributes awareness to others also attributes it to oneself: self-awareness. Dr. Sagar is keen to explore these issues over the coming years. “The reason I like Graziano’s ideas is that he shows how his model can be used to explain a lot of phenomena related to conscious experience… it’s probably the most practical theory about consciousness I’ve come across and it is compatible with the general direction we have been taking to date,” he adds.
Dr. Sagar is also very interested in the issue of creativity. While much of the documentary focuses on AI using machine learning, the even more fundamental question is: can machines be creative?
To this end, BabyX is also involved in a research program to connect her with a painting robot. “I’m fascinated with the creative process… we have another project where BabyX is going to be controlling a robotic painting machine using real paint on a canvas, and there’ll be an interaction between the baby, the painting and people.” This is designed to explore the whole creative loop as an ‘artist’ creates something, from the artist having an idea, or perhaps just from playing or doodling. “The artist Kandinsky said, ‘it all begins with a dot’,” Dr. Sagar points out. “You’ve got a blank canvas and a dot. What happens next is based on the interaction, the physical interaction of the paintbrush, the canvas.” As BabyX will be engaged in the loop as ‘her’ own painting evolves, the team is hoping to explore serendipity, playfulness and thus the very issue of creativity. “Can we make a computer which is able to do that? And if we can, then working with a person makes them an art director in a way. We might be able to create all kinds of things that we have never imagined.” This leads even further into issues of beauty: can a computer appreciate or sense beauty in an aesthetic?
Both in the case of BabyX and will.i.am, the researchers are also exploring embodiment and providing accurate bodies to match their digital faces. The notions of motion, exploration and inhabiting a space are all key aspects of what it is to be a person. Identity is very much expressed non-verbally, and often in very effective body language alone.
The range of tasks undertaken by Soul Machines’ researchers is constantly expanding. For will.i.am, for the first time, the team had to model a beard and sunglasses, and produce much more complex eyelash and eyelid models. “We have actually also captured the other band members, so what that project finally ends up being is still in flux,” Sagar concludes.
It is still fairly early days for Soul Machines, even if the current version of BabyX is v6.5 and the real daughter of Dr. Sagar, on whom BabyX was initially modeled, has grown up far beyond her digital counterpart.
Episode one of the series covers both Soul Machines’ work and other advances in AI. The second episode tells a moving account of an ALS sufferer being given a new voice in the world. Episodes will release weekly starting this week. YouTube Premium subscribers will have access to binge the first four episodes starting December 18 and will be able to binge episodes 5-8 on January 15, 2020.