The Promise of Digital Doubles
Just to prove that I’m not always uncovering rough spots in our industry, I want to share something very exciting to me. As a virtual human designer, I’ve had a long-standing interest in various kinds of digital doubles (D2s). Before getting into this discussion, I’d like to clarify what a D2 is: a reasonably exact duplicate of an actor, either in a role or as him- or herself. A believable D2 is one that fools us into thinking the actor is the real live deal.
Note that characters like the amazingly realistic digital baby in Lemony Snicket and the believable Davy Jones in Pirates of the Caribbean are more accurately classified as virtual actors. They are also simulacra, but they are not D2s. Believable virtual actors are relatively new on the scene and are extremely difficult to achieve, requiring both a technical tour de force and great artistry.
There are a few examples of reasonably good D2s used in film. I first got seriously interested in D2s back in 1997, when Eileen Moran and her team at Digital Domain created the virtual Andre commercial, starring a reasonably good Andre Agassi D2. It was an impressive feat. I was told that they actually degraded some of the mo-cap on the real Andre so people would realize that it wasn’t really him. I’m not sure I believe that.
My hat is off to all the teams at many companies and universities who have been working to solve this complex problem. It involves so many 3D techniques, from performance capture to geometry perfection, texture capture, super-realistic skin rendering and more, with many places to not get it quite right. This brings us to the “Uncanny Valley,” the creepy feeling we get when we see an almost-right D2. Viewers will accept characters that are intentionally not supposed to look realistic, because we are able to suspend disbelief. But once characters cross the threshold where they are supposed to look like a real person, our subconscious minds become sensitive to the slightest imperfection in them and warn us, keeping us from suspending disbelief. The term Uncanny Valley itself refers to a graph charting how readily viewers accept a representation of a character. Acceptance rises steadily for obvious animation, even as characters get more and more realistic. But once characters cross the point where they are supposed to look like real life, yet due to imperfections fall just short, the curve plummets: thus the Uncanny Valley. Getting past that hidden protective mechanism is very difficult indeed…requiring near-absolute perfection.
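To make the shape of that graph concrete, here is a toy Python sketch of viewer affinity as a function of human likeness. This is purely my own illustration: the formula, the valley position and the numbers are invented for demonstration, not taken from any study.

```python
import math

def affinity(likeness: float) -> float:
    """Toy viewer-affinity curve for a character at `likeness` in [0, 1].

    Affinity climbs as characters get more realistic, then plunges into
    the "valley" centred (arbitrarily, for this sketch) around
    likeness ~= 0.85, recovering only as the character approaches
    indistinguishable realism (likeness -> 1.0).
    """
    base = likeness  # acceptance grows with realism...
    # ...until a narrow dip just short of full realism:
    valley = 0.9 * math.exp(-((likeness - 0.85) ** 2) / 0.002)
    return base - valley

# The valley effect: an "almost right" double scores worse than an
# obviously stylized character, while a near-perfect double recovers.
print(affinity(0.5))   # stylized character: positive acceptance
print(affinity(0.85))  # almost-right double: bottom of the valley
print(affinity(1.0))   # fully believable double: acceptance recovers
```

The point of the sketch is only the ordering of the three values: the near-realistic character lands below both the stylized one and the perfect one.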
If we could only get it right
Because of the creepiness factor, true full-body hero doubles really have not been as successful as virtual actors. We can forgive extremely minor imperfections in Davy Jones, but not in a digital Brad Pitt. One might classify the Conductor in Polar Express, or the Angelina Jolie character in Beowulf, as a D2 because they were designed to look very much like the real actor in a role, but they were never intended to fool us into thinking they were actually those actors. I still have difficulty accepting them.
If we could create a D2 that was truly believable, the implications would be broad and deep. For example, we could digitally clone big stars and have them work in several movies at once. They could have archives of performance data that could be edited and reused to correct shoots, or even to create new roles that the actor never actually played. Yet they would still get paid for the use of their persona.
An actor could record dialog in their spare time between live takes, which would then be applied to their D2’s performance. Aside from the risk of overexposure, they could enjoy greater fame and income by acting in several movies at once without even being there.
On the flip side, I understand producers can negotiate a lower payment for D2s than for the real-flesh actor. If you want to think really weird, imagine Daniel Craig at 80 starring in an action adventure via the D2 archived from his youth. It could happen.
Until this year, in my opinion, a truly believable, full-motion D2 had never been achieved. A few previous attempts came close, including such invisible VFX as head replacements in Spiderman and Superman and shots of a virtual actor as Superman flying. They were believable, but not full motion. There were a number of pretty good attempts, but to my knowledge none of them were 100 percent believable full-motion, full-body performance captures.
Then one of my editors told me about a rumor circulating that The Curious Case of Benjamin Button was using a new kind of digital double, actually a series of believable doubles for Brad Pitt aging backwards. The word was that they would be like nothing we’d ever seen before. I searched around for more information, but no one could tell me anything about it, which was extremely frustrating at the time. Now you can see some awesome trailers here. These are actually virtual actors more than digital doubles, because Brad Pitt is clearly in heavy makeup, which impacts how his face moves and how he looks.
The Spanish Connection
Meanwhile, I had to run off to Spain to give some talks at Mundos Digitales, an animated film conference that I co-chair with Manuel Meijide in A Coruña, Spain. There, I ran into Dr. Paul Debevec, who was giving a talk about his collaboration with Image Metrics on “The Emily Project,” named for the actress Emily O’Brien, who was their test subject. My talk was on virtual human design, and Paul’s talk led perfectly into mine. I was fascinated by the detail that was going into making a digital double. Over a glass of wine, Paul told me he thought the final results might leap across the Uncanny Valley, which is VFX geek-speak for “achieving a fully believable virtual human.” At the time I didn’t actually believe that was possible.
Enter Image Metrics
However, just in time for SIGGRAPH 2008, Image Metrics announced that they had created a truly believable D2 and that they were going to demo her at the conference.
I was more skeptical than usual, so I asked if they could show me a preview. They sent me a confidential clip, just a few seconds of their early work. I thought I was looking at a video of Emily O’Brien chatting with one of the researchers. I contacted their PR woman and accused her of either trying to fool me or sending me the wrong clip. I was quite wrong about that: even in this very short early clip, I was completely fooled. Later, when they sent me the final copy, I was pretty much astounded, and that’s very rare for me.
“The Emily Project” has been an intense effort both in Santa Monica and in Manchester, UK, where the capture process and specialized software were developed. It turns out that the amount of data needed to create such an accurate D2 was extraordinary. Capturing facial information at that level of detail resulted in a torrent of data that needed to be organized, analyzed and compressed to a usable flow – no easy task.
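The scale of that torrent is easy to underestimate. A quick back-of-envelope Python sketch shows how fast raw facial capture data piles up; every parameter here (frame rate, resolution, bit depth, session length) is my own hypothetical assumption for illustration, not a figure from Image Metrics.

```python
# Hypothetical capture parameters -- invented for illustration only.
frames_per_second = 30       # assumed capture rate
seconds = 3 * 60             # an assumed three-minute capture session
width, height = 4096, 4096   # assumed per-frame resolution
bytes_per_pixel = 6          # assumed 16-bit RGB

# Raw, uncompressed data volume for one session:
bytes_total = frames_per_second * seconds * width * height * bytes_per_pixel
print(f"{bytes_total / 1e12:.2f} TB of raw data")
```

Even with these modest made-up numbers, a single short session lands in the half-terabyte range of raw data, which is why organizing, analyzing and compressing it into a usable flow is no small task.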
To create a true D2 takes a lot more than simple modeling, motion capture and rendering. It takes in-depth digital analysis of the specific actor’s skin, and the way it moves, in extremely high resolution.
The process is threefold:
First, the team needed an actor with the patience to sit perfectly motionless for about three minutes inside a totally sci-fi-looking light sphere, with over 400 carefully aimed LED light sources focused on her. They then captured the information needed to make a near-perfect, ultra-high-resolution model of her.
Next, they created a 3D model of her face with unprecedented detail in geometry, motion, texture and rendering.
Third, they used their proprietary performance capture solution to record behavior and then apply it to the model, virtually on a pixel-by-pixel basis. The result is spectacular…