Just to prove that I’m not always uncovering rough spots in our industry, I want to share something very exciting to me.  As a virtual human designer, I’ve had a long-standing interest in various kinds of digital doubles (D2s).  Before getting into this discussion, I’d like to clarify what a D2 is: a reasonably exact duplicate of an actor, either in a role or as himself or herself. A believable D2 is one that fools us into thinking we are watching the real, live actor.

Since Plato’s time there has been a word for such human duplicates: simulacra.  It’s still used, but simulacra don’t have to be exactly faithful reproductions, so I think “Digital Double” or “D2” is the more accurate term for what I’m discussing. Taking this a step further, I envision that in the not-too-distant future, virtual human brain technology will be combined with D2 technology to create life-like interactive virtual humans with intelligence and personality.

Note that characters like the amazingly realistic digital baby in Lemony Snicket and the believable Davy Jones in Pirates of the Caribbean are more accurately classified as virtual actors; they are also simulacra, but they are not D2s.   Believable virtual actors are relatively new on the scene and are extremely difficult to achieve, requiring both a technical tour de force and great artistry.

There are a few examples of reasonably good D2s used in film.  I first got seriously interested in D2s back in 1997, when Eileen Moran and her team at Digital Domain created the virtual Andre commercial, in which a reasonably good Andre Agassi D2 starred.  It was an impressive feat.  I was told that they actually degraded some of the mo-cap on the real Andre so people would realize that it wasn’t really him.  I’m not sure I believe that.

My hat is off to all the teams at many companies and universities who have been working to solve this complex problem.  It involves so many 3D techniques, from performance capture to geometry refinement to texture capture to super-realistic skin rendering and more, with many places to not get it quite right.  This brings us to the “Uncanny Valley,” the term for the creepy feeling we get when we see an almost-right D2. Viewers will accept characters that are intentionally not supposed to look realistic, because we are able to suspend disbelief. But once characters cross the threshold where they are supposed to look like a real person, our subconscious minds become sensitive to the slightest imperfection and warn us, keeping us from suspending disbelief.

The term Uncanny Valley itself refers to a graph charting how much viewers accept a representation of a character against how realistic it looks. Acceptance keeps rising as characters become more and more realistic, but once they cross the point where they are supposed to look like real life, yet due to imperfections do not quite achieve it, the curve plummets: hence the valley. Getting past that hidden protective mechanism is very difficult indeed…requiring near absolute perfection.
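To make the shape of that curve concrete, here is a purely illustrative sketch in Python (assuming numpy and matplotlib; the numbers are invented and only the qualitative shape matters): acceptance climbs with realism, crashes in the “almost real” zone, and only recovers as the character approaches true realism.

```python
# Purely illustrative Uncanny Valley curve -- the shape is conceptual and the
# numbers are invented; it is not based on any measured data.
import numpy as np
import matplotlib.pyplot as plt

realism = np.linspace(0.0, 1.0, 500)   # 0 = abstract cartoon, 1 = indistinguishable from real
valley = 1.6 * np.exp(-((realism - 0.85) ** 2) / 0.004)  # sharp dip in the "almost real" zone
acceptance = realism - valley          # acceptance rises, plummets near realism, then recovers

plt.plot(realism, acceptance)
plt.xlabel("How realistic the character looks")
plt.ylabel("Viewer acceptance")
plt.title("Notional Uncanny Valley curve")
plt.show()
```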

If we could only get it right

Because of the creepiness factor, true full-body hero doubles really have not been as successful as virtual actors. We can forgive extremely minor imperfections in Davy Jones, but not in a digital Brad Pitt.  One might classify the Conductor in The Polar Express or the Angelina Jolie character in Beowulf as D2s, because they were designed to look very much like the real actor in a role, but they were never intended to fool us into thinking they were actually those actors. I still have difficulty accepting them.

If we could create a D2 that was truly believable, the implications would be broad and deep.  For example, we could digitally clone big stars and have them work in several movies at once.  They could have archives of performance data that could be edited and reused to correct shoots or even to create new roles that the actor never actually played.  Yet they would still get paid for the use of their persona.

An actor could record dialog in their spare time between live takes, and it would be applied to their D2’s performance.  Aside from the possibility of overexposure, they could enjoy greater fame and income by acting in several movies at once without even being there.

On the flip side, I understand producers can negotiate a lower payment for D2s than for the flesh-and-blood actor.  If you want to think really weird, imagine Daniel Craig, at 80, starring in an action adventure via the D2 archived from his youth.  It could happen.

Until this year, in my opinion, a truly believable, full-motion D2 had never been achieved. A few previous attempts came close, including such invisible VFX as head replacements in Spider-Man and Superman and shots of a virtual actor as Superman flying.  They were believable, but not full motion. There were a number of pretty good attempts, but to my knowledge none of them were 100 percent believable, full-motion, full-body performance captures.

Then one of my editors told me about a rumor circulating about The Curious Case of Benjamin Button using a new kind of digital double, actually a series of believable doubles for Brad Pitt aging backwards. The word was that they would be like nothing we’d ever seen before. I searched around for more information, but no one could tell me anything about it, which was extremely frustrating at the time.  Now you can see some awesome trailers here. These are actually virtual actors more than digital doubles, because Brad Pitt is clearly in heavy makeup, which impacts how his face moves and how he looks.

The Spanish Connection

Meanwhile, I had to run off to Spain to give some talks at Mundos Digitales, an animated film conference that I co-chair with Manuel Meijide in A Coruña, Spain.  There, I ran into Dr. Paul Debevec, who was giving a talk about his collaboration with Image Metrics on “The Emily Project,” named for the actress Emily O’Brien, who was their test subject. My talk was on virtual human design, and Paul’s talk led perfectly into mine. I was fascinated by the detail that was going into making a digital double.  Over a glass of wine, Paul told me he thought the final results might leap across the Uncanny Valley, which is VFX geek-speak for “achieving a fully believable virtual human.”  At the time I didn’t actually believe that was possible.

Enter Image Metrics

However, just in time for SIGGRAPH 2008, Image Metrics announced that they had created a truly believable D2 and that they were going to demo her at the conference.

I was more skeptical than usual, so I asked if they could show me a preview.  They sent me a confidential clip, just a few seconds, of their early work.  I thought I was looking at a video of Emily O’Brien chatting with one of the researchers. I contacted their PR woman and accused her of either trying to fool me or sending me the wrong clip.  I was quite wrong about it.   Even in this very short early clip, I was completely fooled.  Later, when they sent me the final copy, I was pretty much astounded, and that’s very rare for me.

“The Emily Project” has been an intense effort both in Santa Monica and in Manchester, UK, where the capture process and specialized software were developed. It turns out that the amount of data needed to create such an accurate D2 was extraordinary.  Capturing facial information at that level of detail resulted in a torrent of data that needed to be organized, analyzed and compressed to a usable flow – no easy task.
To create a true D2 takes a lot more than simple modeling, motion capture and rendering.  It takes in-depth digital analysis of the specific actor’s skin, and the way it moves, in extremely high resolution. 

The process is threefold:

First, the team needed an actor with the patience to sit perfectly motionless for about three minutes inside a totally sci-fi-looking light sphere with over 400 carefully aimed LED light sources focused on her.   They then captured the information needed to make a near-perfect, ultra-high-resolution model of her.

Next, they created a 3D model of her face with unprecedented detail in geometry, motion, texture and rendering.

Third, they used their proprietary performance capture solution to record behavior and then apply it to the model virtually on a pixel-by-pixel basis. The result is spectacular…
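For the programmers in the audience, here is a purely schematic outline of that three-stage workflow. Every name, type and field below is my own invention for illustration; this is emphatically not Image Metrics’ actual code.

```python
from dataclasses import dataclass

# Schematic outline of the three-stage workflow described above.
# All names and types are invented for illustration only.

@dataclass
class LightStageScan:
    diffuse: bytes          # reflectance images captured inside the LED sphere
    specular: bytes
    normal_maps: bytes
    geometry: bytes         # high-resolution geometry for this pose

@dataclass
class FaceModel:
    mesh: object            # ultra-high-resolution geometry
    textures: object        # diffuse / specular / normal maps
    motion_model: object    # how the face deforms between expressions

def capture_scans(actor: str) -> list[LightStageScan]:
    """Stage 1: scan the motionless actor inside the light sphere."""
    raise NotImplementedError

def build_face_model(scans: list[LightStageScan]) -> FaceModel:
    """Stage 2: fuse the scans into one detailed model of the face."""
    raise NotImplementedError

def apply_performance(model: FaceModel, footage: object) -> object:
    """Stage 3: drive the model with markerless performance capture footage."""
    raise NotImplementedError
```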


Emily explains…

The Making of "The Emily Project"…

Enter Emily

They chose a wonderful young actress named Emily O’Brien, an optimal choice because not only is she rather pleasant to look at, she’s also a lot of fun and extremely talented at holding her position for three minutes while the computers flashed lights at her in rapid succession.

Through the use of ingenious techniques and clever math, Debevec and his team were able to capture and extract all the information needed to build an extremely accurate model of Emily’s face…not just the way it looks, but the way it works.  They built mathematical models of how her bones move, how the muscles slide over the bones, and how her skin moves and creases as she talks and makes emotional expressions.

To do this, they had to capture 33 face scans, each consisting of four 10-megapixel digital images: one each for diffuse, specular, diffuse normal and specular normal maps.  In addition, each shot required a high-resolution geometry scan at 5 megapixels.  This yielded facial geometry right down to Emily’s individual pores. Previously, this level of detail was only obtainable through face casting, which is extremely uncomfortable, time-consuming and expensive.
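To get a feel for how much raw data that implies, here is a rough back-of-envelope calculation based only on the figures above. The per-pixel storage assumptions are mine (the actual capture formats haven’t been published), so treat the result as an order-of-magnitude estimate.

```python
# Rough back-of-envelope estimate of the raw data from the static scan session.
# The byte-per-pixel assumptions are mine, not Image Metrics'.
SCANS = 33
IMAGES_PER_SCAN = 4                 # diffuse, specular, diffuse normal, specular normal
IMAGE_PIXELS = 10_000_000           # 10 megapixels per image
GEOMETRY_SAMPLES = 5_000_000        # 5-megapixel geometry scan per shot

BYTES_PER_IMAGE_PIXEL = 3           # assumed 8-bit RGB
BYTES_PER_GEOMETRY_SAMPLE = 12      # assumed x, y, z as 32-bit floats

image_bytes = SCANS * IMAGES_PER_SCAN * IMAGE_PIXELS * BYTES_PER_IMAGE_PIXEL
geometry_bytes = SCANS * GEOMETRY_SAMPLES * BYTES_PER_GEOMETRY_SAMPLE

print(f"Images:   {image_bytes / 1e9:.1f} GB")    # ~4.0 GB
print(f"Geometry: {geometry_bytes / 1e9:.1f} GB") # ~2.0 GB
print(f"Total:    {(image_bytes + geometry_bytes) / 1e9:.1f} GB")
```

That comes to several gigabytes before a single frame of actual performance has been captured, which helps explain the “torrent of data” mentioned earlier.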

But if they had to capture all that data for each frame, it would probably take all the computing power in the known universe to do it in real time. By building an accurate mathematical model of Emily, the team at Image Metrics was able to use their proprietary performance capture solution to record the real Emily, both in performance and in casual conversation.  They then applied that performance data to their model of Emily and brought it to life, creating the first true D2.  This is of course an oversimplification of what they did, but you get the idea.
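Image Metrics hasn’t published how their model works, but the general trick of driving a pre-built face from a compact set of per-frame parameters can be illustrated with a classic blendshape (linear morph) model, a standard facial-animation technique rather than their proprietary method. Once the scan detail is baked into a basis, each frame of performance needs only a handful of weights instead of a fresh multi-gigabyte capture. The sizes and data below are placeholders.

```python
import numpy as np

# Minimal blendshape-style sketch (a standard technique, NOT Image Metrics'
# proprietary method). A detailed neutral mesh plus a set of expression deltas
# are built once; each animated frame is reconstructed from a few weights.

N_VERTICES = 100_000   # hypothetical mesh resolution
N_SHAPES = 50          # hypothetical number of expression shapes

rng = np.random.default_rng(0)
neutral = rng.standard_normal((N_VERTICES, 3))                   # stand-in for the scanned neutral face
deltas = rng.standard_normal((N_SHAPES, N_VERTICES, 3)) * 0.01   # stand-in expression offsets

def pose_face(weights: np.ndarray) -> np.ndarray:
    """Reconstruct a posed face: neutral + weighted sum of expression deltas."""
    return neutral + np.tensordot(weights, deltas, axes=1)

# One frame of "performance" is just 50 numbers, not gigabytes of scan data.
frame_weights = rng.uniform(0.0, 1.0, N_SHAPES)
posed = pose_face(frame_weights)
print(posed.shape)   # (100000, 3)
```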

Just for fun, I talked with Emily O’Brien about her experience, and she told me: “When I first saw the demo for Project Emily, I thought it was just me there on video capture.  But then they started removing my skin; it was creepy, but I realized that was the Digital Me!  I was really amazed.”   I asked her whether, going into this gig, she thought it would help make her world famous: “No, not really at all.  I had some time and I learned about it through Craigslist.  It sounded interesting. I thought I’d have to put on all those markers you usually see on actors for performance capture, but I was pleasantly surprised that I didn’t have to use any markers at all.  Now with all the publicity and interest…I just had no idea what a big deal this was going to be.”

I suggested that she and her D2 had become an important milestone in CG history.  She appeared to be pleased at that.  As I type this, Emily is in Germany talking with students at SOS (Summer of Suspense) about her experience in creating the first true digital double.

All this made me look a little bit foolish since I’d just gone on record at Mundos Digitales saying I believed it would be another two years before the world would see a believable digital double…I was wrong.  I guess Ray Kurzweil’s suggestion that progress is accelerating at a double exponential rate is pretty much the real deal.

Image Metrics reel…