Image Metrics Makes a Human Hybrid for Splice
A Post-Sync'd Performance for a Biohorror Creature Feature
Image Metrics, with a resume that includes cutting-edge film work (The Curious Case of Benjamin Button) and envelope-pushing videogame efforts (Red Dead Redemption), worked with Toronto VFX facility C.O.R.E. Digital (which shut down in March) to merge Chanéac’s facial performance seamlessly into those shots. The scenes Image Metrics worked on depict Dren learning to read Scrabble tiles, her attack on the brother of the character played by Adrien Brody, and a scene in which she is nearly drowned in the laboratory. (For more background on how they do what they do, take a look at Peter Plantec’s 2008 piece on Image Metrics for StudioDaily.) Film & Video talked to Christopher Jones, Image Metrics performance capture supervisor and project manager for Splice, about working with Natali and Chanéac to create a new performance in post.
Christopher Jones: We were told the young actress didn’t quite get the emotional read that Vincenzo was going for, so he wanted to use Delphine Chanéac [in scenes depicting adolescent Dren]. He felt she was really the embodiment of the character. We just did a top-half-of-the-face replacement. We animated the cheeks, the eyes, the eyebrows, and the forehead going up to the back of the head. C.O.R.E. Digital provided the rigs for us, and they were working in Houdini for this particular show.
How did you make sure that you got the right emotions across in important scenes?
I flew up to C.O.R.E. Digital’s studio in Toronto, where we sat down and did a kind of reverse-ADR session. Instead of recording audio played back to picture, we were recording facial performance played back to picture. We set her up on a stage and set up a camera in front of her. We put the camera right in front of a projection screen. She would watch the screen on a loop with ADR beats and come in and hit her emotional cues, if you will. Vincenzo was right there, directing her through the scenes to get exactly what he wanted from her. She could see what was going on and try to match her timing and get the expressions that embodied the character.
We delivered the footage to C.O.R.E. Digital while we were there. They edited the selects down to the specific shots they wanted to match each plate, and we processed and animated those. As the rig kept evolving, we wanted to change the shapes a bit to more closely match the actress’s intention, but there is still that disconnect between the facial geometry of a regular person and the creature. What was really nice about working with C.O.R.E. Digital, and Vincenzo in particular, was that he had a really hands-on approach to working with the VFX team, and with us as a partner of the VFX team. He would provide personal notes on every shot, down to the smallest minutiae. On a lot of shots, he realized he wanted us to hand-tweak them. The Image Metrics technology works with an existing rig, so you can hand-animate over the top and fine-tune it to get whatever the director wants. That enabled us to work through his notes and revisions. They took the animations we provided and did a quick overnight initial comp and render – not with full lighting, but enough that we could see our animation on the tracked head in the scene and make sure the eyelines were right and the intention was reading properly.
And what you do doesn’t involve markers on the actor’s face. You just use straight video, correct?
That’s correct. I think we shot it with the Panasonic HVX200 – just regular high-definition video for this show. Our technology takes a look at what the actor’s face is doing. It’s not like traditional mocap, where a point translates to a point. We run that video through a computer algorithm that extracts a sort of motion data set, numerical values for the movements of the face, so the animators can go in and set the poses we think equate to the actor’s expressions. It’s especially subjective on a character whose face doesn’t quite match the actor’s. We set a few of those key poses, and then the motion data set extracted by our algorithm fills in the gaps. It’s not just interpolation but the real, true movements. The actor’s intention comes across.
But you were just replacing the upper half of the face. Did that make it any more difficult for you in terms of blending the animation into the live action - animation from one actress and a live plate from another?
There were a good number of shots where we did have to stray a little bit from the intention of the older actress’s movements because they didn’t quite match what the face of the young actress was doing. Those were shots where we spent more time working with Vincenzo over video chat between Toronto and our office in Santa Monica. We needed to finesse the timing here and there.
Was a lot of the collaboration done over a long distance? How did that work?
It actually worked quite well. At the time, C.O.R.E. Digital had a tool set where you could control a QuickTime from one side and it would play back in sync on the other side of the connection, and you could draw on the picture with a mouse or a Wacom tablet. If he wanted to add accentuation on the eyebrows, he could draw lines on the picture and scrub back and forth with us while we were talking. We did get to spend time with Vincenzo and get subjective notes before we started animation, and he came by the studio a couple of times while we were in production to get more of a hands-on experience with us. He was one of the directors who really wanted to make sure all the pieces of his film matched up with what he had in his head. It was nice that he took that responsibility. He was one of the best directors we had at communicating: he has this vision in his head and he knows he needs to talk to all the people involved to get it right. And there are certain things he just didn’t want to delegate because he knew, “This is what I want.”