
Thinking Deep for Meet the Robinsons

Digitally Reinventing the 3D Landscape

When Walt Disney Pictures released its animated feature Chicken Little in 2005, 84 theaters could project the stereo 3D version. Now, more than 600 theaters can show Disney’s latest animated feature, Meet the Robinsons, in stereo 3D. Phil “Captain 3D” McNally was stereographer for Chicken Little at Industrial Light & Magic, and stereoscopic supervisor for Disney’s remake of Tim Burton’s Nightmare Before Christmas in stereo 3D. At Disney, his early involvement with Meet the Robinsons allowed him to push the state of the stereo-3D art into new dimensions. We talked with him shortly before he left Disney for DreamWorks Animation, where he will be stereographic supervisor on that studio’s first stereoscopic 3D movie, Monsters vs. Aliens.
F&V: When did you decide which shots in Meet the Robinsons would be stereo 3D?

PHIL MCNALLY: In layout. Traditionally, we consider only the x and the y of the shot. This is the first time we had a chance to look at the depth across the story arc of the movie as well. We looked at the whole movie with [director] Steve Anderson and identified which parts lent themselves to stereo 3D. We had time to think about the whole movie from the depth point of view.

How did you choose the shots?

We wanted to control [stereo 3D] in a way that supports the story. For example, during a big chase sequence, when the characters are flying over Future City, we wanted the buildings to feel like they’re 3000 feet high. I made a printout for the movie with red for excitement zones [in the story], and green for comfort zones. The audience works harder to see the shots with more depth, so those shots are more stressful to watch. It’s the same as in a mono movie when a bomb goes off. That’s uncomfortable for the audience, but it serves the story. But before the audience feels any stress, we get back out of it. It’s like fast cutting.

Will the mono version still have some of that same excitement?

PM: You still understand the depth in mono. But the benefit in stereo is that you feel the depth. There are two kinds of depth. Imagine a camera looking down a long highway in the desert. If you take a still photo of that, everyone can understand there’s depth, mainly from the perspective. They see the road disappear; they don’t think there’s a flat triangle in the center of the picture. So, of course, you see depth in a mono film. But the way you experience depth in real life, with two eyes open, is that your brain takes the images from your left eye and right eye, those two camera angles, and mixes them to form the world. That’s very different from looking at a flat image with perspective and shading and haze. So, yes, you understand the depth of the shot in mono, but when you see stereo, you feel the depth. You don’t have to imagine it.

Were the stereo 3D scenes modeled any differently?

No.

How do you control depth?

Depth is controlled by the placement of the cameras in the software. We have a left and a right camera. The separation between them, the interocular distance, sets the amount of depth in the scene. Imagine you have a shot of a ball floating in space. By separating the cameras by different amounts, increasing or decreasing the interocular distance, the ball becomes rounder or wider, or ends up being stretched like an American football.

Separate from that, the ZPS [zero parallax setting] controls whether the ball is in front of or behind the screen, but it doesn’t change the depth. The point of zero parallax is where something lines up perfectly on the screen, where the two cameras’ sight lines cross. Some objects appear in front of that point and some behind. So camera separation changes depth, and positioning the zero parallax determines whether characters are in front of or behind the screen. We change the depth and then position the zero parallax.
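The two controls McNally describes can be sketched as a single formula. In a textbook parallel-camera stereo model (this is an illustrative model, not Disney’s actual pipeline; the function name and numbers are mine), the on-screen parallax of a point depends on the camera separation and on where the zero-parallax plane is set:

```python
def screen_parallax(interocular: float, zps_distance: float, point_depth: float) -> float:
    """Horizontal screen parallax of a point, in scene units at the screen plane.

    Parallel stereo cameras separated by `interocular`, with the images
    shifted so the zero-parallax plane sits at `zps_distance`. Parallax is
    zero on that plane, positive behind it (behind the screen), and
    negative in front of it (out toward the audience).
    """
    return interocular * (1.0 - zps_distance / point_depth)

# A point on the zero-parallax plane fuses exactly on the screen:
assert screen_parallax(0.065, 10.0, 10.0) == 0.0

# Doubling the camera separation doubles every parallax value, which is
# McNally's "more depth" control: the whole scene stretches.
near = screen_parallax(0.065, 10.0, 5.0)
assert abs(screen_parallax(0.130, 10.0, 5.0) - 2 * near) < 1e-12
assert near < 0  # closer than the ZPS plane -> in front of the screen
```

Note how the two parameters stay independent, as in the interview: changing `interocular` scales all parallax (depth), while changing `zps_distance` only shifts which points land in front of or behind the screen.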

Did you implement anything new for Meet the Robinsons?

Normally the black frame of the screen is a neutral mask; you decide whether you want it in front or behind. The Spottiswoode brothers [Raymond and Nigel, early 3D gurus and authors of the pioneering 1953 text Theory of Stereoscopic Transmission] optically put a frame into the image itself that was separate from the hard masking, and that allowed them to make it look like the screen was in different places. We used that technique to have the frame itself be part of the depth. It’s the first floating stereoscopic window that I know of in a feature film. The frame can be in front, behind, angled, or tilted. We controlled the depth, then the placement of the objects in front or behind, and then the frame.

Why do the stereoscopic images have a black frame?

When you look at a stereoscopic film, you see a picture on the screen, and a black edging. That frame becomes important because you’re no longer looking at a flat image; you’re looking at space through a hole. That’s the stereoscopic window. Normally the frame is at the edge of the picture, where the picture stops. But we’re not looking at a picture; we’re looking at space. So we separated that frame from the image. Imagine a shot of distant hills. If we move the frame closer to us, the hills look deeper.

How did you use this separation of the frame from the image in Meet the Robinsons?

We have a shot where Bowler Hat Guy walks through some doors with a box. He walks from far away up to the camera and skims past the camera. Technically, that’s hard to do in stereo because the character travels a long distance and comes close to camera. To make it feel like he’s moving towards us, we move the frame from the starting point closer to us. The audience won’t see the window move, they’ll see the character move toward us. We’re using the frame to subconsciously change what the audience sees.

In the majority of shots, we position the frame so it doesn’t clash with anything at the edges. But when we got to a sequence where dinosaurs break through glass and start throwing things around, we wanted the camera to shake. We used the [moving window frame] technique to deliberately introduce glitches that make the sequence more exciting, as if the camera can’t keep up. We also used the window to reinforce the stereoscopic space. When the dinosaur starts chasing people around the buildings, we literally break the rules of stereo windows. We let the floating window fall apart: everything looks like it’s at a safe distance behind the frame, and then the dinosaur breaks through the frame. That gives the audience the subconscious feeling that the dino is able to get out into theater space. By removing the window, we make you feel like the characters are out of control and you don’t know what’s happening next.

The best bit is that no one sees the window at all. Then they take their glasses off and they can see this thing moving around. It’s so obvious. But when they put their glasses back on, it disappears. What you feel is the point of the story coming across stronger.

Did you create the stereo version at Disney?

A team of about 10 of us controlled the creative work of setting up the stereoscopic cameras. We approved the OpenGL gray renders. When the director approved the depth, we sent the camera data to Digital Domain. We had Digital Domain make the color version of the right-eye image. They contributed to the look, though. They weren’t just a render farm.

What did Digital Domain do?

They took the RIB [RenderMan Interface Bytestream] files and re-rendered true left and right eyes for us. The left eye, generally, was created for the mono movie. Then they needed to match that for the right eye and apply the floating window – the mask on top of the left and right eyes. You could think of them as putting the shot back together for rendering and then, once rendered, taking the Shake comps and recreating the final image for the right eye and the floating window.

Why do you think stereo 3D is becoming more popular?

A lot of people still think of [stereo 3D] as red/green glasses, but if you go back far enough, films were projected in full color and viewed with polarized lenses. The old dual-projection systems had some problems, though: the left and right eyes ran out of sync, they were misaligned, or the bulbs in the projectors had different intensities. All those problems are gone now with the Real D single-projection system. The projector alternates the left-eye and right-eye images so fast, at 144 Hz, that we don’t see them sequentially. They triple-flash each image: six flashes per frame. Screen technology has also improved, and theaters are longer and thinner, which works to our advantage as well.
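The 144 Hz figure follows directly from the frame rate, and is worth a quick sanity check (a trivial arithmetic sketch; the variable names are mine, not from any projection spec):

```python
FPS = 24      # film frames per second
EYES = 2      # a left image and a right image per film frame
FLASHES = 3   # "triple flash": each eye's image is shown three times

# Total screen refresh rate the single projector must sustain:
refresh_hz = FPS * EYES * FLASHES
assert refresh_hz == 144

# And the "six flashes per frame" McNally mentions:
flashes_per_frame = EYES * FLASHES
assert flashes_per_frame == 6
```

Triple-flashing raises the per-eye flicker rate from 24 Hz, which would be visibly strobing, to 72 Hz per eye, which is why the alternating images read as simultaneous.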

What do the glasses look like?

They’re designed to fit over existing glasses, so they’re slightly scaled up. The glasses will make you look like Roy Orbison.
