Why "Fake 3D" Should be Better Than the Real Thing

Legend 3D started as a colorization company, doing work for most of the major Hollywood studios starting in the late 1980s and then, with updated processes for the digital era, into the 2000s. Five years ago, Barry Sandrew, Ph.D., and partner Greg Passmore began anticipating an explosion of interest in stereo 3D, largely tied to the release of Avatar. They began adapting their colorization pipeline – which has handled as many as 135 titles over the course of seven years – to handle 3D conversions of 2D footage. You can see Legend’s 3D-conversion work in Alice in Wonderland, as well as in made-for-cinema spots for Hewlett-Packard, New Balance, and Fanta. F&V asked Sandrew about his scientific background, his conversion process, and the backlash against “fake 3D.”
Film & Video: What was in your background that led you to colorization and then 2D-to-3D conversion?

Barry Sandrew: My first career was as a neuroscientist at Harvard. I had my doctorate in neuroscience. I did a post-doc at Columbia College of Physicians and Surgeons and went on to Harvard. I was there doing basic research for six or seven years. I was also doing a lot of work in medical imaging. Those were the early days of positron emission tomography, CT, and that sort of thing. There was a need to improve the diagnostic capabilities of these instruments. The latest thing was going to be digital X-rays. I applied color to three-dimensional reconstructions of the brain using those techniques and that equipment.

Out of the blue, some entrepreneurs approached me. They were directed to me because of my work in image processing and color. They had made three failed R&D attempts to invent colorization. They understood that analog colorization was inferior. They wanted to invent a way of doing it digitally that would be superior and more accepted by the consumer. Besides that, anyone who colorizes a B&W movie gets a copyright on that colorized version for 95 years. So they saw themselves as potential movie moguls. It didn’t quite work out that way. But they asked me how I would colorize feature films digitally. As an academic, I wasn’t interested in compensation. It was just an interesting question. They came back and asked if I would implement it. I didn’t want to leave Harvard, but they made me an offer I couldn’t refuse – four times the salary I was making at Harvard and a lot of stock in the company. I had never been in business before, and it sounded very attractive. Even then I didn’t quit outright; I took a leave of absence from Harvard, and in five months I had the process up and working. Within six months Republic Pictures gave us The Bells of St. Mary’s as our first feature film. We finished that in November of 1987. It was a big hit and I never went back to academia.

Talk about your responsibilities on Alice in Wonderland.

We were contacted late in the game by Sony Imageworks. They had a large part of the film that they weren’t able to complete on time, and most of it was the hardest material in the film. They gave us 640 of those shots – about a third of the shots in the entire film. We produced them using our process. They did the rest, but we handled some of the more difficult shots – the castle shots, the tea party shots. We’re scattered throughout the film. We did entire scenes.

Much of this was green-screen work. Were you working with flat, live-action plates from a green-screen shoot?

Some of the more difficult shots were practical, not green screen. When Alice falls into the hole and goes into the round room with all the doors, a lot of that was practical. Her hair and the walls were very similar in color and texture. It was extremely difficult to separate the hair from the walls and put that in dimension. We were able to do it, and it looked pretty flawless.

For other shots, we had some elements but not a lot of elements. They gave us clean backplates when they were available, but mostly we took composited frames and converted those. We work best when the shot is finished so everything fits perfectly, as opposed to dimensionalizing it first and compositing it later.

It seemed to me that it might be easier for you if you had separate elements for a shot, so that you would have more freedom placing them in 3D space.

No, not at all. When we segment a frame or a shot, those segments can be placed anywhere. Everything blends perfectly, everything matches in depth perfectly. We’re treating it as one thing as opposed to a lot of different things we’re putting together. The clean backplates help when there’s extreme 3D, so we have some gap-filling to do – data doesn’t exist when you’re looking around somebody, so we have to put a background in there. For the most part, our system handles gaps algorithmically, but sometimes we need help and clean backplates are ideal.
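[Ed. note: Legend’s tools are proprietary, but the gap problem Sandrew describes is generic to any conversion pipeline: shift a foreground element for the new eye and you expose background the camera never photographed. The Python/NumPy sketch below shows one simple algorithmic fallback – stretching the nearest valid pixel across each scanline hole. The function and fill strategy are illustrative assumptions, not Legend’s method; a clean backplate would replace this guesswork entirely, with the exposed region composited straight from the plate.]

    import numpy as np

    def fill_disocclusions(view, hole_mask):
        """Fill disoccluded pixels -- holes exposed when a segment is
        shifted for the new eye -- by stretching the nearest valid pixel
        across each scanline. `view` is an (H, W, 3) image; `hole_mask`
        is an (H, W) boolean array marking pixels with no source data."""
        filled = view.copy()
        remaining = hole_mask.copy()
        height, width = hole_mask.shape
        for y in range(height):
            last = None
            for x in range(width):                # left-to-right fill
                if remaining[y, x]:
                    if last is not None:
                        filled[y, x] = last
                        remaining[y, x] = False
                else:
                    last = filled[y, x].copy()
            last = None
            for x in range(width - 1, -1, -1):    # catch holes at the left edge
                if remaining[y, x]:
                    if last is not None:
                        filled[y, x] = last
                        remaining[y, x] = False
                else:
                    last = filled[y, x].copy()
        return filled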

What’s the actual process like? Are you going in and projecting textures on 3D geometry?

No. That’s the way IMAX and a lot of the others do it. The problem is you can’t get the detail that’s necessary for high-quality conversion. We’ve got some samples of Iron Man we did during the suit-up process. There’s so much detail in that outfit that you can’t possibly get it by producing primitives and projecting on primitives. We take apart everything in the shot and mold it within the 3D space without primitives, without any 3D modeling at all. It’s difficult to explain, except that it’s a more natural look and it provides us with an unlimited amount of detail. A lot of shots we produce have as many as 1,000 to 1,500 different depth levels.
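[Ed. note: Sandrew doesn’t disclose how Legend “molds” pixels in depth, but the general idea of dimensionalizing without geometry can be illustrated with depth-image-based rendering: every pixel carries its own continuous depth value and is displaced horizontally by a disparity derived from it, with no meshes or projected primitives anywhere. The sketch below is a toy forward-warp under those assumptions, not Legend’s process; its disocclusion mask would feed a gap-filler like the one sketched earlier, and a real pipeline would resample at subpixel precision rather than rounding.]

    import numpy as np

    def render_eye(image, depth, max_disparity=12.0, eye=+1):
        """Toy forward-warp: build one eye of a stereo pair from a 2D
        frame plus a continuous per-pixel depth map (0.0 = far, 1.0 =
        near). Every pixel gets its own horizontal shift, so depth varies
        smoothly rather than falling on a handful of flat planes. Returns
        the warped view and a mask of disoccluded pixels for gap filling."""
        h, w = depth.shape
        out = np.zeros_like(image)
        covered = np.zeros((h, w), dtype=bool)
        # nearer pixels shift farther; the sign flips between the two eyes
        disparity = np.round(eye * max_disparity * depth).astype(int)
        # scatter far-to-near so near pixels are written last and occlude
        order = np.argsort(depth, axis=None)
        ys, xs = np.unravel_index(order, depth.shape)
        xt = xs + disparity[ys, xs]
        ok = (xt >= 0) & (xt < w)
        out[ys[ok], xt[ok]] = image[ys[ok], xs[ok]]
        covered[ys[ok], xt[ok]] = True
        return out, ~covered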

That’s all in the service of getting a convincing amount of depth information – instead of the diorama look, you feel like you’re seeing into a real world?

Right. People talk about levels of depth. We have a continuous level of depth. There are no foregrounds, midgrounds, and backgrounds unless it’s been specifically designed that way. Everything is continuous.

Can this be automated to some degree, or are we talking about artists going in and identifying all of this depth information? It just sounds like a monstrous amount of work the way you describe it.

It is a monstrous amount of work. But our process is extremely intuitive, so we can take people who have had little exposure to this – maybe a little bit of visual effects training from an art institute – and teach them to follow the instructions in a relatively short period of time. It takes a couple of weeks to become proficient. We’ve made it simpler, but it’s a very creative process. We have 300 people colorizing or dimensionalizing. I tell people we can’t really automate that. 300 people is 300 nodes on a parallel processor. Each node is doing something unique, and each node is doing calculations that a supercomputer can’t do. They look at a frame and they recognize it in context. They know what went before and what’s going to happen in the future. They can work much faster than a computer ever could. I try to find a balance between humans and machines.

Are proprietary tools part of this pipeline?

Everything is proprietary. We built it from the ground up. There’s not one off-the-shelf piece of code in it.

Do some of the same techniques and principles from colorization apply here?

Not really. Converting 2D into 3D is much more difficult than colorization. It’s more involved, more advanced, and more detailed. With colorization there are cheats you can do that actually improve the look but cut down on the work you have to do. For 3D conversion, there are no cheats. You can’t hide anything. You have to dimensionalize everything and mask everything at a level of detail that goes way beyond colorization.

But the pipeline is exactly the same: the way we take apart a movie, the way we create a database of everything in the feature film. The keyframing is done the same way, although the storyboarding is much more involved. The storyboarding goes into a depth script for the entire feature film. It’s like the music in a feature film – there have to be highs and lows. And then, when it’s all done, putting it back together again and making it look better than when you got it. Those are all the same processes.

Do you have to adjust your work for different screen sizes?

For IMAX we had to do a seven-pixel offset on everything we completed, but that was just for IMAX. The depth we used for Alice looked great on a 52-inch 3D-ready television and it looked fantastic on the big screen. It was the same imagery.
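[Ed. note: The arithmetic behind a screen-size offset is straightforward, though the numbers below – a 2K-wide master, 40 ft and 72 ft screens, a 2.5 in interocular – are illustrative assumptions, not figures from the interview. A parallax measured in pixels becomes a physical distance on the screen, and once that distance exceeds the viewer’s interocular separation the eyes must diverge to fuse the image, which is why a fixed pixel parallax that is comfortable in a multiplex can need correcting for IMAX.]

    # Illustrative arithmetic only -- the 2K width, screen sizes, and
    # 2.5 in interocular are assumptions, not figures from the interview.
    def physical_parallax_in(pixels, screen_width_ft, h_res=2048):
        """Convert a parallax in pixels to inches on a given screen."""
        return pixels * (screen_width_ft * 12.0) / h_res

    INTEROCULAR_IN = 2.5
    for name, width_ft in [("40 ft multiplex screen", 40),
                           ("72 ft IMAX screen", 72)]:
        p = physical_parallax_in(7, width_ft)
        print(f"{name}: 7 px = {p:.2f} in "
              f"({p / INTEROCULAR_IN:.0%} of interocular)")
    # 40 ft multiplex screen: 7 px = 1.64 in (66% of interocular)
    # 72 ft IMAX screen: 7 px = 2.95 in (118% of interocular)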

What’s the cost? Do you charge by the minute, or by the shot?

We charge by the movie. Others charge by the minute, but when we approach a project, we approach that project in total. Legend 3D produces the entire feature film. We give the studios a single price and there are no overages. The price we give the studio is the price it’s going to be at the end. That separates us from a VFX studio, which typically charges by the shot or by the minute. We consider ourselves a creative studio that produces entire feature films.

As opposed to coming in late on Alice, you want to be involved with the shoot early in the process?

We prefer that. For new feature films we want to be there right at the beginning in pre-production so we can help advise the director how to shoot certain things. Most directors don’t feel comfortable in stereo. They feel more comfortable in 2D. They need some input regarding the best shots. That’s where we would like to be.

Do you think there are an adequate number of theatrical screens now, and how do you see that progressing?

The number of screens is increasing dramatically. But the consumer electronics industry is going to be the mover in 3D. All of the TV manufacturers already have their 3D televisions ready for market. The Blu-ray specification has already been updated to support 3D. Gaming systems are already 3D. I think that’s going to be the biggest mover – home entertainment. It will be down the line a little bit, but not that far down the line. You’re going to be seeing a lot of 3D televisions between summer and Christmas.

I think just about everything is going to be 3D theatrically, and the home market is going to explode. HDTVs have reached a saturation point, and this helps the industry with product-replacement cycles. One thing that will help expedite adoption of the new medium is that 3D HDTVs will be sold at approximately the same price as normal 2D HDTVs. Obviously, if people have 3D viewing capability in their homes they’ll eventually use it.

Where do you see your business moving in the near term?

We’re separating our company into three different studios: one handling commercials, one handling catalog titles, and one handling feature films. Commercials are going to be much more significant as we go forward. People will walk out of theaters talking as much about the Coke commercial they just saw as about the feature, because the spots will be so dynamic. They will all be Super Bowl quality or better. 3D commercials are going to have their own entertainment value. They could even be put onto Blu-rays as a special feature.

Can you talk about – or give hints about – any other Hollywood work you’re doing right now?

We’re doing three feature films for one of the major studios simultaneously. They will be finished August 1. That’s all I can say about it.

Are they new releases or catalog titles?

They are catalog titles.

What about the negative buzz over the 3D conversion process and the difference between “real 3D” and “fake 3D”?

This is a massive visual effect. That’s all it is. I’m not making light of it, but it’s very significant to look at it that way. When people start talking about “fake 3D,” I tell them that Robert Downey Jr. really wasn’t in that suit [in Iron Man], and maybe we should protest that. It’s the same sort of ridiculous argument.

When you look at 3D, either shot or converted, it’s the same thing to the audience. The audience is focusing on the actual screen but converging their eyes in front of or behind the screen. That’s an unnatural thing for a human being to do, but it happens no matter how the 3D is created. It’s all an illusion. There isn’t that much of a difference between conversion and shooting in 3D, except that we have more flexibility in 2D-to-3D conversion. In many cases conversion actually looks better, because the way 3D is shot is still somewhat rudimentary. There are not a lot of people who know how to do it right, and only a handful of studios have any expertise in shooting stereo. So I discount those comments about fake versus real. It’s all real.
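[Ed. note: The focus-versus-vergence point holds however the stereo pair was made, and it falls out of standard viewing geometry. In the usual similar-triangles model – the 120 in viewing distance and 2.5 in interocular below are assumed values, not from the interview – a point drawn with on-screen parallax p is perceived at depth Z = e * D / (e - p), while the eyes keep focusing on the screen itself.]

    def perceived_depth_in(parallax_in, view_dist_in=120.0, interocular_in=2.5):
        """Similar-triangles model of stereo vergence: a point drawn with
        on-screen parallax p appears at Z = e * D / (e - p), while the
        eyes keep accommodating on the screen plane at distance D. The
        geometry is identical whether the parallax came from a stereo
        rig or a 2D-to-3D conversion."""
        e, d = interocular_in, view_dist_in
        return e * d / (e - parallax_in)

    print(perceived_depth_in(0.0))   # 120.0 -> on the screen plane
    print(perceived_depth_in(-1.0))  # ~85.7 -> crossed parallax, in front
    print(perceived_depth_in(1.0))   # 200.0 -> uncrossed parallax, behind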