Behind the Scenes with the View-D Process and Clash of the Titans

In January, when word got out that Warner Bros. had made a last-minute decision to convert its upcoming spring tentpole, Clash of the Titans, into 3D stereo – pushing the release back one measly week in the process – eyebrows were raised across the Hollywood post community. The studio was, essentially, turning on a dime, giving one of its biggest releases of the year a complete digital overhaul in less than 10 weeks. To complete the work on that kind of schedule, the studio turned to Prime Focus, which had already been building a 3D-conversion pipeline with an eye toward feature-film work, though it hadn't yet ramped up to complete a full project. F&V talked to Sean Konrad, the designer of the Prime Focus 3D projection pipeline, about the hardware and software involved.
FILM & VIDEO: It’s been widely reported that there wasn’t a lot of time available to do this job, so I assume you had to build a very efficient, bullet-proof system.
Sean Konrad: We spent a lot of time before we started Clash of the Titans researching and planning how we would expand the studio to take on this amount of work. We were fortunate that we had a lot of infrastructure in place in the former Post Logic building. It was a matter of organizing the technology and leveraging what we had.
When was that ramping-up process going on?
We started back in October. We had started doing 3D conversion tests for a lot of films. The idea was to go to studios and ask to test movies and footage, and we had great success with some of those tests. We wanted to expand this into a division that could handle an entire feature film. At the time, there was no conversion facility that could do an entire picture on its own. Alice in Wonderland was spread across six different facilities.

And when did the process begin of actually building it up for feature-length stereoscopic 3D work?
Before I came on board, I had been traveling through Asia for a year. [Prime Focus] had been using this technology successfully on another project, and they wanted me to start building the idea of doing an entire film project through the technology. We had to get our playback infrastructure up. I spent the first few weeks trying to get that sorted and evaluating products on the market. Mostly, it was a matter of sitting down with people in the facility and talking about what kind of volume we would expect and what the requirements would be from an artistic standpoint, and then coming up with solutions that would work for an overall pipeline.

Talk a little bit about the separate pieces of the pipeline you had to address – storage and bandwidth, playback and review, etc.
We needed to expand our switch capability and server load capability. So it was a matter of spec’ing out simple things, like more servers and more overall network capacity. In terms of our screening rooms, we wanted to make sure we could really quickly display images that were being worked on at the time. If we were doing a project in 10 weeks or 12 weeks, which is what we originally spec’d out, how much would we have to be reviewing in a day to make that effective? We put that number at somewhere around 10 to 15 minutes of footage a day, on average. We had a lot of systems in place – Baselights, Smokes, Nucoda. With all those solutions, we found that getting the data into the system took a long time and was cumbersome. The way our network was set up, it wasn’t ever going to be a fast process where we could push a button and walk down to our Baselight or Smoke and see it playing back within two minutes.
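
To put that review target in perspective, here's a rough back-of-the-envelope calculation of what 10 to 15 minutes of 2K stereo material amounts to in a day. The frame format and size are assumptions for illustration (10-bit 2K DPX at 24 fps), not figures from the show.

```python
# Rough daily review volume, assuming ~12.5 MB per 2K 10-bit DPX frame
# (2048x1556) at 24 fps, two eyes. These format assumptions are illustrative;
# the article doesn't specify what was actually being played back.
FRAME_MB = 12.5
FPS = 24
EYES = 2

def daily_review_gb(minutes_per_day: float) -> float:
    frames = minutes_per_day * 60 * FPS * EYES
    return frames * FRAME_MB / 1024  # MB -> GB

for minutes in (10, 15, 25):
    print(f"{minutes} min/day -> ~{daily_review_gb(minutes):.0f} GB of frames")
```

Even the low end works out to several hundred gigabytes a day headed for the screening rooms.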

We evaluated a lot of Windows desktop conform applications like Scratch and Nucoda, and ultimately we ended up settling on the IRIDAS FrameCycler DI. They’ve been developing their stereo pipeline pretty much since Journey to the Center of the Earth 3D, so it has a lot of really useful things inside it, like on-the-fly convergence adjustments and automated stereo pair finding. It enabled us to keep our conform solution living on our network and access our SDI infrastructure, which was a holdover from the Post Logic building. We access all our video projectors through an SDI router, so we needed technology that would output to an SDI interface, and FrameCycler was one of the first desktop applications to do that. And we needed something that would live on our Windows network and play with that happily. That all fit together really nicely.
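
The on-the-fly convergence adjustment Konrad mentions is, at its simplest, a horizontal image translation: sliding one eye's image left or right moves the whole scene toward or away from the screen plane. A toy sketch of that general idea (not how FrameCycler implements it):

```python
# Toy convergence adjustment via horizontal image translation (HIT): shifting
# the right-eye image to the right adds positive parallax, pushing the scene
# back behind the screen; shifting it left pulls the scene forward. This is a
# sketch of the general technique, not IRIDAS's implementation.
import numpy as np

def adjust_convergence(left: np.ndarray, right: np.ndarray, shift_px: int):
    shifted = np.roll(right, shift_px, axis=1)
    if shift_px > 0:
        shifted[:, :shift_px] = 0.0      # blank the columns that wrapped around
    elif shift_px < 0:
        shifted[:, shift_px:] = 0.0
    return left, shifted

left = np.random.rand(270, 480, 3)       # stand-in left/right eye frames
right = np.random.rand(270, 480, 3)
_, right_pushed_back = adjust_convergence(left, right, shift_px=6)
```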

Also, we needed a versatile storage solution for local playback. We went with Fusion-io, which is essentially RAM-based storage [specifically, it uses NAND Flash memory -Ed.]. We had to play 2K stereo, and if you’re doing a client review session you want to get as much material into that session as is humanly possible. We would be copying data to our local storage array while we were playing back from it. We would be doing copies of eight simultaneous shots, potentially, going into the device. It’s faster, and it’s really low maintenance. With RAID, there’s still a chance the disk will fail or a controller will go down and you have to spend two hours with that device offline. The Fusion-io drives don’t really go down. They just function. And you can write data to them really chaotically. You don’t have to write frames contiguously. You can be reading and writing at the same time, and writing eight frames simultaneously. By doing that, you get better performance out of your network connection. It was really fantastic to have that kind of fast storage that we could do a lot with. Each machine had a Fusion-io 640 GB card, and we added a second one to some of the machines to play back two reels simultaneously on the same machine.
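
The point of that kind of storage is that ingest and playback never have to take turns. The sketch below is purely illustrative of the pattern: several copy threads filling a local cache while a playback thread reads from it concurrently. The directory layout, shot names, and thread counts are invented for the example.

```python
# Illustrative sketch only: several "ingest" threads copy shots into a local
# playback cache while a "playback" thread reads already-cached frames at the
# same time. Everything here is fabricated so the example runs on its own.
import shutil
import tempfile
import threading
from pathlib import Path

root = Path(tempfile.mkdtemp())
network = root / "network_shots"     # stands in for the file servers
cache = root / "playback_cache"      # stands in for the local Fusion-io card

# Fabricate eight shots of a few "frames" each so the example actually runs.
for n in range(8):
    shot = network / f"sh{n:03d}"
    shot.mkdir(parents=True)
    for f in range(5):
        (shot / f"frame.{f:04d}.dpx").write_bytes(b"\0" * 1024)

def ingest(shot_dir: Path) -> None:
    """Copy one shot's frames from 'network' storage into the local cache."""
    dest = cache / shot_dir.name
    dest.mkdir(parents=True, exist_ok=True)
    for frame in sorted(shot_dir.glob("*.dpx")):
        shutil.copy2(frame, dest / frame.name)

def play(shot_name: str) -> None:
    """Stand-in for playback: read whatever frames are cached so far."""
    for frame in sorted((cache / shot_name).glob("*.dpx")):
        _ = frame.read_bytes()       # a real player would decode and display

# Eight simultaneous shot copies (matching the example in the interview),
# with playback running concurrently rather than waiting for ingest to finish.
threads = [threading.Thread(target=ingest, args=(network / f"sh{n:03d}",)) for n in range(8)]
threads.append(threading.Thread(target=play, args=("sh000",)))
for t in threads:
    t.start()
for t in threads:
    t.join()
```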

Were you using off-the-shelf storage components?
Yes, to an extent. Basically, we run Sun Fire X4540 storage servers and string them together with DFS, Microsoft's Distributed File System, an older technology that lets multiple servers sit behind the same alias on the network. It allows us to load-balance manually while keeping the namespace seamless. We try to allocate certain parts of our network to be low traffic with a small pipe in and out. The idea we had was to try and emulate the Baselight cloud infrastructure by creating a separate area for editorial to grab from. Each of our FrameCycler machines would be pulling from one server, and all of our 40 artists would be pulling from another server somewhere else.
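
One way to picture that manual balancing behind a single namespace is a simple mapping from traffic class to physical server. The server names and share paths here are hypothetical, not the studio's actual layout.

```python
# Toy illustration of manually balanced storage behind one logical alias,
# in the spirit of a DFS namespace. All server names and paths are made up.
PHYSICAL_TARGETS = {
    "playback":  r"\\x4540-01\projects",   # conform/review machines pull from here
    "artists":   r"\\x4540-02\projects",   # the large pool of artists pulls from here
    "editorial": r"\\x4540-03\projects",   # deliberately low-traffic, small pipe
}

def resolve(logical_path: str, traffic_class: str) -> str:
    """Map a path under the single logical alias to a physical server share."""
    return logical_path.replace(r"\\studio\projects", PHYSICAL_TARGETS[traffic_class], 1)

print(resolve(r"\\studio\projects\clash\reel_03\sh0420", "playback"))
# -> \\x4540-01\projects\clash\reel_03\sh0420
```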

Was the work spread out across multiple facilities?
We spread a lot of the roto and selection stages of our pipeline across Vancouver and Winnipeg [in Canada] as well as Mumbai, Hyderabad, and Goa [in India]. A lot of the offices have very similar overall server and switch architecture. India functions slightly differently, but the principles are the same in terms of network structure, directory concepts and naming conventions. It's easy for us to share work. Winnipeg and Vancouver are behind the same VPN, so we can do things easily like transfer data directly to machines through a VPN tunnel. Artwork and editorial work were happening entirely in L.A.

You must have had a massive number of roto artists working on this.
The total count was over 100 artists.

How were you doing review and projection? Was that taking place in L.A.?
All of the review happened in L.A. We had one review in London with Nick Davis, the VFX supervisor for the show. We were using Barco 2K digital projectors in three theaters in our facility, fed over SDI from FrameCycler via the NVIDIA Quadro SDI daughterboard configuration. I like it a lot. It's one of those things that's pretty low maintenance once you get the basic framework there. We started with two workstations doing editorial conform for the show. We quickly realized that wasn't going to be enough and added another. In the final weeks we decided that we needed to double the capacity. We found that we could do simultaneous review and conform work on six separate workstations in the facility. We ended up having one machine per reel to conform the final movie.

What about the pipeline for handling the actual 3D conversion?
That was a proprietary process that included the use of [Eyeon] Fusion, [The Foundry] Nuke, [Autodesk] Flame and [Adobe] After Effects. A lot of structuring this kind of work involved figuring out the standard set of problems. Hair is obviously a big challenge when you’re trying to do a stereo pipeline. Part of the job for myself and Tim Christensen, the View-D supervisor, was to take a selection of those problems and start creating solutions for them in a standard library.
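
View-D itself is proprietary, but the general principle behind most 2D-to-3D conversion is depth-based view synthesis: build a depth map for the plate, displace pixels horizontally by a disparity derived from it to synthesize the second eye, and fill the occlusion holes that open up (edges like hair are exactly where this gets hard). A bare-bones sketch of that general idea, not Prime Focus's method:

```python
# Bare-bones depth-based view synthesis: shift each pixel horizontally by a
# disparity derived from a depth map, then crudely fill the holes that open
# behind foreground objects. General 2D-to-3D principle only; not View-D.
import numpy as np

def synthesize_eye(image: np.ndarray, depth: np.ndarray, max_disparity: int = 12) -> np.ndarray:
    """image: (H, W, 3) plate; depth: (H, W) in [0, 1], where 1.0 is nearest."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    best = np.full((h, w), -1, dtype=int)       # disparity of the pixel that "won" each target
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]            # nearer pixels shift further
            if 0 <= nx < w and disparity[y, x] > best[y, nx]:
                out[y, nx] = image[y, x]
                best[y, nx] = disparity[y, x]
    for y in range(h):                          # crude occlusion fill from the left
        for x in range(1, w):
            if best[y, x] < 0:
                out[y, x] = out[y, x - 1]
    return out

plate = np.random.rand(270, 480, 3)             # stand-in for a real plate
depth = np.tile(np.linspace(0.0, 1.0, 480), (270, 1))  # stand-in depth map
right_eye = synthesize_eye(plate, depth)
```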

Was there a conceptual key to the design of the overall technology pipeline?
It centered on the idea that review is the most important part of stereoscopy. It's really easy to spend a lot of time sitting at the desktop and looking at the problem and not understanding what the problem is. Most artists working in visual effects are new to stereoscopy. The most experienced of us have only been doing it for four or five years. Getting it in front of people who've been doing it for 10 years as a professional trade is very important. It allows us to stereo-direct from that overseer's perspective. Obviously, in a timeframe like we had, getting the sheer amount of data through the pipeline every day was very important, and playback infrastructure played a huge role in that.

What about the schedule you were working on? Would you expect future jobs to be done on a similar schedule, or do you need more time built in?
More time would have been great. Toward the last week of the show, we were pushing 20 or 25 minutes of footage through our pipeline a day. When you think about that, an extra week would make a huge difference in terms of the overall quality. Ideally, we would have longer timeframes in the future. The way production editorial functions, they are cutting the movie up until two months before release, even if it is a big VFX show. That’s one of the things that’s been a great challenge for stereo conversion as an industry. We don’t want to interrupt the director’s filmmaking technique too much. Part of the reason a director might choose a post process for conversion is to use the tools they’re used to working with on set, in editorial, and everywhere else. If we disrupt that chain too much, we won’t be able to satisfy that condition, and the director would probably think twice about doing a conversion.

So, is [the schedule] ideal? No, not at all. But is it something we expect that we’ll have to do again? Absolutely.

And one of the keys is staying out of the filmmakers’ way.
And getting as much footage in front of them as humanly possible. We had some goals we're still pursuing: potentially having directors sit in the room with you and direct the stereo space with an art lead. We had that semi-functional on Clash, but we want that to be a regular part of the service in the future.

So for parts of Clash, was Louis Leterrier present for the stereography in the same way he might be present for, say, color grading?
Yeah. He was for a couple of sessions. Leterrier mentioned one day in an interview that he really appreciated that aspect. We had one scene where the typical stereo direction would be to make it deeper and add a really strong sense of dimensionality to the scene. It was a big, wide vista shot, and when we did that it ended up looking like the characters were miniaturized. We wanted to show him what would happen if we miniaturized it, or what would happen if we went more subtle with the depth. In the end it looked drastically better if we went subtle in that one particular shot. It’s all about choosing appropriate moments [for depth effects] and dialing it in so that the stereographers and the client are happy.
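
The miniaturization Konrad describes is the familiar "puppet-theater" side effect of over-deep stereo: the larger the effective interaxial separation relative to the human interocular distance, the smaller the scene reads. A rough rule-of-thumb illustration, with generic numbers rather than values from the show:

```python
# Rule-of-thumb illustration of the miniaturization effect: an effective
# interaxial separation larger than the human interocular distance makes the
# scene read as if scaled down by roughly the ratio of the two. Numbers are
# generic examples, not values from Clash of the Titans.
HUMAN_INTEROCULAR_MM = 65.0

def apparent_scale(interaxial_mm: float) -> float:
    """Approximate factor by which the scene appears scaled (1.0 = life-size)."""
    return HUMAN_INTEROCULAR_MM / interaxial_mm

for b in (65.0, 130.0, 325.0):   # native, 2x, and 5x interaxial
    print(f"interaxial {b:>5.0f} mm -> scene reads ~{apparent_scale(b):.2f}x life-size")
# Pushing a big vista "deeper" (a larger effective interaxial) is exactly what
# makes characters look like miniatures; a subtler depth grade avoids it.
```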

You can get sign-off on those decisions live.
Right. And with an artist in the room. It’s not quite the same level of interactivity you’d get with a colorist, but it’s something we’re working on.

Well, it’s still the primitive days for this process.
Engineering the pipeline, we really tried to view it more as a DI process than a VFX process. I think it helped us with conceiving the schedule. We were able to go to the people on our post side and say, “OK – you get a movie in and you’re doing color-correction and roto for large amounts of scenes throughout this movie. How do you manage that schedule?” Their feedback was instrumental. The way they organize footage, the way they conform, the way they leverage EDL technology, the way they get clients in front of the material as fast as possible – those were core concepts we wanted to incorporate. Our process is a bit more passive than that, but we wanted to have the core ideas there.
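
EDLs are one concrete piece of that DI-style organization: an event list recording which source reel and timecode range lands where in the conformed reel. A minimal sketch of reading CMX3600-style event lines (the sample data is invented):

```python
# Minimal sketch of reading event lines from a CMX3600-style EDL, the kind of
# cut list a conform process leverages. The sample events are made up.
import re

EVENT_RE = re.compile(
    r"^(?P<num>\d{3,4})\s+(?P<reel>\S+)\s+(?P<track>\S+)\s+(?P<trans>\S+)\s+"
    r"(?P<src_in>[\d:]+)\s+(?P<src_out>[\d:]+)\s+(?P<rec_in>[\d:]+)\s+(?P<rec_out>[\d:]+)"
)

SAMPLE_EDL = """\
TITLE: EXAMPLE_REEL_CONFORM
001  A012C004 V     C        01:02:03:10 01:02:05:22 00:59:30:00 00:59:32:12
002  A007C011 V     C        02:14:40:02 02:14:44:16 00:59:32:12 00:59:36:26
"""

for line in SAMPLE_EDL.splitlines():
    m = EVENT_RE.match(line)
    if m:
        e = m.groupdict()
        print(f"event {e['num']}: reel {e['reel']} "
              f"{e['src_in']}-{e['src_out']} -> record {e['rec_in']}-{e['rec_out']}")
```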