How a DP came to create a new mastering plug-in for Final Cut Pro, Motion and After Effects

Looking to get into 3D production but afraid of what it might cost you, both literally and on the clock? Stereo3D Toolbox, a new FxFactory mastering plug-in for Final Cut Pro, Motion and After Effects, aims to help users demux and demystify 3D formats and composite complex shots within the apps they already know. DP Tim Dashwood, founder of Dashwood Cinema Solutions and the tool’s creator, explains why he got into the plug-in business and how much of this stuff you can easily handle inside FCP, AE and Motion.

You’re a cinematographer by trade and not a programmer. Why did you want to create a 3D plug-in?

I come from the independent world, where a lot of the problem solving is just, ‘Do it yourself and find a solution.’ This was one of those situations. I had tried some other products out there. One in particular was a little too expensive for what I was looking to do at the moment. If the budgets were to come through, that would be fine. But I knew I needed something under $1,000, and definitely less than the price of Final Cut Pro itself. I thought, ‘There must be other people out there looking for something similar.’ The Final Cut Pro suite ships with no stereoscopic support yet. Avid has limited stereoscopic support, and higher-end systems, like the Quantel Pablo, have stereoscopic support. A lot of times those more expensive systems are out of reach of independent-sized budgets, from music video production down to hobbyists. Up until I designed this plug-in, everything had to be done manually as far as lining up your left eye and your right eye. It was just a whole lot of work. So I sat down one day and said, ‘You know what? There must be an easier way.’

You had some help from Noise Industries’ FxFactory architecture.
Yes, I was using the FxFactory suite of tools for other projects. For those who don’t know, FxFactory Pro lets the end user design plug-ins in a simpler way than a programmer would. Basically, you don’t write code, because you’re using Apple’s Quartz Composer, their node-based visual programming language.

What’s the R&D schedule been like?
At this point, we’ve released version 1, which launched in August. I started out with quite basic features and I’m working on the next version now. We’re adding ten new features that are going to really bring the plug-in up to a very professional level.

When will that release be?
When I finish it! Seriously, it’s being beta-tested right now on a feature film at one of the major 3D studios in the States (they have studios in New York and Burbank). They are actively testing the plug-in in production, so I’m doing revisions and consulting with them almost daily. The eventual goal is to add compatibility with other finishing products, such as Pablo or Nuke. We’re exploring those concepts at the moment. In fact, I just put the finishing touches on a muxing application that muxes your left and right eyes into a single QuickTime file you can work with inside the host. That way, you don’t have to do as much work inside the host; you can just prep all your files ahead of time, give them unique file names, and work with them within those hosts. We’re constantly improving and adding to it.
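Dashwood doesn’t describe how his muxing tool works internally. As a rough sketch of the general idea, assuming two separately recorded clips with matching resolution and frame rate (the file names here are hypothetical), ffmpeg’s hstack filter can pack left and right eyes into one side-by-side QuickTime movie:

```python
import subprocess

def mux_side_by_side(left_path: str, right_path: str, out_path: str) -> None:
    """Pack separately recorded left/right-eye clips into one
    side-by-side QuickTime movie. Assumes both inputs share the
    same resolution and frame rate."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", left_path,                       # left-eye source
            "-i", right_path,                      # right-eye source
            "-filter_complex", "hstack=inputs=2",  # butt the two eyes together horizontally
            "-c:v", "prores",                      # a QuickTime-friendly intermediate codec
            out_path,
        ],
        check=True,
    )

# Hypothetical file names for illustration.
mux_side_by_side("left_eye.mov", "right_eye.mov", "muxed_sbs.mov")
```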

How is your plug-in different from CineForm’s Neo3D?
I like the Neo3D concept and I’ve played with it quite a bit. Their system works outside of the constraints of Final Cut Pro, which is nice; it works off of QuickTime itself. The difference is that the CineForm tool has very basic convergence and disparity controls. I did test Neo3D, in its earliest version, and found that I needed more control in terms of keyframing, convergence values and so on. But what it really came down to was workflow. I thought, ‘Wouldn’t it be great to be able to do this from within Final Cut, or from within Motion or After Effects?’ Now, the huge advantage of Neo3D, because it works in the background, is that it can provide real-time processing of the left and right eye into various formats. My plug-in works with Neo3D quite well. So if you pair your clips in Neo3D and, say, set Neo3D for a side-by-side output, then from within any of the host applications (After Effects, Motion or Final Cut) you can just apply my plug-in to that footage after you’ve done your edit and do all of your fine-tuning. All you have to do is tell it that you’re providing an input from Neo3D that’s side-by-side, or over-under.
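To make that side-by-side/over-under hand-off concrete, here is a minimal illustrative sketch (not Dashwood’s or CineForm’s code) of how a filter might split a muxed frame back into its two eyes before fine-tuning:

```python
import numpy as np

def split_eyes(frame: np.ndarray, layout: str = "side-by-side"):
    """Split a muxed stereo frame (H, W, 3) back into left and right views."""
    h, w = frame.shape[:2]
    if layout == "side-by-side":
        return frame[:, : w // 2], frame[:, w // 2 :]  # left half, right half
    if layout == "over-under":
        return frame[: h // 2], frame[h // 2 :]        # top half, bottom half
    raise ValueError(f"unknown layout: {layout}")
```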

What’s the initial response been to Stereo3D Toolbox?
It’s been very popular. I get a lot of e-mails every day, with people saying how much fun they’re having with it and how great it is.

And what are they using it for?
It seems like we’re selling about 50-50 between hobbyists and professionals. The hobbyists are individuals who don’t even have production companies but have been shooting stereoscopic material for years. The other 50 percent mostly work for smaller production houses, and probably have corporate clients or are looking to get into 3D production. The plug-in is priced well enough, at $389, that they can try it and get their feet wet. It’s interesting: a lot of the support questions I’m getting aren’t really support questions about the plug-in; they tend to be, ‘How do I set up my cameras?’

They know you’re a DP; maybe that’s a liability.
That’s right. It’s been a little difficult at times. Software development is new to me, so the support questions can be a bit overwhelming, especially when I see my e-mail inbox filling up day to day. But I try to answer all of them as quickly as I can. I am fascinated, however, by how so many people are interested in experimenting with 3D.

One thing that came out of those questions, in fact, was a change to how the trial and watermarking work within the software. Noise Industries made major changes to their software to accommodate that request. The end result is that you can download the plug-in and work with it indefinitely; it just has a watermark in the lower left of the screen. I figured that a two-week trial period isn’t enough time for hobbyists to play with it. It may take someone six months to really feel confident enough to say, ‘I think I can now shoot 3D.’ It’s a pretty steep learning curve for stereoscopic shooting. There are a lot of books to read, and you have to completely change your way of thinking. I just had a meeting with a director the other day; we’re organizing a 3D film. I started describing a few of the 2D tricks you just can’t use any more in 3D. It’s really limiting.

But isn’t that exciting, because a new filmmaking dialect, in effect, has to evolve along with it?
Yes, it is. At its most basic level, what the plug-in does is set convergence. For me, there are two basic ways of shooting 3D: converging in camera by toeing the cameras in, which is what human eyes do but causes too many problems in post, or shooting completely parallel. So what I always recommend to people starting out is: shoot parallel, and then use the plug-in to converge on whatever subject you want perfectly converged at the stereo window. The plug-in also adjusts for any disparities, whether vertical disparities or, if you’re using two zoom lenses, one being zoomed in a little more than the other. You can adjust all of that with the plug-in and get the two eyes perfectly matched, and you can also adjust the exposure, brightness, contrast and white balance between the two eyes. I even know of people who are shooting with two different cameras: one a Sony EX1 and the other an EX3. The plug-in can still match both views perfectly. And because it’s working within the host, you can keyframe everything. So if you have to change the convergence in the shot, say if something or someone is moving toward the camera on the Z-axis, you can maintain convergence on them so they stay at the stereo window.
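The plug-in’s internals aren’t published, but the core “shoot parallel, converge in post” move he describes is a horizontal image translation, with a small vertical shift to cancel rig misalignment. A minimal numpy sketch of that idea follows; the function names and sign conventions are illustrative, not the plug-in’s:

```python
import numpy as np

def shift_image(img: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Translate an image by (dx, dy) pixels, padding exposed edges with black."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    xs, xd = max(-dx, 0), max(dx, 0)  # source/destination x offsets
    ys, yd = max(-dy, 0), max(dy, 0)  # source/destination y offsets
    out[yd : h - ys, xd : w - xs] = img[ys : h - yd, xs : w - xd]
    return out

def converge(left, right, parallax_px, vertical_fix_px=0):
    """Set the stereo window on footage shot parallel.

    Sliding the right eye horizontally moves the plane of zero parallax:
    whatever features coincide in both eyes appear at the screen plane.
    The small vertical shift removes vertical disparity from the rig.
    """
    return left, shift_image(right, parallax_px, vertical_fix_px)
```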

What about output modes?
We support all the standard 3D formats: over/under, side-by-side, interlaced and checkerboard, and we also have all the common anaglyph modes in there, with all the colors on the spectrum. With anaglyph, you have full control over, say, how desaturated the red channel is before the red tint is applied to it. You can also adjust the gamma and balance, and put a little bit of red back into the right eye if you want to control skin tone, for example. As I run into problems, I add new features! That’s the concept of what I do, and what most filmmakers do. All the best inventions in the business were made by filmmakers trying to solve their own problems.
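The plug-in’s exact anaglyph math isn’t documented in the interview. As a generic red/cyan sketch of the controls he mentions (desaturating the channel that will carry the red tint, plus a gamma tweak), assuming float RGB frames in [0, 1]:

```python
import numpy as np

def red_cyan_anaglyph(left, right, red_desat=1.0, red_gamma=1.0):
    """Compose a red/cyan anaglyph: the left eye drives red, the right eye cyan.

    red_desat blends the left eye's red channel toward its luminance
    (1.0 gives the fully desaturated "half-color" look), and red_gamma
    applies a gamma curve to the red channel before output.
    """
    luma = left @ np.array([0.299, 0.587, 0.114])  # Rec. 601 luma
    red = (1.0 - red_desat) * left[..., 0] + red_desat * luma
    red = np.clip(red, 0.0, 1.0) ** red_gamma
    out = np.empty_like(left)
    out[..., 0] = red              # red channel from the left eye
    out[..., 1:] = right[..., 1:]  # green and blue (cyan) from the right eye
    return out
```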

Are you inspired by 3D developments on the display side?
Yes, definitely. All these 3D screens are popping up! I’ll probably head to CES in January to see what stereoscopic displays Sony, JVC and Panasonic have coming out. Actually, one of the other things that inspired me to create this plug-in in the first place was what I saw at NAB this year: JVC showed off its new 3D screen, one of the most amazing I’d seen at that point. It accepts a side-by-side format. It also accepts interlaced. The problem with interlaced, however, is that once you chroma-subsample it from 4:4:4, you get ghosting, because your color information suddenly spans more than a single line, and in a line-interleaved image adjacent lines belong to opposite eyes. Side-by-side is a great way to encode your material if you have to go to an MPEG-2 or H.264 format: you can throw away color information in the subsampling, but it doesn’t interfere with your 3D. I needed a way to output a side-by-side with the push of a button so I could show stuff on that monitor.

Now I’ve found out that Sony and Panasonic are both coming out with frame-sequential versions. They’re taking the opposite approach. Where JVC’s TV has the polarized screen built in and you use passive glasses, the exact same ones you use in a RealD theater, Sony and Panasonic are using their current technology, at a 120 or 240 Hz refresh rate, with a box that you will either buy and attach to the TV or that will be built into the TV. That box sends out an infrared signal, and the viewer needs active shutter glasses, in sync with the television, to receive it. The glasses flick the left and right eyes on and off, so each eye only sees one image at a time. But at 120 Hz you get a pretty smooth image, without flickering. That’s the concept: make the TV cheaper, and then if someone wants 3D, they pay an extra $50 for the infrared glasses. JVC’s TV, however, is more expensive. But you could set that TV up and have a screening of your 3D film for 10 or 20 people in a room, perfect for client screenings.
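To illustrate the packing trade-off he describes, here is a numpy sketch of both layouts (the names are illustrative, and a real pipeline would filter rather than naively decimate):

```python
import numpy as np

def interleave_rows(left, right):
    """Row-interleaved pack for a passive (line-polarized) display.

    Even lines carry the left eye, odd lines the right. After 4:2:0
    chroma subsampling, each chroma sample spans two adjacent lines,
    so color from one eye bleeds into the other: the ghosting above.
    """
    out = np.empty_like(left)
    out[0::2] = left[0::2]   # even lines: left eye
    out[1::2] = right[1::2]  # odd lines: right eye
    return out

def pack_side_by_side(left, right):
    """Side-by-side pack: each eye's chroma stays within its own half,
    so subsampling never mixes the two eyes."""
    # Naive column decimation stands in for a proper horizontal resize.
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)
```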

For more information about Stereo3D Toolbox, visit Noise Industries and Dashwood Cinema Solutions.