Mega Playground’s DI on Big Fan and Creating Color-Accurate Dailies
Terry Brown Discusses Red DI and ‘DP Dailies’ Workflows
To read the interview with director/writer Robert Siegel click here.
They decided to shoot in the 2K mode on the Red camera instead of the normal 4K acquisition mode. This does two things: it uses less of the imager, about one-fourth of the image sensor, which means you have less sensitivity and therefore more noise, and it obviously changes the depth-of-field as well.
So we did some post processing in our DI suite to handle some of the noise. There’s a tool that comes with [Digital Vision] Nucoda called DVO Tools. Digital Vision has many years of experience with grain and noise reduction. They basically took their hardware solution and put it into software, which they now include with Nucoda. We used some of those toolsets to reduce some of the noise from the camera and also to add sharpness to make it pop a little more. There is also the rendering of assets to deal with. There are a lot of little things involved with Red. It is not in a normal RGB file format, so you have to find an easy way of conforming and making sure you have all the right frames at a proxy resolution before you go and render out the full resolution. It does take some time to get a full extraction of the material; rendering can sometimes run at 30:1, thirty times the running time.
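To get a rough sense of what that 30:1 render ratio means in practice, here is a back-of-envelope calculation; the function name and the 90-minute example are ours, only the 30:1 figure comes from the interview.

```python
# Back-of-envelope render time for a full-resolution Red extraction,
# assuming the ~30:1 render-to-running-time ratio quoted above.
def render_hours(running_time_min, ratio=30):
    """Hours of render time for a given running time in minutes."""
    return running_time_min * ratio / 60

# A 90-minute feature at 30:1 would tie up the suite for about two days.
print(render_hours(90))  # 45.0 hours
```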
Nucoda can read the Red files directly. What we do when we start conforming is work at quarter resolution first, because all we are trying to do is line it up to a QuickTime reference file that comes from the offline system. Once you see everything is lined up correctly, you can do the 2K extraction, which creates 2K DPX files that we then send to the film recorder.
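The "verify the conform at proxy resolution before committing to a slow full-resolution render" idea above can be sketched as a simple check; the clip names, frame counts, and helper function below are invented for illustration, not part of any real conform tool.

```python
# Minimal sketch: confirm every event in the offline cut can be served
# by the source clips on disk before starting a full-resolution render.
# Clip names and frame counts are hypothetical.

def missing_frames(events, sources):
    """events: list of (clip, first_frame, last_frame) from the offline cut.
    sources: dict of clip -> total frame count available on disk.
    Returns the events that cannot be fully rendered."""
    problems = []
    for clip, first, last in events:
        available = sources.get(clip, 0)
        if last >= available:
            problems.append((clip, first, last))
    return problems

cut = [("A001_C002", 0, 239), ("A001_C005", 100, 339)]
disk = {"A001_C002": 240, "A001_C005": 300}
print(missing_frames(cut, disk))  # A001_C005 runs past its last source frame
```

Catching a short or missing clip at this stage costs seconds; discovering it after a 30:1 render costs hours.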
I am not sure; maybe to save on storage. The one thing that shooting in the 2K mode does do, because you are using that smaller imager, is replicate the look of 16mm a bit in terms of depth-of-field.
Red takes a little more work to build a picture out of color-wise. The colorist has to work a little harder to make everything come together, adding a good amount of saturation. It doesn’t fall into place as easily as some of the other digital cameras do. In the end we got some compelling images.
You did not do the dailies on Big Fan, but talk about Mega Playground’s “DP Dailies” service.
Whether we are doing film dailies or a final DI, you are looking through a window onto what the final film print, projected on the best projector, will look like. Any color decision you make is made in the context of what the final print will look like.
In dailies you want to be happy with the look but it is not necessarily always the correct one. One of the things we are doing to correct that is to create a system where you can actually produce dailies that will be exactly what your final film print will look like, or pretty close to it. And that is what our DP Dailies System we are offering seeks to achieve.
It’s something that I have been working on for a number of years. It’s a combination of color science and a product that was developed by Grass Valley called Bones Dailies. I was the project manager and initial architect of that when I was with Technicolor. It’s a non-linear dailies solution. When we do dailies for film we scan the film just like you do for a DI. We apply this color look-up table to it that will look just like the final color will look in a DI suite. So the goal is to have continuity in color throughout the entire process.
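The look-up-table step described here can be illustrated mechanically: scanned log values pass through a curve that maps them to the response of the final print. The toy curve below is invented for illustration; a real film-print-emulation LUT is far more complex and usually three-dimensional.

```python
import numpy as np

# Illustration of applying a 1D LUT to log-encoded scan values -- the same
# mechanism (though not the same curve) as the print-emulation LUT described
# above. The breakpoints below are a made-up toy shaper, not a real print LUT.
lut_in  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # normalized log input
lut_out = np.array([0.02, 0.18, 0.55, 0.85, 0.98])  # emulated print response

def apply_lut(pixels):
    """Piecewise-linear LUT lookup applied per channel value."""
    return np.interp(pixels, lut_in, lut_out)

frame = np.array([0.1, 0.5, 0.9])  # three sample values from a scanned frame
print(apply_lut(frame))
```

Because the transform is just a table lookup, the same LUT can be loaded into the dailies system, the offline codec, and the DI suite, which is what makes "continuity in color throughout the entire process" possible.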
Typically what happens in a normal telecine process is that the telecine doesn’t understand how to make certain colors, because it doesn’t understand what the negative-to-print photochemical process does. Yellows and reds tend to be incorrect. The problem is that you do the telecine transfer and then you are editing with this footage, looking at these images for months, and you get attached to that look. There end up being a lot of battles between cinematographers and directors about how things are supposed to look when they go to film. I wanted to fix all that, so right up front you see exactly what it should look like when it goes out to film.
It also applies to digitally acquired projects. Even if you shoot digitally you are seeing exactly what the images will look like on a film print. There are no surprises in the end. We are looking to do this on episodic television.
How is it cheaper and faster?
First, we are scanning, so the telecine is used efficiently. Normally in a telecine session a good portion of the time the telecine is stopped while the colorist is grading. In our case we just put up the roll of film and scan the whole thing without stopping. Once it is scanned, the people doing the scanning can see and adjust the grade. Syncing is automatic because there are algorithms to automatically find the sticks. Because it is all file based, you can start grading material while you are still scanning, and even start outputting to tape.
The other big efficiency is that the scan really becomes your DI. Anything you do is in the context of those files and the DI, so you shorten the DI process significantly. We’ve done some analysis and a typical film production can save six figures in costs.
Again, it’s the way a telecine interprets color on film. It doesn’t understand the photochemical process. Kodak is very careful about the marriage between a piece of negative and a print to make them work together. Telecine doesn’t understand that so you have colors that are incorrect.
After the film is scanned and processed with Bones Dailies, does the colorist still need to make some adjustments to the dailies?
A colorist still goes in to do their magic but they only have to do printer-light offsets. The LUT does most of the work. So the colorist generally doesn’t have to spend a lot of time on it. Kodak has already developed the whole concept that when you scan into a Cineon file it will print correctly to film. Just using the LUT it comes very close to what the film should look like and then the colorist just does some offsets and some fine-tuning.
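The printer-light offsets mentioned here can be sketched as uniform per-channel shifts in Cineon log space. One printer point is commonly approximated as 0.025 log exposure, which at 10-bit Cineon's 0.002 density per code value works out to about 12.5 code values per point; treat those figures, and the sample pixel, as textbook approximations rather than this facility's exact math.

```python
import numpy as np

# Sketch: printer-light offsets as per-channel shifts in 10-bit Cineon space.
# Assumes the commonly cited ~0.025 log exposure per printer point and
# 0.002 density per Cineon code value, i.e. ~12.5 code values per point.
CV_PER_POINT = 0.025 / 0.002  # ~12.5 code values per printer light

def offset_printer_lights(code_values, red=0, green=0, blue=0):
    """Shift 10-bit Cineon RGB code values by per-channel printer points."""
    shift = np.array([red, green, blue]) * CV_PER_POINT
    return np.clip(code_values + shift, 0, 1023)

pixel = np.array([445.0, 445.0, 445.0])  # mid-grey patch near the LAD value
print(offset_printer_lights(pixel, red=+1, green=0, blue=-2))
```

Because the adjustment is only a handful of scalar offsets on top of the LUT, it is fast to apply and trivially portable into the final DI grade.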