Tapeless Workflows and Gigabit Ethernet at Crawford Communications
How Flexibility and Ingenuity Keep Clients Happy
Film & Video asked Ron Heidt and Dave Warner, the company’s VP of post engineering, a little more about the balancing act Crawford performs across NLEs, acquisition formats, and programming genres.
Ron Heidt: One of the challenges is, what format is everybody using? We’re working on material that was shot on everything from SD DVCAM to Red 4K. In between, we have HDCAM, HDCAM SR, P2, and XDCAM EX, plus the Phantom. We’re dealing with the combination of tapeless workflows and existing tape formats.
F&V: I’d imagine you get some jobs – documentaries, for example – that combine most of those formats in a single project.
RH: It can happen on episodic television as well. Everybody wants to mix and match their footage and make it all look consistent.
David Warner: Both of the network episodic shows we’re doing are shooting mostly on video cameras, but both are also shooting on the Sony EX cameras, and one is shooting to memory cards on consumer HD cameras for specific scenes. It runs the gamut. And everything’s got its own little codec. Is it going to open on a Mac, or on a Windows-based system? Those are additional challenges in post-production: identifying each codec, knowing which systems can work with it, and finding the best way to convert it so it integrates with the rest of the footage.
RH: Everyone’s ultimate goal is to get away from tape and film, and different manufacturers have come up with different ways to do that. We’re in an interim mode where we get Red, we get P2, we get XDCAM EX, and we get tape and film. It’s a struggle, but we’ve come up with tools that help a lot, and some of the manufacturers have really helped us in that realm. Avid has done a great job of opening up the architecture of Media Composer to help us deal with these formats. We’re able to take P2, XDCAM EX, or AVC-Intra cards, as well as files transferred directly to hard drives, and open them almost immediately in HD on an Avid system. The problem we run into is, what do we do from there? Do we output to tape for archive? Do we archive on drives? But we can start the creative process right away, and that has helped our workflow quite a bit.
F&V: You share some of your hardware between Avid and Final Cut Pro, depending on the project. What’s your shared-storage set-up?
DW: We’ve got an SD Unity media server for some of our Avids, with six machines connected to it. We’ve also got a [Facilis] TerraBlock, which four or five machines access. And we’ve got a production server where we store and manage elements for jobs that are taking place in multiple rooms.
RH: A lot of our systems are running on local storage with Gigabit Ethernet, so if we move from one room to another we can easily move the media from one system to another. So we’re not generally on shared storage now. But the nice thing is that the newer rooms we’re putting together are very similar. We can start a project in one room or the other and it’s basically the same room.
DW: We’ll do things like put extra drives on our machines, so we may have six drives in one computer or as attached storage. That’s pretty economical, but it has its limitations; it comes down to scheduling at times.
F&V: Does it slow you down if you do have to start moving material around from room to room?
RH: If we’re working on compressed formats like P2, or at DNxHD resolutions [in the Avid], we can move material fairly quickly over Gigabit Ethernet, because the network’s transfer rate is faster than the codecs’ stored data rates. That makes it easy to move things. Most of our material goes into a telecine room for a final color-correction, tape-to-tape, or selects transfer. Because we’ve created a virtual environment in that telecine room, where we can play back files and color-correct as we go out to tape or files, we’re able to move media to that system pretty quickly.
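As a rough sanity check on that claim, the nominal DNxHD data rates can be compared against a sustained Gigabit Ethernet figure. This is only a sketch: the 700 Mb/s sustained throughput is an assumption (GigE’s line rate is 1,000 Mb/s, and real-world numbers vary with protocol and disks), while the codec rates are the nominal DNxHD figures mentioned in the interview.

```python
# Back-of-the-envelope check: can Gigabit Ethernet move media
# faster than real time? Codec rates are nominal DNxHD figures
# from the interview; sustained GigE throughput is an assumption.

GIGE_SUSTAINED_MBPS = 700  # assumed real-world sustained Mb/s (line rate: 1000)

codecs = {
    "DNxHD 36 (offline)": 36,            # Mb/s
    "DNxHD 175x (online, 10-bit)": 175,  # Mb/s
}

for name, rate_mbps in codecs.items():
    speedup = GIGE_SUSTAINED_MBPS / rate_mbps
    print(f"{name}: ~{speedup:.1f}x faster than real time over GigE")
```

At ratios like these, even full-quality 175x material copies several times faster than it plays, which is what makes the room-to-room moves over Gigabit Ethernet practical.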
F&V: Are you generally working at HD resolutions on your Avids?
RH: I’ve been doing HD on a DS system in DNxHD since it’s been available, because it’s so good. Typically we use the highest data rate: if I’m working at 24p, it’s 175 [Mbps], and on a DS it’s 175x. I try to stay in the “x” line so it’s 10-bit. We use a lot of DNxHD 36 for offline, and clients still don’t really know they’re looking at a compressed image. One example is a Red workflow where we convert all the Red footage to DNxHD 36 and offline on a Media Composer at 36, then transcode the selects at higher quality so we can conform at DNxHD 175x at 24p. We can convert the Red files to DNxHD 36 in just under real time and be working very quickly.
I did a spot recently – a commercial for the Bounce agency for a bank called Carolina First – that had four or five hours of Red footage for a 30-second commercial. That four or five hours was converted to DNxHD 36 in about five hours, or a little over. I set it up to render overnight, and I was working with clients the next morning at DNxHD 36 on a Media Composer 3.5. I did my offline over a day or two and set the render off that night, but it took only an hour to render the selects at 175x, and that’s doing a full de-Bayer and getting a high-quality image from the Red files. I sent those files out for scene-by-scene color-correction and then did the final conform in a DS. That was a fairly efficient workflow. And it’s amazing how good the DNxHD 36 image with the lower de-Bayer looks for offline purposes; it’s much better than anything we were doing three or four years ago at 14:1.
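The storage side of that anecdote can be rough-checked with a little arithmetic. The five-hour figure and the DNxHD 36 nominal rate come from the interview; the decimal gigabyte conversion is a simplifying assumption, and audio and file overhead are ignored.

```python
# Rough storage math for the offline pass described above:
# about five hours of footage transcoded to DNxHD 36.

HOURS = 5
DNX36_MBPS = 36  # nominal DNxHD 36 video data rate, Mb/s

seconds = HOURS * 3600
size_gb = DNX36_MBPS * seconds / 8 / 1000  # Mb -> MB -> GB (decimal)
print(f"{HOURS} h of DNxHD 36 is roughly {size_gb:.0f} GB")
```

A load on the order of 80 GB fits comfortably on local drives, which is consistent with the local-storage, move-it-over-GigE approach described earlier in the interview.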
F&V: While we’re talking about newer technology, what about stereo 3D? Have you dipped any toes into that water?
RH: We’re talking to some clients right now. The biggest thing that’s holding us back is display. One client is waiting for the [3D-capable] Blu-ray systems to come out so he has a way to show it. That client adopted Blu-ray and widescreen presentation very quickly, but they don’t want to adopt servers just to play back 3D. So they’re waiting for Blu-ray, and it just isn’t there yet. Great inroads are being made in working with stereoscopic containers on the DS and the Media Composer, which will make things much easier than when I’ve done stereo 3D in the past. You offline in the Media Composer and go into the DS for the conform, and it will be great to have tools in the DS that can do effects and color-correction while keeping those containers together.