

Randy Coonfield on Grading the Red-Shot Revival of Arrested Development

When Managing Looks and Eleventh-Hour Changes Across Interwoven Episodes, Resolve Gets the Job Done

For die-hard Arrested Development fans, the long-awaited new season of Netflix's "Semi-Original Series" wasn't so much a blast from the past as a delectable smorgasbord of known but newly mutating parts. The beloved cast has gotten older and wilder, show creator and executive producer Mitch Hurwitz has wound his intersecting plotlines even tighter, and the season's central device—one long story arc told per episode from each character's point of view—is the perfect foil for the running gags, twists, surprise reveals and hilarious performances that made the previous three seasons so memorable. To industry watchers, Netflix's release of all 15 episodes at once was a bold new (and, by recent reports, successful) paradigm in digital content delivery.

Making such a finely calibrated comic engine work consistently from start to finish—or in whatever order you choose to watch the episodes—required lots of preparation on set and just as much attention to detail in post. This time around, the show was shot on Red, edited in Avid and graded in Blackmagic Design's DaVinci Resolve. We asked Randy Coonfield, a colorist at Shapeshifter in Los Angeles who worked with Dakota Pictures and started grading the new season in April with episode 6, how he managed looks across scenes with so much in play at all times.

StudioDaily: How did you come to work on the new season?

Randy Coonfield: We were a referral from Blackmagic Design. Initially, the folks at Dakota Pictures were following a pretty traditional path of converting all the Red files to DPX, but that was causing them a ton of overages because a lot of the material needed to be blown up, a request from creator Mitch Hurwitz. They chose to shoot Red partially because it would allow them to do some of their coverage in post and also to create some really complicated handheld [camera] effects. In order to maximize quality, and to make things easier, they wanted to keep the Red files throughout the process without converting [the footage]. After doing some research, Dakota Pictures decided to bring a Resolve into their facility, and they were eventually referred to me to help them make better use of it. They ultimately decided that Shapeshifter would be a good fit to finish the shows. They ended up finishing four episodes with another facility, one in house, and the last 10 episodes with us.

What was your turnaround time like on those 10?

We had about a six-week period to do them, but because of various delays in getting shows locked and finished, we ended up having to do the last five episodes in six days.

Whoa. How did you ever do that?

Well, Resolve helped because we were using the original media and we weren't doing any extra converting of files. We also had more than the Red files to deal with—they shot some things with GoPro, for example—but Resolve supports those formats. There are also about 50 effects in every show, and those were being delivered to us as TIFFs. In their original workflow they were converting their Red files 1:1, and converting their TIFF, MXF and DNxHD media as well. We said, 'Wait a minute. We don't convert anything. Resolve supports it all natively.' So our timelines typically had three and sometimes four different types of media on them. And the Red raw sensor data helped: being able to manipulate that data all the way through post was critical to the success of this show — and of most shows where raw files are dealt with. There were many instances where, in order to match previous episodes or just to get the look we were after, I would actually go in and manipulate the metadata and change the ISO or tinting at the metadata level. If you convert from the raw to other formats, that manipulation is lost, and I used it a lot on this project.
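Coonfield's point about metadata-level ISO changes can be illustrated with a toy example. This is a conceptual sketch only, not Red's actual decode math or any real SDK; the sensor values, base ISO and bit depths are invented. It shows why an exposure change on raw data can recover highlight detail that a baked 8-bit conversion has already discarded:

```python
# Conceptual sketch: a raw file keeps the sensor data and treats ISO as
# metadata, so re-exposing is a re-decode. A converted file has already
# quantized the decode, so the same change cannot bring detail back.
# All values here are hypothetical, for illustration only.

def decode_raw(sensor_value, iso, base_iso=800):
    """Decode a 16-bit linear sensor value at a chosen ISO (a metadata choice)."""
    gain = iso / base_iso
    return min(sensor_value * gain / 65535.0, 1.0)   # normalized, clipped at 1.0

def bake_to_8bit(value):
    """Simulate converting the decode to an 8-bit intermediate (e.g. DPX or DNxHD)."""
    return min(round(value * 255), 255)

# Two adjacent highlight samples with real detail between them:
a, b = 50000, 60000

# Baked path: converted at ISO 1600, both samples clip to the same code value.
baked_a = bake_to_8bit(decode_raw(a, iso=1600))   # 255
baked_b = bake_to_8bit(decode_raw(b, iso=1600))   # 255
# Pulling the baked file down a stop cannot separate them again:
pulled = (baked_a // 2, baked_b // 2)             # (127, 127): detail gone

# Raw path: the same "one stop down" is just a metadata change; the original
# sensor data is still there, so the detail comes back on re-decode.
raw_a = decode_raw(a, iso=800)   # ~0.763
raw_b = decode_raw(b, iso=800)   # ~0.916
print(pulled, raw_a != raw_b)
```

The baked path collapses both highlight samples to the same 8-bit code value, so no later correction can separate them; the raw path simply re-decodes the untouched sensor data.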

What was your typical workflow at Shapeshifter, and what happened to it toward the end when the schedule got tight?

We would get a cut of the show and start conforming on the Resolve first, and then do color correction. We would then determine which of the handheld effects didn't work. There would be a handful in each show that the Resolve couldn't properly interpolate out of the edit system. We would have our Avid editor here at Shapeshifter link again to the original media, recreate those effects and send them to me as MXF media. It was AAF'd from Avid into the Resolve and AAF'd back to an Avid for titling and finishing touches. Most of the handheld effects you see in the show probably weren't in-camera and were created by one of their editors. The vast majority of them translated beautifully via AAF, but a handful had certain types of moves that we knew the Resolve wouldn't do, so we just used an Avid Symphony to recreate them. Now, remember that a lot of these had a 200% to 250% blowup. Comparing the work we did on the final 10 episodes with the four done elsewhere, we noticed that ours held up exceptionally well, probably because of the resolution of the media we were using. I'm not sure what they were converting to at the other facility, but I suspect it was HD resolution, which is essentially throwing away half the media. That's a common workflow, one that most large facilities still sell to this day. For us, that just doesn't make sense. And thank goodness we were set up to handle such fast turnarounds, because it got crazy at the end there. We had multiple systems going at that point to keep up.

Did you look back at the past seasons, even though they were shot with a different camera, to prepare for the job?

Absolutely. I sat down to watch the whole thing to get a feel for it, but in this new season, all of the episodes are intertwined. Many of the situations the characters find themselves in recur in other episodes, seen from a different angle. Hurwitz was adamant that everything we did that could show up in another episode had to match. So we were constantly referencing earlier episodes from this series as we worked, to make sure that when I was grading a location in a scene we had seen before, I matched it. That way, if someone were to assemble the 15 episodes out of order, they wouldn't find anything that's mismatched, or more specifically, return to a location that doesn't work. I actually carried around all of the episodes with me once I graded one.

That's more like a longform way of working. When did it get really challenging?

It got challenging when I was matching back to color that was done by somebody I don't know in a place I don't know. Matching someone else's work is always more challenging. There's a particular scene in that well-known Bluth family penthouse, a longtime location, that was hard because those yellow walls were difficult to match. So was the big scene at the "Cinco de Cuatro" festival (seen at top), where almost every episode ends up. That one was difficult simply because it showed up in so many episodes and from so many different angles. But the biggest issue was when changes would roll in. That was the unique thing about a show like this running outside of normal broadcast: the shows were never tightly locked. Changes could happen right up to, and sometimes after, delivery. That's part of what makes this show just genius. They weren't under any major restrictions except time itself. The script changed, of course, and they were writing and rewriting all the way through post-production. So sometimes something was shot for a certain time of day and they'd want to change it. But they would send me AAFs, and all of the work I had done prior to that would just relink. We didn't have to re-color-correct the shows. We could get completely new cuts, and any media that had been used in the prior cut was already color-corrected. The corrections would just trace straight across. Without that capability there is just no way we would have finished those last five episodes on time.
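The relinking Coonfield describes can be sketched in miniature: if grades are stored against the source media rather than against positions in the cut, a new AAF that reuses the same footage inherits its corrections automatically. The class and clip names below are hypothetical, and this is not Resolve's internal mechanism:

```python
# Toy sketch of grade relinking: corrections are keyed to the source clip,
# not to the edit, so a recut that reuses the same footage keeps its grades.
# Class, clip IDs and grade names are invented for illustration.

class GradeDatabase:
    def __init__(self):
        self._grades = {}  # clip_id -> grade description

    def save(self, clip_id, grade):
        self._grades[clip_id] = grade

    def relink(self, timeline):
        """Split a new cut into events that inherit a grade and events that don't."""
        graded, ungraded = [], []
        for event in timeline:
            if event["clip_id"] in self._grades:
                graded.append((event, self._grades[event["clip_id"]]))
            else:
                ungraded.append(event)
        return graded, ungraded

db = GradeDatabase()
db.save("A001_C003", "warm penthouse interior")
db.save("A002_C011", "Cinco de Cuatro dusk")

# A recut arrives as a new AAF: same media, different order, one new shot.
new_cut = [
    {"clip_id": "A002_C011"},
    {"clip_id": "A001_C003"},
    {"clip_id": "B005_C001"},   # new material, still needs a grade
]
graded, ungraded = db.relink(new_cut)
print(len(graded), len(ungraded))   # two shots keep their grades, one to do
```

Only the genuinely new material needs work on a recut, which is the behavior that made the six-day finish on the last five episodes feasible.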

What is your Resolve setup?

All of this show was done on a Windows-based Resolve. Not a lot of people are using Windows, but ours has three high-end GPUs and an expansion chassis. It's fully loaded and as fully built as a Resolve can be. We use the DaVinci panels. At crunch time, when we were doing episodes simultaneously, I was switching back and forth between our Mac-based system and our Windows-based system. There were many times in those last few days when I literally had two shows going at the same time. I would color-correct on one and do notes for the visual effects that came in for another episode on the other system. I would just switch over to the other computer, send those notes off to the editor, and then come back. We use a Draco system to toggle back and forth between my Mac and Windows machines, including panel control, in about 15 seconds. I usually use those two systems alternately, but this time it was simultaneous and it made a big difference. I also have a strong assistant, Alexander Schwab, who did a lot of the visual effects coloring for me, and there were times when he would be in another room doing notes the editors had given him on an episode I'd already finished.

This is a show known for its ensemble set pieces. How was it different this time around?

It was a difficult production because it's an ensemble cast and all of them have moved on to other things. They had very few scenes and maybe only two days together on set.

Did you have to nuance or soften anyone around the edges, since the cast is aging up with the series?

There were occasional touch-ups we were asked to do, but most of that kind of thing was done with filters in camera. I was told to keep an eye out for certain characters who may need a little treatment. But sometimes we were asked to sharpen segments of scenes that zoomed in and weren't perfectly in focus. We would sharpen those a little bit.

Were there any shots that snuck up on you or made you do a double-take?

One of the executives at Dakota Pictures, Troy Miller, is a creative genius in his own right and directed some of the episodes. He had a Steadicam mounted on a Segway. There are shots that I noticed very early on, and I asked the producer, "How did you do that?" That's one of the trademarks of this show. There are some very interesting moves. Some were created in editorial, like the handheld stuff, and some were shot on this Segway. There are other photographic and production tricks, but then there are just those gags that repeat across episodes. I soon realized from grading that anyone who watches an episode three times is going to discover something new in each viewing. Mitch even expects people to watch it out of order, if they choose. He understands that because it's delivered all at once, and because this fan base is so dedicated, they will find interesting ways to piece this all together.

7 Comments

Categories: Article, Creativity, Post/Finishing, Project/Case study

  • diego

    I would like to know what's the advantage of the MXF over other codecs, anyone?

  • diego

    Oh yes, you're right. So what would be the advantage over other wrappers?

    • Alexander Schwab

      .MXF is the wrapper used by Avid editing systems. Lots of workflows are designed around Avid systems, making .MXF a commonly used file format. Not only that, it comes in a variety of bit rates, so it's great for your standard offline/online workflow.

      In our case with Arrested Development, .MXF media was used to round-trip from Avid Media Composer to DaVinci Resolve and back to Avid Symphony.

      • diego

        OK, thanks Alex, now I get it. I'm from Chile and we don't use Avid very much… so what would be the codec you use for those MXF files?

        • http://www.syndasein.com/ SYNER GIST

          MXF is not only recognized by Avid. The big advantage of MXF is that it is non-proprietary and regulated by SMPTE and the CCIR. The downside is that it is interpreted differently by different manufacturers.

          Codecs are different for acquisition, production and distribution, as these three tasks have different requirements. Highest quality at the lowest bit rate requires an interframe codec. Intraframe codecs are the easiest to work with in post. It is not only the codec that is important but also the bit depth and color coding. One codec does not fit all applications.

          • diego

            Thanks again, Syner! What is a good resource to learn more about the best codec workflow for post, taking into account bit depth and color coding?
            I'm usually working in DPX, but it's quite cumbersome to manage the sequence sizes.

          • http://www.syndasein.com/ SYNER GIST

            Whatever you shoot with, maintain all the information if possible. Make sure you have all the required plug-ins for your post-production systems so that they can handle the information without quality loss. As Alex says, if you are shooting raw and your post-production systems can handle the raw data without slowing down, this is the best way to go. I tried to direct you to a PDF which explains the tradeoffs in more detail, but I think the filter here blocks website addresses. Maybe this will work: go to SYNDASEIN, click Journalist, then look at the PDFs on codecs and wrappers.
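On the point raised in the comments that intraframe codecs are easiest to work with in post: the difference is largely random access. With an interframe (long-GOP) codec, displaying an arbitrary frame means decoding forward from the preceding keyframe, whereas an intraframe codec decodes any frame on its own. A rough sketch, with an invented GOP size:

```python
# Illustrative sketch of random-access cost when scrubbing a timeline.
# The GOP size and frame numbers are made up; real codecs also use
# B-frames and other structures this ignores.

def frames_to_decode(target_frame, gop_size):
    """Frames that must be decoded to display `target_frame`.

    gop_size=1 models an intraframe codec (every frame stands alone);
    larger GOPs model interframe (long-GOP) codecs, where decoding
    starts from the preceding keyframe.
    """
    keyframe = (target_frame // gop_size) * gop_size
    return target_frame - keyframe + 1

# Scrubbing to frame 1000:
print(frames_to_decode(1000, gop_size=1))    # intraframe: 1 frame
print(frames_to_decode(1000, gop_size=15))   # long-GOP: 11 frames
```

That per-seek overhead is why editing and grading systems prefer intraframe mezzanine formats even though interframe codecs deliver more quality per bit for distribution.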