Mega Playground's tech guru explains that "compression" is not a dirty word

Terry Brown helped pioneer technological breakthroughs in the 24p high-definition format, post-production workflows and high-definition digital laboratory services during his 21-year tenure with LA-based Laser Pacific. In 2001, he shared credit for a Scientific and Technological Emmy awarded to Laser Pacific for its contributions to the creation of 24p HD technology.
Brown recently joined production house Mega Playground, where he has helped develop video dailies, color correction, and offline/online HD video workflows for feature film and TV clients. The facility ships and installs Avid systems for use on location, and in house it runs a full range of PC-based Avid Adrenaline systems, Avid Nitris systems, Mac-based Meridien systems, and Avid Unity shared-storage systems.

Q: Data management has been a big issue with facilities handling multiple HD projects. How does Mega Playground approach this?
A: Data management is certainly a headache for a facility these days. Historically, storing images on tape was extremely efficient and archiving was simple: we would just box up all of the clients’ tapes and give them back. You could instantly bring upwards of a terabyte of storage online simply by shoving a cassette into a tape machine, and if you needed to switch quickly to another project, all it took was a quick eject and the insertion of a different tape.

With spinning-disk storage, life gets complicated. There are typically multiple projects on one volume and, because the amount of storage is finite, we now have to develop intricate data-management practices. It also becomes much harder to hand clients back their assets, as not every client wants the same archive format. The various manufacturers are starting to develop file-system databases that can help track the thousands, maybe millions, of files stored on a server. Most systems capable of real-time HD/2K/4K playback can only manage it by generating temporary renders of effects or color changes, and if those files are not tracked they can quickly fill up your storage.
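As a rough illustration of the housekeeping this implies, the sketch below (Python, with a hypothetical render path and age threshold, not any vendor's tool) walks a render directory and totals up files that have not been touched recently, the kind of report a facility might run before deciding what to purge.

```python
import time
from pathlib import Path

# Hypothetical location of temporary effect/color renders -- adjust for the
# actual SAN layout. The 14-day threshold is an assumption, not a policy.
RENDER_ROOT = Path("/mnt/unity/renders")
AGE_DAYS = 14

def stale_renders(root: Path, age_days: int):
    """Yield (path, size_in_bytes) for files not modified in `age_days` days."""
    cutoff = time.time() - age_days * 86400
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path, path.stat().st_size

if __name__ == "__main__":
    total = 0
    for path, size in stale_renders(RENDER_ROOT, AGE_DAYS):
        total += size
        print(f"{size / 1e9:8.2f} GB  {path}")
    print(f"Reclaimable (if safe to delete): {total / 1e12:.2f} TB")
```

Whether anything can actually be deleted still depends on each project's status, which is exactly why the file-system databases mentioned above matter.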

One saving grace is the advent of high-quality compression algorithms like Avid’s DNxHD and Apple’s ProRes 422. At their higher data rates these codecs are, in most cases, visually lossless, and the space savings are considerable. Avid’s main advantage is that DNxHD is based on and conforms to SMPTE VC-3, which allows easy integration with other manufacturers’ systems. Apple’s advantage is the overall system cost.
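To put ballpark numbers on that saving, the comparison below assumes a raw 10-bit 4:2:2 1080p24 rate of roughly 995 Mb/s and nominal compressed rates of 220 Mb/s and 145 Mb/s, which are in the range of the higher DNxHD and ProRes 422 flavors; the exact figures depend on the specific flavor and frame rate chosen.

```python
def gb_per_hour(mbps):
    """Storage per hour of material for a given bitrate, in gigabytes."""
    return mbps / 8 * 3600 / 1000

# Nominal bitrates in Mb/s; the compressed figures are approximate and vary
# with codec flavor and frame rate.
rates = {
    "Uncompressed 10-bit 4:2:2 1080p24": 995,
    "High-quality intra-frame flavor (~220 Mb/s class)": 220,
    "Lighter intra-frame flavor (~145 Mb/s class)": 145,
}

baseline = gb_per_hour(rates["Uncompressed 10-bit 4:2:2 1080p24"])
for name, mbps in rates.items():
    print(f"{name:50s} ~{gb_per_hour(mbps):4.0f} GB per hour")
print(f"Savings vs. uncompressed: about {baseline / gb_per_hour(220):.1f}x "
      f"at 220 Mb/s and {baseline / gb_per_hour(145):.1f}x at 145 Mb/s")
```

At those rates an hour of material drops from roughly 450 GB to under 100 GB, which is the "considerable" savings in concrete terms.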

Serial ATA drives have certainly helped make storage more affordable, but there is a performance hit. Cheap near-line spinning JBOD systems can also be a big help when a project has to be moved off a real-time storage system temporarily to make room for another. Storage costs have continued to drop, but storage is still not that cheap, and it is important to keep on top of unused files and delete them.

Q: The latest release of Media Composer allows editors to capture ASC CDL information and exchange DNxHD media for color grading. How does this help your workflow?
A: Historically, with “electronic” post-production workflows, dailies are timed/graded on a color-corrector in a video color space and recorded to the video format du jour. The settings on the telecine and color-corrector are not easily saved, so they carry little meaning by the time the project reaches final timing/grading, especially when a different color-corrector is used.

So the DP works with the colorist to achieve the look for dailies, but by the time the project gets to the DI stage (or the final color stage, in the case of broadcast projects) the DP may no longer be involved and so cannot be sure that vision is preserved. With the ASC CDL, and with current practices that use look-up tables to simulate film prints, the DP’s vision can be carried through the entire process.

The ASC CDL structure is XML-based and very flexible in implementation. In our case, Avid and Nucoda (Digital Vision) have worked together to provide a mechanism to carry the CDL through the process. Companies like Evertz and Thomson Grass Valley are also working with Avid on solutions for getting CDL information from the telecine or from on-set digital acquisition systems into offline editorial. This will ultimately allow the basic creative intent of the DP to be carried through the entire process.
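For readers who have not seen one, the sketch below shows roughly what a CDL payload looks like and how its values act on a pixel. The slope/offset/power and saturation math follows the published ASC CDL definition, but the XML here is stripped down for illustration (real files carry namespaces, IDs and descriptions), and this is not the Avid or Nucoda implementation.

```python
# Minimal sketch: parse a stripped-down CDL payload and apply its
# slope/offset/power/saturation values to one RGB pixel.
import xml.etree.ElementTree as ET

CDL_XML = """
<ColorCorrection id="example_shot">
  <SOPNode>
    <Slope>1.10 1.00 0.95</Slope>
    <Offset>0.02 0.00 -0.01</Offset>
    <Power>1.00 1.00 1.10</Power>
  </SOPNode>
  <SatNode>
    <Saturation>0.90</Saturation>
  </SatNode>
</ColorCorrection>
"""

def parse_cdl(xml_text):
    root = ET.fromstring(xml_text)
    sop = root.find("SOPNode")
    floats = lambda tag: [float(v) for v in sop.find(tag).text.split()]
    sat = float(root.find("SatNode/Saturation").text)
    return floats("Slope"), floats("Offset"), floats("Power"), sat

def apply_cdl(rgb, slope, offset, power, sat):
    # Per-channel slope/offset/power; negative intermediates are clamped to 0
    # here (implementations differ on that detail).
    out = [max(c * s + o, 0.0) ** p for c, s, o, p in zip(rgb, slope, offset, power)]
    # Saturation is applied around Rec. 709 luma, per the CDL definition.
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + sat * (c - luma) for c in out]

slope, offset, power, sat = parse_cdl(CDL_XML)
print(apply_cdl([0.40, 0.35, 0.30], slope, offset, power, sat))
```

Because a whole primary grade reduces to ten numbers per shot, it travels easily between systems, whether embedded in EDL comments, ALEs or XML sidecar files.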

Q: How much storage is necessary for an uncompressed HD project, and can you use compression without affecting image quality?
A: Compression is one of the most misunderstood issues in the industry. At times it inspires almost religious fervor. Compression is not a bad word: depending on the compression system and the source material, end results can be visually, even mathematically, lossless. Whether or not to use compression should be decided on a case-by-case basis. For instance, if you are using a software-based image-processing system, compressed source material has to be decompressed to a baseband format before an effect is applied and then recompressed.

Obviously this adds considerable processing to the operation, which normally means sacrificing real-time performance; the trade-off is against storage size and bandwidth. The real answer to the “to compress or not” question is found by looking at the big picture and considering: 1) what is the final delivery/distribution system (or is this an offline operation)? 2) what are the economics regarding the amount and cost of storage for the project? and 3) how effects-intensive is the project, and how much decompression and recompression will that require?

An uncompressed HD (1920×1080@24p/10-bit 4:2:2) project with 50 hours of source material could easily exceed 25 TB of storage. When you start thinking about multiple projects running at the same time, it is easy to see that large amounts of storage may be necessary. Otherwise it is back to the data-management issue, where large blocks of data must be archived and restored in order to minimize the investment in spinning disks.
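The arithmetic behind that figure is worth seeing once; the sketch below derives it from the pixel format alone, before any packing, audio or filesystem overhead.

```python
# Rough check of the 50-hour figure above: 1920x1080 at 24 fps, 10-bit 4:2:2.
width, height, fps = 1920, 1080, 24
bits_per_pixel = 2 * 10          # 4:2:2 = two samples per pixel, 10 bits each
hours = 50

bytes_per_frame = width * height * bits_per_pixel / 8
bytes_per_hour = bytes_per_frame * fps * 3600
total_tb = bytes_per_hour * hours / 1e12

print(f"{bytes_per_frame / 1e6:.1f} MB per frame, "
      f"{bytes_per_hour / 1e9:.0f} GB per hour, "
      f"{total_tb:.1f} TB for {hours} hours")
# ~5.2 MB/frame, ~448 GB/hour, ~22.4 TB of raw picture -- real-world sample
# packing, audio and metadata push the total past the 25 TB mark.
```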

Q: With HD technology changing so rapidly, how can a facility future-proof itself when it comes to technology decisions?
A: Manufacturers like the concept of planned obsolescence, while facility owners want their equipment to last as long as possible to maximize profitability; reality is somewhere in between. Facilities need to really study and understand their desired customer base with regard to those customers’ end-product requirements and cost expectations. In the early days of post-production there was a much larger piece of the revenue pie available for services. Today those potential dollars have dwindled, and it is critical to design facilities for maximum efficiency while minimizing the number of employees needed to get the work done.

Q: What is the biggest misconception clients have about HD post?
A: The biggest misconception clients have about HD post is that it’s not “rocket science.” I once gave a paper at a conference where I opened with a quick joke: “I happened to overhear a few NASA scientists discussing a problem with the Space Shuttle, when one of them commented, ‘It’s not that hard, guys – it’s not HD post-production!’”

With the advent of HD we now have to be concerned with various raster sizes, temporal rates, aspect ratios, recording formats and compression systems. Clients will frequently come to a session with a bag full of just about every possible format and say, “I want to generate a 24p master with a 60i delivery.” What makes it worse is the advent of HDV: we now have a so-called standard where almost none of the various manufacturers’ equipment will properly play the others’ tapes, there is no true 24p record mode, and there is no common playback machine with all of the necessary interfaces that will play every tape. Yet the format is becoming very popular, and clients have no idea what it takes to integrate the material into their projects.

For more information, visit www.mega-playground.com.