Like the industry itself, NAB is always evolving. Trends in content consumption drive decisions in hardware and software engineering, and the reverse is also true — the priorities of equipment vendors and service providers help dictate what consumers can and can’t tune into at home. Consolidation has been a never-ending story in the industry — most recently we saw Snell Advanced Media merge with Grass Valley — and that, too, has an impact on the tools available to creators and the breadth of their capabilities. Some of the trends shaping NAB are evergreen, while others wax and wane in different years, but they all have the potential to impact the way media is made. Here are five practical issues that will drive innovation and generate conversation on the show floor this year.


Red Monstro 8K sensor

4K and Beyond. There’s an interesting dynamic in today’s multichannel environment. Disruptive OTT providers like Netflix and Amazon have embraced image-enhancing formats like UHD and HDR, while traditional broadcasters have been reluctant to move beyond HD. Why is that? Money, of course. The broadcast industry as a whole is still working to recoup its investment in the transition from analog SDTV to digital HDTV, and is in no real hurry to embrace another big upgrade in image delivery. That means vendors are starting to offer more affordable transitional gear, like lower-spec (but still high-quality) cameras and lenses or lower-capacity (but highly scalable) storage systems, designed to help ease the transition. That’s not to say there’s anything like a shortage of high-resolution cameras — Red sensors go up to 8K, Sony’s new Venice is billed as a 6K camera, Panasonic has one that goes to 5.7K, and even ARRI, always reluctant to get into the resolution race, has extended its Alexa line-up to include a 4K large-format sensor. And those cameras are being put to use, whether it’s to future-proof an HD deliverable with a 4K master copy, to provide ample options for reframing the image in post, or just to eke out a little bit more picture quality for VFX or for downsampling to a final resolution. The larger pictures strain post-production budgets, which is both a challenge and an opportunity for vendors who have to figure out how to service the expanded needs without compromising quality or compressing margins beyond the point of sustainability. But customers have more affordable workflow options than ever before.
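To put the storage strain in rough numbers, here is a back-of-the-envelope sketch for uncompressed 10-bit RGB frames. Real workflows shoot compressed camera raw or mezzanine codecs, so treat these as upper bounds rather than typical figures — but the ratio explains why a 4K master multiplies capacity needs by more than four over HD.

```python
# Back-of-the-envelope storage math for uncompressed 10-bit RGB frames.
# Ignores container overhead and real-world compression; upper bounds only.

BITS_PER_PIXEL = 3 * 10  # RGB, 10 bits per channel

def frame_bytes(width, height, bits_per_pixel=BITS_PER_PIXEL):
    """Bytes for one uncompressed frame."""
    return width * height * bits_per_pixel // 8

def hourly_terabytes(width, height, fps=24):
    """Terabytes of storage for one hour of footage at the given frame rate."""
    return frame_bytes(width, height) * fps * 3600 / 1e12

hd_per_hour = hourly_terabytes(1920, 1080)     # roughly 0.67 TB/hour
uhd4k_per_hour = hourly_terabytes(4096, 2160)  # roughly 2.87 TB/hour
```

The pixel-count ratio between 4096×2160 and 1920×1080 is about 4.27, and every tier of the pipeline — ingest, nearline, archive — inherits that multiplier.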


Axle AI Hub

Artificial Intelligence. Machine learning is coming on fast in the content production market. We’ve heard for a while now about AI projects that use complex pattern recognition and neural networks to enable computer-generated prose, imagery and animations — interesting conversation-starters, to be sure, if not legitimate art in their own right. But now, as those same techniques make their way into tools meant to help human creators rather than replace them, the implications are huge. At Nvidia, GPU-accelerated AI is dramatically speeding up ray tracing by using machine-learning techniques to fill in missing information from an in-progress render rather than waiting for the full, rather arduous, computation to finish. AI is also driving noise-reduction techniques that generate cleaner results without touching meaningful detail in images. In the editorial suite, speech-to-text technology (and, sometimes, the reverse) is more accurate than ever, and facial-recognition technology is being leveraged to make color correction faster. And Axle launched the AI Hub, an integrated media-cataloging solution with 4 TB of storage for proxy media and face-, object- and speech-recognition technology, available through a monthly subscription. AI can help you quickly and cleanly remove all evidence of your camera rig from a 360-degree video; it can also dig through hours of documentary footage to find that one last quote you just realized you need to tie an argument together.
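That documentary use case ultimately boils down to searching time-coded transcripts. A minimal sketch, assuming a hypothetical speech-to-text pass has already produced transcripts keyed by clip — a shipping product would add fuzzy or semantic matching on top of this literal substring search:

```python
def find_quote(transcripts, phrase):
    """Search time-coded transcripts for a phrase.

    transcripts: {clip_id: [(start_seconds, text), ...]} — the assumed
    output shape of a speech-to-text pass over the footage.
    Returns (clip_id, start_seconds) for every matching segment.
    """
    phrase = phrase.lower()
    return [
        (clip_id, start)
        for clip_id, segments in transcripts.items()
        for start, text in segments
        if phrase in text.lower()
    ]

# Example library: two clips, two segments containing the quote we need.
library = {
    "interview_A": [(12.0, "We never expected it to work."),
                    (47.5, "The budget was the real obstacle.")],
    "interview_B": [(3.2, "Honestly, the budget was fine.")],
}
hits = find_quote(library, "the budget")
```

The point is that once speech recognition has done the expensive part, the "find that one quote" problem becomes an ordinary, instant search over metadata instead of hours of scrubbing.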

Cloud. Cloud tiers for data storage look more critical than ever now that AI is becoming genuinely useful. Machine-learning services can be run at massive scale in the cloud, allowing a content owner to upload entire libraries of footage for metadata enrichment — the AI can churn through clips and images to identify people, buildings and locations, and trace other connections between superficially disparate clips. This is good news for news organizations and others who can always make good use of repurposed content, and it’s even better news for anyone seeking to monetize a collection of otherwise uncategorizable clips. That will help drive new users to the cloud alongside existing customers, such as VFX facilities that spin up massive render farms on a pay-by-the-minute basis, studios that scale their capacity up during peak periods, and content owners who keep a permanent tier of object storage on premises or off. Of course, those who are already in the cloud or considering the move will be looking for storage workflow solutions that largely automate the process of moving data to and from the cloud, easing the administrative burden of efficient-but-complex multi-tier workflows.
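The automation in question is often little more than a policy engine deciding which tier each asset belongs in. A minimal sketch of an age-based rule — the 30-day threshold and the hot/cold split are hypothetical; production systems layer in access frequency, project status and per-tier cost models:

```python
import time

ARCHIVE_AGE_DAYS = 30  # hypothetical policy threshold

def tier_for(last_access_ts, now=None, archive_age_days=ARCHIVE_AGE_DAYS):
    """Return 'cold' for assets untouched longer than the threshold, else 'hot'."""
    now = time.time() if now is None else now
    age_days = (now - last_access_ts) / 86400
    return "cold" if age_days >= archive_age_days else "hot"

def plan_migration(assets, now=None):
    """assets: list of (path, last_access_ts) pairs.

    Returns the paths that should move to the cold tier.
    """
    return [path for path, ts in assets if tier_for(ts, now) == "cold"]
```

The planner's output would then feed whatever actually moves the data — an object-storage upload, an LTO write, or a vendor's own API — which is exactly the step these workflow products automate.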

HDR. Relatively few movie theaters have the projection equipment required to screen films at the eye-popping brightness levels demanded by the Dolby Cinema format, but HDR is already a fixture on the home-theater scene, where Netflix and Amazon both offer high-end original programming in HDR UHD and high-end streaming boxes and UHD Blu-ray Disc players support the format. So most studio tentpoles — and a few other titles, including some older films — are being graded in HDR, though the process isn’t ubiquitous, and many producers, directors and DPs still have questions about the best way forward. One difficulty is the challenge of finding affordable ways to monitor HDR on set under less-than-ideal lighting conditions. And broadcasters have to worry about providing HDR in a live environment, sometimes with SDR elements in the mix. These problems have solutions — AJA has a nifty FS-HDR box that adds SDR/HDR conversion to the usual functions, and Sony has developed an “instant HDR” workflow for quick-turnaround HDR production — but they are first-generation solutions by nature. This should be the show where we start seeing expanded options for basic HDR workflow. As just one example, see Apple’s announcement last week of ProRes RAW, a raw version of its widely used ProRes codec, with support from Atomos’ 1,500-nit Shogun Inferno providing an out-of-the-box monitoring option for HDR acquisition from cameras including Panasonic’s AU-EVA1, Sony’s FS5 and FS7 and Canon’s C300 Mark II and C500.
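Under the hood, conversion boxes like these are remapping between transfer functions. For a flavor of the math, here is the SMPTE ST 2084 "PQ" inverse EOTF, which maps linear light to an HDR signal value; the constants come straight from the standard, and this sketch deliberately omits the color-volume mapping a real converter also performs:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: linear light -> nonlinear signal.
# y is display luminance normalized to the 10,000-nit PQ ceiling,
# so 100 nits corresponds to y = 0.01. Constants are from ST 2084.

M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_encode(y):
    """Map normalized linear luminance y in [0, 1] to a PQ signal in [0, 1]."""
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2
```

Note that 100-nit diffuse white lands near code value 0.51 — which is why SDR content dropped into an HDR chain without conversion looks dim, and why live mixed SDR/HDR production needs boxes to do this remapping in real time.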

IP Video. Even if 4K broadcasts can be put off for the moment, the IP transition can’t be safely ignored — it may be the single biggest change taking place in the industry right now, promising more flexible, scalable and agile production techniques. That means more users than ever will come to NAB this year looking for ways to transition their SDI equipment to a hybrid IP/SDI environment — or to build a new studio entirely around IP technology. Now, with the SMPTE ST 2110 standard in place, expect options for production-ready real-time IP workflow to multiply. NAB will once again host an IP Showcase, where more than 60 manufacturers and eight industry organizations will demonstrate a live, all-IP studio producing two hours of live programming every day of the show as formerly divergent approaches to IP video come together under the 2110 umbrella. NewTek recently released NC1 Studio, an I/O module that it said makes its IP Series products and TriCaster TC1 systems compatible with SMPTE ST 2110, and Sony said its NMI system for IP video complies with the new standard, with legacy products to be updated for 2110 support via firmware upgrades. “If you’re building a facility,” Sony Senior Manager of IP Production Technology and Sports Solutions Deon LeCointe told reporters at a recent press briefing, “IP is the way you truly want to go.”