From Tape to Disk to Flash, How Flexible, Multi-Tier Storage Systems Can Meet Ever-Increasing Performance Requirements
Few companies are as invested in the building blocks of media workflows as Quantum. Its StorNext file system is at the core of its Xcellis workflow storage systems, and its technology now includes all-flash options for accelerating performance and a tiering system that incorporates public and private cloud access with existing workflow. We spoke with Quantum's Dave Frederick, senior director of media and entertainment, about the state of the art in high-resolution workflow.
StudioDaily: I'm getting a sense that 4K is becoming the new baseline for post-production workflow, despite the fact that most work is still being delivered at HD and 2K.
Dave Frederick: You're correct in identifying the difference between production and delivery. I think some people were taking a wait-and-see approach, recognizing the question was not if 4K would become necessary for their workflows but when. It is a major shift, and most of the market was delaying their reaction to it. But at this point, everyone is pretty much resigned to the fact that, even if they're not delivering in 4K, they're going to start capturing in 4K, especially with the price of cameras. It would cost you almost as much to buy a non-4K camera as it would a 4K camera.
What trends are you seeing in the way Quantum's customers are adapting to these new, heavier capacity and bandwidth demands?
The first thing they realize is that it depends on how they want to work. Uncompressed 4K is a significant jump in performance requirements. Even lightly compressed 4K formats can be a bit of a struggle for disk systems that handle HD just fine. So the existing infrastructure starts to fall down, especially where you have multiple users on a shared storage system. We've just done a series of 4K performance tests that we're going to be publishing before the NAB show. There were over 1,000 tests in that matrix. And we found a tremendous roll-off on the number of streams you can get as you move up from compressed 4K to uncompressed 4K. The world flattens out at one or two streams for most normal disk systems that could easily handle many streams of HD. So you have to do something. Many storage manufacturers will say, "Just buy flash. That will solve your problem." In fact, our finding is that flash may not solve your problem if you have to work in 4K uncompressed. Flash is pretty good for compressed streams, and you can get a high number of streams, but the trade-off is capacity. There's the unholy triangle of 4K — you're constantly trading off capacity, cost and performance. As you move up into flash, obviously your cost goes higher and your stream count goes higher, but at a significant capacity hit. And with 4K, those files are naturally bigger.
The other approach is to get more spindles behind the problem and get more aggregate performance out of lower-cost spinning disks. We have good results in high-capacity systems for up to 7 or 10 streams of uncompressed with no more than 96 disks. That's like eight RU of storage arrays, and you can almost get into the double-digit stream count. We can scale out to dozens if not scores of arrays and generate a system that can handle as many streams as people need. So that's the challenge — flash can solve a problem, but it's not a one-for-one switchout. If you have three disk arrays, you can't just substitute three flash arrays and have the same capacity. A lot of people will be going back to almost a transfer model, in some cases. They might put direct-attached flash on an editing or coloring system and transfer files into that system to work on and then transfer them back out when they're done. It's a step backward in workflow.
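The stream roll-off Frederick describes is easy to see with back-of-the-envelope math. The sketch below is illustrative only: the frame geometry, bit depths, and the ~2000 MB/s sustained array throughput are assumptions for the example, not figures from Quantum's test matrix.

```python
def stream_rate_MBps(width=4096, height=2160, bits_per_sample=10,
                     samples_per_pixel=3, fps=24):
    """Data rate of one uncompressed video stream, in MB/s."""
    bits_per_frame = width * height * bits_per_sample * samples_per_pixel
    return bits_per_frame * fps / 8 / 1e6

rate_4k = stream_rate_MBps()  # uncompressed 4K, 10-bit RGB/4:4:4
rate_hd = stream_rate_MBps(width=1920, height=1080, bits_per_sample=8)

# Assume a shared disk array sustains ~2000 MB/s of real-world throughput.
array_MBps = 2000
print(f"4K stream: {rate_4k:.0f} MB/s -> {array_MBps // rate_4k:.0f} streams")
print(f"HD stream: {rate_hd:.0f} MB/s -> {array_MBps // rate_hd:.0f} streams")
```

Under these assumptions a single uncompressed 4K stream needs roughly 800 MB/s, so an array that comfortably serves a dozen HD streams flattens out at one or two 4K streams, which is why aggregating many spindles (or moving to flash) becomes necessary.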
That's an interesting dilemma. If people want that performance without upgrading their entire facility, they're going to start thinking about workarounds to get it only in the place where they need it. But then, all of a sudden, they're spending time moving data around in a way they thought they said good-bye to ages ago.
And that doesn't surprise me. Clients are going to demand the best quality pixels you can create for them. Whether it's a [film] post-production facility or broadcast television production, everyone wants to capture the best possible pixels and work in as close to full resolution as possible. So there may be a period of time where companies move back to a proxy workflow and do conforming again. But there are ways you can work in a disk-based system and still get all the performance you want, and there are ways to employ flash for the right things and still remain in a shared environment. We're solving those problems for customers all the time, and moving forward to 8K is a natural extension for us. We've already won contracts for 8K and are deploying those systems for use today. And we're doing research into what it's going to take to go further than that — to the next multiplier, which would be 16K.
It sounds a little bit crazy to be talking about 16K at this point. But that's what you've got to be thinking about. 8K is not common, but it's more than a science project. We've seen successful field tests.
People no longer laugh at you when you say 8K. They used to say, "Oh yeah, 8K. I'm not even using 4K yet." Well, 8K came right on the heels of 4K, compared to the jump from HD to 4K. I have a feeling that we're on a slippery slope of increasing resolution.
You were talking earlier about the fact that a lot of attention is paid to flash because of pure performance, even though spinning disk can get you to the same place. Does that also apply to 8K or even 16K?
At some point, it becomes impractical. If somebody said to us today, "I want 10 users to have three streams each of uncompressed 16K," we could come up with the number of disks it would take, and it would work — but is that really what you want to buy at this point in time? We're very close to the point where flash becomes a better value for the type of work we do. We're almost there. The only issue is capacity. But we're seeing larger SSDs coming along, and we're seeing faster SSDs. Will it happen before everyone feels they have to do 16K? I think it probably will. At some point, you'll have to have a special reason to outfit with spinning disks. But they will continue to be an affordable way to provide accessible archive, instead of tape. That's what object storage is — by aggregating a lot of spinning disks that aren't necessarily high performance, you can get a very scalable, self-healing lifetime archive. The world of primary, which is currently on fast disk, and the world of archive, which is frequently tape, will shift to flash and disk, respectively. Will it happen before users feel they have to go to 8K? It's going to be a close race. Sports tends to lead the pack in terms of format adoption, so we will see sports head there first, and they'll do what they have to do to get there.
We've been looking at 4K for a while, but are we approaching a breakthrough year?
The earliest examples of 4K workflows at NAB were three or four years ago, but for the previous two years we've been showing 4K on our systems. There was a noticeable increase in adoption of 4K workflows at the last NAB, and I think this year is when people who didn't do it last year have to get it figured out. The whole move toward SDI over IP is driving it as well. If you can move 4K around your studio or facility more efficiently by using IP, that's part of the equation. It's not just when things are files — it's even before they get to be files. Those things are coalescing, and customers at NAB are going to have a serious 4K agenda. We're going to be showing systems — no surprise — that are more efficient and can produce more streams from smaller configurations. We've always been a shared storage platform, so that helps organizations who have a heavy collaborative workflow. We'll have that and the full range of disk options, both spinning disk and flash.
What else do you see requiring consideration by customers this year?
Well, let's talk about the cloud. I think there was a bit of cloud euphoria when people first started using the cloud. Then they got their first bill and realized, "Maybe the cloud isn't going to work for everything." Companies are now using the cloud judiciously, depending on what their requirements are. They're not expecting to be able to do everything in the cloud, but picking what the cloud is right for. Early this year, we introduced a product called FlexTier, which is the ability for StorNext to automate the tiering of data either to customers' own public cloud accounts at Amazon, Microsoft, or Google, or to existing private cloud infrastructure they might have from third-party object-storage systems. That means they no longer have to create a separate workflow for submitting or retrieving content from the cloud if they're StorNext users. So that's been a good conversation for us to have with customers. They can procure capacity from whomever they like, and we give them an on-ramp and data-management path to take care of that. And by the way, we already have petabyte-scale customers using FlexTier to move content to the cloud and object-storage systems.
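The policy-driven tiering FlexTier automates can be sketched generically. Everything here is hypothetical — the tier names, the idle-time thresholds, and the Asset fields are invented for illustration and do not reflect StorNext's actual API or policy engine.

```python
from dataclasses import dataclass
import time

@dataclass
class Asset:
    path: str
    size_bytes: int
    last_access: float  # Unix timestamp of last read/write

def choose_tier(asset: Asset, now: float = None) -> str:
    """Route an asset to a storage tier based on how long it has sat idle."""
    now = now if now is not None else time.time()
    idle_days = (now - asset.last_access) / 86400
    if idle_days < 7:
        return "flash"    # active project: keep on primary storage
    if idle_days < 60:
        return "disk"     # recent work: nearline spinning disk
    if idle_days < 365:
        return "cloud"    # cold: public or private object storage
    return "archive"      # long-term retention tier

# An asset untouched for 90 days lands in the cloud tier.
a = Asset("/projects/show01/reel2.mov", int(120e9), time.time() - 90 * 86400)
print(choose_tier(a))  # -> cloud
```

The point of automating this decision, as Frederick notes, is that users never build a separate submit/retrieve workflow for the cloud — the file system moves data between tiers behind a single namespace.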
Object storage in general is going to become more interesting to people. We were talking about the transition from primary storage on disk to primary storage on flash. There will be a similar move to object storage. The adoption has been slower than many people expected in media and entertainment (M&E), but that's because the applications haven't been there. In the IT industry, they're just writing unstructured data and they don't have much of a workflow around that. In M&E, you're moving data in and out, and you're depending on it being in a certain state to move it to the next phase. Some MAMs and systems understand how to talk to object storage, but customers haven't fully embraced it.
What do you think is going to drive adoption of object storage?
I think it's the ecosystem. The ecosystem has always been the driver of adoption for all kinds of infrastructure. And M&E is uniquely application- and workflow-focused. Software applications, MAMs, editing systems — those are the primary decisions that the customer makes, and then they look to deploy infrastructure that lets those systems work together. With object storage, it's the same thing. Customers must have a use for object storage. As a content repository in CDNs, or OTT delivery, or broadcast playout staging, object storage is doing well. In production and post workflows, it hasn't gotten the same traction. But we'll see more and more of it. And when flash becomes primary storage, object-based storage may take its place as the secondary tier.
What's going to be different about NAB 2017 when it comes to storage workflow?
As we continue to go forward, storage capacity will become almost a given. It will just be available. It will cost money, but it will be such a small part of the equation that people will consider it a given. At that point, what matters is the management of data you put into that capacity. When it comes down to it, the value of the data is far greater than the cost of the capacity. That's why, at this NAB more than others, you're going to start to see a swing toward data management solutions as opposed to data storage solutions. When capacity outpaces the amount of work that can be done on the device, the questions are more about "what do I do with my data?" rather than "where do I put my data?" We're basically talking about factories — media factories — and factories are always looking to improve efficiencies. Inefficient processes cost money that doesn't need to be spent.