Shortly after Avere Systems released its first storage appliances in 2009, its products caught on with visual effects facilities, which saw big benefits in placing the company’s high-performance storage tier between their existing storage infrastructure and the render farms that needed fast access to large amounts of data. Avere’s FXT filers provided easy scalability for render farms while taking pressure off other parts of the network. Last month, Avere Systems said longtime customer Sony Pictures Imageworks was deploying Avere’s new FXT Edge Filer 5600, improving throughput by 50 percent without replacing any of its existing storage architecture, as part of a recent 20 percent expansion in rendering capacity, with plans to expand by another 20 percent over the next year. We spoke to Avere VP of Marketing Rebecca Thompson to get some background on how Sony is using the new hardware, the difference between filer clusters on premises and in the cloud, and how smaller studios can use the technology to spin up serious rendering power on demand.

Avere FXT 5000 Edge Filer series

StudioDaily: In Sony’s case, the FXT filers are basically being used as a high-performance layer between the render farm and the storage infrastructure, correct?

Rebecca Thompson: The primary purpose is to accelerate the performance of the render farm and to be able to scale out effectively. But at the same time, they want to make sure the artists’ workflow doesn’t get disrupted. The artists are accessing the same storage servers, but the renders are so resource-intensive that if you don’t think about the architecture carefully you can end up starving out your artists: the renders go on in the background and the artists can’t access anything. Render farms don’t pick up the phone and call to complain. Producers will complain if their stuff’s not getting done on time, and artists will pick up the phone and complain if they can’t get their editing and compositing done.

We love the Sony story because they have been a long-term customer of Avere’s. They were one of our first production customers in the media space back in 2010, and as they’ve grown, we’ve grown, too. Their render farm was probably about a quarter of what it is now, but all along the way they have been a repeat customer on a pretty consistent basis. I know they are excited. The last one they put in was our new hardware, the 5600, which is our high-end model with a 4x improvement in SSD. We went from 14 TB of SSD to 29 TB of SSD in that box, and read throughput went from close to 7 gigs up to 11 gigs.

That’s fast. And it’s nice that you can put this in without completely reinventing your architecture.

That’s one of the things that we are conscious of every time we come out with a new model. Our models work in a clustered fashion, so a customer can have anywhere from three to more than 25 nodes in a single cluster. Let’s say you have a cluster of 10 boxes. You want to put in three new nodes. You don’t have to take anything down. They will just auto-join. They don’t have to be the same models. And that’s really nice for customers. They can keep their older Avere gear and make use of that, and then drop in the new stuff and get the advantages, and everything works well and plays well together.

If customers are using a mix of on-premises and off-premises storage, or are using some cloud capacity for storage or rendering, can they also take advantage of this technology to increase their throughput when they need it?

Absolutely. Sony has an infrastructure that’s probably typical of larger VFX studios. They have a large data center in Culver City, but a lot of their production work is done up in Vancouver. They have Avere clusters at their remote sites as well as within the data center, and the remote sites do WAN caching. You have all the data local to Vancouver, but you also have copies back in the L.A.-based data center. That’s the way they’re using it.

Now, we have other customers, particularly in the rendering space, who do something that we call cloud-bursting. That’s where they want to use cloud compute nodes rather than cloud storage. We have customers who work with both Google and Amazon Web Services [AWS], and they are probably split evenly; rendering is one area where I think Google has done better and made more inroads in the M&E space. So we have a virtual version of our product. Instead of the physical product, it’s our Avere operating system in a virtual format, residing in the cloud on a platform that we specify. We have a couple of different flavors for each cloud provider, each requiring a certain amount of SSD capacity and memory, and the software resides on those instances and acts as a caching layer.

That allows people to keep their data on premises. Let’s say you have Isilon or NetApp or whatever storage hardware. You can send to the cloud only the amount of data you need to render, render it, and send it back on premises. A lot of studios are reluctant to store data in the cloud over the long term. Sony is very vertically oriented, making their own movies and doing their own VFX work. But a lot of the VFX studios are doing contract work on projects like the Disney Marvel movies, where there are a lot of restrictions in place around security. You want to make sure the movies don’t leak out before release. So we actually have customers who have physical nodes of ours for use on premises, and then they’ll spin up more [in the cloud].
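To make the “send only what you need, render it, bring it back” idea concrete, here is a minimal sketch of the general cloud-burst data flow. It is not Avere’s mechanism (a vFXT cluster caches on-premises data transparently, so render nodes don’t copy files by hand); the bucket name, paths, and helper functions are hypothetical, and it assumes AWS S3 via boto3 purely for illustration.

```python
"""Minimal sketch of the 'send only what you need, render it, bring it back'
pattern. Not Avere's mechanism; bucket name, paths, and helpers are
hypothetical and exist only to illustrate the data flow."""
import boto3
from pathlib import Path

s3 = boto3.client("s3")
BURST_BUCKET = "studio-render-burst"  # hypothetical, short-lived bucket

def stage_scene_to_cloud(scene_dir: Path) -> None:
    """Upload only the assets this render job actually needs."""
    for asset in scene_dir.rglob("*"):
        if asset.is_file():
            key = f"in/{asset.relative_to(scene_dir)}"
            s3.upload_file(str(asset), BURST_BUCKET, key)

def fetch_frames_back(frame_keys: list[str], out_dir: Path) -> None:
    """Pull finished frames back on premises."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for key in frame_keys:
        s3.download_file(BURST_BUCKET, key, str(out_dir / Path(key).name))

def purge_cloud_copies(prefix: str = "") -> None:
    """Delete the temporary cloud copies so nothing lingers after the job."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BURST_BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            s3.delete_object(Bucket=BURST_BUCKET, Key=obj["Key"])
```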

Interaction between Avere FXT and storage on premises and in the cloud.
Source: Avere FXT Data Sheet

The great thing about Avere vFXT nodes is that they can be spun up on an as-needed basis. We have some large special effects providers who do it just for overflow. They don’t have enough render nodes in their data center, and they don’t want to buy more for just one month of extra capacity they may need, so they look to cloud providers. And we have some little studios that don’t really have any infrastructure: they’re all cloud-based, and they’re spinning up on an on-demand basis as well. For example, Eight VFX is a small VFX shop that primarily does commercials, and the majority of their infrastructure is in the cloud.
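The spin-up/spin-down lifecycle itself is easy to sketch. The example below assumes AWS EC2 via boto3, with a placeholder machine image and instance type; a real vFXT deployment is done through the cloud provider’s own tooling, so this only illustrates the overflow pattern for generic render workers.

```python
"""Sketch of bursting render capacity up and tearing it down on demand.
Assumes AWS EC2 via boto3; the AMI ID and instance type are placeholders."""
import boto3

ec2 = boto3.client("ec2")

def burst_up(num_nodes: int) -> list[str]:
    """Launch temporary render workers for an overflow job."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder render-node image
        InstanceType="c5.24xlarge",       # placeholder CPU-heavy type
        MinCount=num_nodes,
        MaxCount=num_nodes,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "render-burst"}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def burst_down(instance_ids: list[str]) -> None:
    """Terminate the workers as soon as the job finishes, so billing stops."""
    ec2.terminate_instances(InstanceIds=instance_ids)
```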

It’s interesting to see facilities on the smaller side figuring out they can simply spin up instances of what they need virtually and then spin them down again rather than keeping their own systems.

A lot of the smaller guys feel that this allows them to be competitive, especially when they’re bidding on a job. They might be bidding on a movie that has to be done in a tight timeframe, and before, if they didn’t have the infrastructure, they’d have to rent it; somebody would have to drive over truckloads of servers. But this allows them to be confident that they can handle the workload. Sony clearly states that one of the things they love about Avere is the fact that they can use the cloud if they want to. But they’ve done the math and, because they have such a huge physical infrastructure and own so many resources, the cloud actually is more expensive for them at this point.

Yeah, for a lot of the bigger guys it is not delivering the cost savings that were originally promised.

You look at Sony, and I’m told they’re running their render farm pretty much 24/7 at 95% utilization.

So they’re already making good use of their resources.

Exactly. And a lot of these smaller shops would have more idle time, so owning all that hardware wouldn’t make a lot of sense. But we work with AWS and Google, and their costs are coming down all the time. I don’t imagine it will stay that way forever.
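The utilization argument is easy to see with back-of-the-envelope math. The figures below are invented for illustration only; they are not Sony’s costs or any provider’s actual pricing, and the break-even point will shift as cloud prices fall.

```python
"""Illustrative utilization math with made-up numbers (not real pricing)."""

CLOUD_RATE = 3.00     # hypothetical $/hour for a comparable on-demand render node
OWNED_HOURLY = 1.20   # hypothetical $/hour for an owned node (purchase amortized
                      # over three years, plus power, cooling, and admin)
HOURS_PER_YEAR = 24 * 365

def yearly_cost(utilization: float) -> tuple[float, float]:
    """Return (owned, cloud) cost per node-year at a given utilization."""
    owned = OWNED_HOURLY * HOURS_PER_YEAR               # paid whether the box is busy or idle
    cloud = CLOUD_RATE * HOURS_PER_YEAR * utilization   # paid only for busy hours
    return owned, cloud

for u in (0.95, 0.25):
    owned, cloud = yearly_cost(u)
    print(f"{u:.0%} utilization: owned ~${owned:,.0f}/yr vs. cloud ~${cloud:,.0f}/yr")
```

With these assumed rates, a node kept busy 95 percent of the time is far cheaper to own, while a node that sits idle most of the year is cheaper to rent on demand, which is the trade-off described above.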

It’s always interesting to think about the kind of calculations that have to be made. Does on-premises stuff get cheaper as quickly as cloud stuff gets cheaper? And is the equation impacted by how big your deliverables are?

And what kind of control do you want to have over them? And there are also labor and real-estate costs. But one of the things the cloud people can’t offset is tax credits. You don’t get tax credits for using a cloud provider, but you get tax credits for the people who are running your data center.

Media is really interesting. We sell into other areas as well. If you look at life sciences, a lot of people are using cloud, but they’re not bidding on movies and they don’t have crunch production times, so they’re doing more of a steady-state analysis. They have a lot of sharing among research institutions, so it makes sense to put data in the cloud where other people can access it without coming behind your firewall. But in media and entertainment it’s a whole different set of considerations.

Avere Systems: www.averesystems.com