Imagining the Body in All Its Gory Glory for Trauma Drama

Jellyfish Pictures, a London boutique VFX house, has been around since 2001, quietly working on broadcast projects. But the company, founded by Philip Dobree and William Rockall, got recognition in the U.S. among VFX colleagues last month when it snagged the Visual Effects Society’s award for Outstanding Visual Effects in a Broadcast Series for its work on Fight for Life. Jellyfish, whose prior credits include work for BBC Television, Channel 4, The Walt Disney Company, Endemol, Universal and the Cartoon Network, provided all 300 visual effects for the six-part BBC One/Discovery/DCT co-production. The series focused on the body’s ability to fight life-threatening health issues from infancy through old age, and Jellyfish’s work enabled producers to go inside the body to examine trauma, all the way down to the microscopic level. In a four-month time frame, Jellyfish ramped up to as many as 30 people to create visual effects based on real traumas filmed over a two-year period in British hospitals. F&V‘s Debra Kaufman spoke to Dobree on the eve of the company’s VES win.

Click below to see a clip, or check the work out on the company’s website at www.jellyfishpictures.co.uk, or at http://www.bbc.co.uk/health/tv_and_radio/fightforlife_index.shtml. Warning: It’s not for the squeamish. And watch for the May issue of Studio Monthly, which will include a tutorial by Philip Dobree walking through a shot from the series.

F&V: How did you prepare for Fight for Life?
PHILIP DOBREE: What was important was that we spent a month or two upfront in R&D, which in TV is a long time. We used that time developing techniques, such as shader pipelines, so we knew what we’d do in compositing for every look. We created three different looks: trauma-vision, an X-ray kind of look; the photoreal/endoscopic shots; and the microscopic shots, which were the hardest.
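[Dobree doesn’t describe the trauma-vision shader itself, but X-ray-style looks are conventionally driven by a facing-ratio falloff: surfaces facing the camera go transparent while grazing edges glow. The sketch below is a generic, hypothetical illustration of that trick; the function name and falloff exponent are invented for the demo, not Jellyfish’s pipeline.]

```python
# Hypothetical facing-ratio falloff of the kind behind "X-ray" looks:
# transparent where the surface faces the camera, glowing at silhouettes.
import numpy as np

def xray_falloff(normals, view_dirs, edge_power=2.5):
    """Return per-sample edge intensity in [0, 1].

    normals, view_dirs: (N, 3) arrays of unit vectors.
    edge_power: higher values confine the glow to silhouette edges.
    """
    facing = np.abs(np.sum(normals * view_dirs, axis=1))  # |N . V|
    return (1.0 - facing) ** edge_power  # bright at grazing angles

# Toy usage: one sample facing the camera, one at a grazing angle.
n = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
v = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(xray_falloff(n, v))  # ~[0.0, 1.0]: center transparent, edge glowing
```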

With this in place, once we were given the shots in the heat of production, we knew how to set them up. We had all the models done and the textures set up. We used Softimage XSI for the complete workflow, relying on its modeling and animation tools along with a 3D paint tool for all the babies and organ elements. Syflex gave us the realistic secondary movement for fleshy areas. We also used Next Limit’s RealFlow [which just won a 2007 technical achievement award from the Academy of Motion Picture Arts and Sciences] for fluid-type watery movement, to make the microscopic images really wet-looking.
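[“Secondary movement” here means simulated motion layered on top of the keyframed animation. As a vastly simplified, hypothetical illustration of what a Syflex-style soft-body solver adds per vertex, a damped spring lagging behind the primary animation produces the overshoot-and-settle that reads as soft tissue. All constants below are made up for the demo.]

```python
# Minimal 1D sketch of secondary motion: a damped spring that lags
# behind keyframed positions, so hard moves jiggle and settle.
def secondary_motion(targets, stiffness=40.0, damping=6.0, dt=1.0 / 24.0):
    """Follow a list of keyframed positions with springy lag."""
    pos, vel = targets[0], 0.0
    out = []
    for target in targets:
        accel = stiffness * (target - pos) - damping * vel
        vel += accel * dt          # semi-implicit Euler integration
        pos += vel * dt
        out.append(pos)
    return out

# The primary animation snaps from 0 to 1; the secondary pass overshoots
# and settles, which reads as flesh rather than rigid geometry.
keys = [0.0] * 6 + [1.0] * 18
print([round(p, 2) for p in secondary_motion(keys)])
```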

What were the logistics of working with such a big production – and one based on real events?
The battle was to make the story succinct and clear while staying truthful. Four camera teams were on call for over two years to capture stories. In the field, some producers were so afraid of missing something in a trauma situation that they kept the camera on [throughout the trauma]. We might have 300 hours for one story that had to be edited down to 10 minutes.

We were also dealing with some producers who didn’t have a lot of experience with CGI. They didn’t understand that they had to commission their shots as early as they could. But that was hard for them, because they didn’t know what they were going to need exactly. It’s an actuality, so the script is constantly changing. The producers were dealing with all the footage they were trying to edit into a story, and we were trying to get commitments out of them as soon as possible.

Dovetailing the CGI into those stories was our challenge – and their challenge too. The CG had to be part of the narrative flow, so that when we went into the body, people had to believe there could be a camera in there. That was the primary aim. We always started outside of the body and traveled in. We’d get those in-and-out clips, and then we’d have 23 seconds or 15 seconds to fill where the CG shot would go.

We would start by building a wire-frame animatic – sometimes a very basic one – so they could get it into editing and see if it was working. They could make their changes at that point, on the animatic, while changes were still relatively easy. No way do you want to go to the lighting/rendering stage before they’ve committed for sure.

Then they’d come back, and hopefully we’d get some commitment. But because it was a co-production and there were doctors and hospitals involved, there was a whole other layer of approvals. The producers had to send the images out to the hospitals, to the executives, to the co-producers, to make sure that things were all right.

Another challenge was that we were doing all six shows simultaneously in four months – shows shot by six different directors and all being edited at the same time. That was the crunch. So we needed the pipeline we had set up to make things work for us.

What resources did you use to make it as scientifically accurate as possible?
We looked at a lot of horrible stuff on the Internet. Then we went to the butcher’s and bought stuff like lungs, hearts, livers, which we put in the freezer or the refrigerator. We took high-res pictures and even messed up the scanner with cling film (which didn’t quite work). We used all of this as reference for the light, the feel, the wetness, which was so important to making it look real.

What was the pipeline for lighting, texturing and rendering?
We rendered with Mental Ray, and our compositing tool was Shake, a good workhorse. Using the techniques we’d already developed, we broke down the shots into layers so that every shot we rendered had 18 different passes, among them specular, sub-surface scattering, global illumination, lighting, shadow, depth-of-field, reflection, a lot of shiny and wet passes, ambient occlusion, color, texture, an RGB lighting pass, and so on.
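[To make the pass breakdown concrete, here is a hedged sketch of standard multi-pass assembly – the kind of operation a Shake script performs per shot. The pass names and the combine recipe are generic compositing conventions assumed for illustration, not Jellyfish’s actual (and, as Dobree notes below, secret) formula.]

```python
# Generic multi-pass beauty assembly: additive lighting components,
# darkened by occlusion, with view-dependent passes added on top.
import numpy as np

def assemble_beauty(passes):
    """Rebuild a beauty image from separately rendered passes.

    passes: dict of pass-name -> (H, W, 3) float arrays.
    """
    base = passes["diffuse"] + passes["sss"] + passes["gi"]
    base *= passes["ao"]                     # occlusion darkens the base
    return base + passes["specular"] + passes["reflection"]

# Toy 2x2 frame: constant-value passes stand in for real renders.
shape = (2, 2, 3)
passes = {name: np.full(shape, val) for name, val in [
    ("diffuse", 0.4), ("sss", 0.2), ("gi", 0.1),
    ("ao", 0.9), ("specular", 0.15), ("reflection", 0.05)]}
print(assemble_beauty(passes)[0, 0])  # one pixel of the rebuilt beauty
```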

TD/visual effects artist Matt Chandler created some in-house software for putting the passes together. We knew how to combine the passes so it feels like there’s a kind of wet, slimy surface on top. It was a genius way of devising a texture map, but it’s an in-house secret. The shaders also rendered very quickly, so we avoided the normally expensive amount of render time.

[Compositing multiple passes] is pretty much the method people use nowadays, but it’s much less common on TV. We thought that working with so many passes would give more flexibility to the compositors and actually get a better result than a beauty pass with one or two additional passes. It means you’re putting more work on the compositor, but the benefit is that the compositor can also change the image instantaneously. Once the animation has been approved, the director can ask the compositors to make that a bit bluer or a bit more blurry, and they have all the tools to do that. If it’s just a beauty pass, there’s nothing much they can do with it.
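[The RGB lighting pass Dobree mentions is the classic trick that makes notes like “a bit bluer” cheap: three lights are rendered into the R, G and B channels of one image, and the compositor re-colors and re-balances each light without a re-render. The sketch below is an assumed, generic illustration; the light colors and names are invented for the demo.]

```python
# Relighting from an RGB lighting pass: each channel carries one light's
# grayscale contribution, recolored and summed in the composite.
import numpy as np

def relight(rgb_light_pass, light_colors):
    """Expand an (H, W, 3) light-ID pass into a lit (H, W, 3) image.

    light_colors: three (3,) RGB colors, one per channel/light.
    """
    out = np.zeros_like(rgb_light_pass)
    for channel, color in enumerate(light_colors):
        # Each channel holds one light's grayscale contribution.
        out += rgb_light_pass[..., channel:channel + 1] * np.asarray(color)
    return out

pass_img = np.random.rand(2, 2, 3)  # stand-in for a rendered pass
warm = relight(pass_img, [(1.0, 0.9, 0.7), (0.2, 0.2, 0.3), (0.1, 0.1, 0.1)])
bluer = relight(pass_img, [(0.8, 0.9, 1.2), (0.2, 0.2, 0.3), (0.1, 0.1, 0.1)])
print(warm[0, 0], bluer[0, 0])  # same render, two different gradings
```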

What were the most challenging images to create?
The microscopic world was very challenging. Normally you see it through an electron microscope, but we weren’t supposed to create that look. The microscopic world we created had to be photoreal, to feel like the camera was going down another layer deeper. But no one could tell us what it looked like, because the only way to see it is through a microscope. There was no reference. There was a huge amount of guesswork and creative license.

After Fight for Life, have you been typecast now as the go-to VFX house for inside the body?
Yes, that always happens. You can’t avoid it. We’ve got the assets, and we’ve had numerous inquiries.