Rushes Creates 400 VFX Shots of the Human Body Inside and Out

For the Discovery series Human Body: Pushing the Limits, which examines the intricate processes of the human body under a variety of ordinary and extreme conditions, Rushes in London created all the CG and VFX, allowing the project to transition seamlessly between live-action actors and realistic versions of the actors’ bodies, sans skin, and to push further into the muscles, veins and cells of the body. With over 400 VFX shots comprising 80 minutes of footage, all of which needed to be perfectly biologically accurate, Rushes created a highly complex master rig with controls and flexibility that could be applied to all the many characters in the series. Rushes used Autodesk Maya, Apple Shake, Eyeon Fusion, RenderMan and a host of in-house shaders.

We spoke with VFX supervisor Hayden Jones and VFX producer Louise Hussey of Rushes about the tricks and techniques employed to deliver all these shots in HD under the tight time and budget constraints.

What was the approach for this project and to the models?
Hayden Jones: We created a human anatomy model that we could place under the skin of live-action actors and transition back and forth between live action and this animated anatomy. At first we weren’t sure how realistic it was going to be. We did a lot of research and development into the look of the physical human. There were lots of look tests, which we sent to Discovery to settle on how visceral it was going to be. What they didn’t want was something that would put off viewers, something too gory. So we calmed down the level of detail a bit and went for a slightly transparent look, which in HD looks fantastic because it increases the amount of physical detail in the models.

Louise Hussey: They wanted us to create a sense of wonder at the beauty of the human body and for that to be the first thing that people got when they looked at it rather than have them recoil from the screen because it was too lifelike.

Explain the pre-production process and how you worked with the production company to develop the shots.
LH: Before shooting the live action, we spent a lot of time working with animatics so we could explain the best ways to achieve what they wanted in-camera, which would give us the most flexible plates.

HJ: That was a really important process. Obviously we had to design the shots for maximum visual impact, but shots like this can become horribly massive and eat up budgets rather quickly. So each animator here was given a sequence of shots.

LH: The pre-vis was there for multiple reasons: so the production had a sense of what we could do with the models, what angles worked best and what would be extremely difficult to do in the animation. The animatics also provided a guideline for the edit, so that they had something to use as placeholders for their cuts. Then the supervisors went out on set and worked with the director. We also sent them out with a kit of tools so they could gather reference for us.


What sort of reference data was collected on set?
HJ: A lot of the reference taken on set wasn’t just to do with the actors themselves, although that was a huge part of it. Initially we were tasked with taking just one person and transforming them into this visible human, but as production went on it quickly became apparent that there were many actors and we’d have to fit the model to all of them. So the supervisors took detailed measurements of all the actors we were going to transform: arm lengths, leg lengths, height, skull size, so that we could craft the CG model directly to each specific actor.

Also, one of the things we had to do was capture lighting reference. This was very important to making our high dynamic range lighting pipeline work, and that was a first for Rushes.

Talk about your HDRI pipeline.
LH: It was a pipeline we developed in-house to allow us the most efficient use of the material we had. There was a relatively short schedule and we had to turn this around efficiently, without spending days and days getting the lighting set up for each shot. So we developed a pipeline that would allow us flexibility and also streamline the last stage of animation into the shots. The lighting reference was integral to that whole pipeline.

HJ: On set, one of our supervisors would go out with a digital SLR and a chrome sphere. He’d take multiple exposures of the chrome sphere, from very underexposed to very overexposed and everything in between. When that came back, we put it into some in-house software that combines it all together and spits out an HDRI image, so we get back about 20 stops of dynamic range. That goes into our RenderMan shading pipeline, patched to custom lighting shaders we had written in-house, and that process nails the lighting really quickly. It was especially important here, where we were dealing with transparent animated objects, because there are so many variables and unknowns. With this you achieve about 80 percent of the lighting work in virtually one button push: you convert your HDRI plates in the system, it gets really close, and then the lighting TDs tweak it and get some extra work in for artistic purposes.
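Rushes’ exposure-merging tool was built in-house, but the same idea can be sketched with OpenCV’s Debevec calibration and merge. The file names and exposure times below are illustrative assumptions, not the studio’s actual setup:

```python
# Minimal sketch of merging bracketed chrome-sphere exposures into an HDR
# light probe, standing in for Rushes' in-house tool. Paths and exposure
# times are hypothetical.
import cv2
import numpy as np

# Bracketed shots of the chrome sphere, darkest to brightest.
paths = ["sphere_-4ev.jpg", "sphere_-2ev.jpg", "sphere_0ev.jpg",
         "sphere_+2ev.jpg", "sphere_+4ev.jpg"]
images = [cv2.imread(p) for p in paths]

# Exposure times in seconds for each bracket (illustrative values).
times = np.array([1/1000, 1/250, 1/60, 1/15, 1/4], dtype=np.float32)

# Recover the camera response curve, then merge into a single
# floating-point radiance map covering the full dynamic range.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Write a Radiance .hdr file that a lighting pipeline could pick up.
cv2.imwrite("sphere_lightprobe.hdr", hdr)
```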

This was all hand-animated? Was there any motion capture?
HJ: All hand-animated. The reason there was no motion capture is that there was such a wide combination of locations and actors that there was really no time to pull all the actors together on one continent, let alone onto a motion capture stage. We rotoscoped what movement we could off the plates and the rest was hand-animated.

We had one master rig that held all the controls. There was quite a lot of scripting to keep the muscles bound to each other, which took a considerable amount of time. Every other model was then a simple rescale of that rig. It took a bit of time to get the rig to scale perfectly to different body sizes, and to get the muscles and organs to scale relative to each other in space.
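A rescale pass like that might look something like the following Maya Python sketch, fitting the master rig to one actor’s on-set measurements. The joint names, attribute layout and numbers are all hypothetical assumptions for illustration:

```python
# Sketch of fitting a single master rig to a specific actor's measurements.
# Joint names and measurement values are hypothetical.
import maya.cmds as cmds

# On-set measurements for this actor, in centimetres (illustrative).
actor = {"height": 183.0, "arm_length": 62.0, "leg_length": 96.0}

# Measurements the master rig was originally built to.
master = {"height": 175.0, "arm_length": 58.0, "leg_length": 90.0}

def fit_rig_to_actor(root="anatomy_rig_root"):
    # A global scale gets the overall height in the right ballpark...
    g = actor["height"] / master["height"]
    cmds.setAttr(root + ".scale", g, g, g, type="double3")

    # ...then per-limb scales refine arm and leg proportions so muscles
    # and organs stay correctly placed relative to each other.
    for side in ("L", "R"):
        arm = (actor["arm_length"] / master["arm_length"]) / g
        cmds.setAttr("%s_upperArm_jnt.scaleX" % side, arm)
        leg = (actor["leg_length"] / master["leg_length"]) / g
        cmds.setAttr("%s_upperLeg_jnt.scaleX" % side, leg)

fit_rig_to_actor()
```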

LH: One of the other things we did with the rig was introduce parameters, designed in-house, that would flag up and start to flash if an animator was trying to push the body into a position that was physiologically impossible. On occasion, because of the dire circumstances these poor actors found themselves in, the body had to be pushed in a certain direction, so we didn’t want animators to be unable to do it; we just wanted to alert them that they were doing something that wasn’t quite right, so we kept within the boundaries in the animation. That was fantastically helpful, because the animators weren’t having to constantly refer to biological books to see what they could and couldn’t do, how far a shoulder would bend back.
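The essence of that soft-limit warning can be sketched as a simple range check run on every pose. The joint names and limit values below are hypothetical placeholders, not the studio’s actual anatomical data:

```python
# Sketch of anatomical range checking: warn rather than hard-lock, so
# animators can still exceed limits deliberately when a shot demands it.
# Joints and limits are hypothetical.
import maya.cmds as cmds

# Approximate physiological rotation ranges in degrees (illustrative).
LIMITS = {
    "L_shoulder_jnt.rotateX": (-60.0, 180.0),
    "L_elbow_jnt.rotateY": (0.0, 150.0),
    "L_knee_jnt.rotateY": (-140.0, 0.0),
}

def check_pose():
    """Warn about any joint pushed outside its anatomical range."""
    for attr, (lo, hi) in LIMITS.items():
        value = cmds.getAttr(attr)
        if not lo <= value <= hi:
            # Soft warning only: the shot may genuinely call for this pose.
            cmds.warning("%s = %.1f outside anatomical range [%.1f, %.1f]"
                         % (attr, value, lo, hi))

# Re-run the check whenever the animator scrubs to a new frame.
cmds.scriptJob(event=["timeChanged", check_pose])
```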

HJ: One of our TDs was put on the job of creating the master rig. One of the things we used to drive our muscle system was a plugin called Comet Muscle, which is now one of the official plugins for Maya. That is a fantastic plugin that helped with the speed of simulating large muscle groups. The other thing was a lot of custom scripts. The one problem with having a single rig is that it becomes really slow really quickly, and it is slow to animate with. So we built controls into the master rig with numerous levels of detail, so animators could switch off whichever systems of the body they wanted to get realtime feedback, and when it came to rendering we switched all the systems back on again.
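That kind of per-system level-of-detail switch can be as simple as toggling visibility on the rig’s top-level groups. The group names here are assumed for illustration:

```python
# Sketch of per-system LOD switching on one master rig: hide heavy body
# systems for realtime playback, show everything for rendering.
# Group names are hypothetical.
import maya.cmds as cmds

SYSTEMS = ["muscles_grp", "organs_grp", "veins_grp", "skeleton_grp"]

def set_lod(visible_systems):
    """Show only the listed body systems; hide the rest."""
    for grp in SYSTEMS:
        cmds.setAttr(grp + ".visibility", grp in visible_systems)

# Animation mode: skeleton only, for realtime feedback.
set_lod(["skeleton_grp"])

# Render mode: every system back on.
set_lod(SYSTEMS)
```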


How did you handle the shots inside the body?
HJ: We went into the body and used the same model you see on the exterior. We also came up with a neat trick of turning electron microscope views into pseudo-3D. We worked with the Science Photo Library in the UK, which has a fantastic collection of electron microscope photography. We got some 4K scans, and in Fusion we created displacement maps for them to create the illusion that they were 3D under camera moves. It’s a really nice effect that stops a still from looking like a still.
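Rushes built those displacement maps in Fusion, but the underlying idea, treating image brightness as height, can be sketched in a few lines of Python. The file names are hypothetical:

```python
# Sketch of deriving a displacement (height) map from a greyscale
# electron-microscope scan, so a still can be pushed into pseudo-3D
# under a camera move. Rushes did this in Fusion; this numpy/PIL
# version just illustrates the idea. File names are hypothetical.
import numpy as np
from PIL import Image, ImageFilter

# Load the 4K scan as greyscale; brighter pixels read as closer to camera.
scan = Image.open("em_scan_4k.tif").convert("L")

# Blur slightly so the displacement stays smooth rather than noisy.
height = np.asarray(scan.filter(ImageFilter.GaussianBlur(radius=3)),
                    dtype=np.float32) / 255.0

# Normalise to the 0-1 range a displacement shader typically expects.
height = (height - height.min()) / (height.max() - height.min())

# Save as 16-bit to preserve smooth gradients in the displacement.
Image.fromarray((height * 65535).astype(np.uint16)).save("em_displacement.png")
```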