Effects Supervisors Talk About the Scenes That Made Them Sweat on Summer Blockbusters
Summer is high season for showcasing effects work, the kinds of stunts
that rarely make it onto the radar of critics. This summer is no
exception, so Film and Video pans right by the
sweaty (and/or green) leading men to zero in on the teams that did the
major heavy lifting on key scenes in three of the season's biggest films.
The Day After Tomorrow creates a devastating picture
of a global-warming catastrophe. We look at the techniques used for two
sequences that set the stage: the five-minute opening and the Tokyo
hailstorm. Both were created at Hydraulx, a young,
innovative studio that believes no artist can have too many machines.
PDI's head of effects Arnauld Lamorlette describes techniques used to
wrestle state-of-the-art computer graphics into production-manageable
tools for creating fairy-tale human characters and storybook
environments in PDI/DreamWorks' Shrek 2.
And for Universal Pictures' The Chronicles of
Riddick, Mike Wassel, visual effects supervisor at Rhythm
& Hues, reveals how the studio worked with fluid simulations to
generate natural, albeit alien, environments.
Big CG at a Small Studio: The Day After Tomorrow
When visual effects supervisor Karen Goulekas hired Hydraulx for The
Day After Tomorrow, the studio's credits included Torque, Terminator
3, and Looney Tunes. "I initially gave
them the ice-shelf sequence, but then I kept throwing more and more
work their way," Goulekas says. "They ended up with about 109 shots.
The ice-shelf sequence is the longest fully digital shot in a film, I
think, and it’s beautiful. It’s the first fly-over of the icebergs. The
ice cracks, a crevasse forms, and a chunk falls off." That starts a
chain of events that results in the dramatic climate changes through
the rest of the film.
The sequence begins with two minutes – 3800 frames – of an entirely CG
fly-over of Antarctica, then cuts to images of a base camp covered with
snow that was extended to the horizon with CG. The entire sequence ran
nearly five minutes. During filming, the on-set base camp was
surrounded with blue-screen; a blue-screen tarp on the ground
substituted for the crack that would rip through the middle of the
camp. "The cloth showed the actors where not to step," says Greg
Strause, co-founder of Hydraulx with his brother Colin. Hydraulx used
Discreet’s Combustion for rotoscoping and cleaning up tracking marks,
and 2d3’s Boujou for tracking.
Hydraulx first built the ice shelf and the crevasse as a foam miniature
to match the pre-vis and in-camera shots. The miniature was then
scanned with a Polhemus scanner. The resulting point-cloud data was
converted into polygonal data with Headus’ CySlice software, then
imported into Maya. Because they would render beauty passes in Mental
Ray and matte passes in Maya, two programs that calculated displacement
maps differently, the modelers put all the physical detail for the ice
shelf and the crevasse into the model rather than displacement maps.
Animators, however, used simpler geometry when they caused chunks of
ice to break off the shelf and fall into the water.
"Because the cut was already locked when we started working, we had the
overall timing," Strause says. "If we had used dynamics to make the
pieces fall, they would have been different every time and not what the
director wanted, so 85 to 90 percent of the animation was done by
character animators. They could cheat the speed of gravity."
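Strause's point about cheating gravity comes down to simple kinematics: a physically simulated fall has a fixed duration, while a keyframed one can take whatever duration the cut needs. A minimal sketch, with all numbers illustrative:

```python
# Illustrative only: the kind of retiming a keyframe animator can do that a
# physics solver cannot. Under real gravity a chunk falling 100 m from rest
# takes a fixed time; an animator can simply choose a different duration.
G = 9.81  # m/s^2

def physical_fall_time(height_m):
    """Time for a free fall from rest: s = 0.5 * g * t^2, so t = sqrt(2s/g)."""
    return (2.0 * height_m / G) ** 0.5

def cheated_height_at(t, duration, height_m):
    """A hand-keyed fall: same start and end, but eased so it reads well on
    screen. Here, a simple ease-in curve over an animator-chosen duration."""
    u = min(max(t / duration, 0.0), 1.0)
    return height_m * (1.0 - u * u)

t_real = physical_fall_time(100.0)          # ~4.5 s, fixed by physics
print(round(t_real, 2))
# The animator instead keys the same 100 m fall over, say, 3 s to fit the cut:
print(cheated_height_at(1.5, 3.0, 100.0))   # halfway through: 75.0 m remaining
```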
To create the look of the ice, Hydraulx composited numerous render
passes in Discreet’s Inferno. One of the most important of those passes
added subsurface scattering. The shading technique is most often
associated with creating believable flesh on CG characters; however,
the same algorithms can add translucency to any object. "Normal CG
rendering made the ice look plastery," says Colin Strause. "The
subsurface scattering gave it depth."
For this, the team used a plug-in for Mental Images’ Mental Ray from
LightEngine3D called SubScatter. "We worked with the company to get the
shader to work on Linux," says Greg Strause.
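The SubScatter plug-in's internals aren't described here, but the core idea behind any subsurface shader is that light is attenuated gradually with the distance it travels inside the material rather than stopping at the surface. A toy illustration of that falloff (coefficients and colors are invented for the example, not taken from the shader):

```python
import math

# Not the SubScatter plug-in's actual algorithm, just the textbook idea it
# builds on: light entering a translucent material is attenuated
# exponentially with the distance it travels inside (Beer-Lambert), so thin
# ice glows and thick ice darkens gradually instead of looking flat.

def transmittance(thickness_m, sigma_t):
    """Fraction of light surviving a path of the given length in the medium."""
    return math.exp(-sigma_t * thickness_m)

def ice_color(base, thickness_m, sigma_t=2.0):
    """Blend a surface color toward a deep-ice blue as thickness grows."""
    deep_blue = (0.2, 0.45, 0.7)   # assumed look, not a measured value
    t = transmittance(thickness_m, sigma_t)
    return tuple(t * b + (1.0 - t) * d for b, d in zip(base, deep_blue))

white = (1.0, 1.0, 1.0)
print(ice_color(white, 0.05))  # thin sliver: close to white
print(ice_color(white, 2.0))   # thick slab: shifted toward blue
```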
Other passes included diffuse, specular, a matte pass to isolate snow
on the top, and passes for the crack and the grooves. "For every shot,
we had to render between 10 and 15 layers in 3D," says Strause. "We
bought an entirely new render farm for this movie and even so, it took
between two and a half and three hours per frame with 3.2 GHz
dual-processor Xeons with 6 GB of RAM."
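Those figures make it easy to see why a new farm was needed. A quick back-of-the-envelope calculation, assuming a hypothetical 100-node farm (the article doesn't give the farm's size):

```python
# Back-of-the-envelope farm math for the quoted figures. The frame count and
# per-frame render time come from the article; the farm size is an assumption
# for illustration only.
frames = 3800                 # the all-CG Antarctica fly-over
hours_per_frame = 2.75        # midpoint of the quoted 2.5-3 hours
render_nodes = 100            # hypothetical farm size

total_hours = frames * hours_per_frame
wall_clock_days = total_hours / render_nodes / 24.0
print(total_hours)                # 10450.0 machine-hours for one sequence
print(round(wall_clock_days, 1))  # wall-clock days on the assumed farm
```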
Creating the crevasse was particularly tricky. "It’s 10 to 15 feet
wide, 300 feet deep and curving," says Strause. "When you’re trying to
light that, no matter where you put the light, you get it only 20
percent of the way down into the crevasse, which makes the crevasse 80
percent black. Normally, where light hits, that’s where it dies. But
we wanted to see all the way down, so we used Mental Ray to bounce the
light from one side to another, playing a bouncing game as far down as
we could. It looked natural, but it was a fake."
In addition, the team had to add snow to all the ice chunks by adding
particle emitters to between 500 and 1000 animated objects. "We had
streamers, blowing snow, snow swirling around the crevasse [and]
dripping out of cracks on the crevasse walls," says Strause. "There
were 80 different scene files with particle sims in them."
To render the particles, Hydraulx used Kolektiv’s Stroika software. "We
had one guy babysitting particle renders on five or six interactive
machines," says Strause. The team also used Maya fluids to create misty
snow on the surface and Inferno’s 3D particle system for more snow. For
the sky, Hydraulx created matte paintings in Adobe’s Photoshop using
11-megapixel stills shot with a Canon 1DS camera. For water, they used
SyFlex’s cloth-simulation package.
The final shots were composited in either Discreet’s Flame or Inferno.
"One of the big things that made Roland [Emmerich] happy is that we
could show him the edit of a sequence with all the shots put together
with the sound," Strause says. "We could pull out a shot, color-correct
it in Inferno in realtime working at film res, uncompressed, and drop
it back into the edit. There was no jumping between the Inferno and an
edit bay, so he could sit with one artist, look, change, edit,
composite and color-correct all at the same time. The artist had the
edit for the entire sequence."
This was possible because each Inferno had 2 TB of storage. "Our
philosophy is different from other studios," says Strause. "I’d rather
have three guys with 50 machines than 20 guys on 20 machines."
In addition to the opening sequence, Hydraulx put its muscle behind
three other sequences – a hailstorm in Tokyo, a storm seen from the
space station (with The Orphanage and Dreamscape Imagery), and freezing
ice inside the library.
The hailstones were hand-modeled chunks about the size of a grapefruit.
"We ran a dynamic simulation of eight pieces shattering and bouncing
around and then attached the eight pre-made sims to an expression
system," Strause says. The hail was put into the scene by animators
using lines to represent speed and direction and circles to represent
shatter area. "We put random dynamics on 70 to 80 percent of the
hailstones, but the hero pieces – the hail hitting a guy on the head –
were hand animated," he adds. "Everything was tracked in Boujou, and we
built simple geometry to represent objects in the shot. From that
point, we would project actual footage onto the geometry from the
camera view so when we rendered the transparent 3D ice, it would
refract the film footage as if it was there. That was the easy stuff.
The hard part was the roto."
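The expression system Strause describes (eight baked shatter sims, reused with randomized placement and timing) might look something like this sketch; names and parameters are illustrative, not Hydraulx's actual tools:

```python
import random

# A minimal sketch of the described setup: eight shatter simulations are
# baked once, then each hail impact just picks one of the eight and plays it
# back with a random rotation and start frame so the reuse is invisible.

PRE_BAKED_SIMS = [f"shatter_{i:02d}" for i in range(8)]

def place_impact(position, start_frame, rng):
    """Return an instance record a guide curve or circle would generate."""
    return {
        "sim": rng.choice(PRE_BAKED_SIMS),       # one of the 8 baked sims
        "position": position,                    # where the line/circle put it
        "rotation_y": rng.uniform(0.0, 360.0),   # hide the reuse
        "start_frame": start_frame + rng.randint(0, 3),  # de-sync playback
    }

rng = random.Random(7)  # seeded so the layout is repeatable shot to shot
impacts = [place_impact((x * 1.5, 0.0, 0.0), 1001, rng) for x in range(5)]
for imp in impacts:
    print(imp["sim"], round(imp["rotation_y"], 1), imp["start_frame"])
```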
For rotoscoping, the crew again used Combustion. "This was a busy city
street with a traffic jam," says Strause. "There were cars, people,
out-of-focus chain link fences. It was a roto nightmare. It doesn’t
look like a hard comp, but it was a tedious pain in the ass. All those
pieces of ice on the ground, the actors kicking as they’re running. The
set was alive and dynamic in camera. But that’s why it looks like a
real storm."
For the giant storm cell, the trick was a simple sphere with detailed
displacement maps. "The matte painters spent weeks building texture
maps that were 8000×8000 pixels and bigger," says Greg Strause. The
sphere was rendered using the SubScatter plug-in. "90 percent of the
look was from the subsurface shader," says Strause. "It gave it a
cloudy look." The renders were output without color; color was added in
the composite.
Work such as this has turned Goulekas’ head. "I was a big-facility snob," she says. "I’ve done a 180-degree turn."
Reducing Render Time by a Factor of 10: Shrek 2
PDI/DreamWorks’ Shrek was a huge success, earning
more than $480 million in box office revenues and winning the first
Oscar for Best Animated Feature. Shrek 2 brings back Mike Myers as
Shrek, the green ogre, Cameron Diaz as his bride Fiona and Eddie Murphy
as Donkey. This time, however, rather than traveling through story-book
landscapes, the trio spend most of their time in the land of Far, Far
Away. The crew at PDI applied the same types of sophisticated lighting
that effects studios use to create photorealistic graphics that blend
into live-action films. PDI, however, applied such lighting techniques
as global illumination and subsurface scattering to thousands of
characters in entirely synthetic environments to soften shadows and
create translucent skin for the entire 105 minutes of the film.
"Our breakthrough for global illumination was in determining how to
simplify it without losing visual quality," says Arnauld Lamorlette,
head of effects. "We couldn’t afford to take 100 hours a frame to
render images." Global illumination is achieved by bouncing light rays
through an environment to produce a softer, more natural light than is
possible with spot lights; however, the calculation time can be
enormous when the environments are filled with complex geometry.
The first shortcut was to agree that one bounce would be enough.
"Usually, you have lighting bouncing from, say, a window to a wall to
the ceiling to the floor, but with every bounce, it loses at least 50
percent of its energy," says Lamorlette. "The quality you’re looking
for is already in the first bounce. It contributes 90 percent of the
information you want."
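The energy argument behind the one-bounce shortcut is easy to verify. If each bounce keeps at most half the incoming energy, the contributions of later bounces shrink geometrically; since those later bounces are also much softer and lower-frequency, the first bounce ends up carrying most of the visible detail. A quick check of the numbers:

```python
# Quick check on the one-bounce shortcut, assuming the 50%-per-bounce energy
# loss Lamorlette cites: with each bounce retaining at most half the energy,
# the contributions of bounces 2, 3, 4... shrink geometrically.
retained = 0.5  # energy surviving each bounce (the article's "at least 50%" loss)

contributions = [retained ** n for n in range(1, 6)]  # bounces 1..5
total_indirect = retained / (1.0 - retained)          # geometric series sum
print([round(c, 4) for c in contributions])  # [0.5, 0.25, 0.125, 0.0625, 0.0312]
print(total_indirect)                        # 1.0 -- all bounces combined
```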
The second shortcut was to simplify the calculations for specular
lighting, which adds a sheen to surfaces. "Instead of sampling the
entire environment to determine the direction of light, we stored one
general lighting direction and computed the specular once each for red,
blue and green," says Lamorlette. "Even though it is totally physically
inaccurate, it’s totally visually acceptable."
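As a sketch, the shortcut amounts to replacing an environment integral with a single stored direction and one intensity per channel, then evaluating an ordinary highlight three times. This is a generic Phong-style illustration, not PDI's actual shader:

```python
# A sketch of the specular shortcut as described: instead of sampling the
# whole environment per shading point, store one aggregate light direction
# plus one intensity per color channel, and evaluate a standard highlight
# per channel. Plain tuples keep the sketch dependency-free.

def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def reflect_dot_view(n, l, view):
    """Cosine between the mirror-reflected light direction and the view."""
    d = sum(a * b for a, b in zip(n, l))
    r = tuple(2.0 * d * nc - lc for nc, lc in zip(n, l))
    return max(0.0, sum(a * b for a, b in zip(r, view)))

def cheap_specular(n, view, light_dir, light_rgb, shininess=32.0):
    """One stored direction, one stored intensity per channel: physically
    wrong, visually plausible -- exactly the trade described."""
    s = reflect_dot_view(n, light_dir, view) ** shininess
    return tuple(s * c for c in light_rgb)

n = normalize((0.0, 1.0, 0.0))
view = normalize((0.0, 1.0, 0.0))    # looking straight down the normal
light = normalize((0.0, 1.0, 0.0))   # aggregate environment direction
print(cheap_specular(n, view, light, (0.9, 0.85, 0.8)))
```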
Last, the crew developed a technique for ray tracing that uses
simplified representations of geometry — an environment made from four
million polygons might be represented by 4000. The system automatically
detects when the results are inaccurate. "By combining all these
shortcuts, we reduced rendering times easily by a factor of 10,
sometimes even 100," says Lamorlette, who notes that the techniques are
described in a paper that will be presented at SIGGRAPH this year.
Similarly, the crew used a method of simplifying subsurface scattering
that was presented in a SIGGRAPH paper last year. To take advantage of
these lighting techniques and upgrade its rendering capabilities, the
studio improved its rendering tools, including a particle renderer that
efficiently creates fire, water and other particle effects. "It’s like
impressionism," Lamorlette says, describing a process that sounds like
a Georges Seurat painting. "Rather than using pixels or scan lines, we
render little dots, and by adding millions and millions of dots, we
create a visual image. It’s a very nice way to represent volumes."
The software PDI uses is largely proprietary – the studio has won
scientific and technical awards for its fluid simulation software and
its muscle-based animation system. However, Maya also plays a role. To
create a system that allowed artists to paint 3D foliage rather than
using mathematical formulas to grow plants, the crew created a tool
based on Maya PaintEffects. And the studio also used Maya’s
cloth-simulation engine to animate flowing clothes and long hair, like
that on the horse’s mane and tail. For tight clothing and most of the
human characters’ hair, PDI used proprietary software. "We created a
whole new hair system that allowed us to run hair simulations on crowds
of people," Lamorlette says. The software runs on Hewlett-Packard
workstations and servers.
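The "impressionist" particle renderer Lamorlette describes earlier, building an image by accumulating millions of dots, can be sketched in a few lines. This toy version splats random points on a disc into a small buffer; the real renderer's filtering and volume handling are far more involved:

```python
# A minimal take on the point-splatting idea: rather than scan-converting
# geometry, deposit many small dots into an image buffer and let their
# accumulation form the picture.
import math, random

W = H = 16
image = [[0.0] * W for _ in range(H)]

rng = random.Random(1)
for _ in range(20000):
    # sample points on a disc of radius 5 centred in the frame
    r = 5.0 * math.sqrt(rng.random())
    a = rng.uniform(0.0, 2.0 * math.pi)
    x = int(W / 2 + r * math.cos(a))
    y = int(H / 2 + r * math.sin(a))
    if 0 <= x < W and 0 <= y < H:
        image[y][x] += 0.005  # each dot deposits a little energy

# crude ASCII view: dense regions read as solid, sparse regions fade out
for row in image:
    print("".join("#" if v > 0.4 else ("." if v > 0.0 else " ") for v in row))
```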
"I think the biggest difference between the two films is the
sophistication and richness of the characters; they’re more believable
now, and the environments are more art directed," says Lamorlette.
"Having art-directed tools contributes to the artistic results of the film."
Making Fire and Rain: The Chronicles of Riddick
The Chronicles of Riddick could be the most
game-like movie of the season, but no game machine can quite yet handle
the simulations created by Rhythm & Hues for this science-fiction
action-adventure. The film finds the titular antihero exiled in an
underground prison on a planet where temperatures range from icy cold
to 700 degrees.
The first simulation the studio created was a waterfall inside the
prison. "There’s a sequence where the hell hounds are released to cull
the population," says Wassel. "One particular old hell hound pushes
through the water and stares Vin down eye-to-eye. The waterfall and the
dog are completely CG."
Building a synthetic waterfall was no small stunt, but building one
that could be pushed aside by a hell hound was particularly
challenging. To create the effect, the studio used proprietary software
for the fluid simulations and to model and animate the hound. Side
Effects’ Houdini helped out with the particles.
The trick to creating the waterfall was starting with an inner core, a
"laminar flow sheet," which was a thick piece of undulating geometry
that acted like a lens, reflecting and refracting objects seen through
it. The movement of this water sheet was driven by the fluid
simulation – that is, the fluid sim changed the shape of the sheet from
one frame to the next. On top of this undulating geometry, layers of
particles, also driven by a fluid simulation, gave the waterfall its
texture – the spray and fine mist. Finally, the hell hound, a creature
covered with 9000 overlapping scales, pushed its head through the
middle of all these simulations. Thus, the waterfall had to interact
physically with the creature pushing through it as well as refract the
dog, a background live-action set and a miniature set, which took
painstaking, step-by-step work on the part of the animators and the
technical crew. Having the laminar flow sheet meant that the distortion
from the refracted light could happen on the sheet of geometry, though,
rather than solely in individual water drops.
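The laminar-flow-sheet idea, a sheet of geometry whose surface motion drives the refraction of whatever is seen through it, can be illustrated with a toy displacement field standing in for the real fluid simulation:

```python
import math

# A toy version of the "laminar flow sheet": a sheet whose surface is pushed
# around over time, and which bends (refracts) whatever is seen through it by
# an amount tied to the local surface slope. The travelling wave here stands
# in for the actual fluid simulation driving the sheet.

def sheet_height(x, t):
    """Vertical offset of the sheet at position x, frame time t."""
    return 0.08 * math.sin(4.0 * x - 2.0 * t) + 0.03 * math.sin(9.0 * x + t)

def sheet_slope(x, t, eps=1e-4):
    """Finite-difference slope: steeper sheet, stronger lensing."""
    return (sheet_height(x + eps, t) - sheet_height(x - eps, t)) / (2.0 * eps)

def refracted_lookup(x, t, strength=0.5):
    """Where a ray through the sheet at x actually samples the background:
    offset the lookup in proportion to the local slope."""
    return x + strength * sheet_slope(x, t)

# A background 'image' seen through the waterfall -- just a function of x here.
background = lambda x: math.cos(3.0 * x)

for x in (0.0, 0.5, 1.0):
    print(round(background(refracted_lookup(x, t=0.0)), 3))
```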
When Riddick escapes to the planet’s volcanic surface, he faces a new
threat – an enormous storm called the visible thermal front, or VTF.
"It’s a massive, hot, gaseous cloud that travels across the surface of
the planet once the sun heats the surface and the air around it," says
Wassel. "In two shots it kills characters in the film."
Seen from space, the storm was a matte painting animated and distorted
with the help of Shake. "Our compositor Jimmy Jewell worked on the shot
for eight months to develop a methodology to warp the painted image
without having the distortion cause the effect to fall apart," Wassel
says. Rhythm & Hues created 75 matte paintings for the film.
In other shots, the VTF is a 3D simulation. "When the storm is
human-scale, it’s a fluid simulation that interacts with our 3D
geometry on the surface of the planet," Wassel says. It chases Riddick
and crashes into a cliff wall almost as if it were a character. The
sculpted landscape elements act as collision objects, causing the
simulation to change shape as it travels across the planet.
Although Rhythm & Hues created a ridged terrain for the opening of
the film – "a topo map designed by a psycho map-maker," says Wassel –
the majority of the studio’s work happened during 18 minutes in the
center of the film.
"We’ve been doing character animation for a long time," Wassel says. "I
think the beautiful work the guys did on landscapes, matte paintings
and simulations broadens our horizon in terms of effects animation…
and the hell hound is not a friendly character. He’s no Scooby-Doo."