More Bad Guys, Bigger Fight Scenes, and a Man Made of Sand

Peter Parker picked a peck of trouble this time: woman trouble, trouble with his best friend Harry Osborn, trouble with two villains, trouble with his girlfriend Mary Jane, and – as if that weren’t enough – he and an alter ego fight an internal battle that causes all the other troubles to worsen. It could have meant trouble, too, for the team at Sony Pictures Imageworks, which handled the bulk of the effects.
“We had more characters, more villains,” says visual-effects supervisor Scott Stokdyk, who received a visual-effects Oscar nomination for Spider-Man and won the Oscar for Spider-Man 2. “That made everything we were doing a level harder. We had to make a photoreal stunt double and two very major villains who were extreme combinations of effects and character animation work.”

Stokdyk’s team of 270 people worked for more than two years to create 950 shots, and Imageworks was one of many studios that worked on the film. Others included BUF, Evil Eye Pictures, Furious FX, Gentle Giant Studios, Giant Killer Robots, Halon Entertainment, Tweak Films and X1fx.

In the first sequence Imageworks created for the film, Peter Parker in street clothes fights his former best friend Harry Osborn, a “New Goblin” carrying out his father’s wishes while slicing through the air on a jet-powered skateboard. “We started on this shot on day one because we had buildings and finished it the last week of production,” says Grant Anderson, CG supervisor. “It’s a long shot and it kept changing.”

The shot chases Parker and Osborn through a New York City alley. Imageworks created the alley entirely in CG using an evolution of the studio’s Assembly Component System, developed originally for the first Spider-Man film. Working in Autodesk’s Maya, modelers constructed 13 unique, modular buildings that the team reassembled into hundreds of variations. “We have software scripted in Maya, a whole architecture on top of Maya that lets us pull in components and save out an assembly,” Anderson says. “We’re not duplicating any geometry; the metadata and a directory structure keep track of every building and every piece of the building.”
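
Imageworks’ actual Assembly Component System is proprietary, but the approach Anderson describes can be sketched in a few lines of Python: an assembly is nothing more than metadata – component names plus transforms – saved to disk, so the same modular pieces can be recombined endlessly without duplicating geometry. The class names, component names and file path below are hypothetical, purely for illustration.

```python
# Minimal sketch of a component-assembly idea: assemblies are metadata that
# reference shared component geometry, so no geometry is ever duplicated.
# Names here are hypothetical, not Imageworks' actual system.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ComponentInstance:
    component: str                      # name of a modular piece, e.g. "facade_brick_A"
    translate: tuple = (0.0, 0.0, 0.0)
    rotate: tuple = (0.0, 0.0, 0.0)
    scale: tuple = (1.0, 1.0, 1.0)

@dataclass
class BuildingAssembly:
    name: str
    instances: list = field(default_factory=list)

    def add(self, component, **xform):
        self.instances.append(ComponentInstance(component, **xform))

    def save(self, path):
        # Only metadata is written; the component geometry stays in its own
        # directory and is pulled in by reference when the assembly is loaded.
        with open(path, "w") as f:
            json.dump({"name": self.name,
                       "instances": [asdict(i) for i in self.instances]},
                      f, indent=2)

# Reassemble a few modular pieces into one building variation.
building = BuildingAssembly("alley_building_07")
building.add("facade_brick_A", translate=(0.0, 0.0, 0.0))
building.add("facade_brick_A", translate=(0.0, 12.0, 0.0))   # same piece, next floor up
building.add("fire_escape_B", translate=(2.5, 12.0, 0.3))
building.save("alley_building_07.json")
```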

During the chase, Imageworks used live-action elements of Tobey Maguire (Peter Parker/Spider-Man) and James Franco (Harry Osborn/Goblin) filmed on rigs, digital doubles for the actors, and a mixture of the two.

“We did a lot of animation with 2D elements – what we call ‘tiles,'” says Spencer Cook, animation supervisor. They might start, for example, with a blue-screen element of Franco performing an action as if he were riding the glider. “We’d put that 2D footage into a Maya city and animate it to jazz up the motion,” says Cook.
The footage played on a 2D card that the animators moved left, right, up, down, forward or backward within the 3D scene. “We had only a 30-foot track on the blue-screen stage, and we pushed the camera toward him to make it look like he’s coming from a distance,” says Cook. [See sidebar, below.] “But, we can put the 2D card farther away and animate it coming toward the camera.” To put the Goblin even farther back, they used a digital double.
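
Cook’s description of a “tile” – footage on a flat card animated through the 3D city – can be sketched with standard Maya Python commands. The setup below is only schematic and assumes the blue-screen footage gets mapped onto the card separately; the node name, distances and frame range are invented, and it naturally needs to run inside Maya.

```python
# A minimal Maya-Python sketch of the "tile" idea: blue-screen footage on a
# flat card that is translated through the 3D city to fake a much longer
# approach than the 30-foot stage track allowed. Values are illustrative.
import maya.cmds as cmds

# Card that will carry the 2D element (the footage itself would be mapped
# onto it as a texture elsewhere in the setup).
card = cmds.polyPlane(name="goblin_tile", width=2.0, height=1.0,
                      subdivisionsX=1, subdivisionsY=1)[0]
cmds.setAttr(card + ".rotateX", 90)   # stand the card up, facing the camera

# Start the card far down the alley and bring it toward camera over 48 frames,
# so the element reads as covering far more distance than the real track did.
cmds.setKeyframe(card, attribute="translateZ", time=1,  value=-120.0)
cmds.setKeyframe(card, attribute="translateZ", time=48, value=-4.0)
```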

Because Peter Parker is not wearing his Spider-Man costume, Imageworks needed to create a photoreal digital stunt double for Maguire. Although the studio had used Paul Debevec’s LightStage system previously for Spider-Man 2 and Superman Returns, the crew devised a new system for Spider-Man 3. “We used every trick in the book that we could to get pieces of real photography into the scene in addition to Spencer Cook’s animation,” says Stokdyk. “The [new] process required a lot of planning and a commitment to animation early in the shot, but what we lost in flexibility later down the road, we gained in getting the real actor’s face on the screen.”

Before, the studio captured the actors’ faces in various lighting environments and applied that texture to 3D models animated with facial expressions based on motion capture. For Spider-Man 3, Imageworks worked with Simon Wakely and Camera Control to develop a complex setup that utilized a motion-control rig. “The production department – editorial and Sam Raimi – put a lot of pressure on us to come up with a way to use footage of the actors rather than a CG version,” says John Schmidt, plate lead, who shepherded the process at Imageworks.

The process started with an animated scene – say, a humanly impossible fight scene between the digital doubles for Parker and Osborn, created in Maya with complex camera moves. The goal was to put the actors’ faces onto the CG bodies. To make that possible, the crew filmed the actors on a blue-screen stage using a motion-control camera and turntable. Compositors layered the resulting 2D element – footage of the actors’ faces – onto the digital doubles’ bodies. Getting that matching footage, however, was complicated.

“We’ve done face replacements before, but the shots were usually pretty simple,” says Schmidt. “For Seabiscuit, we had shots of Tobey [Maguire] from the side. But, the shots in Spider-Man were crazy fight scenes. They’re tumbling, spinning, flipping.”

To imitate that motion, Imageworks put each actor in a chair on a mechanized turntable and used a MILO motion-control rig to move the camera. Data from the animation files drove the camera’s motion so that it replicated the virtual camera’s complex moves. To determine where the actor’s face needed to be relative to the camera – that is, how the turntable needed to move – Schmidt worked from the Maya file.

“Let’s say that at the beginning of a shot, we’re looking at the actor’s face,” Schmidt says. “At the end, we look at the back of his head. We programmed the turntable to do that move.” Meanwhile, the motion control camera moved up and down, in and out, closer or farther away, but the head stayed centered in the frame.

“We maintained the relationship between the animated head and the actor,” says Schmidt. “The animated character might be moving 100 feet through space. We subtracted that out and slaved the camera to the actor’s head. Then, when we kept the head in the center of the frame, the perspective looked right.”
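
The “subtract that out” step is essentially a change of reference frame: express the shot camera relative to the animated head, so the head can sit still at the turntable origin while the rig reproduces only the relative move. A rough sketch of that idea, with fabricated frame data (the real process would also hand the head’s rotation to the turntable):

```python
# Sketch: subtract the character's world motion so the camera path becomes an
# offset from the head, which the moco rig can reproduce around a seated actor.
def camera_relative_to_head(cam_pos, head_pos):
    """Camera position with the head's world translation subtracted out."""
    return tuple(c - h for c, h in zip(cam_pos, head_pos))

# Per-frame world-space data pulled from an animation file (fabricated values).
shot_frames = [
    {"cam": (10.0, 6.0, 40.0), "head": (0.0, 5.0, 0.0)},
    {"cam": (35.0, 9.0, 55.0), "head": (30.0, 8.0, 20.0)},   # character has flown far
    {"cam": (70.0, 14.0, 80.0), "head": (68.0, 12.0, 49.0)},
]

# The rig only needs the relative offsets; the long travel disappears.
moco_path = [camera_relative_to_head(f["cam"], f["head"]) for f in shot_frames]
for frame, offset in enumerate(moco_path, start=1):
    print(f"frame {frame}: camera offset from head = {offset}")
```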

The actor sat still and reacted only with facial expressions while the camera moved around him to capture the scripted view. “We could have asked the actors to turn their heads, but there were so many motions and it was so chaotic and difficult, it was better to have the machinery do the motion,” says Schmidt. “Also, rather than asking these expensive actors to rehearse for 45 minutes to do the shot, by preprogramming the motion into the machine, all they had to do were expressions and eyelines.”

While the turntable rotated, the actors reacted to laser pointers to know where to look and verbal cues to know when to grimace, sneer, wince in pain, smile and so forth. Often, they needed to react in slow motion – to grimace slowly, for example. “Sometimes the animation file had huge acceleration or deceleration,” says Schmidt. “We used Camera Control’s MILO rig because it’s faster than most, and Simon [Wakely] wrote his own software to drive it, but we wanted speeds you just can’t do with a moco rig.” Slowing down to accommodate the rig also helped the crew shout out the cues: Some shots were so complicated that it would have been almost impossible to give cues fast enough.
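
One way to decide how much to slow a move is to find the peak speed in the animated camera path and stretch the shoot duration until that peak fits what the rig can do. The sketch below illustrates the arithmetic only; the speed limit and the sample path are invented numbers, not Camera Control’s actual specifications.

```python
# Sketch: how many times slower must a programmed move be shot so its fastest
# moment stays under an assumed rig speed limit? Values are illustrative.
import math

def peak_speed(path, fps=24.0):
    """Largest frame-to-frame speed (units per second) along a 3D path."""
    return max(math.dist(a, b) * fps for a, b in zip(path, path[1:]))

def stretch_factor(path, rig_limit, fps=24.0):
    """Time-stretch needed so the peak speed respects the rig limit."""
    return max(1.0, peak_speed(path, fps) / rig_limit)

camera_path = [(0, 0, 0), (0.3, 0, 0), (1.5, 0.1, 0), (4.0, 0.2, 0.5)]  # fabricated
factor = stretch_factor(camera_path, rig_limit=20.0)   # assumed 20 units/sec cap
print(f"Shoot the move {factor:.1f}x slower; cue the actors in slow motion to match.")
```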

To finish the shot, the team sent compositors working on a high-speed Autodesk Inferno system a full-screen render of the CG head from the animation file, along with the scanned footage of the actor filmed on the motion-control rig. “They picked frames they thought lined up with the CG head,” says Schmidt.

Because the crew had lit the actors with flat lighting, the compositors painted shadow maps to match the CG lighting in the final shots. “If we had known what the final lighting would look like, we might have tried to match it,” says Schmidt.

Although the compositors put the live-action footage primarily into CG shots, they also used it to replace stunt doubles’ faces with the actors’ faces. In those cases, Schmidt’s team used match-moved camera data to position the motion-control rig.

“In some cases we’d blend from the live-action person to CG in the same shot,” says Stokdyk. The two main villains, however, Sandman and the gooey Venom, were always CG.

Built on Sand
In the film, an experimental particle physics site zaps the character Flint Marko (Thomas Haden Church) and binds his atoms to sand – sometimes he looks like Flint Marko, sometimes he looks like a man made of sand, and sometimes he’s a cloud of sandy dust flying between buildings.

Animating Sandman therefore involved two tasks: creating a performance for a character-shaped shell that contained the sand, and moving individual grains of sand both inside that shell and as they fell from the character’s body.

To move individual grains of sand, Imageworks took several approaches: They used particle simulation, a new simulation engine called SphereSim, fluid/gas simulation, and RenderMan plug-ins, sometimes in combination.

They might fill an envelope – the shell of a character – with particles in Houdini and then turn gravity on, causing the particles to fall. By catching the falling particles before they landed and moving them into SphereSim, the team collected the falling sand into piles. SphereSim, which blends simplified rigid-body simulation equations with research into the dynamics of grain stacked in silos and of sand transported on trucks, stacks up spheres that collide against one another and against other geometry.
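
The stacking idea can be illustrated with a much-simplified sketch: treat each grain as a sphere that falls until it rests on the ground plane or on grains that have already settled. The real solver blended rigid-body equations with granular-flow research; the code below only demonstrates the concept, with made-up grain sizes and counts.

```python
# Greatly simplified sphere-stacking illustration (not SphereSim itself):
# each dropped sphere comes to rest on the ground or tangent to settled spheres.
import math, random

def settle(spheres, x, z, radius, ground=0.0):
    """Drop one sphere at (x, z); return the height at which it comes to rest."""
    rest_y = ground + radius
    for (sx, sy, sz, sr) in spheres:
        horizontal = math.hypot(x - sx, z - sz)
        if horizontal < radius + sr:   # it would land on this settled sphere
            rest_y = max(rest_y, sy + math.sqrt((radius + sr) ** 2 - horizontal ** 2))
    return rest_y

pile = []
random.seed(7)
for _ in range(200):                   # 200 falling grains
    x, z = random.gauss(0, 0.4), random.gauss(0, 0.4)
    y = settle(pile, x, z, radius=0.05)
    pile.append((x, y, z, 0.05))

print(f"pile height: {max(s[1] for s in pile):.2f} units")
```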

To generate dust roiling above the sand pile, they used a gas simulator and, similarly, a fluid simulator moved Sandman when he disassembled himself into a dust cloud and swirled through the city. Because Imageworks translated all the geometry in the sand pipeline into particle level sets and used common file formats for the simulators, data moved easily from one type of simulator to another. Lastly, lighters could coat specified surfaces with sand particles during rendering via a RenderMan plug-in.

To vary the look of the sand and for efficiency, the particles ranged in size and complexity from individual pieces of textured geometry instanced onto particles, to geometry with displacement, to tiny points. To enhance facial expressions, a RenderMan shader scaled the grains of sand around Sandman’s eyes, nose and mouth based on the curvature of the mesh.
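
The curvature-driven scaling is easy to picture as a small remapping function: where the mesh curves sharply – around eyes, nostrils and mouth corners – grains shrink so the facial detail reads, while flat areas keep full-size grains. The thresholds below are invented for illustration, not the production shader’s values.

```python
# Sketch of curvature-to-grain-scale mapping; numbers are illustrative only.
def grain_scale(curvature, flat=0.1, sharp=2.0, min_scale=0.25):
    """Map per-point mesh curvature to a grain scale multiplier in [min_scale, 1]."""
    t = (abs(curvature) - flat) / (sharp - flat)
    t = min(max(t, 0.0), 1.0)            # clamp to 0..1
    return 1.0 - t * (1.0 - min_scale)   # high curvature -> small grains

for c in (0.05, 0.5, 1.0, 2.5):          # sample curvature values
    print(f"curvature {c:4.2f} -> grain scale {grain_scale(c):.2f}")
```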

“Sandman was the ultimate combination of particle effects with character animation,” says Stokdyk. “It was difficult for the pieces to come together. The effects animators and character animators had to work closely and feed off each other’s work.”

Venom
The same was true of Spider-Man 3’s second CG villain, a gooey black symbiote that arrives via meteor and attaches itself first to Peter Parker’s Spider-Man suit, coloring it black and giving Parker extra powers tinged with evil, and then to Eddie Brock, turning him into Venom.

Because the goo needed to crawl along the surface of its host, at least until the end of the film when it separates into an amorphous shape, Imageworks developed a system that, in effect, allowed animators to build the shape as they animated it. The fundamental building and animation unit was a curve with attributes that rendered it as a shape with a specified length. Each curve could attach to other curves.

Animators saw tubular shapes on screen that represented each curve’s final look. Using simple rigs, they could place curve after curve to crawl the goo along an arm, for example, or build claw-shaped rigs that reached out for a host like a hand. After the character animators placed the curves, effects animators added complexity by inserting geometry between the curves. (It turned out that some of the software tools developed to add complexity to the goo also worked beautifully for creating the web balls that Spider-Man flings.)
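
The building-block idea – curves with attributes that render as tubes, each able to attach to another curve – can be sketched as a simple data structure. The names and attributes below are hypothetical stand-ins, not Imageworks’ actual goo toolkit.

```python
# Data-structure sketch of curve-based goo: each curve renders as a tube and
# can attach to a parent curve at a parameter along its length, so chains of
# curves can crawl across a host. Names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class GooCurve:
    name: str
    length: float                    # rendered tube length
    radius: float = 0.05             # rendered tube thickness
    parent: "GooCurve" = None        # curve this one grows out of
    attach_param: float = 1.0        # 0..1 position along the parent where it attaches
    children: list = field(default_factory=list)

    def attach(self, child, param=1.0):
        child.parent = self
        child.attach_param = param
        self.children.append(child)
        return child

# Crawl goo up an arm: each new curve attaches near the end of the previous one.
root = GooCurve("goo_wrist", length=0.3)
forearm = root.attach(GooCurve("goo_forearm", length=0.5), param=0.9)
elbow = forearm.attach(GooCurve("goo_elbow", length=0.4), param=0.95)
# A claw-like rig could attach several short curves at the same parent parameter.
for i in range(3):
    elbow.attach(GooCurve(f"goo_claw_{i}", length=0.2, radius=0.02), param=1.0)
```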

To give the animated goo secondary motion, the effects teams used cloth and hair simulation on the curves to help the goo ooze and jiggle. For rendering, the goo team converted the curves into implicit surfaces and then, via Houdini, into a mesh.

“The symbiote goo part of Venom was incredibly challenging,” says Stokdyk. “Some character animators caught on and got into it more than others, and in some cases goo effects animators crossed into character animation. It took very different skills to animate those pieces.”

Battleground
In the final battle, Imageworks brought together all of the pieces: Peter Parker in his Spider-Man suit with part of his mask ripped off, Harry Osborn/Goblin, Sandman and Venom.

For that battle, which takes place in Manhattan’s Chase Plaza, Imageworks exactly replicated 15 buildings. “It’s something I’ve always wanted to do,” says Stokdyk, “to create a real location with a mixture of photography and CG.”

Grant Anderson and his building team spent two weeks of very long days in New York photographing all the buildings they’d replicate, from the sidewalk to the roof. “We’d shoot from across the street,” Anderson says, “going up every 10 floors all the way to the top. We didn’t want to give the modelers and painters a perspective nightmare, so we stayed as orthogonal as possible. And we tried to plan the shoot so the sun wasn’t casting shadows on the buildings.” Modelers used the photographs as reference and painters used them for textures.

They did this for each side of every building when possible. They also took wide-angle shots and hired a group of surveyors who took detailed measurements of the buildings. Modelers then worked from the survey data matched into the wide-angle photos. “We didn’t solve angles from the photos,” says Anderson. “We just gave the modelers a template so they had a good idea of the scale.”

The most difficult model, according to Anderson, was the Trump building. “Nothing is symmetrical,” he says. “Absolutely nothing. It took a modeler three months to model that building because it needed so much detail and was so unique.”

For buildings behind the 15 hero replicas, the team developed a photogrammetry system that utilized PhotoModeler software. They moved data from PhotoModeler into Maya with proprietary code that also flattened spherical and planar projections orthogonally for texture painters. In addition, the site builders pulled in buildings from previous films. Behind those buildings, the team created a pan-and-tile environment using a series of 360-degree tiles shot every 100 feet with a SpyderCam and then stitched together.
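
One way to read the pan-and-tile layer: because background tiles were shot at regular intervals, any CG camera position can simply use the nearest pre-shot 360-degree tile as its stitched backdrop. The spacing and file names in this sketch are made up for illustration; the production setup was certainly more involved.

```python
# Sketch of a pan-and-tile lookup: pick the pre-shot 360-degree tile closest
# to the camera's position along the path. Spacing and names are invented.
tile_spacing = 100.0                                        # feet between tiles
tiles = [f"env/pan_tile_{i:03d}.exr" for i in range(12)]    # hypothetical tile images

def nearest_tile(camera_distance_along_path):
    """Return the 360 tile closest to the camera's position on the path."""
    index = round(camera_distance_along_path / tile_spacing)
    index = min(max(index, 0), len(tiles) - 1)
    return tiles[index]

print(nearest_tile(430.0))    # -> env/pan_tile_004.exr
```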

“I knew from the start that this is where the final battle would take place,” says Stokdyk. “A big part of my job is making sure Sam [Raimi] has flexibility later in the movie. With this environment, he could redesign shots. We took the limits off.”

“That was the theme for visual effects in this film,” Stokdyk adds. “Use as many tools as possible to get a desired result. We didn’t have just one pipeline; we had multiple pipelines so that we could have any combination of CG and live action anywhere.”

Beyond Previs: Imageworks’ Technical Shot Planners Make the Moves

Because many of the bluescreen shoots for the action in Spider-Man 3 were so complex, Imageworks’ John Schmidt and Nick Nicholson helped plan the shots.

“We call our job technical shot planning,” Schmidt says. “Basically, we take the previz files, the animation files, figure out how to get them in camera, and where the camera and actors need to be.”

To film bluescreen shots for Spider-Man 3, the crew used semi-repeatable motion control equipment from Scott Fisher’s Fisher Technical Services, which builds systems for Las Vegas shows and other entertainment venues as well as for visual effects.

“These are effectively motion control rigs, but they’re not like a MILO, which is frame accurate,” says Schmidt. “They have cables that stretch, and a bunch of high speed winches that our stunt guys hooked up in crazy ways.”

For example, many shots called for the camera to zip quickly past an actor. “They set up a sled system, a long dolly track with winches that moved the sled,” says Schmidt. “They could move the camera 100 feet in a couple of seconds and the camera stayed pointed at the actor as it flew by.”

For these shots and others, Schmidt and Nicholson provided the distances, motion files and additional data they derived from the animatics.
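
The kind of numbers technical shot planning pulls out of a previz file can be as simple as total camera travel and the speed required to cover it, which tells the stage crew how long a track and how fast a winch a shot needs. The frame data below is invented for illustration, not taken from the production files.

```python
# Sketch: derive travel distance and required speed from previz camera keys.
import math

# Camera positions (in feet) keyed at specific frames, fabricated for illustration.
previz_cam_keys = [(1, (0.0, 4.0, 0.0)),
                   (24, (30.0, 4.0, 2.0)),
                   (48, (100.0, 6.0, 12.0))]
fps = 24.0

travel = sum(math.dist(a, b) for (_, a), (_, b) in zip(previz_cam_keys,
                                                       previz_cam_keys[1:]))
duration = (previz_cam_keys[-1][0] - previz_cam_keys[0][0]) / fps
print(f"camera travels {travel:.0f} ft in {duration:.2f} s "
      f"-> needs roughly {travel / duration:.0f} ft/s from the sled and winches")
```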

“When you have storyboards, the DP plunks the camera where he thinks it looks right,” says Schmidt. “But when someone puts a lot of effort into previz, they want the footage to match and that requires figuring out dimensions and speeds.”

Although a previz team or a visual effects supervisor might handle this planning stage, it takes specialized knowledge and it’s becoming increasingly complex.

“In a movie like this, the previz team is busy cranking out previz for the director,” Schmidt says. “The visual effects supervisor is so busy, he can’t be taking Maya files, pulling them apart and figuring out all this stuff. And, the more previz you have, the more necessary it is.”

And that’s fine with Schmidt. “We want to keep doing this from show to show.”