CIS Vancouver Uses High-Rez Models, Tons of Mo-Cap and Massive to Populate 1920s LA
CIS Vancouver’s solution relied on highly detailed modeling and tons of motion-capture data, all fed into Massive Software’s crowd-generating brain.
“We built a lot of CG environments,” explains Geoffrey Hancock, VFX supervisor at CIS Vancouver. “We then had to put in Massive pedestrians, animated cars and trolleys, rails on the ground, trolley wires. The Massive stuff was a good challenge. It wasn’t one of those jobs where you get a line [in the background] and take over from there on. It was really creating characters that were intermingling with the existing extras. There weren’t enough extras and it wasn’t dense enough, so you couldn’t just fill in the background; you had to blend between the two. It meant a lot of roto-ing of the extras, but it was a great way to hide that line. In some shots the Massive people came right up to the very foreground of the camera, because the shots were so long that the extras had sometimes wandered away and you were left with a big gap.”
“We’ve been approaching our Massive assets – the actual character builds – the same way we would any high-res foreground character. It’s all modeled and rigged in Maya. We do high-rez normal maps with ZBrush and do really detailed textures. We seek to make the characters hold up at about one-quarter screen height. Once we’ve created those versions, everything else works downward from there – the model geometry is down-rezed at render time into different levels of detail for the background. Any of the Massive characters are able to move forward toward camera and have that full-rez detail. We’ve found that’s just a more efficient way to work than starting low-rez, seeing how close they can get and how much detail you need, and then having to go back and up-rez. So we just decided to build the full detail in and reduce from there.”
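The build-high, reduce-down approach Hancock describes can be pictured as a simple level-of-detail switch keyed to projected screen height. A minimal sketch, assuming invented level names and thresholds (the article only says the hero build holds up at about one-quarter screen height):

```python
# Illustrative LOD picker: the full-detail character is authored once,
# then swapped for reduced geometry the smaller it appears on screen.
# Level names and thresholds below are hypothetical examples.

LODS = [
    (0.25, "full"),    # >= 1/4 screen height: hero geometry + normal maps
    (0.10, "medium"),  # mid-ground: reduced mesh, baked-down detail
    (0.0,  "low"),     # deep background: lightest proxy
]

def pick_lod(char_height_px: float, frame_height_px: float) -> str:
    """Return the LOD name for a character's projected screen height."""
    fraction = char_height_px / frame_height_px
    for threshold, name in LODS:
        if fraction >= threshold:
            return name
    return LODS[-1][1]

print(pick_lod(540, 1080))  # half screen height -> "full"
print(pick_lod(50, 1080))   # tiny background figure -> "low"
```

Because the hero asset exists from the start, a character walking toward camera simply crosses thresholds back up to the full-detail build, with no late-stage up-rezing.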
On top of the models, CIS Vancouver mapped photographs of real faces onto the digital characters.
“Even that is tricky. We got photos shot with the flattest lighting possible because of course all the lighting is going to be re-introduced. You are really just looking for the base color and everything else is coming from shading and lighting. All the fabric and clothing comes from shaders in Maya. It really comes down to great lighting to blend into the realism and match into the plate. Even with the faces there’s a lot that goes into subsurface scattering shaders and specular highlights to fit the lighting.”
“We have had success with that in the past with limited motion. This time because there were a lot of people walking on the sidewalks together it made it a much bigger challenge. For one, walking is one of those things that you can easily pick out if it doesn’t look right. If you just have characters sitting in a stadium waving and clapping you can get away with those movements.”
Hancock drew on lessons learned working with motion capture and Massive in the past. VICON House of Moves provided motion-capture services for CIS Vancouver.
“Traditionally Massive has been used with a limited library of mo-cap, which is then modified by the artificial intelligence in the brain of Massive, where you are modifying the speed or the size, moving their heads around or changing the width of their steps. We’ve found that when we did that in the past you really lost a lot of the naturalism that comes from the raw mo-cap.”
But it wasn’t simply a case of more mo-cap is better, though they did shoot about four times as much as they had in the past for similar projects.
“We captured very long segments with nine different actors. We wanted to capture as much variety as we needed. That allowed us to play out mo-cap loops that were longer and less altered, so the loops weren’t as noticeable. That really helped buy that believability in the foreground. We could go through and pick and choose the right walk and make each one different from the other characters in the shot.”
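One way to picture that clip-assignment idea – long, lightly altered walk loops, with neighboring agents drawing different clips – is a shuffled round-robin over the captured takes. The clip names, agent counts, and assignment logic below are stand-ins; Massive’s actual brain logic is far richer:

```python
import random

# Stand-in clip library: long walk cycles from nine different actors
# (names are invented for the example).
clips = [f"walk_actor{a:02d}_take{t}" for a in range(1, 10) for t in (1, 2)]

def assign_clips(num_agents: int, seed: int = 7) -> list[str]:
    """Give each crowd agent a walk loop, reshuffling the library on
    each pass so nearby agents rarely share the same clip."""
    rng = random.Random(seed)
    assigned = []
    pool = []
    for _ in range(num_agents):
        if not pool:            # library exhausted: reshuffle and reuse
            pool = clips[:]
            rng.shuffle(pool)
        assigned.append(pool.pop())
    return assigned

crowd = assign_clips(40)
# The first pass through the pool hands out every clip exactly once,
# so adjacent agents get distinct walks until the library recycles.
```

With eighteen distinct long loops, any cluster of pedestrians in frame tends to be walking out of phase with its neighbors, which is what hides the looping.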
CIS Vancouver also discovered that it wasn’t just a matter of capturing a greater variety of movement, but of having variety in the bodies being captured as well.
“In the past we captured everything with one male actor, one female actor and then applied it to different body proportions. That introduces inaccuracies when you try to match the movements to different body proportions. This time around we worked with nine different actors with different body types – tall, short, wide. Then we made sure those motions captured with those actors were only applied to the digital characters of the same proportions so there was a lot less re-targeting happening,” notes Hancock.
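The proportion-matching rule – motion from a given actor only drives digital characters of the same build, so there is far less retargeting – can be expressed as a simple tag match. The body-type labels and data structures here are hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class MocapClip:
    name: str
    body_type: str   # e.g. "tall", "short", "wide" -- labels are illustrative

@dataclass
class DigitalCharacter:
    name: str
    body_type: str

def clips_for(character: DigitalCharacter,
              library: list[MocapClip]) -> list[MocapClip]:
    """Only offer clips captured on an actor of matching proportions,
    so little or no retargeting is needed."""
    return [c for c in library if c.body_type == character.body_type]

library = [MocapClip("walk_a", "tall"), MocapClip("walk_b", "short"),
           MocapClip("walk_c", "tall")]
tall_extra = DigitalCharacter("pedestrian_01", "tall")
print([c.name for c in clips_for(tall_extra, library)])  # ['walk_a', 'walk_c']
```

The design trade-off is storage versus fidelity: nine actor-specific clip libraries instead of one, in exchange for skipping the retargeting step that introduced the inaccuracies Hancock mentions.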
“The buildings were modeled almost like little model kits – you could take different sections of buildings, rearrange them and double them up to come up with different styles and sizes of buildings. The individual floors, windows, doorways and architectural caps were modules we could interchange and stack in different ways to create variety in the buildings without having to build each one in CG from the ground up.”
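The “model kit” approach – interchangeable floors topped with swappable caps, stacked to different heights – amounts to simple module composition. A sketch with invented module names (the real kit would also cover windows, doorways, and facade sections):

```python
import random

# Illustrative module library: each category holds swappable pieces.
FLOORS = ["brick_floor", "stone_floor", "stucco_floor"]
CAPS = ["cornice_cap", "parapet_cap"]

def build_facade(num_floors: int, rng: random.Random) -> list[str]:
    """Stack interchangeable floor modules and finish with an
    architectural cap, doubling pieces up for taller buildings."""
    stack = [rng.choice(FLOORS) for _ in range(num_floors)]
    stack.append(rng.choice(CAPS))
    return stack

rng = random.Random(42)
block = [build_facade(rng.randint(2, 5), rng) for _ in range(4)]
# Four varied buildings assembled from the same small kit of parts.
```

A handful of modules yields combinatorially many distinct buildings, which is exactly the economy the quote describes: variety without modeling every structure from scratch.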