Constructing a 3D Rig with Silicon Imaging 2K Cameras

For the music video of Björk’s ‘Wanderlust’ the directing duo known as Encyclopedia Pictura, comprising Sean Hellfritsch and Isaiah Saxon, decided to take the bold leap into the world of 3D. And this was not to be a simple 3D project, if there is such a thing, but one that included shooting miniatures, puppets and live action on greenscreen and then adding CG elements in post. While the video is not due out until February (and thus we cannot show any video clips until then), we spoke to the duo about the 3D process, from constructing the 3D camera rig with a pair of Silicon Imaging 2K MINI cameras and the techniques of shooting 3D, to editing and compositing the 3D images.
How did you get interested in 3D?

ISAIAH SAXON: Over a year ago I received a turn-of-the-century Opticon [stereo image] viewer along with a suitcase of stereo pairs and we became obsessed with 3D. Then we saw the film Deep Sea 3D and after experiencing that we decided we weren’t going to shoot 2D anymore if we could help it. So from here on out, it’s all 3D.

How did the Björk video come to you and what is the basic concept of the video?
IS: Björk had seen our last music video (Grizzly Bear) and gave us a call. The concept of the video is our attempt at creating a mytho-poetic cosmology of a primitive world, complete with water deities and the struggle towards the future. The main theme is nomadism, since it is for the track ‘Wanderlust.’

There are a number of different elements, shot or created in post, that all have to be combined. There is a large-scale, pre-human yak puppet, about 7 feet long and 7 feet tall, then there is Björk, then there is a version of Björk that she wears on her backpack, played by a professional dancer, a large river god/transcendental beast, the landscapes shot in miniature and the CG river. So each of those elements was manifested in a completely different environment and shot differently.

How did you decide to use the Silicon Imaging camera to shoot 3D?
IS: We had been researching 3D for a while before this project. We were going to need really small cameras in order to make it work. Basically we had a set of parameters for this project: we wanted to shoot really high resolution, have the ability to shoot high frame rates and keep the rig as small as possible. We went through the list of cameras that were available, and the only one that really fit in our budget range was the Silicon Imaging 2K Mini camera. From there we selected which lenses worked best with it and then designed our camera rig around our lenses. We had really wide-angle lenses: 5.5mm Optix lenses and Cooke 12.5mm lenses. Those are both Super 16mm lenses and are directly compatible with the SI-2K Mini.

Can you explain how the cameras are set up to shoot 3D?

IS: Basically you are recreating the distance between human eyes. So the general setting is keeping the cameras 2.5 inches apart. But if you are shooting things closer than 10 feet you have to put the cameras closer together. If you are shooting scale models like we were, the cameras must be closer together than is physically possible, so for that you use a beam-splitter. The technology behind that is pretty old, and we built our own beam-splitter. We worked with a couple of guys that had some experience in fabrication. We designed the camera rig in Rhino (a CAD system). We found a parts distributor that had extruded aluminum modular framing, downloaded parts from their library and designed the entire camera structure around our lenses and what size beam-splitter we would use.

The splitter is a thin, 2mm sheet of glass with a titanium coating that allows the visible light spectrum to both pass through it and be reflected by it. If you look at one side of it while the other side is perfectly dark, then that side will be a perfect reflection. So one camera is positioned above it shooting down into it at a 45-degree angle and the other camera is behind it shooting through it. You align the cameras perfectly so they are seeing the exact same image, and then you offset one camera, the right camera in our case, to get the interocular.

We shot scale models for our landscapes and one of the puppets, and also for close-ups, because we needed an adjustable interocular. The beam-splitter allows you to have an interocular of zero, where both cameras are seeing the same thing, so you can adjust it in fine increments.
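To put rough numbers on those relationships, here is a short Python sketch. The 1/30 rule of thumb and the way the miniature scale factor is applied are common stereography conventions assumed here for illustration; they are not settings quoted from the shoot.

```python
# Rough sketch of choosing an interaxial (interocular) camera separation.
# The 1/30 rule of thumb and the scale handling are assumptions for
# illustration, not figures from the production.

def interaxial_inches(nearest_subject_ft, scale=1.0, eye_spacing=2.5, rule=30.0):
    """Suggest a camera separation in inches.

    nearest_subject_ft: distance to the closest subject, in full-scale feet.
    scale: miniature scale (a 1/12-scale set -> 1/12); the separation shrinks
           by the same factor the world does.
    eye_spacing: the roughly 2.5-inch human-eye baseline, used as a cap.
    rule: the common "1/30" rule of thumb (separation ~ subject distance / 30).
    """
    suggested = (nearest_subject_ft * 12.0) / rule   # feet -> inches, then /30
    return min(eye_spacing, suggested) * scale

# Live action with the nearest subject 10 ft away: close to the 2.5 in baseline.
print(interaxial_inches(10))            # ~2.5
# Close-up at 4 ft: the separation drops well below eye spacing.
print(interaxial_inches(4))             # ~1.6
# A 1/12-scale miniature: far too small for side-by-side cameras, which is
# why a beam-splitter rig (adjustable down to zero) is used instead.
print(interaxial_inches(4, scale=1/12))
```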

What was the rig on set?
SEAN HELLFRITSCH: We had the [SI-2K] Minis mounted on arms that hung out in the right spot in relation to the mirror, and then those were tethered over gigabit Ethernet to PCs that we built, and then we had a monitor with a polarized 3D display so that we could view the 3D in realtime while we were shooting.

One of the sweet things about the Silicon Imaging software is that it will output to two DVI monitors. So we had those computers set up so we could take the signal from each camera and have that on a regular 2D monitor for viewing, but also take the second signal and run it to the 3D display. All you have to do is flip the image coming from the beam-splitter because it is upside down, but other than that there is no processing required.
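That flip is the only processing step. As a minimal sketch of what it amounts to, assuming the two eyes arrive as NumPy image arrays (an illustration, not how the SI-2K software actually handles it):

```python
import numpy as np

def prepare_eyes(direct_frame: np.ndarray, mirror_frame: np.ndarray):
    """Pair up the two eyes from a beam-splitter rig.

    The camera shooting through the glass sees the scene normally; the camera
    shooting down into the mirror sees it upside down, so that frame is
    flipped vertically. No other processing is applied.
    """
    return direct_frame, np.flipud(mirror_frame)
```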

Were you able to playback in 3D?
SH: Well, kind of. Since it was on two different cameras you would have to start both playback devices at the same time, and Silicon Imaging didn’t have that quite figured out at that point. So we would just try to hit play at the same time on both computers. It would loop maybe twice in sync and then would tend to slip out. So we could do a ghetto 3D playback.

And what were you recording to?
SH: We recorded onto the internal hard drives of the PCs. The Silicon Imaging software controls every aspect of the camera, from color correction to all the settings, and everything is recorded as AVI files using the CineForm codec.

Talk about some of the factors you have to account for when shooting 3D. Do you have to handle focus, lighting or framing any differently?

IS: Focus is one thing. We used wide-angle lenses to get away from any focus issues. And sometimes we’d have to cut our dolly moves short so that we were always in our focus range and the shot would hold up from close-ups to wide shots.

Lighting is only enhanced by using 3D as opposed to 2D. Framing is where you really have to take 3D into account. The carryover from one cut to the next, so that there is not a drastically different jump in the 3D effect, is crucial. So it has much more to do with the blocking, and with taking a less flashy approach to the staging of shots than you would with 2D, because 3D is comfortable with an object being plainly placed in front of you to behold, more so than if it were staged in a flashy or dynamic way.

Does that mean centering things in the frame?
IS: Not necessarily centering everything, but not allowing extra setups or really kinetic movement where things enter and exit frame. Our aesthetic approach to it was inspired by the Natural History Museum dioramas, where everything is contained in an isolated space and has a set-up feel to it. Each shot has its own context and there’s not a lot of coverage or inserts and stuff. We cut on action a few times just to save our ass, but in the storyboarding phase we tried to avoid it as much as possible.

SH: The one situation where shooting in 3D does come into play is when we are dollying from a close-up into a wide or vice versa. In those instances you do want to adjust for a smaller interocular; otherwise you’ll have a certain depth reading based on how far the object is from the camera. You can keyframe it to tweak it, but with things like landscapes you can screw the shot up pretty easily.
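As a sketch of what keyframing the interaxial over a move amounts to, here is a simple linear interpolation between hand-set values; the frame counts and separations are hypothetical, not figures from the production.

```python
def interaxial_at(frame, keyframes):
    """Linearly interpolate the camera separation between keyframed values.

    keyframes: list of (frame_number, interaxial_inches) pairs, sorted by frame.
    """
    (f0, v0), (f1, v1) = keyframes[0], keyframes[-1]
    for (fa, va), (fb, vb) in zip(keyframes, keyframes[1:]):
        if fa <= frame <= fb:
            f0, v0, f1, v1 = fa, va, fb, vb
            break
    t = 0.0 if f1 == f0 else (frame - f0) / (f1 - f0)
    return v0 + t * (v1 - v0)

# Hypothetical example: pulling back from a close-up to a wide over 120 frames,
# opening the separation up as the nearest subject gets farther from camera.
keys = [(0, 0.8), (120, 2.5)]
print(interaxial_at(0, keys), interaxial_at(60, keys), interaxial_at(120, keys))
```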

Explain your post workflow.

IS: Using the CineForm codec you have to use Premiere Pro, which we don’t usually use. So there was a bit of a learning curve, but the integration between Premiere and After Effects is really nice, because we could build each shot as a rough comp in Premiere just by stacking layers and then grab the files from the timeline and copy and paste them into After Effects, which is a huge timesaver. You used to have to prep each clip, export a QuickTime and then bring it into After Effects. So being locked into a post workflow we weren’t used to was a little problematic at first, but once we were in it, it worked great.

Do you have to approach the editing and compositing differently for 3D?
IS: In the edit you don’t address the 3D; you work with the one eye. In the composite you work on one eye first, just like a traditional 2D film. Once your right eye has been built and set, you then take all your left-eye footage and make it mimic what you did with your right eye. It’s a lot of tedious copying and pasting of all the parameters, and then a lot of tweaking for whatever discrepancy there was in the left eye. Then you feed those into two different comps, which creates the red-and-blue anaglyph so that you can view the 3D with the glasses, and you can slip and slide and adjust the 3D perspective.

You can lock a comp in After Effects, so you lock one comp, your right eye, and then go into the other, your left, and adjust it to slide the 3D perspective. So you can adjust it on the fly and you can change the perspective of each layer. We used mostly Premiere and After Effects, though some people were using Imagineer mocha for some of the roto work.
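The anaglyph step described above comes down to taking the red channel from one eye and the green and blue channels from the other, with a horizontal slip to shift the whole scene relative to the screen plane. A minimal NumPy sketch, assuming left and right frames as RGB arrays (channel assignment conventions vary, and this is not the actual After Effects setup):

```python
import numpy as np

def anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray, shift: int = 0) -> np.ndarray:
    """Build a red/cyan anaglyph for checking a stereo pair with glasses.

    left_rgb, right_rgb: HxWx3 uint8 frames for each eye.
    shift: horizontal offset in pixels applied to one eye -- the "slip and
           slide" adjustment that moves the scene toward or away from the
           screen plane.
    """
    right = np.roll(right_rgb, shift, axis=1) if shift else right_rgb
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]    # red channel from the left eye
    out[..., 1:] = right[..., 1:]     # green and blue from the right eye
    return out
```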

And are you finishing in 2K?
IS: No. Since the focus issues limited our camera moves, in post we did a lot of digital zooms. Part of our desire to shoot 2K was not necessarily that we wanted to print to film, but that we didn’t have the option of zooming with our lenses, so we wanted the option of moving the camera around if needed. So our finished resolution is going to be baby-HD (960×720).
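A digital zoom of that kind is essentially a crop scaled back to the delivery size. A rough sketch using Pillow; the crop logic and parameters are assumptions for illustration, not the actual finishing pipeline:

```python
from PIL import Image

def digital_zoom(src_path, out_path, zoom=1.5, center=(0.5, 0.5), out_size=(960, 720)):
    """Crop into a larger frame and scale the crop to the delivery resolution.

    zoom: 1.0 uses the largest crop that fits; 2.0 pushes in twice as far.
    center: where the crop is centered, as fractions of the frame width/height.
    out_size: delivery resolution, e.g. the 960x720 finish mentioned above.
    """
    img = Image.open(src_path)
    w, h = img.size
    ow, oh = out_size
    # Largest crop at the delivery aspect ratio that fits inside the source frame.
    base_w = min(w, h * ow // oh)
    cw = int(base_w / zoom)
    ch = cw * oh // ow
    cx, cy = int(center[0] * w), int(center[1] * h)
    left = min(max(cx - cw // 2, 0), w - cw)
    top = min(max(cy - ch // 2, 0), h - ch)
    crop = img.crop((left, top, left + cw, top + ch))
    crop.resize(out_size, Image.LANCZOS).save(out_path)
```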

You also mentioned a CG river. How are you creating that?
IS: We’re using the hair module in Softimage XSI for the CG river. The look we wanted for the river was a rendering of this Japanese-type aesthetic, where it is small faceted strands of liquid movement all moving independently in a stringy way. That was the approach we wanted for the river in order to match the heightened reality of the puppets and costumes and everything else. So we ended up using the hair module in XSI because it was the only thing that could move in that way, and it seemed like a reasonable approach without having to hand-animate every string of the river. The hair gets moved by a real water simulation underneath it, so it is actually water and hair, and then RealFlow foam and splashes and bubbling on top of that.

And what about color correction?
IS: We haven’t gotten there yet. In the past what we’ve done is use a batching technique in Photoshop, because that is the program we are more comfortable with and we haven’t really found comparable tools. Specifically, the selective color control in Photoshop has a really intuitive feel for us that we haven’t found in even the high-end color correctors. So we’ve always run our projects on a shot-by-shot batch basis through Photoshop using the automator. We’re not sure whether we will use that on this project, but if we can’t find anything that gives us that control we will.