Sony prototype 3D camera (via Engadget)

Ever wonder why Panasonic has so far had free rein to talk up an all-in-one stereo 3D camcorder aimed at broadcast video production? Wouldn’t you expect rival Sony to try to get in on the action? (Well, this puppy pictured at right, reportedly a prototype featuring essentially two PMW-EX3s stuffed into one camera body, made the blog rounds a couple of months ago, but Sony hasn’t made much noise about it.) As it turns out, Sony has its own nascent strategy for run-and-gun shooters in the 3D space, and it involves new post-production tools that would address a lot of expensive-to-fix problems with single-body 3D solutions, including limitations on interaxial distance (the space between the lenses) and minor aberrations between lenses that become A Big Deal when you’re trying to fuse matched stereo pairs into a 3D image.

Perhaps courting some new partners in the engineering realm, George Joblove, executive vice president of advanced technology for Sony Pictures Technologies, dropped a few cards on the table at the first SMPTE International Conference on Stereoscopic 3D for Media and Entertainment, held yesterday and today in New York City. But don’t expect any big product announcements in Sony’s immediate future — in response to an audience question about whether these post tools were being developed as actual products, Joblove chose his words carefully. “At the moment, it’s a research direction for us,” he said. “[But] it’s something that we expect to be fruitful.”

(A few words of disclosure: Sony was the primary sponsor of the SMPTE program.)

Joblove said interaxial separation — the distance between the two lenses in any stereographic set-up — is one of the main challenges facing the makers of a single-body 3D camera. While the typical distance between two human pupils is often thought of as a good basis for stereographic image capture, that’s not necessarily true, Joblove said. “Perhaps counter-intuitively, the ideal is not the human interpupillary distance — on average about six or seven centimeters,” he said. “It turns out that shooting with this interaxial distance often creates imagery that is uncomfortable to view. In fact, for many scenes, the best results are yielded by an interaxial of 40mm [a little more than an inch and a half] or less.”
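
To put rough numbers on that comfort argument, here’s a quick back-of-the-envelope sketch. The focal length and subject distance are assumed figures of mine, not anything from Joblove’s talk; the point is simply that, for a parallel rig, the disparity between the two captured images scales linearly with the interaxial.

```python
# Illustrative only: on-sensor disparity for a simple parallel (non-converged)
# stereo rig is roughly focal_length * interaxial / subject_distance.
# The focal length and subject distance below are assumed, not from the talk.
FOCAL_MM = 10.0        # assumed lens focal length
SUBJECT_MM = 2000.0    # assumed subject distance: two meters

def disparity_mm(interaxial_mm, focal_mm=FOCAL_MM, distance_mm=SUBJECT_MM):
    """Approximate on-sensor disparity for a parallel stereo rig."""
    return focal_mm * interaxial_mm / distance_mm

print(disparity_mm(65.0))  # ~0.33 mm at roughly eye-width spacing
print(disparity_mm(40.0))  # ~0.20 mm at the 40mm figure Joblove cites
```

Halve the interaxial and you halve the parallax the viewer’s eyes have to fuse, which is the whole appeal of getting the lenses closer together.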

Because you can’t physically get two full-sized camera bodies close enough together to achieve a 2.5-inch interaxial, many shooters resort to complicated beam-splitter rigs that require frequent alignment adjustments. Their cleverly positioned mirrors let two cameras pointed in different directions see the same scene as though they were packed more tightly side by side than their bodies allow. It’s a very flexible solution if you have the time and expertise to work with it, but a rig that fragile would never work in a fast-paced newsgathering or documentary environment. Joblove essentially proposed adding the selection of an interaxial distance to that endless list of things that fall under the esteemed header of “We’ll fix it in post.”

It should be possible, Joblove argued, to synthesize a third camera view at an arbitrary position between the left-eye and right-eye views that were actually captured. This third-eye view would be generated using algorithms to analyze disparities between the two captured images and detect object borders within them. The algorithms could be created as standalone tools, or integrated into non-linear editing systems. That would essentially allow the interaxial distance — any interaxial distance, no matter how small — to be dialed in during post-production based on information from a stereo image capture.
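
Here’s a minimal sketch of what that kind of disparity-driven view synthesis might look like, using off-the-shelf OpenCV block matching. The function name, parameters, and the crude handling of unmatched pixels are illustrative stand-ins, not anything Sony has described.

```python
# Hypothetical sketch: synthesize an intermediate "third-eye" view from a
# captured stereo pair by sliding pixels part of the way along the disparity
# field. Assumes OpenCV (cv2) and NumPy; all parameters are illustrative.
import cv2
import numpy as np

def synthesize_view(left_bgr, right_bgr, alpha=0.5, max_disp=64):
    """Return a view at fractional baseline position alpha
    (0.0 = the captured left eye, 1.0 = the captured right eye)."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Semi-global block matching stands in for the far more careful
    # disparity analysis and object-border detection Joblove describes.
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=max_disp,
                                    blockSize=7)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp < 0] = 0  # treat unmatched pixels as zero shift

    # Warp the left image partway toward the right eye's viewpoint.
    h, w = left_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + alpha * disp).astype(np.float32)
    map_y = ys.astype(np.float32)
    return cv2.remap(left_bgr, map_x, map_y, cv2.INTER_LINEAR)

# Example: dial a capture down to a virtual interaxial at half the shot baseline.
# left = cv2.imread("left_eye.png"); right = cv2.imread("right_eye.png")
# narrow = synthesize_view(left, right, alpha=0.5)
```

A real tool would need far better border detection and occlusion filling than this quick warp can manage, which is presumably where the research effort lies.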

Joblove ran down a list of other applications that could be developed for stereo 3D post-production, including color correction, geometry alignment, and convergence adjustments. Handling lens-uniformity issues in post would ease the burden of precision (and presumably the cost) in the manufacture of optics and sensors for 3D cameras, he noted.

Convergence would generally be driven in-camera, he said, noting that an “auto-convergence” feature could work as a function of focus distance. The adjustment itself could be made either by physically moving the image sensors left and right inside the camera or by digitally cropping the picture on a slightly oversized sensor, which leaves enough wiggle room to slide the two images into the correct positions. But a convergence tool in post would offer different ways to check convergence and depth to make sure that objects are not positioned too close or too far away (beyond infinity) in stereo space.
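
The oversized-sensor idea is easy to picture in code. The sketch below assumes symmetric horizontal window shifts on both eyes; the frame sizes, margins, and sign convention are mine, not Panasonic’s or Sony’s.

```python
# Hypothetical sketch of convergence by cropping: both eyes are read off
# slightly oversized captures, and sliding the two crop windows in opposite
# horizontal directions moves the convergence plane without touching any glass.
import numpy as np

def crop_for_convergence(left_full, right_full, out_w, out_h, offset_px):
    """Crop matching out_w x out_h windows from oversized frames.
    Under this sign convention, a positive offset_px pulls the convergence
    plane toward the camera; a negative one pushes it away."""
    h, w = left_full.shape[:2]
    margin_x = (w - out_w) // 2
    margin_y = (h - out_h) // 2
    if abs(offset_px) > margin_x:
        raise ValueError("offset exceeds the sensor's spare margin")

    y0 = margin_y
    lx = margin_x + offset_px   # left eye's window slides right
    rx = margin_x - offset_px   # right eye's window slides left
    left_out = left_full[y0:y0 + out_h, lx:lx + out_w]
    right_out = right_full[y0:y0 + out_h, rx:rx + out_w]
    return left_out, right_out

# Example with dummy 2048x1160 captures cropped to 1920x1080 (made-up sizes).
left_full = np.zeros((1160, 2048, 3), dtype=np.uint8)
right_full = np.zeros((1160, 2048, 3), dtype=np.uint8)
left_out, right_out = crop_for_convergence(left_full, right_full, 1920, 1080, 20)
print(left_out.shape, right_out.shape)  # (1080, 1920, 3) for each eye
```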

A time frame for the arrival of all this technical wizardry was beyond the scope of Joblove’s presentation, which, like many SMPTE presentations, still had the feel of a science project. If Panasonic’s camera becomes a big hit when it debuts later this year, Sony may push a camera into the mix sooner rather than later, figuring the post tools will be here when they get here.

Panasonic AG-3DA1 camcorder

For its part, Panasonic was also on hand at the SMPTE conference to describe the functionality of its upcoming AG-3DA1 single-body stereo camcorder. (Film & Video recently interviewed cinematographer Randall Dark about his experience as an enthusiastic early user of the camera.)

Michael Bergeron, strategic technical liaison for Panasonic Broadcast, emphasized that Panasonic repeatedly made the choice to sacrifice flexibility in favor of simplicity. The idea was to engineer a camera that would capture good-enough stereo imagery without requiring the camera operator to learn the new trade of stereographer. He chose a familiar videogame metaphor: the addition of a convergence control to the typical zoom, focus, and iris adjustments takes a run-and-gun camera from “Guitar Hero easy” to “Guitar Hero medium.”

Basically, the camera’s iris-control knob can be switched to adjust convergence instead, as the operator looks at either the left-eye image, the right-eye image, or a mixed image in the viewfinder. Panasonic remains coy about exactly how the convergence is changed in the camera, noting only that it’s a “fully optical” adjustment.

The lenses’ interaxial distance remains fixed at 60mm, a fairly short distance achieved by using 3-chip 1/4-inch imagers. “That’s a decision we made to keep the system simple for a lot of uses rather than complicated for every use,” Bergeron explained. The guts of the camera include systems for keeping the images from both lenses matched, Bergeron said, noting that it’s not all that much more complex than what Panasonic already programs into single-lens 2D cameras. And the camera’s design takes good stereo into account, encouraging DPs to shoot at focal lengths that suit the camera’s interaxial. “It’s not a long zoom range, but it’s appropriate for that interaxial,” he said. “This encourages you to place the camera where it needs to be to get good 3D.”

Finally, the camera shows users the distance to the convergence plane and offers guidance, calculated based on the current focal length and angle of convergence, about the appropriate shooting distances for any given set-up. “You may want to add a little padding into this, just the same as most people disagree with where the manufacturer puts the zebra point,” he warned. “But it’s a starting point.”
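
The distance read-out itself is straightforward trigonometry. Here’s a hedged sketch of the geometry, assuming a symmetric toe-in of both lenses on the camera’s fixed 60mm interaxial; the example angle is illustrative, and Panasonic hasn’t published the exact calculation behind its guidance.

```python
# Illustrative geometry only: where the two optical axes cross (the
# convergence plane) for a symmetric toe-in on a 60mm interaxial.
import math

INTERAXIAL_MM = 60.0  # fixed on the AG-3DA1

def convergence_distance_mm(toe_in_deg):
    """Distance at which the optical axes cross, for a per-lens toe-in angle."""
    return (INTERAXIAL_MM / 2.0) / math.tan(math.radians(toe_in_deg))

# Example: half a degree of toe-in per lens converges at roughly 3.4 meters.
print(convergence_distance_mm(0.5) / 1000.0)  # ~3.44 m
```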

The bottom-line plan, Bergeron said, is to make 3D production workable with a 2D crew, a similar-to-2D workflow, and (with luck) similar-to-2D budgets. We’ll learn more about how that works out when camera deliveries, slated for August, begin.