Since wowing attendees at last year’s SIGGRAPH, news about the Mova motion capture system has been suspiciously quiet. Too quiet. Leading one to draw one of two conclusions: either the system was not ready for production and still needed R&D efforts OR it was being used on projects that required all involved to remain tight-lipped about it. With the braintrust behind Mova, it should have been obvious that the former was not a possibility. Watch the video of Mova taken at last SIGGRAPH.

According to president Steve Perlman, Mova has been used in a number of big-budget films with A-list actors, starting just after SIGGRAPH. Of course, those productions are still being worked on, so no one can talk about them. Rhythm & Hues, however, did provide Mova with some low-res examples from a project it used Mova on, though the images were carefully crafted so as to not let on who the actors were or any details of production.

The news Perlman could talk about was the evolution of the system and the work with various partners. In terms of speed, at last year's SIGGRAPH the turnaround from capturing an actor to viewing the mo-cap footage was a day; a month before that show, it was a week. Now a proxy of the capture data can be viewed within minutes, with all the data processed and ready for the CG artists by the following day.

Mova has also been working with Vicon to find ways the two can combine forces to capture facial and body movements at the same time. (Mova can be used to capture the entire body, along with cloth and props, but that ends up being a tremendous amount of data and a bit of overkill for some projects.)

The other big news is the improvement in the makeup used. Last year they were using a very simple type of makeup bought off the shelf. Now they have worked with makeup professionals, allowing them to capture data on the inner lips and teeth, as well as right up to the very edges of the eyelids and the inner parts of the nose.

The benefits of the Mova system are numerous. First, it provides essentially millions of markers on the face, which allows for true photoreal CG characters, either as the exact likeness of the actor captured or as a base for creatures, monsters and animals. By providing essentially a duplicate of the actor's face, it virtually eliminates the arduous CG task of cleaning up motion capture data: there are none of the instances, common in marker systems, where the CG face breaks or distorts because the software has interpreted the data incorrectly, necessitating hand work. While other mo-cap systems put maybe 20-30 markers on a face and then take that data and build up a complex mesh with thousands of points, the Mova system captures millions of points, which then need to be scaled back so that today's software and processors can handle the data (about 5,000 points in Maya; some systems can handle a 10,000-point mesh). "This is essentially like a live-action shoot," says Perlman. "Mova is not an interpretation of movement, it is the raw movement."
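To make the numbers Perlman cites concrete, here is a minimal sketch of the kind of thinning involved: uniformly subsampling a dense capture down to a point budget an animation package can handle. The `decimate` helper and the point counts are purely illustrative assumptions, not Mova's actual pipeline (which would use far more sophisticated mesh decimation).

```python
# Illustrative only: thin a dense point cloud to a target vertex budget
# by keeping evenly spaced samples. Real pipelines use smarter decimation
# that preserves surface detail; this just shows the scale of the reduction.

def decimate(points, target):
    """Keep roughly `target` evenly spaced points from a dense sequence."""
    if len(points) <= target:
        return list(points)
    stride = len(points) / target
    return [points[int(i * stride)] for i in range(target)]

# Stand-in for a dense facial capture: 2,000,000 synthetic (x, y, z) samples.
dense = [(i, i * 0.5, i * 0.25) for i in range(2_000_000)]

# Reduce to the ~5,000-point budget the article mentions for Maya.
proxy = decimate(dense, 5_000)
print(len(proxy))  # prints 5000
```

The point is the ratio: a 2-million-point capture keeps only one sample in 400 to fit a 5,000-point working mesh, which is why the raw data is described as overkill for today's tools.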

The other main advantage is that you get not only a more real likeness of the character but a lifelike representation of their movements. FACS (the Facial Action Coding System) was developed in 1976 to categorize the swath of human facial expressions. Animators have long used it as a reference, and in recent years it has served as a checklist on the mo-cap stage to capture all of a performer's facial expressions. The problem with taking a static shot of a smile, according to Perlman, is that what makes a smile look real is not a frozen expression but the transition of the face into that smile.

Due to the secrecy surrounding those productions, we won't see a glimpse of Mova used in an actual project until 2008, but it should certainly be worth the wait.

www.mova.com