Gamasutra: The Art & Business of Making Games
Interview: The Tech Designed To Give L.A. Noire's Performances Life
July 22, 2010 | By Christian Nutt

Unsatisfied with current game technologies for capturing actors' performances, Brendan McNamara, founder of Team Bondi (developer of Rockstar's L.A. Noire), searched for a better way -- and ended up developing MotionScan with new company Depth Analysis, which he discussed with Gamasutra.

Utilizing 32 cameras, the process captures an actor's performance from the neck up, and automatically generates the geometry, texture, normal map, and audio at the same time from the same source at a higher resolution than is necessary for current game technology. This data can then be attached directly onto game character skeletons created in Maya or 3ds Max.

The first game to use this new technology will be Team Bondi and Rockstar's upcoming L.A. Noire. McNamara, frustrated by the range of performance current technologies made possible, instigated development of MotionScan to improve the verisimilitude of characters inside his detective game. "It's been a big benefit to us already," he says.

"This is an attempt to do realism and get over the uncanny valley. One of the things about this is that it makes storytelling and character empathy really believable. We think it's a little line in the sand... And it'll hopefully make everybody's games better."

The technology works differently from other solutions, says McNamara, which makes it the best for bringing actors directly into games. "We looked at lots of things. Mocap is trying to capture rotations, but in a face, the only rotation you get is in the jaw. What we're doing here is trying to capture muscle movements, tendons, etc."

In the past, using traditional motion capture, he says, "what I get back is close but it's not what I want." However, with MotionScan, he says, "people just relate to the characters. They don't think about what's wrong and what's broken."

And it's not just about the users, he says. "You'd spend forever trying to animate that stuff, and it's kind of self-defeating. I've never talked to an animator who likes to animate that stuff, because the results are never as believable as you want them to be."

Oliver Bao, Depth Analysis' head of research and development, admits there are similar, competing technologies. But he says, "Our core technology has been working since 2007, and we've spent the last two years doing polishing, proper pipeline and tools."

The actors' bodies are marked at the neck and chest for an "automatic" join with the character models in-game.

He also says that because the actors aren't encumbered by facial markers, as they are with some other technologies, you get better performances. And since their appearances are captured directly to textures, the actors can be made up by film-industry makeup artists -- "it's more realistic than character artists can model; these things have been done on the big screen."


The Future of MotionScan

"Initial [studio] setups are in Sydney (for R&D) and Culver City, Los Angeles for production. We aren't currently looking at selling rigs to [game] studios but it isn't out of the question. It's early stages on the commercialization front," says Bao.

McNamara is already interested in ways to take the technology further, he says. "We've been thinking about how to capture the exterior." With this version of the technology, they'd be able to capture full shots -- and use all of that data.

In fact, he says, "The ongoing goal for this technology is to get full-body so we can capture people in costumes, which has applications outside of games, in film as well." The technology would allow for "one perfect take" rather than forcing filmmakers to relight and reshoot from different angles.

Currently, Depth Analysis is talking to investors about getting to the immediate next level of this technology: a full-body rig.

The Stats

MotionScan currently uses 32 cameras capturing at 30 frames per second, at a two megapixel resolution. The data can be joined to Maya or 3ds Max skeletons. Able to capture 50 minutes of performance a day, the system outputs 20 minutes of final footage per day -- though additional servers could take on some of that workload. The data rate for MotionScan characters in-game is "typically between 30kB/s to 100kB/s depending on quality level needed" and three heads will use 20 to 30 MB of RAM in-game.
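The budgets quoted above can be sanity-checked with quick arithmetic. The sketch below uses only the figures from the article; the 90-second scene length is a hypothetical example, not something the article specifies.

```python
# Back-of-envelope check of the MotionScan in-game budgets quoted above.
# Data rate and RAM figures come from the article; the scene length is
# a hypothetical example chosen for illustration.

STREAM_KBPS_MIN, STREAM_KBPS_MAX = 30, 100   # per-character data rate, kB/s
HEADS = 3
RAM_MB_MIN, RAM_MB_MAX = 20, 30              # RAM quoted for three heads, MB

# Streamed data for a hypothetical 90-second dialogue performance:
scene_seconds = 90
low_mb = STREAM_KBPS_MIN * scene_seconds / 1024
high_mb = STREAM_KBPS_MAX * scene_seconds / 1024
print(f"90 s performance: {low_mb:.1f}-{high_mb:.1f} MB streamed")

# Per-head resident memory, assuming the three-head figure divides evenly:
print(f"per head: {RAM_MB_MIN / HEADS:.1f}-{RAM_MB_MAX / HEADS:.1f} MB")
```

At the quoted rates, a minute and a half of one character's facial performance streams only a few megabytes, which helps explain why the format was practical for a dialogue-heavy game.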


Comments


Scott Mitchell
This technology seems very impressive at first glance. You may end up with a technically flawless performance; however, wouldn't you be limited by the casting? How would someone efficiently alter the look or performance of a character after the data has been processed?

Personally, after being an animator for the past 12 years, I have never found the challenge of trying to create a believable and realistic animation "self-defeating". Actually, it's quite the contrary. I think you would be hard-pressed to find any animator who would give up that easily.

