Tool creator Havok has been at the forefront of physics engine development for some time now (as seen in games like Half-Life 2, Dead Rising, and MotorStorm),
and has recently branched out to include products like Havok FX for
special effects and HydraCore for multi-threaded optimization.
They've also released an animation SDK and tool (Behavior) which, as
demonstrated to Gamasutra, uses intelligent and rather intuitive
scripting and naming trees to create a very clean interface for
animators, and also has a very simple system for blends.
On
the occasion of this tool’s unveiling, we spoke with Jeff Yates,
director of product management for Havok, and discussed the company’s
plans for the future. Yates also provided a host of information useful
to both developers and consumers interested in the technical side of
the game industry, including a comparison of the respective strengths
of the current crop of consoles and a breakdown of several points of
popular confusion, such as the necessity of a hard drive for data
storage and the difficulty of porting games from the 360 to the PS3.
Gamasutra: How long has Havok been dealing with animation?
Jeff
Yates: As a company on the SDK side, a good two and a half years. We
launched our SDK at the end of 2004, and a lot of that work revolved
around really good compression of the animation. What we do is
maximally sample every key frame, then compress it down and play
it back exactly like you just saw. That's been going on for about two
years, and a full half of our sales now include animations. It's a
really good attach rate. In terms of this higher-level, broader view of
character animation, it's probably been about a year and a half to two
years we've been working on this. Some of us have been working with
other companies too.
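The kind of keyframe compression Yates mentions can be illustrated with a toy example: sample a curve densely, then drop any key that linear interpolation between its neighbors already reproduces within a tolerance. This is a hypothetical sketch for illustration only, not Havok's actual algorithm (real animation compression works on rotation curves per bone and uses more sophisticated schemes).

```python
# Hypothetical sketch of lossy keyframe compression: keep only the keys
# that linear interpolation cannot reconstruct within a tolerance.
# Illustrative only; not Havok's actual compression scheme.

def lerp(a, b, t):
    return a + (b - a) * t

def compress(keys, tolerance=0.01):
    """keys: list of (time, value) pairs, sorted by time."""
    if len(keys) <= 2:
        return keys[:]
    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        prev_t, prev_v = kept[-1]
        next_t, next_v = keys[i + 1]
        t, v = keys[i]
        # How far this key sits between the last kept key and the next key.
        frac = (t - prev_t) / (next_t - prev_t)
        # Drop the key if interpolation already predicts its value.
        if abs(lerp(prev_v, next_v, frac) - v) > tolerance:
            kept.append(keys[i])
    kept.append(keys[-1])
    return kept

# Densely sampled straight-line motion compresses to its two endpoints.
dense = [(i / 10.0, i / 10.0) for i in range(11)]
print(compress(dense))  # -> [(0.0, 0.0), (1.0, 1.0)]
```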
GS: How far do you
think you're going to go with what Havok covers? It seems like it's
continually expanding toward being a full engine competitor.
JY:
We obviously like to grow, and our customers generally want us to grow
and stay healthy too, as long as we don't take our eye off the ball
with physics. What will make us unique is that we'll probably try to do
only one or two components at a time, and we'll make those the best of
their class. We'll really try to make those true components, in the
programmer's sense of the word.
We could take a
path where we say it's all one black box and we're going to grow it
constantly, but we're very careful to make sure you can say, "I want
this one or that one, and not all of it at the same time." A good
amount of effort goes into making these things first-class SDK
components, and I think that's the way we'll stay. I don't see us
becoming a monolithic solution provider. I think we'll provide related
components that know about each other, and will integrate well out of
the box.
GS:
Will it be possible in the future for scripted animations (such as idle
animations) to feature slight variations? Right now they’re generally
looped canned animations.
JY: Right now I
think there are a lot of clever things you can do with blends. There
are a couple of approaches to that. The extreme one is to have eight
different ways the character might idle, then support random selection,
so you can pick a different one every time and get randomization that
way. That's a bit brute force, but surprisingly effective, because you
know exactly how long it will take, and how much memory it will take in
advance. Another approach that is a little less explicit is to have
extreme ranges of motion, then blend those with a variable that
modulates between them. For example, if the AI is tired, maybe his arms will
come down a little bit and then raise up. We try to enable that by
letting you put variables on these blends.
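The two approaches Yates describes can be sketched in a few lines: brute-force random selection among canned clips, and a blend between two extreme poses driven by a gameplay variable. This is a simplified illustration (poses reduced to a single joint angle, names invented), not Havok's API.

```python
# Hypothetical sketch of the two idle-variation approaches described
# above. Clip names, angles, and the "fatigue" variable are illustrative.
import random

idle_clips = ["idle_a", "idle_b", "idle_c"]  # canned variations

def pick_idle(rng):
    # Brute-force approach: cost and memory are known in advance.
    return rng.choice(idle_clips)

def blended_arm_angle(fatigue):
    """Blend between 'fresh' and 'tired' extremes; fatigue in 0.0..1.0."""
    arms_raised, arms_lowered = 90.0, 30.0  # degrees, illustrative
    return arms_raised + (arms_lowered - arms_raised) * fatigue

print(pick_idle(random.Random(0)))
print(blended_arm_angle(0.0))  # -> 90.0
print(blended_arm_angle(0.5))  # -> 60.0
```

The second approach trades the predictability of canned clips for continuous variation: one scalar drives the whole blend.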
I
think full procedural stuff is even cooler. It's most doable for the
upper body. Full procedural locomotion for the base of the body and for
momentum and stuff like that is just now starting to come out into the
open.
GS: Is that going to be difficult to blend into other animations?
JY:
Yeah. I tend to sometimes think about procedural textures when I think
about this stuff. They held so much promise, and then people saw that
they had like twenty variables that all had to be tuned and simulated.
It really depends. I think that for almost anything you look at -
whether at runtime or in 3D authoring - the algorithms you choose have
to be very predictable. To be honest, we haven't really tackled that
part of the equation yet. But I think anyone can plug it in.
GS: It seems like that's the direction that things might want to go in the future.
JY: I think so, yeah. As long as the artists like it and can control it - that's key.
GS: It seems like when characters go into ragdoll state in games like Gears of War, they go completely limp. Is it possible to give it more rigidity?
JY:
Oh yeah. You can blend the amount of keyframe animation - say for
example if the keyframe pose says, "I want to be here at a particular
time," but the physics say "You're dead over here," we can blend and
have a certain stiffness between the two, and that can be animated. A
lot of times you'll see tech demos where a ragdoll character will hit
the ground and he'll be stiff for a second, and then he'll go totally
limp. There's that kind of death pose that he holds before he goes
down. That's totally possible.
The
way we do it in this tool is to blend between a certain amount of
physics and a certain amount of keyframe animation. We think that makes
up so much territory. It's certainly not the be-all and end-all, but it
certainly makes ragdoll seem so much more alive. The ragdoll can serve
a lot of brief, high-value purposes. You can take a hit, then go a
little bit to ragdoll and then recover, and never go to the ground. You
get a lot more variability in response that way. The Ubisoft talk at
GDC had some great examples of that.
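The stiffness blend Yates describes can be sketched per joint: a weight of 1.0 means fully keyframed, 0.0 means fully ragdoll, and animating that weight over time produces the "stiff for a second, then limp" effect. This is a minimal illustration (poses reduced to lists of joint angles; real engines blend quaternions per bone), not Havok's actual implementation.

```python
# Hypothetical sketch of blending a keyframed pose with a physics
# (ragdoll) pose using an animated stiffness weight. Illustrative only.

def blend_pose(keyframe_pose, physics_pose, stiffness):
    """stiffness 1.0 = fully keyframed, 0.0 = fully ragdoll."""
    return [k * stiffness + p * (1.0 - stiffness)
            for k, p in zip(keyframe_pose, physics_pose)]

def stiffness_over_time(t, hold=0.5, fade=1.0):
    """Hold the death pose stiff for `hold` seconds, then fade to ragdoll."""
    if t <= hold:
        return 1.0
    if t >= hold + fade:
        return 0.0
    return 1.0 - (t - hold) / fade

keyframe = [10.0, 20.0, 30.0]  # joint angles, illustrative
physics = [50.0, 60.0, 70.0]
print(blend_pose(keyframe, physics, stiffness_over_time(0.2)))  # still stiff
print(blend_pose(keyframe, physics, stiffness_over_time(1.0)))  # halfway limp
print(blend_pose(keyframe, physics, stiffness_over_time(2.0)))  # full ragdoll
```

Driving `stiffness` back up over a short window is what allows the hit-recover behavior Yates mentions: the character dips toward ragdoll and then returns to the keyframed pose without ever falling.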