Licensing rendering engines is now a well-established practice, with great potential
cost and time savings over the development of a single game. As game developers
reach for new forms of gameplay and a better process for implementing
established genres, the wisdom of licensing physics engines is becoming
inescapable. Commercial engines such as Havok and MathEngine's Karma (at
press time, Criterion Software, makers of the RenderWare line of development
tools, were in negotiations to acquire MathEngine) have become mature
platforms that can save months in development and test. Their robust implementations
can provide critical stability from day one, and their advanced features
can offer time advantages when developers are exploring new types of gameplay.
This sophistication does come with a cost. Physics engines do more than
just knock over boxes, and the interface between your game and a physics
engine must be fairly complex in order to harness advanced functionality.
Whether you have already licensed an engine and want to maximize your
investment or you're just budgeting your next title, gaining a better
understanding of the integration process will save a lot of trial and
error, and hopefully let you focus on better physics functionality while
spending less time watching your avatar sink through the sidewalk.
The bare minimum we expect from a physics engine is fairly obvious: we
want to detect when two objects are interacting and we want that interaction
to be resolved in a physically realistic way - simple, right? As you progress
deep into integration, however, you'll find physics affects your user
interface, logic mechanisms, AI routines, player control, and possibly
even your rendering pipeline (Figure 1).
At Cyan Worlds, we're more than a year into our use of a commercial physics
engine, having integrated it with our own proprietary game engine. I'm
going to share with you some of the nuts and bolts of our integration
process. In the first part of this article, I'll talk about the fundamentals:
data export, time management, spatial queries, and application of forces.
Then, with an eye toward character-centric game implementations, I'll
visit the twin demons of keyframed motion and player control. In these
areas, challenges arise because both of them require that you bend the
laws of physics somewhat, and that means you must draw some clear distinctions
between what is physics and what is programming for effect.
Figure 1: Physics has many (inter)faces.
There are three categories of geometry supported by physics engines. The
simplest are primitives, represented by formulae such as sphere, plane,
cylinder, cube, and capsule. Somewhat more expensive is convex polygonal geometry.
Convexity simplifies detection and response greatly, leading to improved
performance and better stability. Convex shapes are useful for objects
where you need a tighter fit than a primitive can provide but don't
require concavity. Finally, there is polygonal geometry of arbitrary
complexity, also known as polygon soups. Soups are fairly critical for
level geometry such as caves and canyons but are notoriously difficult
to implement robustly and must be handled with care to avoid slowdowns.
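If it helps to picture these categories concretely, here is a hypothetical shape taxonomy sketched in C++; this is not any particular engine's API, just an illustration ordered roughly from cheapest to most expensive at run time:

// Hypothetical shape taxonomy; real engines expose their own types.
// Ordered roughly from cheapest to most expensive at run time.
enum PhysShapeType {
    kShapeSphere,   // primitive: center plus radius
    kShapePlane,    // primitive: infinite half-space
    kShapeBox,      // primitive
    kShapeCapsule,  // primitive: a favorite for characters
    kShapeConvex,   // convex polygonal hull: fast, stable response
    kShapeTriMesh   // "polygon soup": arbitrary concave level geometry
};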
Since these geometric types have different run-time performance costs,
you'll want to make sure that your tools allow artists to choose the cheapest
type of physical representation for their artwork. In some cases your
engine can automatically build a minimally sized primitive (an implicit
proxy) at the artist's request; in other cases the artists must hand-build
substitute geometry (an explicit proxy). You'll need to provide a way
to link the proxy to the visible geometry it represents, so that changes
in the physical state of an object will be visible to the user.
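One way to record that link in your export data, sketched in C++ with purely illustrative names (nothing here comes from a specific engine or tool chain):

#include <string>
#include <vector>

// Illustrative export-side record linking a physics proxy to the visible
// geometry it stands in for; all field names here are hypothetical.
struct PhysicsProxyRecord {
    std::string visibleNodeName;  // scene-graph node this proxy drives
    std::string shapeType;        // "sphere", "convex", "trimesh", ...
    bool        isImplicit;       // true: tool auto-fits a primitive
    std::vector<float> vertices;  // explicit proxy verts (x,y,z triples), if any
};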
Transforms in a rigid-body simulation do not include scale or shear. This mathematical
simplification makes them fast and convenient to work with, but it leaves
you with the question of what to do with scale on your objects. For static
geometry, you can simply prescale the vertices and use an identity matrix.
For moving physical geometry, you'll most likely want to forbid scale
and shear altogether; there's not much point in having a box that grows
and shrinks visually while its physical version stays the same size.
In most cases, a proxy and its visible representation will have the same
transform; you want all movement generated from physics to be mirrored
exactly in the rendered view. To relieve artists from having to align
the transforms manually - and keep error out of your process - you may
find it worthwhile to move the vertices from the proxy into the coordinate
space of the visible geometry (Figure 2a).
Alternatively, if the proxy geometry will be used by several different visible geometries,
you may wish to keep the vertices in their original coordinate system
and simply swap in the visible geometry's transform (Figure 2b). This
method will let you use physical instances, wherein the same physical
body appears in several different places in the scene. This latter approach,
while enabling efficiency via instancing, can be less intuitive to work
with because the final position of the physical geometry depends on the
transforms of the objects it's used for and not on the position in which it was
originally modeled.
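Here is a sketch of the Figure 2a bake, assuming rigid transforms (any scale already burned into the vertices, as discussed above) and minimal illustrative types:

#include <cstddef>

// Minimal rigid transform (rotation R, translation t); no scale or shear,
// matching what the physics engine accepts. Types are illustrative.
struct RigidXform {
    float R[3][3]; // rotation, row-major
    float t[3];    // translation
};

static void TransformPoint(const RigidXform& x, const float in[3], float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = x.R[i][0]*in[0] + x.R[i][1]*in[1] + x.R[i][2]*in[2] + x.t[i];
}

// Inverse of a rigid transform: R' = R^T, t' = -R^T t.
static RigidXform Inverse(const RigidXform& x) {
    RigidXform inv;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            inv.R[i][j] = x.R[j][i];
    for (int i = 0; i < 3; ++i)
        inv.t[i] = -(inv.R[i][0]*x.t[0] + inv.R[i][1]*x.t[1] + inv.R[i][2]*x.t[2]);
    return inv;
}

// Figure 2a: re-express proxy vertices in the visible geometry's local
// space so both objects can share one transform at run time.
void BakeProxyIntoVisibleSpace(const RigidXform& proxyToWorld,
                               const RigidXform& visibleToWorld,
                               float* verts, size_t vertexCount)
{
    RigidXform worldToVisible = Inverse(visibleToWorld);
    for (size_t i = 0; i < vertexCount; ++i) {
        float world[3], local[3];
        TransformPoint(proxyToWorld, &verts[i*3], world);   // proxy -> world
        TransformPoint(worldToVisible, world, local);       // world -> visible
        for (int k = 0; k < 3; ++k) verts[i*3 + k] = local[k];
    }
}

For the Figure 2b approach you'd skip the bake entirely and simply assign each visible node's transform to its own physical body when the scene loads.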
Dealing with time cleanly is an extremely important thing to get right early on
in integrating a physics engine. There are three key aspects of time relevant
to simulation management: game time, frame time, and simulation time.
Game time is a real-time clock working in seconds. While you might be
able to fudge your way from a frame-based clock to a pseudo-real-time
clock, working with seconds from the start will give you a strong common
language for communicating with the physics subsystems. The more detailed
your interactions between game logic, animation, and physics, the more
important temporal consistency becomes - a difference of a few hundredths
of a second can mean the difference between robust quality and flaky physics.
There will be situations where you want, for example, to query your animation
system at a higher resolution than your frame rate. I'll talk about this
kind of situation later in the "Integrating Keyframed Motion" section.
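A minimal sketch of such a clock, using std::chrono here purely for illustration (any high-resolution timer will do):

#include <chrono>

// A real-time game clock measured in seconds (double), so game logic,
// animation, and physics all speak the same time language.
class GameClock {
public:
    GameClock() : start_(std::chrono::steady_clock::now()) {}
    double Seconds() const {
        using namespace std::chrono;
        return duration<double>(steady_clock::now() - start_).count();
    }
private:
    std::chrono::steady_clock::time_point start_;
};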
Frame time is the moment captured in the rendered frame. Picture it as
a strobe light going off at 30 frames per second. While you only get an
actual image at the frame time, a lot is happening between the images.
Simulation time is the current time in your physics engine. Each frame,
you'll step simulation time until it reaches the current target frame
time (Figure 3). Choosing when in your loop to advance simulation can
greatly affect rendering parallelism.
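In sketch form, with StepSimulation() standing in for whatever substep call your engine actually exposes:

// Advance the physics clock in equal fixed substeps until it catches up
// to the frame about to be rendered.
void StepSimulation(double dt); // hypothetical engine substep call

const double kPhysicsStep = 1.0 / 60.0; // e.g. 60Hz physics, independent of display rate

void AdvancePhysics(double& simTime, double frameTime)
{
    while (simTime + kPhysicsStep <= frameTime) {
        StepSimulation(kPhysicsStep);
        simTime += kPhysicsStep;
    }
    // Any leftover fraction of a step carries into the next frame,
    // so every step is exactly the same size.
}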
Frame rates can vary; if your physics step size varies, however, you'll
see different physical results - objects may miss collisions at some rates
and not at others. It's also often necessary to increment, or step, the
simulation at a higher rate than your display; physics will manage fast-moving
objects and complex interactions more accurately with small step sizes.
Tuning your physics resolution is straightforward. At physics update time,
simply divide your elapsed time by your target physics step size and step
the physics engine that many times. Be careful, though: if your frame rate
drops, this approach will take more physics steps so that each step interval
stays the same size, which will in turn increase your per-frame CPU load.
In situations of severe lag, this can steal time from your render cycle,
lowering your frame rate, which then causes even more physics steps, ad infinitum.
In such scenarios, you need a way to drop your physics-processing load
until your pipeline can recover. If you're close to your target frame
rate, you may be able to get away with taking larger substeps, effectively
decreasing your physics resolution and accepting a reduction in realism.
If the shortfall is huge, you can skip updating the simulation altogether
- simply freeze all objects, bring the simulation time up to the current
frame time, and then unfreeze the objects. This process will prevent the
degeneracies associated with low physics resolution, but you'll have to
make sure that systems that interact with physics - such as animation
- are similarly suspended for this time segment.
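Folding both fallbacks into the earlier stepping sketch might look like the following; FreezeAllBodies(), UnfreezeAllBodies(), and the thresholds are all illustrative:

// Recovery paths for when physics can't keep up with rendering.
void StepSimulation(double dt);                          // hypothetical engine substep
void AdvancePhysics(double& simTime, double frameTime);  // from the earlier sketch
void FreezeAllBodies();                                  // hypothetical: suspend all bodies
void UnfreezeAllBodies();                                // hypothetical

static const double kPhysicsStep  = 1.0 / 60.0; // same step size as above
static const int    kMaxSubsteps  = 8;          // per-frame physics budget (tunable)

void AdvancePhysicsWithRecovery(double& simTime, double frameTime)
{
    double elapsed = frameTime - simTime;
    int steps = (int)(elapsed / kPhysicsStep);

    if (steps <= kMaxSubsteps) {
        AdvancePhysics(simTime, frameTime);  // normal fixed-step path
    } else if (steps <= 2 * kMaxSubsteps) {  // mild shortfall (heuristic cutoff)
        // Take fewer, larger substeps and accept the reduction in realism.
        double bigStep = elapsed / kMaxSubsteps;
        for (int i = 0; i < kMaxSubsteps; ++i)
            StepSimulation(bigStep);
        simTime = frameTime;
    } else {
        // Severe lag: freeze everything, jump the clock, unfreeze.
        FreezeAllBodies();
        simTime = frameTime;  // no simulation work for this segment
        UnfreezeAllBodies();
        // Systems coupled to physics (animation, for one) must be
        // suspended for the same segment.
    }
}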
If you're receiving events from the physics engine, the difference in
clock resolution between graphics and physics has another implication:
for each rendering frame, you may get several copies of the same contact
event. Since it's unlikely that recipients of these messages
- such as scripting logic - are working at physics resolution, you'll
need to filter out these redundant messages.
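One way to do that filtering, sketched with a hypothetical DispatchContactToScript() handler standing in for your game-side event delivery:

#include <algorithm>
#include <set>
#include <utility>

// Collapse the multiple copies of a contact event produced by physics
// substeps into one notification per rendered frame.
void DispatchContactToScript(unsigned bodyA, unsigned bodyB); // hypothetical

class ContactFilter {
public:
    // Called from the physics callback, possibly several times per
    // frame for the same pair of bodies.
    void OnContact(unsigned a, unsigned b) {
        if (a > b) std::swap(a, b);                  // normalize pair order
        if (seenThisFrame_.insert(std::make_pair(a, b)).second)
            DispatchContactToScript(a, b);           // first sighting this frame
    }
    // Called once per rendered frame, after the simulation has stepped.
    void EndFrame() { seenThisFrame_.clear(); }
private:
    std::set<std::pair<unsigned, unsigned> > seenThisFrame_;
};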