Player Control and Navigation
This paper
first discusses player control and navigation in an action-oriented
3D game. The collision requirements mentioned help motivate the following
sections on collision detection. The final sections discuss application
and construction issues, as well as ideas for future work.
In a video
game there are many small projectiles, such as rocks or bullets, that
travel along a given trajectory. You would expect one of these to hit
a character only if it impacts a polygon on that character. This makes
sense because it simulates how things work in real life.
So how
should we handle the separate problem of collision and navigation for
a character in a 3D environment? In the real world this is achieved
by "left foot forward", "right foot forward", and
so on. In a video game the user has an avatar which is often a biped,
so why not implement something analogous to how things work in the real
world? The big problem here is that our interface is restricted to a
2DOF mouse and a few extra bits from the available mouse buttons and
keys. There have been various attempts, but it's very hard to make an
effective interface that perfectly mirrors how we do things in the real
world - such as picking up objects, walking with our feet, and so on.
Olympic Decathlon on the Apple II is one of the few successful examples.
Rather than trying to physically simulate all the limbs of the body,
most of the popular shooters today seem to navigate the character as
if it was a simple shape - such as a sphere or cube.
In MDK2
we used a cylinder model for the character-to-environment collision
detection. This cylinder is always upright. Note that the cylinder is
rotationally symmetric about its vertical axis, so the user is never restricted
from turning.
In real life you have to turn sideways to squeeze through a narrow passage.
This is not something you would ever want to force on the user - especially
if it is a first person game.
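A minimal sketch of that shape (the names here are illustrative, not MDK2's actual code) needs little more than a position, a radius, and a height:

struct Vector3 { float x, y, z; };

// The navigation shape: an always-upright cylinder.
struct CollisionCylinder {
    Vector3 base;    // center of the bottom cap, in world space
    float   radius;  // horizontal extent
    float   height;  // vertical extent
};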
If the
game is 3rd person, or if you are looking at another player, you still
expect to see the model move its feet when walking, or play a jump animation
when the player jumps into the air. The animation of the character is
a function of what's going on in our simple navigation model - not the
other way around. In MDK2 each character had a set of animations for
running and walking forward, backward, sideways, jumping, climbing and
so on. These were invoked at the appropriate times. We didn't do any
IK, but that would be a great improvement over just playing "canned"
animations. IK is consistent with the system presented here.
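As a rough illustration (the state names and animation names below are made up for this example, not MDK2's), the mapping can be as simple as a switch on the navigation state:

enum NavState { NAV_IDLE, NAV_RUN_FORWARD, NAV_RUN_BACKWARD,
                NAV_STRAFE, NAV_JUMP, NAV_CLIMB };

// Pick a canned animation based on what the navigation model is doing.
const char* AnimationFor(NavState state) {
    switch (state) {
        case NAV_RUN_FORWARD:  return "run_forward";
        case NAV_RUN_BACKWARD: return "run_backward";
        case NAV_STRAFE:       return "strafe";
        case NAV_JUMP:         return "jump";
        case NAV_CLIMB:        return "climb";
        default:               return "idle";
    }
}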
Resolving Collisions
Based on user input the engine applies accelerations to the cylinder
object. As the player moves around, you must check whether he collides
with the environment. If a collision occurs, you deal with it by adjusting
the position you had planned to move the player to. This may require a few
iterations to get right.
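In outline, the per-frame move might look something like the sketch below. TraceCylinder is a hypothetical stand-in for the engine's sweep test, and Deflect implements the sliding fix described in the next section; neither is MDK2's real API.

// Reusing the Vector3 sketched earlier, with the usual helpers.
inline Vector3 operator+(Vector3 a, Vector3 b) { return Vector3{a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vector3 operator-(Vector3 a, Vector3 b) { return Vector3{a.x - b.x, a.y - b.y, a.z - b.z}; }
inline Vector3 operator*(Vector3 a, float s)   { return Vector3{a.x * s, a.y * s, a.z * s}; }
inline float   dot(Vector3 a, Vector3 b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Impact { Vector3 normal; float dist; };  // plane: dot(normal, P) + dist == 0

// Assumed engine query: sweep the player's cylinder from 'from' to 'to' and
// report the first impact plane, if any.
bool TraceCylinder(const Vector3& from, const Vector3& to, Impact* hit);

// Push a blocked target back above the impact plane (see the next section).
Vector3 Deflect(Vector3 target, const Vector3& n, float d);

// Try to reach 'target'; each time the path is blocked, deflect and try again.
Vector3 MoveWithCollisions(Vector3 from, Vector3 target) {
    const int kMaxIterations = 4;   // a few iterations is usually enough
    for (int i = 0; i < kMaxIterations; ++i) {
        Impact hit;
        if (!TraceCylinder(from, target, &hit))
            return target;          // the path is clear
        target = Deflect(target, hit.normal, hit.dist);
    }
    return from;                    // could not resolve; stay put this frame
}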
Sliding
It's best to resolve a collision by deflecting the player's motion rather
than stopping at the impact point. Imagine if running down a hallway and
brushing up against the wall caused you to "stick". This is easily
avoided using something in your code like:
Vt += Ni * (-dot(Ni,Vt)-Nd)
Where:
Vt is the target position for the move,
Ni is the unit-length normal of the impact plane, and
Nd is the plane's distance value, so a point P lies on the plane when dot(Ni,P) + Nd = 0.
In English,
this just means: move the target position above the impact plane in the
direction perpendicular to that plane.
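In code, that projection might look like the sketch below, reusing the hypothetical Vector3 helpers from the earlier loop and assuming Ni has unit length:

// Push the target back onto the impact plane along the plane's normal.
Vector3 Deflect(Vector3 Vt, const Vector3& Ni, float Nd) {
    float d = dot(Ni, Vt) + Nd;     // signed distance of the target from the plane
    if (d < 0.0f)                   // only correct targets that ended up behind the plane
        Vt = Vt + Ni * (-d);        // same as Vt += Ni * (-dot(Ni,Vt) - Nd)
    return Vt;
}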
There
is one thing to watch out for with this simple collision resolution
algorithm: when you have two planes of impact that face each other you
do not want an endless cycle of deflecting the motion vector back and forth.
It is best if the code can handle multiple planes simultaneously.
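One common way to handle two planes at once (not necessarily what MDK2 did) is to keep only the part of the motion that lies along the crease where the two planes meet:

inline Vector3 cross(Vector3 a, Vector3 b) {
    return Vector3{a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// If the deflected motion immediately runs into a second plane, slide along
// the crease shared by both planes instead of bouncing back and forth.
Vector3 SlideAlongCrease(Vector3 from, Vector3 target, Vector3 n1, Vector3 n2) {
    Vector3 crease = cross(n1, n2);      // direction that lies in both planes
    float   len2   = dot(crease, crease);
    if (len2 < 1e-6f)
        return from;                     // the planes are (nearly) parallel: just stop
    Vector3 move = target - from;        // the motion we wanted this frame
    float   t    = dot(move, crease) / len2;
    return from + crease * t;            // keep only the component along the crease
}

Even with this in place, capping the number of iterations, as in the earlier loop, is a cheap safety net.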
Stepping
When was the last time you were crossing the street and tripped on the
curb?
Stepping
up onto the sidewalk is a function processed by some subconscious part
of the brain. In a video game, the user does not want to have to worry
about small steps and stairs. He mouselooks where he wants to go while
holding the forward arrow key. It is up to the game developer to implement
all the lower level brain functions.
I solved this by lifting up the path of the player when it collided
with something. If the raised path was clear I'd move the player along
it and then drop it back down. I thought this was just a temporary fix
when I first implemented it during our prototype stage, but it worked
beautifully and ended up in the final code that was shipped.
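A sketch of that idea, in the same hypothetical terms as the earlier collision loop (kStepHeight and the +z-up convention are assumptions for the example):

const float kStepHeight = 0.5f;          // illustrative value, in world units

// If the direct path is blocked, retry it raised by the step height; if the
// raised path is clear, take it and then drop back down onto the step.
Vector3 MoveWithStepUp(Vector3 from, Vector3 target) {
    Impact hit;
    if (!TraceCylinder(from, target, &hit))
        return target;                   // nothing in the way

    Vector3 lift = Vector3{0.0f, 0.0f, kStepHeight};
    Vector3 raisedFrom   = from   + lift;
    Vector3 raisedTarget = target + lift;
    if (TraceCylinder(raisedFrom, raisedTarget, &hit))
        return MoveWithCollisions(from, target);   // still blocked: just slide

    // The raised path was clear: take it, then drop back down. The drop reuses
    // the sliding move, so the player lands on top of whatever is below.
    return MoveWithCollisions(raisedTarget, raisedTarget - lift);
}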
In MDK2, implementing climbing turned out to be a straightforward extension
of the step-up ability. Climbing was used if the height transition was
large enough. In this case, the player's climbing animation was also
invoked. For the user, it became easy to move his character up onto
a ledge. No keypresses required. No special skills to learn.
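Sketched in the same illustrative terms (the thresholds and the animation call are placeholders, not MDK2's real values):

const float kMaxStepHeight  = 0.5f;   // small rises are stepped up silently
const float kMaxClimbHeight = 2.0f;   // larger rises are climbed, with the animation

void PlayAnimation(const char* name); // assumed engine call

// Decide how the player gets over a ledge of height 'rise'.
bool TryStepOrClimb(float rise) {
    if (rise > kMaxClimbHeight)
        return false;                 // too tall: it is just a wall
    if (rise > kMaxStepHeight)
        PlayAnimation("climb");       // same path lifting as a step, plus the animation
    return true;                      // in both cases, lift the path as in step-up
}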
Standard Player Control Interface
In the mature world of Windows there is an evolved interface with scrollbars,
menus, and buttons, which has become our standard means of 2D interaction
at the desktop. Change this and you will make the user very angry. Some
people are actively looking for the standard 3D interface. Well, I believe
that it is already here right under our noses. Look at the games we
play.
VRML browsers
are fine for grabbing and rotating an object. But they are not as well
suited for navigating around a world. There is little sense of "presence".
Whether 1st or 3rd person, it is nice to have an avatar that has some
volume to it and will collide with the environment. Furthermore, the
movement shouldn't be cumbersome. People expect to be able to move sideways,
to slide along walls, and to automatically ascend stairs or ladders.
To appease the user you want to provide an intuitive interface that
feels right. So even if you are building some 3D internet shopping mall
application, grab the key bindings from the user's Quake config file,
and let him rocket-jump from the bank to the clothing store. It would
be nice to see more successful applications, other than games, that
are based on 3D immersive technology.