From Research To Games: Interacting With 3D Space


April 22, 2010

Occlusion Techniques

Occlusion techniques (also called image plane techniques), first proposed in 1997, work in the plane of the image; objects are selected by "covering" them with the virtual hand so that they are occluded from your point of view.

Geometrically, this means that a ray emanates from your eye, passes through your finger, and then intersects an object. Occlusion techniques can be used for object selection at a distance, but instead of the laser-pointer metaphor that ray-casting affords, players simply "touch" distant objects to select them.

These techniques can be implemented in the same ways as the ray-casting technique, since they also use a ray. If you are using the brute-force ray intersection algorithm, you can simply define the ray's direction by subtracting the eye position from the finger position.
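
As a concrete sketch, the brute-force version might look like the following Python snippet (NumPy for the vector math; the bounding-sphere scene representation is an assumption for illustration, not part of the technique):

```python
import numpy as np

def pick_by_occlusion(eye, finger, spheres):
    """Brute-force occlusion selection: cast a ray from the eye through
    the finger and return the index of the closest intersected sphere."""
    direction = finger - eye
    direction = direction / np.linalg.norm(direction)

    best_index, best_t = None, np.inf
    for i, (center, radius) in enumerate(spheres):
        # Standard ray-sphere intersection test (ray: eye + t * direction).
        oc = eye - center
        b = np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - c
        if disc < 0.0:
            continue  # the ray misses this sphere
        t = -b - np.sqrt(disc)  # nearest hit in front of the eye
        if 0.0 < t < best_t:
            best_index, best_t = i, t
    return best_index

# Example: the finger occludes the sphere sitting 5 m straight ahead.
eye = np.array([0.0, 1.7, 0.0])
finger = np.array([0.0, 1.7, -0.5])
spheres = [(np.array([0.0, 1.7, -5.0]), 0.5),
           (np.array([2.0, 1.7, -5.0]), 0.5)]
print(pick_by_occlusion(eye, finger, spheres))  # -> 0
```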

However, if you are using the second algorithm, you require an object to define the ray's coordinate system.

This can be done in two steps. First, create an empty object and place it at the hand position, aligned with the world coordinate system. Next, determine how to rotate this object/coordinate system so that it is aligned with the ray direction. The angles can be determined from the positions of the eye and hand using some simple trigonometry. In 3D, two rotations must in general be done to align the new object's coordinate system with the ray.
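
The two angles can be computed as below; this sketch assumes a y-up world with the object's forward axis along -z, which is a common convention rather than something the technique requires:

```python
import numpy as np

def ray_alignment_angles(eye, finger):
    """Yaw and pitch (radians) that rotate a y-up frame whose forward
    axis is -z so that forward points along the eye-to-finger ray."""
    d = finger - eye
    d = d / np.linalg.norm(d)
    # First rotation: yaw about the world y axis, from the x/z components.
    yaw = np.arctan2(-d[0], -d[2])
    # Second rotation: pitch about the local x axis, lifting forward toward d.
    pitch = np.arcsin(np.clip(d[1], -1.0, 1.0))
    return yaw, pitch
```

Applying the yaw first and then the pitch (about the already-yawed x axis) leaves the empty object's forward axis pointing along the ray.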

Arm-Extension

The arm-extension (e.g. Go-Go) technique, first described in 1996 and inspired by the '80s cartoon Inspector Gadget, is based on the simple virtual hand, but it introduces a nonlinear mapping between the physical hand and the virtual hand, so that the user's reach is greatly extended.

Not only is Go-Go useful for object selection at a distance; it could also be used for traveling through an environment, in the manner of Batman's grappling hook or Spider-Man's web. The graph in Figure 2 shows the mapping between the physical hand's distance from the body on the x-axis and the virtual hand's distance from the body on the y-axis.

There are two regions. When the physical hand is at a depth less than a threshold 'D', the one-to-one mapping applies. Outside D, a non-linear mapping is applied, so that the farther the user stretches, the faster the virtual hand moves away.


Figure 2. The nonlinear mapping function used in the Go-Go selection technique.

To implement Go-Go, we first need the concept of the position of the user's body. This is needed because we stretch our hands out from the center of our body, not from our head (which is usually the position that is tracked). We can implement this using an inferred torso position, which is defined as a constant offset in the negative y direction from the head. A tracker could also be placed on the user's torso.

Before rendering each frame, we get the physical hand position in the world coordinate system, and then calculate its distance from the torso object using the distance formula. The virtual hand distance can then be obtained by applying the function shown in the graph in Figure 2.

A polynomial mapping of the form F(r) = r + k(r − D)^p (starting at D) is a useful function in many environments, but the exponent p used depends on the size of the environment and the desired accuracy of selection at a distance (the original Go-Go technique used p = 2). Once the distance at which to place the virtual hand is known, we need to determine its position.

The most common implementation is to keep the virtual hand on the ray extending from the torso and going through the physical hand. Therefore, if we get a vector between these two points, normalize it, multiply it by the distance, then add this vector to the torso point, we obtain the position of the virtual hand. Finally, we can use the virtual hand technique for object selection.
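
Putting the preceding steps together, a per-frame Go-Go update might look like the following sketch; the torso offset, threshold D, and gain k are assumed tuning values, and the quadratic mapping (p = 2) is used:

```python
import numpy as np

TORSO_OFFSET = np.array([0.0, -0.45, 0.0])  # assumed head-to-torso offset (m)
D = 0.35  # threshold where the nonlinear region begins (assumed value)
K = 6.0   # gain coefficient k for the nonlinear region (assumed value)

def gogo_virtual_hand(head_pos, physical_hand_pos):
    """One frame of Go-Go: returns the virtual hand position."""
    # Infer the torso as a constant offset below the tracked head.
    torso = head_pos + TORSO_OFFSET
    # Distance from the torso to the physical hand (the distance formula).
    to_hand = physical_hand_pos - torso
    r = np.linalg.norm(to_hand)
    # Two-region mapping from Figure 2: one-to-one inside D,
    # quadratic growth beyond it.
    r_virtual = r if r < D else r + K * (r - D) ** 2
    # Keep the virtual hand on the torso-through-hand ray at the mapped
    # distance: normalize, scale, and add back to the torso position.
    return torso + (to_hand / r) * r_virtual
```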

Manipulation

As we noted earlier, manipulation is connected with selection, because an object must be selected before it can be manipulated. Thus, one important issue for any manipulation technique is how well it integrates with the chosen selection technique. Many techniques, as we have said, do both: e.g. simple virtual hand, ray-casting, and Go-Go.

Another issue is that while an object is being manipulated, you should take care to disable the selection technique and the selection feedback you give the user. If this is not done, serious problems can occur: for example, the user may try to release the currently selected object while the system simultaneously interprets the action as an attempt to select a new object.
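
One simple safeguard is a small interaction state machine that only runs selection tests while idle; this sketch is purely illustrative, with hypothetical handler names:

```python
from enum import Enum, auto

class InteractionState(Enum):
    IDLE = auto()          # selection tests and highlighting are active
    MANIPULATING = auto()  # selection is disabled until release

state, grabbed = InteractionState.IDLE, None

def on_button_down(hovered_object):
    global state, grabbed
    # Only grab while idle; while manipulating, this event is ignored
    # rather than being misread as a new selection.
    if state is InteractionState.IDLE and hovered_object is not None:
        grabbed = hovered_object
        state = InteractionState.MANIPULATING

def on_button_up():
    global state, grabbed
    if state is InteractionState.MANIPULATING:
        # Release policy (stay in place, snap to grid, fall) goes here.
        grabbed = None
        state = InteractionState.IDLE  # selection is re-enabled
```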

Finally, thinking about what happens when the object is released is important. Does it remain at its last position, possibly floating in space? Does it snap to a grid? Does it fall via gravity until it contacts something solid? The application requirements will determine this choice.

Three common manipulation techniques include HOMER, Scaled-World Grab, and World-in-Miniature. For each of these techniques, the manipulation of objects in the game world could be used in many different genres including setting traps in first and third person shooter games, completing puzzles in action/adventure games, and supporting new types of rhythm games.

HOMER

The Hand-Centered Object Manipulation Extending Ray-Casting (HOMER) technique, first discussed in 1997, uses ray-casting for selection and then moves the virtual hand to the object for hand-centered manipulation. The depth of the object is based on a linear mapping.

The initial torso-physical hand distance is mapped onto the initial torso-object distance, so that moving the physical hand twice as far away also moves the object twice as far away. Also, moving the physical hand all the way back to the torso moves the object all the way to the user's torso as well.

Like Go-Go, HOMER requires a torso position, because you want to keep the virtual hand on the ray between the user's body (torso) and the physical hand. The problem here is that HOMER moves the virtual hand from the physical hand position to the object upon selection, and it is not guaranteed that the torso, physical hand, and object will all line up at this time.

Therefore, we calculate where the virtual hand would be if it were on this ray initially, then calculate the offset to the position of the virtual object, and maintain this offset throughout manipulation.

When an object is selected via ray-casting, first detach the virtual hand from the hand tracker (without moving it in the world coordinate system). This is because, if it remained attached while the virtual hand model is moved away from the physical hand location, a rotation of the physical hand would cause both a rotation and a translation of the virtual hand. Next, move the virtual hand in the world coordinate system to the position of the selected object, and attach the object to the virtual hand in the scene graph (again, without moving the object in the world coordinate system).

To implement the linear depth mapping, we need to know the initial distance d_h between the torso and the physical hand, and the initial distance d_o between the torso and the selected object. The ratio d_o / d_h will be the scaling factor.

For each frame, we need to set the position and orientation of the virtual hand. The selected object is attached to the virtual hand, so it will follow along. Setting the orientation is relatively easy: simply copy the rotation from the hand tracker's transformation matrix to the virtual hand, so that their orientations match.

To set the position, we need to know the correct depth and the correct direction. The depth is found by applying the linear mapping to the current physical hand depth. The physical hand distance is simply the distance between it and the torso, and we multiply this by the scale factor to get the virtual hand distance. We then obtain a normalized vector between the physical hand and the torso, multiply this vector by the virtual hand distance, and add the result to the torso position to obtain the virtual hand position.
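
Putting the selection-time setup and the per-frame update together, a minimal HOMER sketch could look like the following; the scene-graph attach/detach is reduced to storing a grab offset, and the torso offset is an assumed constant:

```python
import numpy as np

TORSO_OFFSET = np.array([0.0, -0.45, 0.0])  # assumed head-to-torso offset (m)

class Homer:
    """Minimal HOMER sketch; selection itself is done by ray-casting and
    is assumed to have already succeeded when on_select is called."""

    def on_select(self, head_pos, hand_pos, object_pos):
        torso = head_pos + TORSO_OFFSET
        d_hand = np.linalg.norm(hand_pos - torso)
        d_object = np.linalg.norm(object_pos - torso)
        # Linear depth mapping: the initial torso-hand distance d_h maps
        # onto the initial torso-object distance d_o, so d_o / d_h is
        # the scale factor.
        self.scale = d_object / d_hand
        # Where the virtual hand would sit on the torso-hand ray, and the
        # object's offset from it, maintained throughout manipulation.
        virtual_hand = torso + (hand_pos - torso) * self.scale
        self.grab_offset = object_pos - virtual_hand

    def update(self, head_pos, hand_pos, hand_rotation):
        """Per-frame update: returns the virtual hand position, the
        attached object's position, and the object's orientation."""
        torso = head_pos + TORSO_OFFSET
        to_hand = hand_pos - torso
        d_hand = np.linalg.norm(to_hand)
        # Scaled depth along the normalized torso-to-hand direction.
        virtual_hand = torso + (to_hand / d_hand) * (self.scale * d_hand)
        # The object follows the virtual hand; its orientation simply
        # copies the hand tracker's rotation (a 3x3 matrix here).
        return virtual_hand, virtual_hand + self.grab_offset, hand_rotation
```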

