From Research To Games: Interacting With 3D Space
by Joseph LaViola Jr. [Design, Programming, Art, Serious]
 
 
April 22, 2010 | Page 4 of 6
 

Occlusion Techniques

Occlusion techniques (also called image-plane techniques), first proposed in 1997, work in the plane of the image: an object is selected by "covering" it with the virtual hand so that it is occluded from your point of view.



Geometrically, this means that a ray emanates from your eye, passes through your finger, and then intersects an object. Occlusion techniques can thus be used for object selection at a distance, but instead of the laser-pointer metaphor that ray-casting affords, players simply "touch" distant objects to select them.

These techniques can be implemented in the same ways as the ray-casting technique, since they also use a ray. If you are doing the brute-force ray intersection algorithm, you can simply define the ray's direction by subtracting the eye position from the finger position.
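A minimal sketch of this brute-force approach, assuming each selectable object is approximated by a bounding sphere; the tuple-based vector helpers and the scene representation are illustrative, not from the article:

    import math

    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def normalize(v):
        n = math.sqrt(dot(v, v))
        return (v[0] / n, v[1] / n, v[2] / n)

    def pick_occluded(eye, finger, spheres):
        """Return the nearest (center, radius) sphere hit by the ray
        from the eye through the finger, or None if nothing is covered."""
        d = normalize(sub(finger, eye))      # ray direction: finger minus eye
        best, best_t = None, float("inf")
        for center, radius in spheres:
            oc = sub(eye, center)
            b = 2.0 * dot(d, oc)
            c = dot(oc, oc) - radius * radius
            disc = b * b - 4.0 * c
            if disc < 0.0:
                continue                     # ray misses this sphere
            t = (-b - math.sqrt(disc)) / 2.0
            if 0.0 < t < best_t:             # keep the nearest hit in front of the eye
                best, best_t = (center, radius), t
        return best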

However, if you are using the second algorithm, you require an object to define the ray's coordinate system.

This can be done in two steps. First, create an empty object and place it at the hand position, aligned with the world coordinate system. Next, determine how to rotate this object/coordinate system so that it is aligned with the ray direction. The angles can be determined from the positions of the eye and hand using some simple trigonometry. In 3D, two rotations are needed in general to align the new object's coordinate system with the ray.
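A minimal sketch of that trigonometry, assuming a y-up world and an object whose forward axis is -z (both conventions are assumptions chosen for concreteness): a yaw about the world y axis followed by a pitch about the local x axis aligns the object with the eye-to-hand ray.

    import math

    def ray_align_angles(eye, hand):
        # direction of the ray from the eye through the hand
        dx, dy, dz = hand[0] - eye[0], hand[1] - eye[1], hand[2] - eye[2]
        yaw = math.atan2(-dx, -dz)                   # first rotation, about world y
        pitch = math.atan2(dy, math.hypot(dx, dz))   # second rotation, about local x
        return yaw, pitch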

Arm-Extension

The arm-extension (e.g. Go-Go) technique, first described in 1996 and inspired by the '80s cartoon Inspector Gadget, is based on the simple virtual hand, but it introduces a nonlinear mapping between the physical hand and the virtual hand, so that the user's reach is greatly extended.

Not only is Go-Go useful for object selection at a distance; it could also support traveling through an environment in the style of Batman's grappling hook or Spider-Man's web. The graph in Figure 2 shows the mapping between the physical hand's distance from the body on the x-axis and the virtual hand's distance from the body on the y-axis.

There are two regions. When the physical hand is at a distance less than a threshold D, a one-to-one mapping applies. Beyond D, a nonlinear mapping is applied, so that the farther the user stretches, the faster the virtual hand moves away.


Figure 2. The nonlinear mapping function used in the Go-Go selection technique.

To implement Go-Go, we first need the concept of the position of the user's body. This is needed because we stretch our hands out from the center of our body, not from our head (which is usually the position that is tracked). We can implement this using an inferred torso position, which is defined as a constant offset in the negative y direction from the head. A tracker could also be placed on the user's torso.

Before rendering each frame, we get the physical hand position in the world coordinate system, and then calculate its distance from the torso object using the distance formula. The virtual hand distance can then be obtained by applying the function shown in the graph in Figure 2.

A simple polynomial such as F(r) = r + k(r - D)^2 (starting at D) is a useful function in many environments, but the exponent used depends on the size of the environment and the desired accuracy of selection at a distance. Once the distance at which to place the virtual hand is known, we need to determine its position.

The most common implementation is to keep the virtual hand on the ray extending from the torso through the physical hand. Therefore, if we take the vector between these two points, normalize it, multiply it by the virtual hand distance, and then add this vector to the torso point, we obtain the position of the virtual hand. Finally, we can use the virtual hand technique for object selection.
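Putting the pieces together, a minimal per-frame sketch of the Go-Go placement; the torso offset, threshold D, and gain k are illustrative values meant to be tuned per environment:

    import math

    TORSO_OFFSET = 0.45  # inferred torso this far below the tracked head (assumed)
    D = 0.35             # distance where the nonlinear region begins (assumed)
    K = 6.0              # nonlinear gain; larger values extend reach faster (assumed)

    def gogo_virtual_hand(head, hand):
        # inferred torso: constant offset from the head in the negative y direction
        torso = (head[0], head[1] - TORSO_OFFSET, head[2])
        vx, vy, vz = hand[0] - torso[0], hand[1] - torso[1], hand[2] - torso[2]
        r = math.sqrt(vx * vx + vy * vy + vz * vz)   # physical hand distance
        if r == 0.0:
            return torso                             # degenerate: hand at torso
        rv = r if r < D else r + K * (r - D) ** 2    # two-region Go-Go mapping
        s = rv / r                                   # rescale along the torso-hand ray
        return (torso[0] + vx * s, torso[1] + vy * s, torso[2] + vz * s)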

Manipulation

As we noted earlier, manipulation is connected with selection, because an object must be selected before it can be manipulated. Thus, one important issue for any manipulation technique is how well it integrates with the chosen selection technique. Many techniques, as we have said, do both: e.g. simple virtual hand, ray-casting, and Go-Go.

Another issue is that while an object is being manipulated, you should take care to disable the selection technique and the selection feedback you give the user. If this is not done, serious problems can occur; for example, the user may try to release the currently selected object while the system also interprets the gesture as an attempt to select a new object.

Finally, it is important to think about what happens when the object is released. Does it remain at its last position, possibly floating in space? Does it snap to a grid? Does it fall under gravity until it contacts something solid? The application's requirements will determine this choice.

Three common manipulation techniques are HOMER, Scaled-World Grab, and World-in-Miniature. For each of these techniques, the manipulation of objects in the game world could be applied across many genres: setting traps in first- and third-person shooters, completing puzzles in action/adventure games, and supporting new types of rhythm games.

HOMER

The Hand-Centered Object Manipulation Extending Ray-Casting (HOMER) technique, first discussed in 1997, uses ray-casting for selection and then moves the virtual hand to the object for hand-centered manipulation. The depth of the object is based on a linear mapping.

The initial torso-physical hand distance is mapped onto the initial torso-object distance, so that moving the physical hand twice as far away also moves the object twice as far away. Also, moving the physical hand all the way back to the torso moves the object all the way to the user's torso as well.

Like Go-Go, HOMER requires a torso position, because you want to keep the virtual hand on the ray between the user's body (torso) and the physical hand. The problem here is that HOMER moves the virtual hand from the physical hand position to the object upon selection, and it is not guaranteed that the torso, physical hand, and object will all line up at this time.

Therefore, we calculate where the virtual hand would be if it were on this ray initially, then calculate the offset to the position of the virtual object, and maintain this offset throughout manipulation.

When an object is selected via ray-casting, first detach the virtual hand from the hand tracker. This is because, if it remained attached while the virtual hand model is moved away from the physical hand location, a rotation of the physical hand would cause both a rotation and a translation of the virtual hand. Next, move the virtual hand in the world coordinate system to the position of the selected object, and attach the object to the virtual hand in the scene graph (again, without moving the object in the world coordinate system).

To implement the linear depth mapping, we need to know the initial distance between the torso and the physical hand (call it d_h), and between the torso and the selected object (d_o). The ratio d_o / d_h will be the scaling factor.
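A minimal sketch of this selection-time setup (the names and tuple representation are illustrative): record the two distances, keep their ratio as the scale factor, and store the offset between the on-ray virtual hand position and the object, as described above.

    import math

    def distance(a, b):
        return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

    def homer_on_select(torso, physical_hand, object_pos):
        d_h = distance(torso, physical_hand)   # initial torso-to-hand distance
        d_o = distance(torso, object_pos)      # initial torso-to-object distance
        scale = d_o / d_h                      # linear depth-mapping factor
        # virtual hand placed on the torso-hand ray at the object's depth
        on_ray = tuple(torso[i] + scale * (physical_hand[i] - torso[i])
                       for i in range(3))
        # offset to the object, maintained throughout manipulation
        offset = tuple(object_pos[i] - on_ray[i] for i in range(3))
        return scale, offset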

For each frame, we need to set the position and orientation of the virtual hand. The selected object is attached to the virtual hand, so it will follow along. Setting the orientation is relatively easy: simply copy the rotation from the hand tracker's transformation matrix to the virtual hand, so that their orientations match.

To set the position, we need to know the correct depth and the correct direction. The depth is found by applying the linear mapping to the current physical hand depth: the physical hand depth is simply the distance between the hand and the torso, and we multiply it by the scale factor to get the virtual hand distance. We then obtain a normalized vector from the torso to the physical hand, multiply this vector by the virtual hand distance, and add the result to the torso position to obtain the virtual hand position.
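A minimal per-frame sketch of that update (orientation copying is omitted). Note that normalizing the torso-to-hand vector and multiplying by the mapped depth reduces to scaling the torso-to-hand vector by the constant factor recorded at selection time:

    def homer_update(torso, physical_hand, scale, offset):
        # virtual hand: torso-to-hand vector scaled by the selection-time factor
        virtual_hand = tuple(torso[i] + scale * (physical_hand[i] - torso[i])
                             for i in range(3))
        # the attached object follows at its recorded offset from the virtual hand
        object_pos = tuple(virtual_hand[i] + offset[i] for i in range(3))
        return virtual_hand, object_pos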

 
Comments

Dustin Chertoff
I feel like I took a class on this a couple years ago. =) (I'm a former UCF student, graduated there last year, and Joe was on my dissertation committee - k, disclosure complete.)



Seriously though, this is good stuff that game developers interested in 3DUI should be aware of. Great article and it puts everything in a nice, centralized location. And it serves as a great refresher for those already familiar with the concepts.

Simon T
@ Tim



Research informs creation.

Isaiah Williams
Research creates information.

John Mawhorter
This article is highlighting for me the fact that 3D UIs and Virtual Worlds are difficult to use and that the mouse and keyboard are by far the superior input device for most tasks. Seriously, controlling my movement by turning my head? This is uncomfortable on a basic level. Also the magical versus realist distinction and your constant use of "natural" and "immersion" are pretty silly.

John Mawhorter
Not that this isn't a useful starting point for thinking about using these devices in games, but many of these techniques don't work in time-intensive situations (i.e., most real-time video games) or while moving. There's also the problem that head-tracking is needed for some of these, which most of the controllers won't provide. And the real problem is that the virtual-world research mostly seems to be based on VR environments that are expensive and complicated, while also being used for specific tasks that aren't really very game-like (military training simulators excepted). If there were a real academic VR-game research community, it would be great.

Dustin Chertoff
@Tim Carter



Research does not guarantee that the results of the research are immediately applicable to creating commercial products. In many cases, research exists solely for the sake of figuring out the truth of the very small part of the world the researcher is interested in. But at no point is research the antithesis of creation.



Creation cannot exist in a vacuum. Creation must be informed through observations of the world. How do you know what problem needs a solution to be created? How do you know how to build the solution? How do you test that your solution works? This is all research. Creation is the process of developing an informed response based upon the questions asked and answered through the research process. Development cannot exist without research to inform what to develop, and research cannot exist without development defining the problems that need to be researched.



And while punk rock pioneers could not play their instruments with the same technical prowess as their contemporaries, they had performed plenty of research on the type of music out there. They felt that the music did not let them express themselves the way they wished to (the problem). As a result, they created a new form of musical expression.



@John Mawhorter



Yeah, many of the techniques right now are very cumbersome for VR, let alone for gaming. Even the best VR equipment would choke trying to provide the same quality of experience you can get with a AAA PC game. But the technology is getting there, slowly... One of the issues, though, is the mindset that great games have to fit the "sit in one spot for 3+ hours" paradigm. A new genre of games based around 10-15 minute immersive experiences can emerge (where immersion refers to both physical and psychological immersion). Just as major game developers balked at the power of social gaming, only to realize now that it is a multi-billion dollar industry, the same can be said of the immersive casual game.



The Wii showed that people will buy the tech (if not third-party games). It was enough to make MS and Sony play catch-up with Natal/Move. This tech is not currently suited for AAA FPS games, but it is great for other genres. It's unwise (from a business perspective) to ignore this nascent market segment because it can't be applied to the current style of AAA game. Let the tech be incorporated and refined in the new genres, so that the mature version can be added to traditional blockbuster-style games.

Ruthaniel van-den-Naar
For me, a nice summary and something between science and design. I hate overly scientific pieces; this is an ideal combination.

