From Research To Games: Interacting With 3D Space


April 22, 2010

System Control

System control provides a mechanism for users to issue a command to either change the mode of interaction or the system state. In order to issue the command, the user has to select an item from a set. System control is a wide-ranging topic, and there are many different techniques to choose from, such as graphical menus, gestures, and tool selectors.

For the most part, these techniques are not difficult to implement, since they mostly involve selection. For example, virtual menu items might be selected using ray-casting. For all of the techniques, good visual feedback is required, since the user needs to know not only what he is selecting, but what will happen when he selects it. In this section, we briefly highlight some of the more common system control techniques.
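To make the ray-casting approach concrete, here is a minimal C++ sketch of menu selection: a pointer ray is intersected with each menu item's world-space quad, and the nearest hit becomes the selection. All of the types and names (Vec3, MenuItem, PickMenuItem) are illustrative assumptions, not from any particular engine or research system.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    Vec3 sub(const Vec3& a, const Vec3& b) {
        return { a.x - b.x, a.y - b.y, a.z - b.z };
    }

    struct MenuItem {
        Vec3 center;        // quad center in world space
        Vec3 normal;        // facing direction (unit length)
        Vec3 right, up;     // in-plane axes (unit length)
        float halfW, halfH; // half extents of the quad
    };

    // Returns the index of the nearest item hit by the ray, or -1 if none.
    int PickMenuItem(const std::vector<MenuItem>& items,
                     const Vec3& rayOrigin, const Vec3& rayDir) {
        int best = -1;
        float bestT = 1e30f;
        for (size_t i = 0; i < items.size(); ++i) {
            const MenuItem& m = items[i];
            float denom = dot(rayDir, m.normal);
            if (std::fabs(denom) < 1e-6f) continue;  // ray parallel to quad
            float t = dot(sub(m.center, rayOrigin), m.normal) / denom;
            if (t < 0.0f || t >= bestT) continue;    // behind user, or farther
            Vec3 hit = { rayOrigin.x + rayDir.x * t,
                         rayOrigin.y + rayDir.y * t,
                         rayOrigin.z + rayDir.z * t };
            Vec3 local = sub(hit, m.center);         // hit relative to center
            if (std::fabs(dot(local, m.right)) <= m.halfW &&
                std::fabs(dot(local, m.up)) <= m.halfH) {
                best = static_cast<int>(i);
                bestT = t;
            }
        }
        return best;
    }

In practice the ray would come from a tracked wand or controller, and the picked item would be highlighted before the command is issued, providing the visual feedback described above.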

Graphical Menus

Graphical menus can be seen as the 3D equivalent of 2D menus. Placement influences how easily the menu can be accessed (correct placement can give a strong spatial reference for retrieval) and whether the menu occludes the user's field of attention. Placement can be categorized into surround-fixed, world-fixed, and display-fixed windows.

The subdivision of placement can, however, be made more subtle. World-fixed and surround-fixed windows can be subdivided into menus which are either freely placed into the world, or connected to an object.

Display-fixed windows can be described more precisely by referring to their actual reference frame: the body. Body-centered menus, either head-referenced or body-referenced, can supply a strong spatial reference frame.

One particularly interesting possible effect of body-centered menus is "eyes-off" usage, in which users can perform system control without having to look at the menu itself. The last reference frame is the group of device-centered menus. Device-centered placement provides the user with a physical reference frame (see Figure 5).
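These placement categories map naturally onto reference frames in code. The sketch below, which reuses the Vec3 type and helpers from the ray-casting sketch above, shows one plausible way to compute a menu's pose for each placement; the Pose and Quat types, the OffsetFrom helper, and the specific offsets are all assumptions for illustration.

    struct Quat { float w, x, y, z; };  // unit quaternion

    // Rotate vector v by unit quaternion q.
    Vec3 rotate(const Quat& q, const Vec3& v) {
        Vec3 u = { q.x, q.y, q.z };
        Vec3 t = { 2.0f * (u.y * v.z - u.z * v.y),
                   2.0f * (u.z * v.x - u.x * v.z),
                   2.0f * (u.x * v.y - u.y * v.x) };
        return { v.x + q.w * t.x + (u.y * t.z - u.z * t.y),
                 v.y + q.w * t.y + (u.z * t.x - u.x * t.z),
                 v.z + q.w * t.z + (u.x * t.y - u.y * t.x) };
    }

    struct Pose { Vec3 position; Quat orientation; };

    // Place the menu at a fixed offset expressed in the frame's local axes.
    Pose OffsetFrom(const Pose& frame, const Vec3& localOffset) {
        Vec3 off = rotate(frame.orientation, localOffset);
        return { { frame.position.x + off.x,
                   frame.position.y + off.y,
                   frame.position.z + off.z },
                 frame.orientation };
    }

    enum class MenuPlacement {
        WorldFixed, HeadReferenced, BodyReferenced, DeviceCentered
    };

    Pose ComputeMenuPose(MenuPlacement placement, const Pose& head,
                         const Pose& torso, const Pose& device,
                         const Pose& worldAnchor) {
        switch (placement) {
        case MenuPlacement::WorldFixed:
            return worldAnchor;                                // fixed in the world
        case MenuPlacement::HeadReferenced:
            return OffsetFrom(head, { 0.0f, -0.1f, -0.5f });   // follows the view
        case MenuPlacement::BodyReferenced:
            return OffsetFrom(torso, { 0.0f, 0.0f, -0.4f });   // supports "eyes-off" use
        case MenuPlacement::DeviceCentered:
            return OffsetFrom(device, { 0.0f, 0.05f, 0.0f });  // rides on the prop
        }
        return worldAnchor;
    }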


Figure 5. The Virtual Tricorder: an example of a graphical menu with device-centered placement. This image was taken in 1995.

We can subdivide graphical menus into hand-oriented menus, converted 2D menus, and 3D widgets. One can identify two major groups of hand-oriented menus. 1DOF menus use a circular object on which several items are placed. After initialization, the user rotates his or her hand along one axis until the desired item on the circular object falls within a selection basket.

User performance is highly dependent on the physical movement of the hand and wrist, so the primary rotation axis should be carefully chosen. 1DOF menus have been made in several forms, including the ring menu, sundials, spiral menus (a spiral-shaped ring menu), and a rotary tool chooser. The second group of hand-oriented menus is hand-held widgets, in which menus are stored at a body-relative position.
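The core mapping behind a 1DOF ring menu is small enough to sketch directly: the wrist roll angle alone determines which item currently sits in the selection basket. The angle source and the equal-arc layout below are illustrative assumptions.

    #include <cmath>

    // Maps a tracked wrist roll angle (radians) to one of itemCount items
    // arranged in equal arcs around the ring.
    int RingMenuSelection(float rollRadians, int itemCount) {
        const float twoPi = 6.2831853f;
        float a = std::fmod(rollRadians, twoPi);
        if (a < 0.0f) a += twoPi;             // wrap into [0, 2*pi)
        int index = static_cast<int>(a / (twoPi / itemCount));
        return (index < itemCount) ? index : itemCount - 1;  // guard fp edge case
    }

Note that only one rotation axis is read at all, which is why the choice of primary axis matters so much for comfort and performance.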

The second group of graphical menus, converted 2D widgets, is the most often applied group of system control interfaces. These widgets basically function the same as in desktop environments, although one often has to deal with more DOFs when selecting an item in a 2D widget. Popular examples are pull-down menus, pop-up menus, flying widgets, toolbars, and sliders.

The final group of graphical menus is the group known as 3D widgets. In a 3D world, widgets often mean moving system control functionality into the world or onto objects. This can also be thought of as "moving the functionality of a menu onto an object."

A very important issue when using widgets is placement. 3D widgets differ from the previously discussed menu techniques (1DOF and converted 2D menus) in the way the available functions are mapped: most often, the functions are co-located near an object, thereby forming a highly context-sensitive "menu".
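One plausible way to realize such a context-sensitive "menu" is to lay the selected object's functions out as small widget handles around its bounding box. The layout rule and the names in the sketch below (which again reuses Vec3) are hypothetical, not taken from a specific system.

    #include <string>
    #include <vector>

    struct Widget { std::string function; Vec3 position; };

    // Lays the object's available functions out as handles hovering just
    // above the top of its axis-aligned bounding box.
    std::vector<Widget> PlaceWidgets(const Vec3& boxMin, const Vec3& boxMax,
                                     const std::vector<std::string>& functions) {
        std::vector<Widget> widgets;
        float step = (boxMax.x - boxMin.x) / (functions.size() + 1);
        for (size_t i = 0; i < functions.size(); ++i) {
            widgets.push_back({ functions[i],
                                { boxMin.x + step * (i + 1),      // evenly spaced
                                  boxMax.y + 0.05f,               // hover above box
                                  (boxMin.z + boxMax.z) * 0.5f } });
        }
        return widgets;
    }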

Gestures and Postures

When using gestural interaction, we apply a "hand-as-tool" metaphor: the hand itself becomes the tool, and the gesture is both the initialization and the issuing of a command.

When talking about gestural interaction, we refer, in this case, to gestures and postures, not to gestural input used with a Tablet PC or interactive whiteboard. There is a significant difference between gestures and postures: postures are static hand configurations (like a pinch), whereas gestures involve a change in the position and/or orientation of the hand. Sign language is a good example of gesture use.
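The distinction shows up directly in recognition code: a posture can be tested from a single tracked-hand sample, while a gesture needs a history of samples over time. The sketch below reuses the Vec3 helpers from earlier; the HandSample fields and all thresholds are assumptions, not the API of any particular tracking SDK.

    #include <cmath>
    #include <deque>

    struct HandSample {
        Vec3 thumbTip, indexTip;  // fingertip positions in meters
        Vec3 palmPosition;
        double timeSeconds;
    };

    // Posture: a static hand configuration -- thumb and index tips together.
    bool IsPinchPosture(const HandSample& h, float maxDist = 0.02f) {
        Vec3 d = sub(h.thumbTip, h.indexTip);
        return std::sqrt(dot(d, d)) < maxDist;
    }

    // Gesture: motion over time -- a quick rightward swipe, detected from
    // net palm displacement over a short sample history.
    bool IsSwipeRightGesture(const std::deque<HandSample>& history,
                             float minDist = 0.25f, double maxDuration = 0.5) {
        if (history.size() < 2) return false;
        const HandSample& first = history.front();
        const HandSample& last = history.back();
        if (last.timeSeconds - first.timeSeconds > maxDuration) return false;
        float dx = last.palmPosition.x - first.palmPosition.x;
        float dy = std::fabs(last.palmPosition.y - first.palmPosition.y);
        return dx > minDist && dy < 0.1f;  // mostly horizontal, moving right
    }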

Gestural interaction can be a very powerful system control technique, and it is also useful for navigation, selection, and manipulation. In fact, gestures offer nearly limitless potential uses in video games. Gestures can be used to communicate with other players, to cast spells in a role-playing game, to call pitches or give signs in a baseball game, and to issue combination attacks in action games. However, one problem with gestural interaction is that the user needs to learn all the gestures.

Since users typically cannot remember more than about seven gestures (due to the limited capacity of working memory), inexperienced users can have significant problems with gestural interaction, especially when the application is complex and requires a larger number of gestures.

Users often do not have the luxury of referring to a graphical menu when using gestural interaction -- the structure underneath the available gestures is completely invisible. In order to make gestural interaction easier to use for a less advanced user, strong feedback, like visual cues after initiation of a command, might be needed.

Tools

We can identify two different kinds of tools, namely physical tools and virtual tools. Physical tools are context-sensitive input devices, which are often referred to as props. A prop is a real-world object which is duplicated in the virtual world.

A physical tool might be space-multiplexed (the tool performs only one function) or time-multiplexed (the tool performs multiple functions over time, like a standard desktop mouse). One accesses a physical tool by simply reaching for it, or by changing the mode on the input device itself.

Virtual tools are best exemplified by the virtual toolbelt. The user wears a virtual belt around the waist and accesses specific functions by grabbing at particular places on the belt, just as in the real world. Virtual toolbelts could potentially restructure how items and weapons are stored and accessed in game genres including first- and third-person shooters, role-playing games, and action/adventure games.
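As a sketch of how belt access might work, the function below treats each slot as a sphere at a fixed offset in the torso's reference frame and reports which slot the hand has reached into. It reuses the Pose, OffsetFrom, sub, and dot helpers from the earlier sketches; the slot layout and radius are assumptions.

    #include <vector>

    struct BeltSlot { int toolId; Vec3 torsoLocalOffset; };

    // Returns the id of the tool whose slot the hand is inside, or -1 if none.
    int GrabFromBelt(const Pose& torso, const Vec3& handPosition,
                     const std::vector<BeltSlot>& slots,
                     float slotRadius = 0.12f) {
        for (const BeltSlot& s : slots) {
            Vec3 slotWorld = OffsetFrom(torso, s.torsoLocalOffset).position;
            Vec3 d = sub(handPosition, slotWorld);
            if (dot(d, d) < slotRadius * slotRadius)  // inside the slot sphere
                return s.toolId;
        }
        return -1;
    }

Because the slots sit at fixed body-relative positions, a practiced user can grab them by proprioception alone, without looking down -- the same "eyes-off" property noted for body-centered menus.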

Sometimes, functions on a toolbelt are accessed via the same principles used with graphical menus, in which case the user must look at the menu itself. The structure of tools is often not complex: as stated before, physical tools are either dedicated devices for one function, or they provide access to several (but not many) functions. Sometimes, a physical tool is the display medium for a graphical menu; in this case, it has to be designed in the same way as a graphical menu. Virtual tools often use proprioceptive cues for structuring.

Conclusions

The techniques I have discussed in this article only scratch the surface for what has been done in the virtual reality and 3D user interface research communities over the years. As the video game industry incorporates more and more motion-based interfaces in the games they make, work done by researchers in this space will become increasingly important.

I would hope that the video game industry will take what we, as academics, have to offer in terms of a plethora of techniques and the lessons learned using them. Given the popularity of video games, academics from the virtual reality and 3D user interface research areas will continue to explore and develop new interface techniques specifically devoted to games. It is my hope that the game industry and academics can work together to symbiotically push the envelope in game interfaces and gameplay mechanics.

Reading List

Here is a short reading list for anyone interested in learning about work academics have done with 3D spatial interaction.

Bott, J., Crowley, J., and LaViola, J. "Exploring 3D Gestural Interfaces for Music Creation in Video Games", Proceedings of The Fourth International Conference on the Foundations of Digital Games 2009, 18-25, April 2009.

Bowman, D., Kruijff, E., LaViola, J., and Poupyrev, I. 3D User Interfaces: Theory and Practice, Addison Wesley, July 2004.

Charbonneau, E., Miller, A., Wingrave, C., and LaViola, J. "Understanding Visual Interfaces for the Next Generation of Dance-Based Rhythm Video Games", Proceedings of Sandbox 2009: The Fourth ACM SIGGRAPH Conference on Video Games, 119-126. August 2009.

Chertoff, D., Byers, R., and LaViola, J. "An Exploration of Menu Techniques using a 3D Game Input Device", Proceedings of The Fourth International Conference on the Foundations of Digital Games 2009, 256-263, April 2009.

Wingrave, C., Williamson, B., Varcholik, P., Rose, J., Miller, A., Charbonneau, E., Bott, J., and LaViola, J. "Wii Remote and Beyond: Using Spatially Convenient Devices for 3DUIs", IEEE Computer Graphics and Applications, 30(2):71-85, March/April 2010.

