[Gamasutra is happy to reprint, with permission, an analysis of the Kinect's interface, written by Dr. Jakob Nielsen and originally posted to his Alertbox, his bi-weekly column on his site, useit.com. Nielsen is an expert in usability and has become known for his insight and effort in improving the usability of the web.]
Kinect is a new video game system that is fully controlled by bodily movements. It's vaguely similar to the Wii, but doesn't use a controller (and doesn't have the associated risk of banging up your living room if you lose your grip on the Wii wand during an aggressive tennis swing).
Kinect observes users through a video camera and recognizes gestures they make with different body parts, including hands, arms, legs, and general posture. The fitness program, for example, is fond of telling me to "do deeper squats," which it can do because it knows how my entire body is moving. Analyzing body movements in such detail far exceeds the Wii's capabilities, though it's still not going to put my trainer down at the gym out of work.
Kinect presents a far more advanced gesture-based user experience than any previous system seen outside the fancy research labs.
Yes, I saw similar interfaces as long ago as 1985 at cutting-edge academic conferences — most notably Myron Krueger's Videoplace. But there's a big difference between a million-dollar research system and a $150 Xbox add-on.
On the one hand, Kinect is an amazing advance, especially considering its low price. On the other hand, the 25-year time lag between research and practice for gesture UIs is slightly worse than the usual fate of HCI research advances.
For example, 20 years elapsed between Doug Engelbart's invention of the mouse (1964) and the first commercially feasible mouse-based computer (the Mac in 1984).
Kinect exhibits many of the weaknesses Don Norman and I listed in our analysis of gestural interfaces' usability problems:
Sometimes options are displayed in an explicit menu, making them visible. But during gameplay there are no explicit on-screen affordances for most of the things you can do. Users are mostly forced to rely on memorizing the instructions shown before the game started, even though it's a key human factors principle to reduce reliance on the highly fallible human memory.
For example, how do you know to jump up to make a long jump in Kinect Sports, even though it's a completely illogical move (and would make more sense for a high jump)? By remembering what you read before your avatar entered the stadium, of course.
Read the manual before using the interface. (Yes, it's a *cute* manual, but these are still instructions to memorize.)
Kinect exhibits another type of visibility problem: on-screen alerts are easily overlooked because the user's attention is focused elsewhere. This rarely happens with mobile touch UIs: because of their small screens, you see anything that pops up. In contrast, it's common for users to overlook error messages on cluttered Web pages.
On Kinect, users don't overlook messages because of clutter, but because of engaging action. For example, in observing users play Kinect Adventures, I frequently saw a warning message in the upper left telling users to move to a different part of the floor. Yes, as the usability observer I saw this — but the game-playing users didn't. They were too focused on the compelling movements in the middle of the screen. They watched their avatar and the gamespace, and didn't notice the appearance of UI chrome in the corner.
How can users miss the huge "Move Forward" warning?
They miss it because they're fully engaged in steering their raft down the river.
A similar, but less serious, problem occurs in Your Shape: Fitness Evolved when you're trying to complete a hard cardio program. The number of reps left in the set counts down in the corner, but you tend to focus on your trainer and on keeping up with her movements.
It'll be a design challenge to increase users' awareness of system messages without detracting from their engagement in the game's primary activity.