The challenges of developing a game based on a NUI stem from one simple concept -- the player approaches the game with an uncommon interface, their body. The designer must simultaneously channel the player's expectations and actions whilst accommodating 80 percent of their collective self-expressive interactions.
Put simply, people approach a NUI game with a wide range of expectations and abilities, and the designer has to deliver a compelling game experience without the certainty of intent, and the leveling effect, of a conventional joypad.
Confidence is a key word where NUIs are concerned, and it has two very distinct meanings that occupy the designer's efforts.
The first is the designer's confidence in predicting what 80 percent of players will do at any given point in the game. By using controls that are intuitive, working with the expectations of the average player and channeling the player towards specific interactions with good instructions and subtle multi-sensory manipulation, the risk of player frustration is significantly reduced.
The second meaning is the confidence in the engine (and tangentially the designer) that the interactions enacted by the player are being unequivocally understood.
The success of a NUI lies in delivering confidence in both these instances, whilst also delivering the intended game experience without interruption. If a game expects something unusual out of the rules or fiction of the game, or indeed fails to detect a valid input, the illusion fails instantaneously.
Kinect is the first truly mass market NUI for video games. It provides six major data streams to the designer, opening up a massive space of possibility for creating new game experiences.
If you also consider that these streams are programmable, then potential upgrades via clever team-side custom code and official software updates will ensure an exciting future.
In simple terms, the Kinect delivers a number of streams of data, and the combination of these streams defines the type of experience that the player will have. The range of possibilities is huge, but so is the amount of potential misinterpretation at every stage across the six primary streams.
Let's jump right in with a deceptively simple example: throwing a ball for a dog using Microsoft's Kinect. We experienced these issues during the development of Fantastic Pets 2. There are three main challenges here: the player understanding that they have to throw the ball, the detection of the throwing style of the player, and the release point of the ball.
Assuming that the game is named appropriately, in this case something catchy like 'Fetch', that the dog is eagerly waiting in front of you, and that an avatar has demonstrated the throw on-screen to the player, we can safely assume the player is primed for some top quality throwing action.
The biggest paradigm shift for design is centered on point two -- detecting the throw. Essentially the designer has to construct a set of rules that will detect most people's throwing styles with a high degree of confidence. By defining a rule set based on combinations of joint positions, angles, vectors and velocities, it is possible to detect the throwing of a ball irrespective of the actual throwing technique.
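A rule set of this kind might be sketched as follows. This is an illustrative Python sketch, not the actual Fantastic Pets 2 code or the Kinect SDK API; the joint names, coordinate conventions (metres in sensor space, z toward the sensor) and thresholds are all assumptions for the sake of example.

```python
from dataclasses import dataclass

@dataclass
class Joint:
    """A tracked skeleton joint position in sensor space (metres)."""
    x: float
    y: float
    z: float  # distance from the sensor; decreasing z = moving toward it

def velocity(prev: Joint, curr: Joint, dt: float) -> tuple:
    """Per-axis joint velocity between two frames, dt seconds apart."""
    return ((curr.x - prev.x) / dt,
            (curr.y - prev.y) / dt,
            (curr.z - prev.z) / dt)

def is_throw(prev_hand: Joint, hand: Joint, shoulder: Joint, dt: float,
             min_forward_speed: float = 1.5) -> bool:
    """Flag a throw when the hand moves toward the sensor faster than a
    threshold while at roughly shoulder height or above. The rules are
    deliberately loose so that overarm, underarm and sidearm styles all
    register -- the rule set detects the throw, not the technique."""
    vx, vy, vz = velocity(prev_hand, hand, dt)
    moving_forward = -vz > min_forward_speed      # toward the sensor
    hand_raised = hand.y > shoulder.y - 0.1       # small tolerance below shoulder
    return moving_forward and hand_raised
```

In practice a shipping rule set would combine many more such predicates (elbow angle, wrist trajectory, velocity over several frames) and tune the thresholds against recordings of real players.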
Finally, the release point of the ball has to occur without the use of a traditional button press. In these situations, the use of predictive confidence (discussed below) is the way forward.
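One way to read "predictive confidence" is as evidence accumulated over several frames rather than a single-frame trigger: each frame's observations nudge a confidence score up or down, and the ball is released once the score crosses a threshold. The sketch below is a hypothetical Python illustration of that idea -- the evidence terms (deceleration past peak hand speed, arm extension) and all weights are assumptions, not the article's actual implementation.

```python
class ReleaseDetector:
    """Accumulates per-frame evidence that the ball should be released.

    Rather than demanding one frame that unambiguously proves release,
    confidence is blended frame by frame and release fires only when it
    crosses a threshold -- a momentary wobble mid-throw cannot trigger it.
    """

    def __init__(self, release_threshold: float = 1.0, decay: float = 0.8):
        self.confidence = 0.0
        self.peak_speed = 0.0
        self.release_threshold = release_threshold
        self.decay = decay  # how much of last frame's confidence is kept

    def update(self, hand_speed: float, arm_extension: float) -> bool:
        """Feed one frame of evidence; returns True when release should fire.

        hand_speed: magnitude of the hand's velocity (m/s).
        arm_extension: 0..1, how straight the throwing arm is.
        """
        self.peak_speed = max(self.peak_speed, hand_speed)
        # Evidence that the throw is ending: the hand has clearly passed
        # its peak speed (deceleration) while the arm is near full extension.
        decelerating = hand_speed < 0.8 * self.peak_speed
        evidence = (0.7 if decelerating else 0.0) + 0.5 * arm_extension
        # Exponential blend: weak evidence lets confidence decay back down.
        self.confidence = (self.confidence * self.decay
                           + evidence * (1 - self.decay))
        return self.confidence >= self.release_threshold
```

The decay term is the design choice that matters: it trades latency (release fires a few frames after the true release point) for robustness (noisy skeleton frames cannot fire it spuriously), which is exactly the trade a NUI without a button press has to make.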