As part of Gamasutra's Advanced I/O Week, Harmonix's Matt Boch offers practical advice for implementing motion control in video games.
Harmonix is a studio well-known for dabbling in games and technologies outside of your bog-standard game controller. From its early Karaoke titles to its massively popular Rock Band series, the company is always looking at new control methods that increase social interaction among players.
Most recently, Harmonix has been developing games that use motion control -- first with its Dance Central franchise, and now with the upcoming Fantasia: Music Evolved.
Matt Boch is creative director on Fantasia, and has had plenty of experience with the ins and outs of motion control in video games. Here, Boch details eight key points to address when designing a game for motion control.
Ask yourself if motion control is truly bringing something new to your game.
Controllers have a huge number of advantages over motion control. They offer tactile and haptic feedback that isn't easily afforded with camera-based systems.
Controllers take advantage of the highly developed fine motor skills that gamers have spent years, even decades, honing. These skills are so overdeveloped that many gamers forget they ever took effort and have since subsumed them into their corporeality: wearing an Oculus while playing with a controller is 'immersive.'
Players' overdeveloped fine motor skills are far more reliable than their underdeveloped sense of proprioception. Beyond that, if you look at a sensory homunculus or a motor homunculus you can clearly see how much 'bandwidth' the brain has dedicated to controlling hands and sensing hand and finger position.
Consider hiring or consulting with theater majors, actors, physical game aficionados, gymnasts, dancers, performance artists, etc.
-- anyone who has spent years experiencing or thinking about the mechanics and semiotics of movement.
This is an area where most game designers are weak, given that it isn't a skill that has been required for games previously. When you want a movement that feels powerful, a gesture that feels good, it pays to have experts to consult. Far too often, Kinect games end up privileging ease of detection over all else, which rarely results in physical controls that feel good.
Show the player the most communicative and applicable abstraction of their input for a given context.
Update that abstraction as quickly as possible; make it feel real time. In a controller-less motion experience, players will at times feel lost or doubt the system is tracking them. Do everything you can to demonstrate that this isn't the case.
Give a lot of thought to how much data the player really needs or can process in the given context.
Consider the advantages of hiding some information from the player if it's ultimately distracting. Dance Central's mix of Spotlight (the ring around the Dancer's feet that tells you how well you're doing) and Per-Limb Feedback (the red outline that pops up around the dancer's limbs when you're missing part of a dance move) isn't all the information we have about the player's performance, but we found that it's the most the player can react to in real time.
It's vital to show analog progress on any aspect of an interface that a player is controlling.
The Dance Central shell is a great example of this. We show your analog progress above or below a given selection as you move up and down through a list, and we also show your analog progress towards selection as you bring your hand across your body. Similarly, we're always trying to move the spotlight the millisecond we know you're dancing correctly.
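The analog-progress idea above can be sketched as a dwell-to-select widget. This is a hypothetical illustration, not Harmonix's implementation: the class name, the dwell threshold, and the frame-update API are all assumptions. The key property is that the UI can read a continuous 0-to-1 progress value every frame, so the player always sees the selection filling in rather than snapping.

```python
class DwellSelector:
    """Hypothetical dwell-based selection with analog progress.

    Accumulates hover time on a menu item and exposes a 0..1 progress
    value so the UI can render a filling ring each frame.
    """

    def __init__(self, dwell_seconds=0.8):
        self.dwell_seconds = dwell_seconds
        self.hovered_item = None
        self.elapsed = 0.0

    def update(self, hovered_item, dt):
        """Call once per frame with the item under the player's hand.

        Returns the item when selection completes, else None.
        """
        if hovered_item != self.hovered_item:
            # Hand moved to a new item: restart progress so the feedback
            # always reflects what the player is doing right now.
            self.hovered_item = hovered_item
            self.elapsed = 0.0
        if hovered_item is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_seconds:
            self.elapsed = 0.0
            return hovered_item
        return None

    @property
    def progress(self):
        """Analog progress toward selection, 0.0 to 1.0, for the UI."""
        if self.hovered_item is None:
            return 0.0
        return min(self.elapsed / self.dwell_seconds, 1.0)
```

In use, the render loop would draw the selection ring filled to `selector.progress` each frame, which is exactly the kind of continuously updated abstraction the article argues for.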
Show the player the least abstracted version of the sensor input.
The sensor will latch onto others in the room from time to time or otherwise make mistakes. Having the sensor's input displayed on screen allows players to identify and remedy issues for themselves. If you don't want to show this type of input all the time, show it when you have reason to doubt the validity of the current input.
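The "show it when you have reason to doubt" policy can be reduced to a small gating check. This is a sketch under assumptions: real skeletal-tracking APIs expose confidence differently, and the threshold and body-count heuristic here are invented for illustration.

```python
def should_show_raw_feed(tracking_confidence, bodies_in_frame, threshold=0.6):
    """Decide whether to surface the sensor's raw view of the room.

    Hypothetical heuristic: show the feed when per-frame tracking
    confidence is shaky, or when extra bodies in frame mean the
    skeleton may have latched onto someone else.
    """
    return tracking_confidence < threshold or bodies_in_frame > 1
```

The point of gating like this is that the raw feed stays hidden during confident play, but appears exactly when the player would otherwise be left wondering why the game stopped responding.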
Consider what about motion control is uniquely exciting for your product.
With Kinect, lots of the launch titles focused on the odd promise of Avateering. While conceptually promising, it can be highly confusing (why am I looking at myself looking at the game?) and isn't reliable as a mirror image.
It may be an easy thing to market, or for players to understand the promise of, but it isn't easy for developers to truly deliver on that promise. Rather than tumble into that reflexive nightmare, we decided to break the fourth wall and invite your living room into the gaming world. You aren't controlling a character on screen; you, as in your physical presence, are acknowledged by the game. More 'Danger Room' or 'Holodeck' than 'jacking into the matrix.'
None of these decisions were painfully obvious to us when we began work on a motion dance game, and we waffled on the question of whether the Dancer is you or not. In DC1, we somewhat awkwardly tiptoe around that distinction and never take a stand, but by the time we began DC2, we had settled on having the dancers directly address the player. It instantly felt better.
Gestures are not button presses and they shouldn't stand in for button presses.
You're not going to end up with a fun game by taking a formula that works on a controller, mapping gestures to button presses, and calling it a day. Throw out the majority of your assumptions!
Gestures are not button presses. They are complex, nuanced movements and you should consider what aspects of a unique gesture performance can or should be communicated back to the player and what aspects factor into the game system. There's a huge amount of information in even relatively simple gestures. The challenge is determining what information is relevant and how best to use it.
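One way to make the contrast concrete: instead of collapsing a gesture to a boolean "pressed," extract analog features from the recorded hand path and feed those into the game system. This sketch is purely illustrative; the feature set (extent, peak speed, duration) and the sample format are assumptions, not a real gesture API.

```python
import math

def gesture_features(samples):
    """Extract analog information from a gesture instead of a yes/no.

    samples: list of (t, x, y) hand positions recorded over the gesture.
    Returns a dict of features a game system could map to e.g. attack
    strength, animation blending, or feedback intensity.
    """
    if len(samples) < 2:
        return {"extent": 0.0, "peak_speed": 0.0, "duration": 0.0}
    peak_speed = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        dt = t1 - t0
        if dt > 0:
            peak_speed = max(peak_speed, dist / dt)
    t_first, x_first, y_first = samples[0]
    t_last, x_last, y_last = samples[-1]
    return {
        # How far the hand actually traveled end to end...
        "extent": math.hypot(x_last - x_first, y_last - y_first),
        # ...how explosively it moved at its fastest...
        "peak_speed": peak_speed,
        # ...and how long the whole performance took.
        "duration": t_last - t_first,
    }
```

A button press would discard all three of these values; keeping them is what lets a big, fast swipe feel different from a tentative one.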
Read more about VR and Advanced I/O on Gamasutra's special event page this week.