Some game developers who make big-budget console games for a living don't take mobile games seriously. Maybe it's because of the veritable explosion of clones, and the fact that the barrier to entry is significantly lower. After all, there are more apps released per day than console games released in a year!
That last bit is not a documented fact, but it certainly feels that way sometimes.
It's not easier to make mobile games; it's just different. In fact, it's really hard to make games for a device where your hand is in the way and where someone might call in the middle of a boss fight. Especially if you want to keep making the same games you've made before.
Warren Spector voiced the following challenge in an interview after Junction Point's closure in 2013:
"[W]e've trained players to consume game content in five-minute chunks. People, myself included, play standing in line waiting for coffee or on a bus, situations like that. How do you tell an epic interactive tale (or even a small, personal one) in five-minute chunks?"
To a lot of big-budget developers, this doesn't even sound like a challenge, but rather like the reason they might not consider mobile games to be "real" games. Personally, though I don't have an answer to Spector's question, I really like the challenge itself, and I hope to see a solution in the future.
But before we can approach that challenge properly, we need to make games that feel natural and compelling on touch screens. Maybe we even need a defining title - a Wolfenstein 3D or Super Mario for smartphones and tablets. To make that game, we need to respect the interface itself and make the best of it, and that particular battle starts at the controls.
The following are examples of widely used control schemes for touch interface games. Some are used much more than others, but all of them are used enough to deserve a mention. It's not an exhaustive list, and it makes several generalisations, but hopefully it can illustrate how some clever developers have already approached touch interfaces.
That said, let's look at the schemes!
Virtual Sticks and Buttons
Many games try their best to mimic console gameplay completely, to the point that they give you sticks and buttons on the screen. Sometimes, this is very dynamic, so that it doesn't matter where you put your finger. At other times, it's predefined and even copies the controller layout of popular console gamepads.
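The dynamic variant can be sketched in a few lines: the stick's centre is set wherever the finger first lands, so it doesn't matter where you put your finger, and the drag away from that centre becomes the deflection. This is an illustrative Python sketch of the general technique, not taken from any particular game; the dead-zone and radius values are my own assumptions.

```python
# Sketch of a "dynamic" virtual stick: no fixed on-screen position.
# The centre is set on touch-down, and the drag vector from that
# centre is the stick deflection. All values are illustrative.
import math

DEAD_ZONE = 10.0   # pixels the finger must move before input registers
MAX_RADIUS = 60.0  # drag distance that maps to full deflection

class DynamicStick:
    def __init__(self):
        self.center = None  # set on touch-down, not fixed in the layout

    def touch_down(self, x, y):
        self.center = (x, y)

    def touch_move(self, x, y):
        """Return a normalised (dx, dy) deflection, each in [-1, 1]."""
        if self.center is None:
            return (0.0, 0.0)
        dx, dy = x - self.center[0], y - self.center[1]
        dist = math.hypot(dx, dy)
        if dist < DEAD_ZONE:
            return (0.0, 0.0)
        scale = min(dist, MAX_RADIUS) / MAX_RADIUS / dist
        return (dx * scale, dy * scale)

    def touch_up(self):
        self.center = None
```

Because the centre follows the first touch, the player never has to hunt for a stick that isn't physically there.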
I don't like virtual sticks and buttons at all, so I won't give you any examples of games that use this technique exclusively, even if some of the games I mention use virtual sticks or buttons in addition to other techniques.
To me, trying to mimic controller input is a copout. It's staying with what you're comfortable with instead of adapting your ideas to the platform.
Some games try to take the core elements of console gameplay and force them onto the touch interface. The Drowning is one example of this, building its FPS mechanics on a simple mantra: "two fingers on one hand."
Games like these try to answer the question, "what's the best way to do X on touch?" They don't really strive to find out what's best for touch, but rather to cater to an assumed consumer pattern that's expected to transfer over when players of PC and console games start playing mobile games.
What often happens with solutions like these is that they feel insufficient. In The Drowning, turning is slow and movement becomes inaccurate. You end up finding a good corner and forcing the enemies to come to you, because moving and shooting simultaneously is next to impossible. For me, this resulted in frustration: I'm quite used to FPS games on consoles and PCs, and I came to The Drowning with a large set of expectations. Overall, I liked the ideas and the consistent execution, but as "FPS on touch," it still falls short simply because it doesn't come close enough to the experience it tries to mimic.
As a sort of compromise between virtual gamepads and touch interpretation, there's also a category of game that does a little bit of both. The pivotal example of this is Infinity Blade, which combines a highly intuitive swipe mechanic for striking with virtual buttons for dodging. The latter are locked to the screen just as if they were buttons on a gamepad, which can sometimes lead you to miss a dodge because your finger has drifted too far from the button.
Without the tactile feedback you get from a physical button, you can never really learn when or if your finger is in the right place. You'll keep missing the dodge every now and then, even after many hours of playing.
Most of our interactions with touch interfaces happen in menus. Whether reading e-mail or browsing your app market of choice, the experience of navigating and interacting with the device is intuitive and efficient, because it's something you spend a lot of time doing.
Because of this, many games are simply menus with nicer graphics, where you benefit from the knowledge you already have of how scrolling, swiping, zooming and other features work in the OS itself. Playing the game becomes an almost seamless extension of navigating your device or the web.
Rage of Bahamut is an extreme example of this, as is the more subtle Spy Master. These are games that largely play themselves so long as you make key decisions along the way, and by extending what you're already used to, they require that you learn their rules rather than their controls.
Some games make part of the expected feature set from their console cousins into fully automated processes. Attacks in The Walking Dead: Assault are based on whether enemies enter a circle around your character, for example, and when you shoot in Dead Trigger 2, the attacks are based on your aiming and don't require specific trigger input.
This eliminates one of the biggest issues with realtime interaction on a mobile device, which is the player's hand. By requiring no continuous or realtime input at all, the hand is kept away from the screen so that more screen real estate can be dedicated to feedback and presentation.
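The proximity trigger behind automated attacks is simple to picture in code: any enemy inside a circle around the character is engaged without a fire button. This is a minimal Python sketch of that idea under assumed names and values (the radius is my own choice), not the actual logic of The Walking Dead: Assault or Dead Trigger 2.

```python
# Sketch of proximity-triggered auto-attacks: the player never
# presses fire; enemies inside a radius around the character are
# targeted automatically, nearest first. Values are illustrative.
import math

ATTACK_RADIUS = 5.0  # assumed engagement range, in world units

def auto_targets(player_pos, enemies):
    """Return the enemies inside the attack circle, nearest first."""
    px, py = player_pos
    in_range = [e for e in enemies
                if math.hypot(e[0] - px, e[1] - py) <= ATTACK_RADIUS]
    return sorted(in_range, key=lambda e: math.hypot(e[0] - px, e[1] - py))
```

Running this check every frame means combat resolves itself while the player's fingers stay off the screen.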
A more touch-oriented way of doing buttons without actually showing buttons is to treat the whole screen as your gamepad. One approach is to divide the screen into left and right halves and interpret any tap or swipe in either half as something specific, as is the case in Shadow Blade; another is to let the game handle context-sensitive input variation and otherwise allow touch input anywhere you want.
For example, if you double-tap to activate a feature, you can double-tap anywhere on the screen. This latter method is used quite successfully in Counterspy.
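Both variants, the half-screen split and the position-independent double-tap, can be sketched together. The screen width, timing window and action names below are illustrative assumptions, not taken from Shadow Blade or Counterspy.

```python
# Sketch of "the whole screen is the gamepad": taps on the left
# half mean one action, taps on the right half another, and a
# double-tap anywhere triggers a special move regardless of where
# the fingers land. Values and action names are illustrative.
SCREEN_WIDTH = 1080
DOUBLE_TAP_WINDOW = 0.3  # seconds between taps to count as a double-tap

class TapInterpreter:
    def __init__(self):
        self.last_tap_time = None

    def on_tap(self, x, time):
        # Position-independent gesture first: double-tap anywhere.
        if (self.last_tap_time is not None
                and time - self.last_tap_time <= DOUBLE_TAP_WINDOW):
            self.last_tap_time = None
            return "special"
        self.last_tap_time = time
        # Otherwise fall back to the half-screen split.
        return "jump" if x < SCREEN_WIDTH / 2 else "attack"
```

Note that the first tap of a double-tap still fires its half-screen action here; a real game would typically buffer it for the length of the window before committing.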
Fire and Forget
Some games separate input from action. Angry Birds is the most obvious example. You use your finger to aim and then fire the slingshot, but once you have, your input is very limited. Some of the special birds let you use a bit of reactive input (more on that below), but for the most part, there's a clear separation between interaction and result.
A deeper take on the same idea is the way Crimson: Steam Pirates divides the planning from the action and then shows the result in a full simulation based on that planning.
Turnbased games like XCOM: Enemy Unknown use this principle, too. Plan your turn, commit to it, and then see the results unfold without having to worry about realtime interaction.
This is one of the schemes that works perfectly within the limitations of the touch interface, because it guarantees that your hand is never in the way when things are happening, but still lets you watch the action unfold based on your own decisions. It's also a natural fit for asynchronous multiplayer games, like Rad Soldiers.
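The planning/execution split described above amounts to a command queue that only runs once the turn is committed. The `Unit` class and order format in this Python sketch are hypothetical, just to show the shape of the scheme.

```python
# Sketch of the plan-then-simulate scheme: orders are queued while
# the world is paused, then the whole turn resolves at once with no
# realtime input. Class and order names are illustrative.
class Unit:
    def __init__(self, name, x):
        self.name, self.x = name, x
        self.orders = []

    def plan(self, order):
        self.orders.append(order)  # planning phase: nothing happens yet

def resolve_turn(units):
    """Execution phase: play back every queued order, hands-free."""
    for unit in units:
        for kind, amount in unit.orders:
            if kind == "move":
                unit.x += amount
        unit.orders.clear()
```

While `resolve_turn` plays out, the player's hand has no reason to be anywhere near the screen.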
Another way of coping with the hand obstruction problem is to require input only as a reaction. Temple Run and the countless other endless runners out there are good examples of this. Your character is constantly running and interacting with the game world, and you only need to provide input when something bad is about to happen.
Similarly, in Deep Dungeons of Doom, the game runs on a cooldown cycle and requires you to tap the left or right half of the screen with the right timing to attack or block incoming attacks. You spend most of your time looking at the in-game animation and trying to interpret what enemies are doing, rather than with realtime interaction.
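A timing-window check like this boils down to comparing the tap time against the moment the telegraphed attack lands. This Python sketch shows the general technique with an assumed window size; it is not Deep Dungeons of Doom's actual code.

```python
# Sketch of a reactive timing check: the enemy telegraphs an attack
# that lands at a known time, and a block only succeeds if the tap
# falls inside a short window before impact. Window size is illustrative.
BLOCK_WINDOW = 0.25  # seconds before impact in which a block succeeds

def resolve_block(tap_time, impact_time):
    """Return the outcome of a block attempt against a telegraphed hit."""
    if tap_time > impact_time:
        return "hit"       # tapped after the attack already landed
    if impact_time - tap_time <= BLOCK_WINDOW:
        return "blocked"   # tapped inside the timing window
    return "too_early"     # block dropped before the attack arrived
```

Tuning `BLOCK_WINDOW` is where the feel lives: a generous window rewards reading the animation, a tight one demands reflexes.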
The Room is unique to my frame of reference in that it uses touch gestures in a three-dimensional world. You get a gorgeous puzzle box, and the gestures are your hands moving along its surface, trying to experiment with the levers and other contraptions that bring you closer to what's inside the box.
It's one of the games that truly inspires and shows what can be done if you take a step back and build games that are specifically built for touch interfaces. I can't recommend it enough.
Some games, like the immensely popular DragonVale and Clash of Clans, rely heavily on the Starbucks test and are designed in such a way that they mostly play themselves. You start the construction of something, then wait until it's done. In the meantime, there's not a lot you can do without spending money to finish the process and proceed to make more decisions.
When you're not waiting for that macchiato, these games simply chug along behind the scenes to build you that new habitat or produce more elixir. They are perfectly suited not just to touch interfaces as a controller (they're typically menu games), but also to how most people use their mobile devices.
The final way you can handle the dilemmas of designing for a touch interface is to not use it. In games like Real Racing 3 and Tilt to Live, you use the device gyroscope to make realtime control adjustments. This is a great way to eliminate the problems entirely, but puts other requirements on the game's design and is therefore a lot harder to make good use of unless it makes thematic sense.
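Tilt control of this kind usually maps the device's roll angle to a clamped steering value, with some smoothing to filter out sensor jitter. The maximum angle and smoothing factor in this Python sketch are illustrative assumptions, not values from Real Racing 3 or Tilt to Live.

```python
# Sketch of tilt steering: the device's roll angle (from the
# gyroscope/accelerometer) is mapped to a steering value in [-1, 1],
# leaving the screen untouched. Constants are illustrative.
MAX_TILT = 45.0   # degrees of roll that map to full steering lock
SMOOTHING = 0.2   # low-pass factor to filter out sensor jitter

def tilt_to_steering(roll_degrees, previous=0.0):
    """Map a roll angle to a smoothed steering value in [-1, 1]."""
    target = max(-1.0, min(1.0, roll_degrees / MAX_TILT))
    return previous + SMOOTHING * (target - previous)
```

The low-pass step matters: raw motion-sensor readings are noisy, and feeding them straight into steering makes the car (or the little arrow) feel twitchy.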