|
The topic of this blog post is the most important object in games: the human body. Humans are at the centre of most games. Take 10 popular games: 8 or 9 are based around humanoid characters.
Despite this, animations remain the least realistic part of modern games. Frequently there is a striking difference between the graphical realism of the characters and the stilted way they move. Just look at the Husks in the Mass Effect series.
It doesn't have to be that way. A company called NaturalMotion has made an engine called Euphoria that simulates human bodies. It has been around for several years and produces some neat results, but NaturalMotion only licenses the engine to a small number of companies, such as Rockstar, who use it only in the biggest blockbusters: GTA IV, Red Dead Redemption and Max Payne 3.
They probably have reasons for this, but the approach seems anachronistic. Why not take the opposite approach and supply a general humanoid runtime to the public for free with an open API? Every parameter, such as bone length and muscle strength, would be open for manipulation by other developers and end users, who would also create skins, layered behaviours, and 3D environments for the humanoids to interact with.
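To make the idea concrete, here is a minimal sketch of what such an open API might look like. Everything here is hypothetical: the names (`HumanoidRig`, `Bone`, `Muscle`) and units are invented for illustration, not drawn from any real engine.

```python
# Hypothetical sketch of an open humanoid API: every physical parameter
# (bone lengths, muscle strengths) is exposed for manipulation.
from dataclasses import dataclass, field

@dataclass
class Bone:
    name: str
    length_m: float       # bone length in metres, open for manipulation

@dataclass
class Muscle:
    name: str
    max_force_n: float    # muscle strength in newtons, open for manipulation

@dataclass
class HumanoidRig:
    bones: dict = field(default_factory=dict)
    muscles: dict = field(default_factory=dict)

    def set_bone_length(self, bone: str, length_m: float) -> None:
        self.bones[bone] = Bone(bone, length_m)

    def set_muscle_strength(self, muscle: str, max_force_n: float) -> None:
        self.muscles[muscle] = Muscle(muscle, max_force_n)

# Other developers and end users could then build custom characters,
# e.g. a long-armed, strong-shouldered climber:
climber = HumanoidRig()
climber.set_bone_length("forearm_l", 0.31)
climber.set_muscle_strength("deltoid_l", 900.0)
```

The point is not this particular design, but that the parameters are data, not baked-in constants, so the community can build tooling around them.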
This would open possibilities to quickly create deep and varied games in any human-based genre, and perhaps create some new genres as well. One idea is a rock climbing game, in which the player climbs rock walls using the mouse to directly control one limb at a time. This might sound simple but could make for a very deep game if the objective were not just to scale walls but to find the optimal character for each wall, tuning limb lengths and muscle strength.
There are some challenges associated with the open approach. First, exception handling in the games needs to be "gamified". Since the simulated object (the human body) is so complicated, there will be situations where the simulation breaks. In those cases there should not just be mechanisms for getting out of the situation; players should feel that identifying, avoiding or managing these situations is an integral part of the game itself.
Second, general control schemes need to be developed. This is easier said than done. But if the API is good enough, perhaps someone in the developer community will crack it. (I have an idea for a control scheme for general melee combat, using a mouse and a few buttons. I’ll describe it further if someone is interested.)
So where will the money come from? This is intriguing because the open model might be unusually suitable for free-to-play models, since it is based on clearly defined parameters that users would like to increase within the context of the game. There could be a version of the API that has a limit for the total muscle strength of the characters, which would cost money to increase. Could muscle power be the ultimate free-to-play currency?
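The muscle-strength cap could be as simple as a budget that the free tier of the hypothetical API enforces. A rough sketch, with all class names and numbers invented for illustration:

```python
# Sketch of the free-to-play idea: the free tier enforces a cap on the
# total muscle strength of a character; paying raises the cap.
class StrengthBudget:
    def __init__(self, cap_n: float):
        self.cap_n = cap_n          # total allowed muscle force, in newtons
        self.allocated_n = 0.0

    def allocate(self, force_n: float) -> bool:
        """Try to assign force_n of strength to some muscle."""
        if self.allocated_n + force_n > self.cap_n:
            return False            # over budget: time to prompt an upgrade
        self.allocated_n += force_n
        return True

    def purchase_upgrade(self, extra_n: float) -> None:
        self.cap_n += extra_n       # the monetisation step

budget = StrengthBudget(cap_n=2000.0)
assert budget.allocate(1500.0)      # fits under the free cap
assert not budget.allocate(800.0)   # rejected: would exceed the cap
budget.purchase_upgrade(1000.0)
assert budget.allocate(800.0)       # fits after the paid upgrade
```

Because strength is a clearly defined, bounded number, the thing being sold is unambiguous to the player, which is part of what makes it plausible as a currency.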
All in all, I believe game animation is ripe for a shake-up. If NaturalMotion doesn't figure out how to make a general animation engine available for everyone, someone else will.
If anyone has any thoughts about the human body in games, or the state of procedural animation, please comment.
|
Have you checked out the game GIRP at all? The concept of a climbing-wall-limb-by-limb game called it to mind. It does a great job of simulating the arduousness of climbing, and though it really does not stress animation realism at all, I think it succeeds really well at showing those edge cases of simulation-breaking in a hilarious way (if perhaps to the extreme of making crazy limb angles sort of the goal). In a more realistic version like the one you proposed, it might be interesting to have speed of motion be a factor in how likely an injury is to occur. The game might be a downer though, haha!
Seriously though, consider a game where you walk using the WASD keys to move and the mouse for the camera.
When an opponent approaches you hold down the "aim" button. The camera shifts to a static position viewing your character more from the side. The mouse starts to control your fist's position in a curved, mostly vertical 2D plane to the side of your body. One of the edges of this plane would be the central vertical axis of your opponent's body, head to thigh. You would be able to land punches along this axis with different force and direction by moving the mouse.
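The mapping described above could be sketched roughly like this. The function name, the constants and the choice of axes are all assumptions made for illustration, not a worked-out design:

```python
# Rough sketch: normalised mouse coordinates drive the fist's target on a
# curved, mostly vertical plane beside the body.
import math

def mouse_to_punch(mx: float, my: float):
    """
    mx, my in [0, 1]: mouse position within the aim area.
    Returns (height_m, lateral_m, force_frac):
      height_m   - target height along the opponent's head-to-thigh axis
      lateral_m  - sideways offset, curving the plane around the body
      force_frac - punch force; here driven by how far the mouse is pushed in
    """
    head_m, thigh_m = 1.7, 0.8                    # opponent axis endpoints
    height_m = thigh_m + my * (head_m - thigh_m)  # vertical aim point
    lateral_m = 0.4 * math.sin(mx * math.pi / 2)  # curved, not flat, plane
    force_frac = mx                               # deeper travel = harder hit
    return height_m, lateral_m, force_frac
```

So a flick of the mouse to the top-right would land a hard punch near the head, while a gentle nudge low on the plane would give a soft body blow.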
Consider a zombie game where only certain points of the zombies' bodies are vulnerable. If you hit these points too softly you don't stop them, if you hit them too hard they explode and contaminate you.
Since you constantly have to get up close to the zombies and perform a fairly complicated maneuver, the game would certainly keep you on edge.
It is also a suitable first implementation of the scheme since it is a scenario where your opponents don't hit back too much, which would make things more complicated.
No one said this was easy. But with the engine for the body itself available, someone might make it work, and become very rich. Because if there is anything the last few years have taught game developers, it is that games that accurately model simple physical mechanisms (Wii Sports, Angry Birds, etc.) can be very popular.
http://www.foddy.net/Athletics.html
The time cost of integrating their solution and making sure it blends well with the rest of the engine, and the cost in money.
Good point... Even if the product is amazing and does everything you had ever hoped for... you still have to fold it into your setup, which costs resources.
This is why, when Euphoria is used, it is for hit reactions, falls, deaths, etc.: things that have no precise impact on the player's input or gameplay. Secondary, reactive visuals.
Personally, I think the Husks' run animations are the only ones that still stick out to me months after playing the game. You can tell they started with, presumably, a mocap run and then tweaked it to make it feel a little less than human, which suits the personality and function of Husks to a T. Even when using Euphoria, multiple run cycles would have to be authored, either keyframed or mocapped, which the engine then blends between, in conjunction with physics, to get results.
I do agree that game animation has a long way to go. But less human/artist authoring isn't the answer. Smarter authoring, with more care of the design, engine and implementation are what is called for.
" I do agree that game animation has a long way to go. But less human/artist authoring isn't the answer. Smarter authoring, with more care of the design, engine and implementation are what is called for."
And in the same vein believability trumps realism. Which is why motion controls often fail as well as virtual reality.
That said, it's tough to balance believable protagonist animation with gameplay that's responsive enough for players to feel in control and engaged. I think a system of handling responsiveness based on physics and animation could lead to the more realistic characters we all want to see in games.
Still, humans are pretty responsive in the physical world as well. Take a Super Soaker, or the closest equivalent to a firearm you have, and see how many times a second you can re-aim it 180 degrees. Three times a second is not impossible.
You're absolutely right, it can become a problem if developers push too far in that direction. I think Prince of Persia (the 2008 version) suffered from that feeling of being too locked into animations. Examples like Unreal Tournament clearly show that the need for realistic animations needs to be weighed against the goals and objectives of the game, and in some cases they really aren't necessary. If a game is about fun and precision more than realism or believability, the gameplay shouldn't be restricted by animations.
@Karl
I think the disconnect between player movement and avatar movement with motion controls is a really interesting problem that doesn't get talked about enough. Maybe this is because there aren't any perfect solutions. The best I've seen is where there is enough abstraction between the player's movement and its onscreen representation that discrepancies aren't as jarring for the player. Another solution, more in line with my first post, could be achieved through player character AI. If the character knows how to react to an attack being blocked in such a way that it can sync up with the player's position quickly enough, the experience would still feel relatively seamless. Or there's a slightly riskier solution where you make block-reactions a necessary part of the player's actions. Imagine if, when your attack is blocked, you have to recognize that and react accordingly for your character to recover. It would be tougher to get buy-in from the player, but ultimately it's probably the easiest to implement from a tech standpoint.
On your second comment, I won't dispute that humans can be very responsive in real life. However input mechanisms in videogames to date are too abstracted and simple for players to convey the precision that they are capable of in the real world. That's why I think protagonist AI is essential in bridging that gap. It could (theoretically) see the player intent behind the rudimentary fiddling of analog sticks and binary button presses, and execute what the player meant to do more accurately than animation loops being triggered by simple inputs.
That makes a lot of sense in games like UT and control schemes that evolved from the natural ease of mlook, but I don't think "removing all limits" is always the best design philosophy.
As an example, should a Marine in a game set within the Aliens universe be able to turn 180 degrees instantaneously? Should the gun stop or move when it makes physical contact with another object? I don't think the way we're doing things now is the only way to do them.
But I do think that control should come first, and animation should reinforce.