Systemic Feedback in Competitive Games
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.
Feedback is without a doubt central to games of any kind. It is a core element of the interaction between player and gameplay system. The player's actions are processed by the ruleset, possibly under the influence of an opponent, before the system reacts in a specific way. The information the player receives by observing this reaction helps him hone his mental model of the system and thus improve his skill. Feedback closes the gameplay cycle, facilitating a process of iterative learning. This is essentially how competitive, i.e. skill-based, games generate fun.
Audiovisual feedback, sometimes also called "game feel", conveys small-scale lessons about rather immediate factors such as controls, dexterity, or reflexes. Understanding the rules and how they relate to each other plays only a secondary role here. Systemic feedback, in contrast, is all about the long-term improvement of the player's grasp of the gameplay's intricacies. It is tightly coupled to the result of a match in the form of a win or a loss. All actions are evaluated in the context of the game's goal and thus gain a specific qualitative significance.
The closer an action takes the player to victory, the better it generally is. Only by processing systemic feedback will the player be able to evaluate his position and direction along this "goal axis" of the game and thus learn from his actions. The following sections of this article will try to shed some light on the characteristics of efficient systemic feedback, regarding gameplay itself as well as potential metagame structures.
First, let's look at the concrete feedback a player receives immediately while playing. It is absolutely essential that this feedback is comprehensible, so that the competitive learning cycle can get started. Actions over the course of the game should therefore ideally be causally related to each other. Incidentally, this principle is also well known in storytelling. The recipient of a story can ascribe much more meaning to a sequence of actions of the form "X, therefore Y, therefore Z" than to something like "X, then Y, then Z", which seems rather arbitrary.
The masterminds behind South Park try to stay away from "and then..." connections.
Player actions should have concrete relations to other actions and events. The result at the end of the game can then charge a strong causal chain of actions throughout the match, either positively (victory) or negatively (defeat). Players can retrace step by step how the outcome came about. This analysis often happens subconsciously while playing. However, it can also be done after the fact, for instance with the aid of a replay.
Weak action chains, on the other hand, are severed by systemic distortions ("noise"). One element that can cause this is unfair randomness that affects the quality of an action post hoc, adding insubstantial connections ("and then...") to the sequence of play. This has ripple effects throughout the course of the match: players would have to specifically "subtract out" the influence of the unfairly random effects when analyzing the gameplay. Depending on the system at hand, this can be extremely difficult. Wrong conclusions could easily be drawn and misinformation internalized, which of course leads to a much less efficient learning process.
Another positive implication of a strong causal chain is how much more reliable it makes the systemic feedback. If the result of a match can easily be traced back to the beginning, then there is a good chance that it accurately reflects the skill of the players involved (Keith Burgun calls this metric "goal feedback efficiency").
As a side note, hidden information can impair the reliability of the feedback, even though it is in itself a valuable design tool. The key, once again, lies in striking a balance: on the one hand, the game should not be so solvable that it breaks down into a pure contest of calculation; on the other hand, it has to be deterministic enough to send reliable feedback signals in most cases.
Too much random chaos will make it difficult to extract reliable feedback.
Finding this balance during gameplay is not enough, though. Even before a match starts, there can be an imbalance if the chances of winning are not distributed fairly between all players, e.g. due to bad matchmaking or inherently over- or underpowered gameplay elements. Assume one side in a duel between two players has a chance of winning as high as 90%: the advantaged player could make relatively many wrong decisions without actually losing the game in the end. In other words, he could play worse than his opponent, measured against his own skill. If he still wins, the game sends fundamentally wrong feedback. Optimally efficient feedback in games with one or two player factions therefore always requires a success rate of 50%.
Reaching this probability is also the goal of a classic matchmaking system. It is supposed to confront the player with challenges tailored to his personal skill level. Since most players will increase their skill in a competitive game over time, this also means that the matchmaking has to adapt accordingly, for example by matching players with stronger opponents. In multiplayer games this is a well-established paradigm, found in the ladder, ranking, or league systems of almost any modern title.
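The logic behind such systems can be sketched with the classic Elo formulas (the concrete ratings and the K-factor below are illustrative, not taken from any specific game): when two players have equal ratings, the predicted win chance is exactly the 50% sweet spot, and each result nudges the ratings so that future pairings drift back toward that point.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Predicted win probability of player A against player B (logistic Elo curve)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_rating(rating: float, expected: float, actual: float, k: float = 32.0) -> float:
    """Move the rating toward the observed result; K controls how fast it adapts."""
    return rating + k * (actual - expected)

# Evenly matched players: the system predicts a coin flip.
print(expected_score(1500, 1500))  # 0.5

# A 400-point favorite is predicted to win about 91% of the time,
# so a win barely moves his rating, while an upset loss moves it a lot.
favorite = expected_score(1900, 1500)
after_win = update_rating(1900, favorite, actual=1.0)
after_loss = update_rating(1900, favorite, actual=0.0)
```

Note how this mirrors the feedback argument above: a result that was already 90% likely carries almost no new information about skill, and the rating update reflects exactly that.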
A dynamic ranking system is the basis of long-term efficient feedback.
The equivalent for single-player games is far less prevalent, even though dynamically adjusting the difficulty would be just as important for the player to steadily receive efficient feedback. Auro recently laid a possible foundation with its "single-player Elo system". Every match is self-contained and can equally be won or lost. In between playthroughs different aspects of the game system, such as the level generator or the goal, are tweaked to adapt to the player's skill. This metagame approach was recently revisited in multiple titles by BrainGoodGames, as well as Zach Gage's Really Bad Chess.
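The details of Auro's system aren't spelled out here, but the core idea of such a single-player metagame can be sketched as a simple staircase (the function and parameter names below are my own assumptions, not Auro's implementation): each self-contained match nudges a difficulty parameter up or down so that the long-run win rate hovers around 50%.

```python
def adjust_difficulty(level: int, won: bool, step: int = 1,
                      minimum: int = 1, maximum: int = 10) -> int:
    """Nudge the difficulty level after a self-contained match.

    A win raises the challenge, a loss lowers it, so the player's
    long-run win rate oscillates around roughly 50%.
    """
    level += step if won else -step
    return max(minimum, min(maximum, level))

# A winning streak ramps the challenge up, a loss pulls it back:
level = 5
for result in (True, True, True, False):
    level = adjust_difficulty(level, result)
# level: 5 -> 6 -> 7 -> 8 -> 7
```

In a real game the "level" would feed into concrete knobs such as the level generator or the goal condition; a rating-based variant (treating the game itself as a rated opponent) would converge more smoothly than this fixed-step version.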
However, it is still far from being a standard format, even though the clear win/loss structure avoids countless problems of the obsolete high-score model: session length and win rate stay constant instead of diverging to infinity or converging to zero respectively; risk management does not break down into a trivial "all or nothing!"; and prematurely conceding matches due to unfavorable randomness is also out of the question. Every match counts and potentially conveys valuable insights.
To be able to make comprehensive use of that information, the player should be able to interpret systemic feedback as regularly as possible. That means he has to traverse full iterations of the gameplay cycle, from the beginning of the game to the end. The course of actions only gains concrete meaning in the light of completion. Therefore a full iteration, generally a match, should be as short as possible. Of course there needs to be enough time for the system to unfold its full depth, but extraneous filler elements should be avoided as much as possible.
The motto of modern competitive games: short matches, lots of meaningful actions.
As an aside, there is a very practical argument to this guideline: People forget things over time. If a player performs an action and does not learn whether it made sense or not until hours later, the feedback will be diluted. The player will not be able to properly reconstruct the sequence of actions anymore. Ideally players should be able to maintain a coherent mental image of the match structure at all times: "How did the current situation come about? What does it mean for my position and the further course of action? What exactly do I want to achieve?" If the player is able to at least roughly answer all these questions, he will also be able to cognitively process the system's feedback.
Finally, speaking of matchmaking, there is one factor that is crucial for any modern multiplayer game: the collective intelligence of its player base. If enough players are involved in the competition, a very granular ranking system forms pretty much on its own, without much design effort. The more players there are, the smaller the gaps in the "skill ladder" become, dramatically increasing the matchmaking potential and allowing for an accurate long-term assessment of players' capabilities.
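As a rough back-of-the-envelope illustration (assuming players spread evenly over a fixed rating range, which no real ladder does exactly), the average rating gap between neighbors on the ladder shrinks in direct proportion to the population:

```python
def average_adjacent_gap(num_players: int, rating_range: float = 2000.0) -> float:
    """Average rating distance between ladder neighbors, assuming an
    even spread of players across a fixed rating range."""
    if num_players < 2:
        raise ValueError("a ladder needs at least two players")
    return rating_range / (num_players - 1)

# Ten players leave coarse gaps of roughly 222 rating points;
# a million players put a near-perfectly matched opponent at every level.
small = average_adjacent_gap(10)
huge = average_adjacent_gap(1_000_000)
```

The even-spread assumption understates the effect in practice: real populations cluster around the middle, making the mid-ladder even denser than this estimate suggests.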
League of Legends is blessed with a huge playerbase and thus a very granular ladder.
In contrast to single-player systems, no values or rules have to be manipulated to generate ever harder challenges. Players are simply matched with stronger opponents over time. The final step is actually becoming the "best player in the world", not just reaching a more or less arbitrary point where the game system becomes too hard to beat. This circumstance gives skill-based multiplayer titles a huge advantage over their single-player counterparts, provided they manage to accumulate sufficient player numbers.
In the end most players are of course not directly concerned with becoming the "best in the world". However they all want to learn, get better, gain competence. That is the unique appeal of competitive games. Good systemic feedback directly supports this pursuit by delivering essential and nonarbitrary pieces of information to players, thus facilitating motivation and preventing frustration.
To sum it up: if the reasons for the result of a match are comprehensible (Causality) and valuable lessons can reliably be drawn from them (Reliability), then the game's specific learning process can be initiated. Its efficiency, and therefore its "fun", can be kept at a constantly high level as long as challenges are continuously (Regularity) adapted to the player's skill level (Adaptability).