Cheating and Gaming the System
by Mario Herger on 06/15/13 03:24:00 pm   Featured Blogs

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 

"The first principle is that you must not fool yourself  - and you are the easiest person to fool." Richard Feynman

One of the first concerns you may hear when you talk about gamification is: "But won't people start cheating?" Of course people will cheat. But here is the thing: people already cheat today, and they may not be as honest as you think. The question is not whether people will cheat or be dishonest, but how, and how much.

Is Cheating Bad for Gamification?

Not all cheating is bad. Some cheating demonstrates very engaged behavior and can be used by the gamification designer to enrich the game. Sometimes a way to cheat is even built into the system to give players the pleasure of taking a shortcut – without them noticing that this was an intended part of the design. Cheat codes that circumvent specific obstacles, or hidden treasures ("Easter eggs"), may even increase adoption of and engagement with a gamified system. And players finding new ways to cheat the system may show gamification designers new ways of making the design richer.

Famously, Captain James T. Kirk of the starship Enterprise (yes, now you know it: I am a total Trekkie) cheated a simulation by reprogramming it. As you may know, the simulation was not supposed to be winnable, but because of Kirk's cheat he could win. In that case, the cheat was regarded as an acceptable way for a starship commander to demonstrate an unusual approach to an unwinnable situation – much to the discontent of the law- and rule-abiding Spock, who had programmed the simulation.

Cheating becomes a problem when players who abide by the rules feel that the game has become unfair, or that the original purpose of the gamified approach is being diluted. As a result, these players will disengage – the exact opposite of what we try to achieve with gamification: engaging people.

As an example, take a professional community where members help each other by blogging, answering questions, creating help documents, and so on, and where points were an important way to signal expertise. An unintended consequence appeared when some members were given goals by their managers to reach a certain number of points within a set period. Some of these members figured out that by creating multiple user accounts, they could ask a simple question with one account, respond with another, and so reward themselves with points and climb the rankings. As you may expect, this behavior made the rule-abiding members go ballistic. They did not care so much about the points; what bothered them was that such cheat-pairs diluted the content by adding redundant and often overly simple questions and answers to the system, which made it harder to find the relevant content. (A sketch of how such pairs can be detected follows.)
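
Sockpuppet question-and-answer pairs leave a detectable statistical footprint: one account's answers go overwhelmingly to questions asked by a single other account. Below is a minimal detection sketch in Python; the event format and the thresholds are illustrative assumptions of mine, not details of the community platform described above.

```python
# Minimal sockpuppet Q&A-pair detection sketch (assumed data format).
from collections import Counter

def flag_cheat_pairs(events, min_answers=10, min_share=0.8):
    """events: iterable of (asker_id, answerer_id) tuples, one per answered question."""
    events = list(events)
    pair_counts = Counter(events)                    # (asker, answerer) -> count
    answers_by_user = Counter(a for _, a in events)  # total answers per account
    flagged = []
    for (asker, answerer), n in pair_counts.items():
        share = n / answers_by_user[answerer]        # share of answers to one asker
        if n >= min_answers and share >= min_share:
            flagged.append((asker, answerer, n, round(share, 2)))
    return flagged

# "bob2" answers 12 questions, all asked by "bob1" -> a suspicious pair.
events = [("bob1", "bob2")] * 12 + [("carol", "dave")] * 3
print(flag_cheat_pairs(events))  # [('bob1', 'bob2', 12, 1.0)]
```

A flag like this should trigger human review rather than automatic punishment; a legitimate mentor answering one newcomer's questions can look similar.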

While a certain level of cheating will always exist (and already does), we want to keep it within bounds, react to it, and enforce the rules. For that, we need to understand when and why people cheat.

The Psychology behind Cheating

Are there ways to reduce dishonesty and cheating in a game? Dan Ariely, professor of psychology and behavioral economics at Duke University, researched when and how much his test participants would cheat. In one study, Ariely asked a group of students to take a test consisting of 50 multiple-choice questions. When the students were done, they were asked to transfer the answers from their worksheet to a scoring sheet. For every correct answer, students would receive 10 cents.

Now here is the twist: the test subjects were split into four groups. The first group had to hand both the worksheet and the scoring sheet to the proctor. For the second group, the scoring sheet had a small but important change: the correct answers were pre-marked. Would they cheat more? They still had to hand over both the worksheet and the scoring sheet. The third group also had the correct answers pre-marked, but these students were instructed to shred their worksheets and hand only the scoring sheets to the proctor. The fourth and final group again had the scoring sheets pre-marked with the correct answers, but these students were instructed to shred both the worksheet and the scoring sheet and simply take the appropriate number of 10-cent coins out of a jar. The third and fourth groups basically had carte blanche to cheat to the maximum, as nobody could verify their claims. What were the results?

According to Ariely's study, the first group, which had no chance to cheat, answered 32.6 of the 50 questions correctly. The second group, with an opportunity to cheat (but a risk of being caught), reported 36.2 correct answers. As this group was no smarter than the first, it had indulged in a bit of cheating, "improving" its score by 3.6 questions. The third group, which could cheat without being caught – after all, their original worksheets had been shredded before anyone else could look at them – reported 35.9 correct answers, about the same as the second group. The fourth group – remember: the one with carte blanche to cheat – reported 36.1 correct answers.

The surprise here is not that people cheated, but that the risk of being caught did not influence how much they cheated. The students did not push their dishonesty beyond a certain limit. The question was whether something else was holding them back. Could external controls enforce honesty? Ariely, who was skeptical about the power of external controls – just as we are skeptical about extrinsic motivators – tried something else. In another experiment he asked two groups of students to solve a test with 20 simple problems; in each problem, the participant had to find two numbers that add up to 10. They were given five minutes to solve as many problems as they could, after which they were entered into a lottery. In the lottery they could win ten dollars for each problem they had solved correctly.

The first group, which served as the control group, had to hand their worksheets to the experimenter. The second group was asked to write down their number of correct answers and shred the original worksheet – this was the group the setup encouraged to cheat. But before working on the main task, the participants were given another task: half of them were asked to write down the names of 10 books they had read in high school; the other half were asked to write down as many of the Ten Commandments as they could remember.

The result was clear: the control group solved on average 3.1 of the 20 problems correctly. The group that had recalled 10 books from high school claimed an average of 4.1 correct answers (33% more than those who could not cheat). The group that had recalled the Ten Commandments claimed on average 3 correctly solved problems. Although the participants were not asked to say what the Commandments meant, just to recall them, the simple request to write them down had an effect on their honesty.

The conclusions for us: although people cheat a little all the time, they don't cheat as much as they could, and reminding them of morality at the moment they are tempted tends to make them more honest. A players' code of conduct that reminds players to behave ethically – similar to the oaths and pledges that doctors and other professionals take – may be a good way for some player communities to keep cheating at a low level.

Is cheating more prevalent when money is involved? After all, tangible rewards are more useful than mere points. Dan Ariely asked himself this as well, and came up with a couple of interesting experiments to test it.

In his first experiment he put six-packs of Coke in dormitory fridges – the communal ones accessible to all students living in the dormitory. Over the next days he frequently returned to check on the cans. Ariely found that the half-life of a Coke isn't very long: after 72 hours, all the cans had disappeared.

What about money? In some of the fridges Ariely instead placed a plate with six one-dollar bills and did the same: he returned frequently, and found a completely different result. After 72 hours, all of the one-dollar bills were still on the plate. This was a stunner. People take goods but leave the money, although both are of similar value. Does the perception of dishonesty depend on whether we are dealing with money, or with something one step removed from money?

To understand this behavior better, Ariely came up with another experiment. He asked students in MIT cafeterias to participate in a little experiment: solving 20 simple math problems, with 50 cents paid for every correct answer. Ariely split the students into three groups. The first group had to hand their test results to the experimenter, who checked the answers and paid them. The second group was told to tear up their worksheets and simply tell the experimenter their results to receive payment. The third group was also told to tear up their worksheets, but received tokens instead of cash; with these tokens they walked 12 feet to another experimenter, who exchanged the tokens for cash.

What happened? The first group, as you already know from the other experiments, was the control group; they solved an average of 3.5 problems correctly. The second group, which had been instructed to tear up their worksheets, claimed 6.2 correct answers – Ariely could attribute the extra 2.7 questions to cheating. But the participants in the third group – who were no smarter than those in the first two – claimed to have solved 9.4 problems: 5.9 questions more than the control group, an increase of nearly 170%.

As soon as a non-monetary currency was inserted between the participants and the cash, people felt released from their moral restraints and cheated far more.

The Austrian artist and world vagabond (as he calls himself) Thomas Seiger did an interesting art project: he gave money away. He positioned himself in busy squares in cities all around Austria, carrying a tray with coins and banknotes. Attached to the tray was a sign saying "Money to give away." In total, he carried several dozen euros on the tray.

After traveling through South East Asia, he had realized that material possessions were interfering with his understanding of what life is, and he decided to sell his worldly possessions and give the money away. But this turned out to be harder than imagined. People who stopped asked him incredulously whether there was a catch. "No," he kept replying, "take as much as you want, no strings attached."

Still, most people did not take any money. Those who did picked small denominations, but often came back to return them; they felt bad, or embarrassed at having taken the money. "Taking money is more difficult than giving," says Thomas. A group of seventh-graders approached him as well. After some hesitation, the most courageous boy stepped forward to take a coin, but the girls in the group immediately called him back harshly. In the end, one of the seventh-graders took out his wallet and emptied it onto the tray. One after the other, the others tossed more money onto the tray, ignoring Thomas's protests.

If Seiger had given away something else, like candy or balloons, he would have run out of stock much faster. After all, candy is not money, and people can justify taking it more easily: the balloon or the candy is for my son.

Both Ariely's experiments and Seiger's art project showed that dealing directly with cash keeps us more honest, while being one step removed from money makes it more likely that people start cheating and lose their moral restraints.

This is an important lesson for gamification designers. If you reward players through extrinsic means, you set them up for cheating. You also ensure that your game masters will spend a significant amount of their time dealing with it: finding, punishing, and eliminating cheaters; constantly adapting rules to make cheating harder; dealing with the dissatisfaction of "honest" players; and always fearing the sword of Damocles of players disengaging in droves once cheating takes over. Another reminder that rewards should be intrinsic, not extrinsic.

Another aspect of when players are more likely to cheat came up when we learned about Márta Fülöp's research on reactions to winning and losing in competitions: players with narcissistic tendencies tend to feel entitled to win, and fairness becomes an afterthought. They are more likely to cheat.

How to Reduce Cheating?

Drawing on the studies and outcomes above, we have a number of options to reduce cheating. The first cluster of options is a balancing act between value, effort, and transparency:

  1. Decrease the perceived value of rewards
  2. Increase the effort required to game the system
  3. Shamification

If you use intrinsic rewards with no transferable real-world value, or perks with only a low exchange value, players will be less tempted to cheat. To avoid attracting players who are only after the rewards and add no value to the system, use rewards with a large perceived-value differential between the target audience and the rest of the world.

The next approach is to make the combination of reward metrics so complex that it cannot easily be worked out how to game them. An example is Google's PageRank: the way Google decides at which position a link comes up in the search results is a secret sauce that is also subject to constant change.
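
For illustration, here is a minimal power-iteration sketch of the published core of PageRank. This is only the textbook idea; Google's actual ranking layers many undisclosed, constantly changing signals on top of it, and it is those unknown weights, not the math below, that make the metric hard to game.

```python
# Minimal power-iteration PageRank sketch (textbook version, not Google's).
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    """adj[i][j] = 1 if page i links to page j."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Row-normalize; dangling pages (no outlinks) spread their weight evenly.
    transition = np.where(out_degree[:, None] > 0,
                          adj / np.maximum(out_degree[:, None], 1.0),
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (rank @ transition)
    return rank

# Three pages: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
print(pagerank([[0, 1, 0], [0, 0, 1], [1, 1, 0]]))
```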

If you use metrics that are less susceptible to gaming and require high effort, you can also keep cheating at lower levels. Michael Wu describes the following two variations that demonstrate how to increase the required effort (a code sketch of the first variation follows the list):

  • Time-bounded, unique-user-based reciprocity metrics (TUUR metrics) -> e.g. number of retweets
  • Time-bounded, unique-content-based reciprocity metrics (TUCR metrics) -> e.g. number of Likes
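
As a sketch of the first variation: a TUUR-style metric counts each unique user at most once per time window, so inflating it requires recruiting many real accounts instead of hammering the counter with one. The event format and the 24-hour window here are my own illustrative assumptions, not Michael Wu's exact specification.

```python
# Minimal TUUR-style metric sketch: one count per unique user per window.
from datetime import datetime, timedelta

def tuur_score(events, window=timedelta(hours=24)):
    """events: list of (user_id, timestamp) retweet events, in any order."""
    events = sorted(events, key=lambda e: e[1])
    last_counted = {}  # user_id -> timestamp of last counted event
    score = 0
    for user, ts in events:
        prev = last_counted.get(user)
        if prev is None or ts - prev >= window:
            score += 1              # this user counts once per window
            last_counted[user] = ts
    return score

t0 = datetime(2014, 10, 1)
events = [("alice", t0),
          ("alice", t0 + timedelta(hours=1)),  # ignored: same window
          ("alice", t0 + timedelta(days=2)),   # counts again
          ("bob", t0)]
print(tuur_score(events))  # 3
```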

And then there is the opposite technique: not hiding the reward metrics, but making them transparent. Show the public how players achieved their rewards. That way, cheating patterns can easily be detected by others, creating social shame and accountability. I like to call this approach "Shamification."
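
A minimal sketch of what shamification could look like in code: every reward carries a public provenance entry that anyone can audit. The data model is an illustrative assumption of mine, not any particular platform's API.

```python
# Minimal "shamification" sketch: a publicly auditable reward ledger.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Award:
    player: str
    points: int
    reason: str                    # e.g. "answered question #123"
    when: datetime = field(default_factory=datetime.utcnow)

class PublicLedger:
    def __init__(self):
        self.awards = []

    def grant(self, player, points, reason):
        self.awards.append(Award(player, points, reason))

    def audit(self, player):
        """Public: the full trail of how a player earned their score."""
        return [a for a in self.awards if a.player == player]

ledger = PublicLedger()
ledger.grant("bob2", 50, "answered question #123 (asked by bob1)")
for award in ledger.audit("bob2"):
    print(award.points, award.reason)  # anyone can spot cheat-pair patterns
```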

Dan Ariely's research identified some other factors that influence dishonest behavior. Let's first look at the ones that increase dishonesty and cheating.

Players with a high ability to rationalize, or who are very creative, are more likely to be dishonest at some point: they can more easily find seemingly rational arguments for why their dishonesty is not really dishonest. And when players are set up with conflicting interests, they basically must cheat to satisfy them all; as we know from the financial system, short-term interests often conflict sharply with long-term ones.

If we see immoral behavior, or live in a culture where immoral behavior is rewarded, we are more likely to be dishonest ourselves. Often a single immoral act by a player is enough to justify future immoral behavior.

Interestingly enough, altruistic behavior can also bring out dishonesty. If somebody else profits from our dishonesty – say, people in our team with whom we have a bond – we are more likely to cheat for them, even and especially if we get nothing out of it ourselves.

And willpower is an important factor. Willpower is a depletable resource, and when it runs low, it may not be sufficient to resist dishonest behavior.

Methods that decrease cheating include the above-mentioned transparency, but also reminding people, in multiple ways, of morals, laws, and honor codes. Here is the twist: players must be reminded of them before(!) they start playing the system. If you have players pledge (for example, by repeating the game's honor code), give them a moral reminder (such as listing the Ten Commandments or similar moral standards), or have them sign a commitment to stick to the rules and not cheat, then, according to Ariely's research, cheating remains at a low level. But it is important that all of this happens before(!) the players start interacting with the system.
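
In a gamified application, that ordering can be enforced in the session flow itself: no pledge, no session. A minimal sketch, with hypothetical names and flow:

```python
# Minimal pledge-gate sketch: the moral reminder happens *before* play,
# per Ariely's finding. Names and flow are illustrative assumptions.
HONOR_CODE = "I will play fairly and will not game the scoring system."

class PledgeRequired(Exception):
    pass

def start_session(player_id, pledged: bool):
    if not pledged:
        # No session until the player has actively confirmed the pledge.
        raise PledgeRequired(f"Show honor code to {player_id} first: {HONOR_CODE}")
    return {"player": player_id, "session": "active", "pledged_to": HONOR_CODE}

try:
    start_session("bob1", pledged=False)
except PledgeRequired as e:
    print(e)
print(start_session("bob1", pledged=True))
```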

The last method worth mentioning is supervision. Cheating and dishonest behavior are reduced if players know that each of their steps is monitored.

[Figure: Factors that influence dishonesty © Dan Ariely]

In contrast to conventional wisdom, the probability of being caught in the act of cheating and the amount of money that can be gained have no influence on how much people cheat.

Cheat-Detecting Software

The video game world is very familiar with cheating, and while measuring and detecting cheating is never easy, a number of countermeasures have been developed to prevent cheating or make it harder. Besides gamer etiquette, PunkBuster, Valve Anti-Cheat, and Warden are some of the many software solutions used by online games and MMOs to reduce cheating.
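
To make the idea concrete, here is one generic class of server-side check such tools rely on: validating that what the client reports is physically possible under the game's rules. This is a hedged illustration of the general technique, not how PunkBuster, Valve Anti-Cheat, or Warden work internally; the speed limit and tolerance are assumed game constants.

```python
# Minimal server-side sanity-check sketch: reject impossible movement.
import math

MAX_SPEED = 7.0  # game units per second; an assumed game-design constant

def plausible_move(pos_a, pos_b, dt, tolerance=1.2):
    """Reject position updates implying speed beyond MAX_SPEED * tolerance."""
    if dt <= 0:
        return False
    distance = math.dist(pos_a, pos_b)
    return distance / dt <= MAX_SPEED * tolerance

print(plausible_move((0, 0), (3, 4), dt=1.0))    # True: 5 units/s
print(plausible_move((0, 0), (30, 40), dt=1.0))  # False: 50 units/s, speed hack?
```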

Conclusion

To conclude on this topic, be aware of and prepared for the following (a minimal detection sketch for point 4 follows the list):

  1. People will always try to find ways to cheat
  2. Not all cheating is bad cheating
  3. Anticipate the ways in which this may happen
  4. Make sure you can detect and react swiftly to cheating patterns, to prevent negative impacts on your gamified system and its honest players
  5. Test your system to detect cheating opportunities early
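
As a concrete example of point 4, a simple statistical screen over daily point gains can surface candidates for review. The robust (median/MAD-based) z-score and the 3.5 threshold are illustrative choices of mine; a production system would combine many signals and always review flags by hand.

```python
# Minimal cheat-pattern screen: flag extreme daily point gains.
import statistics

def flag_outliers(daily_points, threshold=3.5):
    """daily_points: dict of player_id -> points earned today."""
    values = list(daily_points.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)  # robust spread
    if mad == 0:
        return []
    return [p for p, v in daily_points.items()
            if 0.6745 * (v - med) / mad > threshold]

today = {"alice": 40, "bob": 55, "carol": 35, "dave": 60, "mallory": 900}
print(flag_outliers(today))  # ['mallory'] – flag for review, not auto-ban
```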

This article is part of Mario's book Enterprise Gamification – Engaging People by Letting Them Have Fun, released in July 2014.

Get it on Amazon in print or for Kindle.


Comments


Ramin Shokrizade
I found Dr. Ariely's research very useful when I was building my early economic models. Cheating can be very destructive if it ends up undermining the equity of a virtual economy. It also can upset your entire player base if you have an active Persistent Gaming Collective as I describe in my "Group Monetization" paper. Hardening your game against cheating is important, but shamification is always the best prevention and remedy for cheats. Even narcissists will pause when confronted by the possibility of shame.

I should point out that when I survey cheaters, they almost always describe their activity as a sign of their superior intelligence: they were just smarter than the others, which is why they found or used the cheat before anyone else did. I tend to associate this type of response with narcissistic behavior, and I go to great lengths in my designs not to reward it, as it has a chilling effect on the fun factor in a game.

Still, it should be pointed out that even if you don't want to cheat, if the other side is cheating and you have a choice between also cheating or losing, most will cheat as a way to re-establish parity. This does not apply in "pay to win" environments, because those mechanisms allow unlimited stacked cheating; if there were only one layer of cheating, you would see a group parity effect pull all participants in. Again, I rely on this difference heavily in my monetization models.

Michael Hahn
When I owned one of the largest esports ladder and tournament sites, cheating was always an issue because of the lack of repercussions for those who got caught. Sure, they would be branded cheaters, but they could simply go elsewhere. The value of the prize and the clout of winning also played a huge part. The best way to stop cheating is to have spectators watching; cheating in games became such a problem at one time that you needed referees watching, screenshots to check players' skins, and so on. I created a cheat patch for Jedi Knight: Dark Forces 2 and hosted a 1,000-dollar cheat patch for Ghost Recon and Rainbow Six back in the day. Someone once told me the best way to spot a cheater is to become one; while that idea is flawed, it may be true. I will never support cheating and have never cheated. I miss playing FPS shooters competitively, but stopped due to cheating; I found MMOs and other games with controlled environments a lot more fun.

I would also like to point out that not everyone cheated. It was actually a small percentage of people, and you found them in the top guilds/clans rather than at the bottom: they needed to keep their game at a top level, so they needed to keep their stats up, whereas the bottom clan players played for fun and honesty. I even saw some bad cheaters lose to clean players.

Thanks for the good article.

Ramin Shokrizade
Michael, I think there is a lot of truth to your statement that you have to be a cheater to spot a cheater. When I started selling virtual goods in 1999, I was rewarded financially for finding the most serious weaknesses in games, and became extremely good at breaking games and their economies. Ten years later I wrote the first paper on how to break a game economy and how to stop it.

When I was working on Shattered Galaxy in 2000 and 2001, we had a small team, but one of our members was a dedicated hacker who knew how to break our game in many ways. Others would try to hack our game on a weekly or even daily basis; he was very good at identifying their hacks and submitting remedies.

I think this is the same philosophy behind how the Federal Reserve Board (which is tasked with overseeing our banks and economy) is filled with former leaders of the major banks. It seems a bit unnerving to me that the banks are tasked with policing themselves in this way, but who knows better all the tricks they play?

Andrzej Marczewski
I used to run a gaming clan (CoD 1, CoD 2, Command and Conquer: Renegade). One of the things we did was offer a service to police matches. We had private servers where we tried out every cheat we could, so we could spot the signs in others' behaviour (like shooting from behind a wall at someone you could never have known was there). It was a huge issue at the time and, as you say, spoiled it for a lot of us. Also, yes, the top clans often had the worst culprits – they thought that if they were the "best", you would not question their ability to always be better than you!

Nick Meh
Andrzej brings up a good point that I was thinking about the entire read through the article: I immediately thought of the early days of FPS games, and the same holds somewhat true today.

Back then, the servers weren't company controlled; they were clan controlled, and policing was up to the clans, with their members monitoring actions.

But the reason it rang a bell is that the mass of cheaters with classic aimbots and all those other easy-to-use options really had nothing to gain by it. There was no real-world or game-world advantage; these were the days before golden guns or tracked efficiencies, so number padding wasn't driving it. They just did it to hassle the masses. The same is true today in MMOs: some cheat to gain an advantage, but from every cheater I could converse with before their inevitable boot, the answer was always "I just did it to make everyone else miserable. I love the QQ."

'Some men just want to watch the world burn'

Maurício Gomes
I knew a Counter-Strike player who specialised in beating cheaters without cheating...

Clean versus clean he was average, but versus cheaters he owned them, especially those that used clipping bugs, exploits, and cheats (he knew how to kill people hiding inside walls, and how to spot them in the first place), or those that tended to camp in one place and use an aimbot (they are vulnerable to being killed through walls or having grenades thrown at them).

Of course, some cheats you cannot counter (i.e. super-speed + aimbot + super-jump + seeing through walls), but few people can use those without getting banned...


Michael Hahn
I also wrote a blog here about 18 months ago on cheating in games and in business:

http://www.gamasutra.com/blogs/MichaelHahn/20120131/90992/Combating_Cheating_in_Video_Games_Verses_Business_as_Usual.php

Jonathan Murphy
They tested monkeys to determine how they responded to cheating. Most monkeys preferred to shame the cheater, even when it was to their own disadvantage. The same is true of humans: people may cheat out of curiosity or to save time, but overall they dislike it. It's unfortunate that the minority of cheaters often does significant damage to the majority. Cheaters do give warning signs – they usually go for broke. Cough, politicians, cough.

Erin OConnor
I would be interested (particularly in Ramin's thoughts) in what people think of market "flipping" in MMO environments – where someone buys up ALL of a particular good on the market and then resells it at a substantially higher price.

Michael Hahn
That's not cheating, that's supply and demand – utilizing the economy of a game for benefit. It could be griefing if the prices are too high.

Ramin Shokrizade
Erin, this is a common exploit in both virtual and real economies. I tend to argue against anonymous transactions like you have in GW2, which don't really promote any social interaction in the economy. In a social economy there is a human-to-human transaction occurring, and humans will react to these obvious speculative attacks. A computer can be used to buy up all of a resource instantly; that is not possible when a human is in the loop.

I also get around these problems by charging participants for access to the economy; in this case, anyone able to buy up all of something would have to pay immensely for that level of access. Anyone that heavily invested in a game is going to take the threat of a ban seriously if they are doing something they should not be. Free players never get significant access to the economy in my models, because it could be used to degrade the experience of other players.

Real world and virtual world economies have a lot of weaknesses that spawn whole industries that exploit them. If you understand those weaknesses, and can build an economy from scratch without them, then these agents will go somewhere else to ply their trades.

Erin OConnor
Thanks for the responses.

I see that you have a blog on the GW2 economy as well!

Must go read!

Maria Jayne
When you're anonymous and the most you can lose is access to a game you've already been cheating in, there isn't much of a deterrent. Especially when you can get all kinds of fame and notoriety in the interim, while you cheat and have not been discovered.

Ron Dippold
Since the article focuses on multiplayer exclusively (and so do the comments), I'd like to point out that people still cheat even in entirely non-competitive, non-microtransaction games. This is an interesting case because there's nothing to 'win' except the game itself – it usually means you made the game interesting enough that they won't stop playing, but did a bad job of designing or balancing some of the mechanisms. For instance, cheating through the last catastrophically bad mission in Assassin's Creed 3 seems like a very reasonable choice.

This is still a big bonus of playing a game on PC versus newer consoles - you can 'adjust' the game balance to your preference. Of course you can't let people do that where there's competition or paid microcontent involved, but it's worth considering whether people might cheat at your game even if money/score were not a concern. Perhaps it doesn't apply at all because your design is specifically to frustrate the player.

TC Weidner
I always wondered if sting operations and forbidden digital fruit would work: deliberately build in some ability to cheat, be upfront that doing such and such is cheating and will cause players to be suspended, and then, with the "trap" set and baited, weed out some of the more prolific cheaters in your game.

I also always thought a proactive approach is needed in popular PC FPS games: why not create a few websites yourself that offer aimbots and so forth? Basically set up a sting. You can find out everything you need to know about the cheaters, as they will be paying you for a cheat that just hands you all their info and the ability to ban them anytime you wish.

As I said before, I have no idea why so many large developers put up with these knuckleheads. If you had a retail store, would you let these losers come in to steal and harass your paying customers? Why then would anyone allow it in their virtual worlds?

Ramin Shokrizade
I remember when World of Warcraft came out and the key to successful PvP was to stand on the rooftops in neutral (goblin) cities and shoot anyone who came by. The guards could not go onto the roofs, but would actually kill the *victims* if those victims tried to defend or heal themselves.

When I reported this (daily) to the customer service team, I was told "this is working as intended." Really?!? So I'm supposed to get on the roofs too and shoot people so that the guards will gank them? This went on for three months before it was fixed, but it was never declared inappropriate.

Ron Dippold
@Ramin: I'm pretty sure, from decades of hearing stories from inside (and playing their games), that Blizzard doesn't actually 'design' anything. Obviously they do at some level, but mostly it's a highly organic process where someone gets an idea and implements something, then maybe later they'll tweak the numbers and see what happens... or if enough people complain, someone will code up a hack around it... then they rely on crazy amounts of playtesting for the balancing, and they play enough of their own games that everything is surely addictive at least in the short term. Very little top-down, all bottom-up.

Long story short, 'working as intended' means someone coded it, the code /is/ the design, and it wasn't changed after playtesting (or the intent got lost since they didn't have proper source control) – therefore it is working as designed.

Henry Chong
Personally I think of 'cheating' as a way to escape the actual system of the game to have more fun with it, especially in strategy games like Age of Empires or Dawn of War. Cheating in these kinds of games can let players create scenarios they find hilarious, or make the game run faster and more challenging for themselves and everyone else.

Although this takes away from the real experience of the game, it certainly makes it more fun when you and a few buddies just want to mess around. This kind of 'cheating' is not, I think, a form of dishonesty.

But cheating in games like FPSes and MMOs is quite different: it's more about the individual having fun by making everyone else have less fun.

Michael Wenk
Like any negative behavior in gaming, cheating will happen regardless. The key IMO is to make the cheating work for you rather than trying to fight it.


