As the saying goes, never ascribe to malice that which is adequately explained by incompetence. But when moderators are undertrained and undersupported, their mistakes reflect on the whole company -- including you.
Not too long ago, a small furor rose up regarding a wrongful forum post moderation on Square Enix's official Final Fantasy XIV beta forums. The post in question referred to Square Enix's decision not to permit same-sex marriage in its game, and was removed as "discrimination," among other listed reasons. Here's a chunk:
We are writing to inform you that we have suspended your access to the FINAL FANTASY XIV: A Realm Reborn Beta Tester Forum because on 08-10-2013 you breached the FINAL FANTASY XIV: A Realm Reborn Beta Tester Forum Guidelines.
Arguments For Marriage Equality in Eorzea
Relevant sections from the FINAL FANTASY XIV Guidelines:
Posting aimed to create a negative impact on the community or its members.
Posting that constitutes discrimination against another forum member or group (also including forming groups for the purpose of discrimination), insults, slander, libel, harassment of a group or individual.
Posting spam, including meaningless characters and white space.
Now, I'm not here to get into the post's subject matter. I happen to feel that if you're including a system to model something like marriage in your game, going out of your way to make it exclusively heterosexual is a wasteful expenditure of resources at best, but that's neither here nor there at present. Players and developers alike have pushed for inclusion of LGBT subject matter in games for quite a while now, and for players to bring the issue to Square Enix's forums isn't at all surprising. Neither is Square Enix's decision to skirt around the issue in an official capacity by calling it "controversial."
The outcry here has to do with the fact that the moderated post in question -- by beta tester Aldeus, who was also banned for his trouble -- is cleanly written, professional and polite, not inflammatory or antagonistic in the least. Its removal as "discrimination," on the other hand, must surely speak to the company's bigotry or at the very least some moderator allowing their personal opinions to get in the way of the job, right?
Well, uh, no.
A bit of background here: prior to my work at Gamasutra I made my living as a moderator and community lead for a kids' free-to-play title. My day-to-day tasks consisted principally of removing bad posts from the forums and responding to user questions, as well as moderating comments on our corporate blog and Facebook wall. Our main audience was 8- to 14-year-old boys, who aren't known for either their nuanced understanding of social issues or their graceful way of speaking. Sure, some of the users were quite mature and well-spoken, but the vast majority spoke as you'd expect kids of that age to speak -- especially online, where they believe they're free of consequences.
As a result, for the longest time, we had the word "gay" in our autofilter (alongside "sex," "tit" and "dick," among others, which made for quite a hassle when discussing the sex of a tit you spotted outside Dick's Sporting Goods). It didn't matter how it was used -- the filter would catch it, we would have to assign a severity value indicating how bad it was, and pass it along to another moderator to mete out the punishment, which ranged from a warning to a permanent ban depending on the player's history. Not assigning a value or not punishing the kid were generally not options available to us, at least not without some deleterious effect on our autofilter, our work performance reviews, or both.
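To make the shape of that pipeline concrete, here's a minimal sketch of how such a system might work: a context-blind autofilter flags a post, a value is assigned, and punishment escalates with the player's history. Every name, word list, and punishment tier here is illustrative -- this is an assumption about the general design, not the actual tool I used.

```python
# Hypothetical moderation pipeline: autofilter -> severity -> escalating punishment.
# Word lists and punishment tiers are illustrative, not the real system's.

FILTERED_WORDS = {"gay", "sex", "tit", "dick"}  # context-blind, as described above

PUNISHMENT_LADDER = ["warning", "24-hour mute", "7-day ban", "permanent ban"]

def autofilter_hits(post: str) -> list[str]:
    """Return every filtered word found, regardless of how it was used."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return sorted(words & FILTERED_WORDS)

def punish(prior_offenses: int) -> str:
    """Escalate along the ladder based on the player's history."""
    tier = min(prior_offenses, len(PUNISHMENT_LADDER) - 1)
    return PUNISHMENT_LADDER[tier]

# The filter flags a birdwatching post just as readily as actual abuse:
hits = autofilter_hits("We discussed the sex of a tit outside the store")
```

The point of the sketch is the failure mode: because the filter matches bare words with no sense of context, a perfectly innocent sentence produces the same hit list as a genuinely abusive one, and the moderator is still obliged to assign it a value.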
Was it right? Of course not. But we still had to do it. As contract workers we had very little leverage in the policies or enforcement set forth by our managers, which is not atypical of these kinds of moderated games. Yet still we would be expected to justify the policies to our players, and therein came the real trouble. Whatever we believed -- and some of us were LGBT ourselves -- we had to articulate what was actually a flaw of an inflexible system in terms that made some kind of socially appropriate sense.
"Because it's a mature subject," we said, which became:
"Because it can be used offensively," which turned into:
"You can talk about it, but don't use that word."
If we complained of this cognitive dissonance to our superiors, the answer was always the same: the feature request has already been filed, but it's a low priority for the developers, so just continue executing the rules as stated and be patient.
I worked for this game for over three years and saw, at best, a small handful of our requests implemented. The bigger changes came from policy, rather than tools: a free-to-play game that bans its players can't make money very well, we were told, so we scaled back moderation for many offenses. This had some positive outcomes (the filter was finally tweaked to allow "I'm gay" while catching all other uses) and some devastatingly negative ones (we were no longer able to deal efficiently with some of our more toxic players).
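The eventual tweak -- allowing "I'm gay" while catching all other uses -- can be sketched as an exempt-phrase pass before the word scan. Again, the phrase list and matching rules here are my own illustrative assumptions, not the filter we actually shipped.

```python
# Hypothetical sketch of the filter tweak: exempt specific phrases,
# then scan what's left for the bare word. Phrase list is illustrative.
import re

EXEMPT_PHRASES = ["i'm gay", "i am gay"]

def flags_word(post: str, word: str = "gay") -> bool:
    """Return True if the post uses the word outside any exempted phrase."""
    text = post.lower()
    # Blank out exempted phrases before scanning for the bare word.
    for phrase in EXEMPT_PHRASES:
        text = text.replace(phrase, "")
    return re.search(rf"\b{word}\b", text) is not None
```

Even this small improvement only handles a fixed list of exact phrases; anything phrased slightly differently still trips the filter, which is why policy changes mattered more than the tooling ever did.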
For the record, I don't believe that a positive community and a profitable free-to-play game are at odds with each other. It's been thrilling to see some of Riot's overhauls to the League of Legends community, for example, and I am especially chuffed to see their community management team taking such a visible and consistent stand against harassment. But for the case of my game, under the leadership we had, a positive stand against toxicity was quite late in coming, and the tools we had were not up to the task.
I know I have banned kids for wrongly stated reasons. I tried to avoid it as much as possible, skipping reports rather than assigning some illogical punishment, but if you are processing 6,000-8,000 reports per shift, mistakes happen. Even if you know a kid's post needed to be taken down to prevent a flame war or the like, how do you articulate that with half a dozen autofills and a split-second to make a decision?
Several mods had workarounds for this, including custom keymapping and browser plugins. I had a running plaintext file full of canned responses I would work from. But it doesn't change the fact that at the end of the day, our superiors were responsible for providing us with better tools than what we had.
I can't tell you with any certainty that the moderators at Square Enix's FFXIV forums work the same way, but going off the moderation notice, I'll bet you it's similar. Why is it marked as "discrimination" and "aimed to create a negative impact on the community"? Because they had a form to fill out, and these were among the options that clashed the least with the content of the post. Because they knew it was not supposed to be on the forums, but they didn't have a good explanation integrated into their backend to explain why. Because that moderator might hold beliefs -- strong beliefs, even! -- about a subject such as this, but a conflict-averse moderation policy is the easiest way to keep things tidy.
(According to GayGamer, the poster did post the same message several times, which does account for the "spam" part of the moderation action. And it's certainly something I'd warn for in the same position. The other two, however, speak to a busted system.)
As I said above, it's usually appropriate to assume human error is at the heart of issues like this. It's unfortunate that this happened, yes -- but from where I stand it has way more to do with a generally outmoded approach to forum moderation than it has to do with either a company or a moderator seeking to censor.
Do we have a right to be outraged? Sure. At the end of the day it still boils down to a developer refusing to engage with players on something that matters to them. That refusal has trickled down into a blanket moderation policy that in this case has hurtfully told a user their request for equality is on par with the discrimination being perpetrated against them. It's an accidental collision of risk-aversion and bad moderation, but it's still hurtful.
That said, when I see moderation notices like this, I don't see malice so much as a garbage string of words like something out of Cleverbot: something in the algorithm has gone awry. Forums have gotten too big and mods are still so frequently undersupported that if you get a response from a live human at all, it's still likely to be at least partly automated. And sadly, even in 2013, from a top-down corporate perspective community engagement is just part of an overarching social media strategy. And speaking of that...
Developers, mods aren't just a small cog in your overarching social media strategy.
Moderation and forum tools are often the items of least concern when building your game's website. Oftentimes it's enough trouble just to launch the damn thing. I understand that. But if you've reached the level of stability where community engagement is a thing you're able to do, please commit to it. From a development end, community teams might be an afterthought, but from a player's perspective, your community people are the face of your game.
If you're serious about meeting your players halfway, you need to meet your moderation and community team halfway as well. If tools or policies are inflexible or out of date, you need to prioritize changes -- and not bandaid fixes, either. Ask your lead moderators what they need and try to see it from their point of view. Otherwise, you get incidents like this -- an accident that spirals out of control and ends up reflecting on the entire company.
I can't speak to an insider's perspective on any game but my own, but from where I stand I've seen a few game forums headed in the right direction. As I mentioned above, Riot has been quite visible in turning around the League of Legends community and actively, programmatically dialing down player toxicity. It's inspired stuff. I'm also optimistic watching community leads like Jessica Merizan of BioWare take a stand for both player advocacy and player responsibility. At one of her many panels at GaymerX, she assured attendees that "big changes" were on the way for the BioWare Social Network. It would be great to see more community and moderation teams like these two looking to treat the problem rather than the symptoms.