Riot experiments with permabans to stamp out toxic behavior
Toxic online behavior is an insidious problem, one that Riot Games is working to stamp out by testing a new player banning system in League of Legends that -- according to Riot's Dr. Jeffrey Lin -- can permanently ban players for "extreme toxicity" such as griefing and the use of racial or homophobic slurs.
This is worth paying attention to because Riot is one of the most recognizable game companies proactively dealing with toxic players in its community. The company has earned a reputation for systemically reinforcing positive behavior among League of Legends players, but it has also banned professional players for extremely toxic behavior.
Now, Riot is experimenting with a new toxicity-seeking League of Legends ban system that is partially automated (Dr. Lin refers to it as a "machine learning approach") with input from Player Support reviews. It's designed to immediately ban players who engage in extremely toxic behavior, and those bans can last for two weeks or forever -- though Dr. Lin claims all permabans will be reviewed by Riot.
If a player disputes a permanent ban issued by this new system, Riot is committed to publicly posting the chat logs that led to the ban.
Issuing permabans and publicly shaming toxic players seems at loggerheads with much of what Dr. Lin has said in the past about Riot's efforts to improve the tenor of its player community through positive reinforcement and social experiments -- efforts that he spoke about at length in his GDC 2013 talk. In his recent reddit post, he acknowledges the success of those reform programs, but claims that stronger measures are necessary.
"For most players, reform approaches are quite effective," wrote Dr. Lin. "But, for a number of players, reform attempts have been very unsuccessful which forces us to remove some of these players from League."