The Immortal Myths About Online Abuse
After building online communities for two decades, we’ve learned how to fight abuse. It’s a solvable problem. We just have to stop repeating the same myths as excuses not to fix things.
May 27, 2016 · 6 min read
1. False: You can’t fix abusive behavior online.
Yes, you can. Most small communities on the web don’t have major problems with abuse, because they are well managed and well moderated. But the majority of marginalized people endure abuse online because the biggest social networks and media platforms are so bad at preventing it that abuse has become part of the everyday online experience for millions of people. The good news is that the same principles that allow small sites to run well can be applied to even the largest sites, with proper investment. It simply takes devoting the appropriate time, resources, and expertise to the problem.
2. False: Fighting abuse hurts free speech!
Because the vast majority of online abuse is directed at women, people of color, and members of other marginalized groups, the net effect of online abuse is to silence members of these communities. Allowing abuse hurts free speech. Communities that allow abusers to dominate conversation don’t just silence marginalized people; they also drive away any reasonable or thoughtful person who’s put off by that hostile environment. Common sense tells us that more people will feel free to express themselves in an environment where threats, abuse, harassment, or attacks aren’t dominating the conversation.
3. False: Software can detect abuse using simple rules.
Unfortunately, the technology behind social networks doesn’t work that way. The same action in an app can be harmless or destructive, depending on the context in which it’s done and the power and social standing of the people on either end. For example, software might be able to detect that someone has posted a person’s legal name, but it can’t easily tell whether a community was identifying someone who had been harassing or mistreating others, or naming a person in order to harass them. One is accountability, the other is abuse, and almost no organization can afford to develop software sophisticated enough to make that distinction. Of course, a human could usually understand this distinction in moments, but that requires a human to be assigned to the task. Unfortunately, since most online platforms are made by people who love technology, they tend to reach for technology as the solution to this definitively human problem.
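To make that concrete, here is a minimal sketch of the kind of simple rule described above. Everything in it is hypothetical and invented purely for illustration (the function name, the example posts); it isn’t drawn from any real platform’s systems. The rule fires identically on a post holding a harasser accountable and a post directing harassment at someone, because the surface pattern it matches carries none of the context that separates the two:

    import re

    def flags_name_mention(post: str, legal_name: str) -> bool:
        # Naive rule: flag any post that contains the person's legal name.
        return re.search(re.escape(legal_name), post, re.IGNORECASE) is not None

    # One post is accountability, the other is targeting for harassment,
    # but the rule cannot tell them apart.
    accountability = "Jane Doe has been sending threats to members of this forum."
    harassment = "Everyone go find Jane Doe and make her regret posting here."

    print(flags_name_mention(accountability, "Jane Doe"))  # True
    print(flags_name_mention(harassment, "Jane Doe"))      # True

Distinguishing the two requires understanding intent, history, and power dynamics, which is exactly the context a pattern match throws away.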
4. False: Most people say “abuse” when they just mean criticism.
The vast majority of online abuse goes unreported; most users will only report an action if it’s extremely egregious, part of an ongoing or large-scale campaign, or presents a particularly urgent danger. Despite how reluctant targets are to report abuse at all, the reaction when they do so is often skepticism or denial from people who aren’t themselves targets of online abuse. These skeptics see the other, less-harmful messages that are merely critical or insulting and conclude that the targeted person is overreacting to messages that are merely annoying. This is compounded by the fact that social platforms will often hide a potentially threatening or harmful message while they investigate reports of abuse. The net result? Skeptics only see the messages that weren’t worth reporting, and they use that to justify doubting the reports. Worse still, many of the most dedicated abusers online know how to shift from one medium to another, so a campaign that starts on one platform can lead to abuse on a different platform, making the full context of the abuse hard to recognize.
5. False: We just need everybody to use their “real” name.
One of the most common reflexive solutions to abuse is to call for the use of “real names”. This call usually comes from people with little experience in managing large-scale online communities. Those who do run such systems can attest that an enormous amount of abuse is carried out by people acting under their legal names; this is possible because many abusive behaviors can be extremely destructive without actually being illegal. (See #7.) For dedicated trolls, it’s also usually not very hard to create a name that seems “real”, which they can use for their attacks. For vulnerable people, using a legal name can make them targets for stalkers or others they are trying to avoid, or can force them to retain an identity that is no longer theirs. Worse, even if a user does want to use their legal name on a service, it can be almost impossible for software to verify someone’s real name in most common social apps. While persistent identities (pseudonyms) can be a useful tool for making a more accountable community when appropriate, legal names do very little to reduce abuse in large-scale communities.
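Even the mechanical step of “capturing” a real name is harder than it sounds. Here is a minimal sketch of a naive validator; the function is hypothetical, invented for illustration, and not any platform’s actual check. It rejects perfectly legal names while happily accepting invented ones:

    import re

    def looks_like_real_name(name: str) -> bool:
        # Naive rule: exactly two capitalized, Latin-alphabet words.
        return re.fullmatch(r"[A-Z][a-z]+ [A-Z][a-z]+", name) is not None

    print(looks_like_real_name("Jane Smith"))  # True  -- and trivially fakeable
    print(looks_like_real_name("Sukarno"))     # False -- a legal mononym, rejected
    print(looks_like_real_name("毛泽东"))       # False -- a legal name, rejected
    print(looks_like_real_name("Ima Fake"))    # True  -- invented, accepted

Any rule of this kind excludes real people (mononyms, non-Latin scripts, hyphenated or apostrophized names) while doing nothing to stop a determined troll from typing in something plausible.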
6. False: Just charge a dollar to comment and that’ll fix things.
Charging people to participate doesn’t stop abuse; it just limits your community to people with disposable income, curtailing the expression of well-intentioned people who don’t have extra money lying around. There are plenty of people who carry out abusive behaviors online and will spend money to do so; they’re often already investing an enormous amount of time in their antisocial behaviors, so a small fee doesn’t act as much of a deterrent. And even an inordinately high fee won’t help: we now have millionaires knowingly leading some of the worst communities of abusers and harassers online, and they won’t be deterred by fees.
7. False: You can call the cops! If it’s not illegal, it’s not harmful.
As discussed in “What is Public?”, there are lots of behaviors — even including some kinds of doxing — that aren’t illegal (or whose legality is unclear) but that can have life-ruining effects on victims. Even when actions are clearly illegal, very few law enforcement organizations take online abuse seriously, or have staff trained in how to help fight it. Making a case to law enforcement requires victims to spend enormous amounts of time and money documenting the abuse that they’ve endured, usually to very little effect. And as has been amply documented, remedies that rely on law enforcement demonstrate all the problems that we see in offline interactions with law enforcement, including an overwhelmingly ineffective track record in actually reducing abuse or prosecuting abusers.
8. False: Abuse can be fixed without dedicated resources.
Perhaps the most pervasive myth amongst people creating communities online is the idea that addressing abuse is a matter of simply fixing a few technological bugs. Abuse, harassment, threats, and attacks are common and ever-evolving problems. Yet unlike systemic issues such as service downtime or content creation, which get dedicated staff and budgets, keeping a community healthy rarely gets proper resources from the technology and media companies that host communities online. Depending on the size of the community, it requires people specifically tasked with moderating it, staying up to date on the larger social and cultural issues that drive abuse, and learning from other communities about the threats they face. This, of course, has to be matched with the appropriate technological resources to build systems that protect and empower targets or potential targets.
If your website (or app!) is full of assholes, it’s your fault.
The bottom line, as I wrote half a decade ago, is that if your website is full of assholes, it’s your fault. Same goes for your apps. We are accountable for the communities we create, and if we want to take credit for the magical moments that happen when people connect with each other online, then we have to take responsibility for the negative experiences that we enable.
Our communities are defined by the worst things that we permit to happen. What we allow tells the world who we are.
After so many years and so many conversations about this problem, it’s frustrating to see these basic fallacies keep creeping back into the conversation. Sometimes they come from those who seek to enable abuse; obviously, we can safely dismiss the people who support awful behaviors online. But at times, we hear these arguments from otherwise-reasonable people who are simply uninformed about the lessons we’ve learned over the past 20 years.
Too often, decisions about our online communities are being made by those who aren’t familiar with the discussion that’s come before. Perhaps if we can ensure that the well-intentioned aren’t repeating the hoariest and least accurate clichés that stand in the way of addressing abuse online, we can finally make some real progress.
I’m the cofounder of Makerbase, a community for people who make apps and websites. Join us!