Managing Hurt Feelings at Scale

How policing thought has scattered the social media Tower of Babel

Tim K
11 min read · Oct 23, 2018
“The more you tighten your grip, Tarkin, the more star systems will slip through your fingers.”

Big tech companies have taken a pretty drastic turn in the past two years. Google’s motto, “don’t be evil,” has quietly disappeared. Twitter, previously the “free speech wing of the free speech party,” has cracked down on harassment and “hate” — whatever that means. It seems they’re forsaking what originally made them great in search of something different. What has prompted this change, and how is society reacting to it?

As with most of my reactionary articles, I don’t have much of a solution; I just want to understand the problem better. I think that if we understood how “social media” worked in the past, it would shed some light on where we are now. So without further ado…

Rules in the Days of IRC

The rules were often this brutal on any IRC server, not just Russian ones.

Communication in 1990 was vastly different from what modern messenger services like Facebook, WhatsApp, Discord, and Slack offer. Text-only interfaces made rudimentary graphics with ASCII characters. Buddy lists weren’t quite a thing, and in some instances the concept of accounts didn’t even exist, making identity even fuzzier.

The screenshot above is from EFnet, one of the oldest chat networks in existence, predating AOL Instant Messenger by seven years. When you connected to an IRC server, you would be greeted with a message of the day (or MOTD). These often contained administrative contacts, availability notices, alternate means of accessing the server, and most importantly, the server-wide rules. Most of the time, the rules were shockingly simple. As long as you weren’t attacking the service, the server operators didn’t care what you did, said, or thought.
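For the curious, here’s roughly what that handshake looked like on the wire. This is a minimal Python sketch, assuming a connection to irc.efnet.org on the standard port 6667; the nick is made up, and some networks add extra steps before registration completes.

```python
import socket

# Register a nick and print the server's MOTD (numerics 375/372/376).
# Server and nick are illustrative; any IRC network behaves similarly.
with socket.create_connection(("irc.efnet.org", 6667)) as sock:
    sock.sendall(b"NICK motdreader\r\nUSER motd 0 * :MOTD reader\r\n")
    buffer = b""
    done = False
    while not done:
        data = sock.recv(4096)
        if not data:
            break                                 # server closed on us
        buffer += data
        lines = buffer.split(b"\r\n")
        buffer = lines.pop()                      # keep any partial line
        for raw in lines:
            line = raw.decode("utf-8", errors="replace")
            parts = line.split(" ", 3)
            if parts[0] == "PING":                # keep-alive challenge
                sock.sendall(("PONG " + parts[1] + "\r\n").encode())
            elif len(parts) > 3 and parts[1] in ("375", "372", "376"):
                print(parts[3].lstrip(":"))       # the MOTD text itself
                done = parts[1] == "376"          # RPL_ENDOFMOTD
```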

Fear the sysop and his mighty ban hammer!

Each server was divided up into channels. If you wanted to discuss problems with your Macintosh, you could join the #macintosh channel. At the same time, you could check to see if #redskins was a channel. If it wasn’t, congrats! You’re the new channel operator! First come, first served!

As the channel operator, you got to make the rules — somewhat like the separation between federal and state governments. Perhaps #christianity didn’t tolerate profanity, or ad hoc debates with #atheism. Maybe it did. The point was that every channel got to decide its own rules of governance. If anyone misbehaved, the channel operators could banish them from the channel — but not from the server. You had to try a lot harder to earn that punishment.
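At the protocol level, those chanop powers were just a handful of commands. A hedged sketch, reusing the registered `sock` from the earlier snippet; the channel, nick, and ban mask are all illustrative:

```python
# Chanop moderation over an already connected, registered IRC socket.
def send(sock, line: str) -> None:
    sock.sendall((line + "\r\n").encode())

send(sock, "JOIN #redskins")                 # empty channel: you get +o
send(sock, "MODE #redskins +b troll!*@*")    # ban by nick!user@host mask
send(sock, "KICK #redskins troll :Take it to another channel")
```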

On a personal level, what could you do if a user harassed you? If it happened in a channel, perhaps the chanops would come to your rescue and kick or banish the offender. The easiest and most immediate solution, however, was to block the user yourself. Taking it to the sysops was akin to asking God to make a Ferrari appear in your driveway. A kind sysop might gently remind you that you could block the user on your own; a short-tempered one might knock you offline for trying to involve them in your petty dispute.
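The block itself lived entirely in your client, which is why it needed no one’s permission. Something like this, in spirit (nicks are made up):

```python
# Client-side /ignore: messages from blocked nicks are dropped before
# they ever reach your screen, no sysop required.
ignored = {"troll"}                 # stored lowercase for comparison

def display(nick: str, message: str) -> None:
    if nick.lower() in ignored:
        return                      # silently drop the message
    print(f"<{nick}> {message}")

display("Troll", "u mad?")          # dropped
display("friend", "hi there")       # shown: <friend> hi there
```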

The only pertinent problems for server administrators were keeping the service online and preventing abuse. If a miscreant was spamming advertisements or gibberish across many channels, the sysops would step in and banish them from the entire server. Especially bad users would spin up “clone accounts” to evade bans and blocks, which was particularly difficult to crack down on.
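Server-wide bans typically worked by matching glob patterns against each user’s nick!user@host mask, in the spirit of an IRC K-line. A small sketch of why clones were so slippery (the patterns and hosts here are invented): reconnect with a fresh nick or host, and the old masks no longer match.

```python
from fnmatch import fnmatch

# Glob-pattern ban masks checked against each user's nick!user@host.
bans = ["*!*@spam.example.net", "clone*!*@*"]

def is_banned(mask: str) -> bool:
    return any(fnmatch(mask, pattern) for pattern in bans)

print(is_banned("clone42!bot@203.0.113.9"))    # True: nick matches clone*
print(is_banned("alice!al@home.example.org"))  # False
# A clone just reconnects under a fresh identity and slips past:
print(is_banned("helper7!bot@198.51.100.2"))   # False: the masks miss it
```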

Nevertheless, there was an understanding in the days of IRC that has since been forgotten: your membership on the server (or channel) wasn’t a right, it was a privilege. If you were more of a burden to the administrators than a benefit, they could arbitrarily ban you, and there was no recourse.

If someone reported you for starting a flame war, trolling, or running a farm of clone accounts, you’d be lucky if you even got a trial. In most cases, the punishment was summary execution. The judgment was decidedly human: you might just catch the admin on a bad day. Perhaps the punishment would be over the top, but what recourse did you have?

Migrating to a Centralized Network

The Tower of Babel, Lucas van Valckenborch (1594). The Biblical story of humanity deciding to band together in one city, under the rationale of strength in numbers.

Since then, we’ve migrated from the “Wild West” of IRC to the Tower of Babel that is Twitter, Facebook, and Reddit. Many of IRC’s pain points, like identity verification, have been resolved. Twitter introduced a “blue checkmark” for public figures and celebrities to validate their identity for end users. You can also have friend lists, and post images.

Perhaps the biggest change was how content was presented. Instead of many small IRC channels, each with a defined topic and a small population, social media tore the walls down and put everyone in a single gigantic channel. This allows plebeians to rub shoulders with celebrities, tech titans, and politicians alike. There is no set topic, except for whatever people want to discuss that day.

Obviously, with the increase in scale, admins had to adapt how they handled users who abuse the system. Sites like reddit have come up with ingenious methods for handling misbehaving users. Instead of banning them outright, the admins “shadowban” them: the spammer thinks they’re reaching a wide community, but their posts are being silently dropped. Perhaps technological castration is more sinister than an outright ban, but no one seems to worry about it being used against spammers.
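The mechanics are simple enough to fit in a few lines. A minimal sketch of the idea, not reddit’s actual implementation: the shadowbanned author still sees their own posts, while everyone else’s view silently filters them out.

```python
# Shadowban filtering: the banned author's posts appear only to them.
shadowbanned = {"spammer42"}

def visible_posts(posts: list[dict], viewer: str) -> list[dict]:
    return [p for p in posts
            if p["author"] == viewer or p["author"] not in shadowbanned]

posts = [{"author": "spammer42", "text": "BUY NOW"},
         {"author": "alice", "text": "hello"}]
print(visible_posts(posts, "spammer42"))   # the spammer sees both posts
print(visible_posts(posts, "bob"))         # bob sees only alice's post
```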

What draws users’ ire is how Twitter, Facebook, and reddit handle users who break the more subjective laws — like being too offensive, or holding the wrong views. If IRC was an intimate book club, Twitter is more like walking down the streets of New York City. As such, the userbase’s expectations of the administration change. There’s no room for admins to have a bad day, as /u/spez did on reddit when he used his powers to edit Trump supporters’ comments, putting ridiculous words into their mouths.

Just like real-life expectations for state and federal government, users demand that social media have fair rules that are applied consistently. Slides have been leaked from Facebook’s moderation team showing a flow chart for deciding whether a post is inappropriate. The “personal touch” of IRC’s ban hammer has been replaced with the cold, inhuman machinery of a faceless moderation team.

So Mark, what’s considered “hateful”?

On the surface, nothing seems bad about demanding fair moderation. However, things changed after the 2016 election. The word on the street was that Facebook and Twitter were to blame for spreading “Fake News” that resulted in Trump’s election. Hearkening back to the days of IRC, Twitter and Facebook should have responded: “It’s your fault for believing random things you read on the internet.” Instead, the social media giants foolishly accepted responsibility and established a ridiculous new rule: no saying untrue things.

The insane rules have extended further in recent days. You may not hurt other people’s feelings — especially our favorite people’s. At one point, GitHub went so far as to explain in their “Code of Conduct” that they would only investigate reports of harassment on behalf of “marginalized people”. Anyone deemed sufficiently privileged would simply have to deal with matters on their own. (Since then, the CoC has been updated to be slightly less ridiculous.)

Despite “transparency reports” and other attempts to appear neutral and unbiased, public confidence in the social media administration is waning. The tighter they squeeze their grip, the more “nazis” and “racists” seem to slip through their fingers, while regular users are banned for absurd or political reasons.

But Free Speech is Only About the Government!

xkcd #1357 — CC BY-NC 2.5

Of course, Twitter is free to govern its users in any way it sees fit. They can strain out gnats and swallow camels to their heart’s content. One thing hasn’t changed since the days of IRC: your membership on Twitter is still “at will”. This point is stressed ad nauseam by victors of internet fights. (Just remember, folks: Twitter is free to ban whomever they want, but they picked my side because I’m right!)

The problem is that Twitter wants to give off the impression that its administration is fair, unbiased, and impartial — but it is anything but. The blue checkmark was supposed to be only about identity, yet Twitter has revoked it from users who exhibit wrongthink. Is the checkmark a sign of identity, or approval of content? Either case is fine — but don’t tell the users one thing and do another!

Users violating Twitter’s new content rules get a boilerplate email about the offending tweet, like a speed-camera ticket from an uncaring machine. Behind the curtain, though, we know there’s a short bald man pulling the levers. In an Orwellian twist, Twitter doesn’t actually delete tweets that violate their policy. The offending tweet is “quarantined”, and the account is locked down until the user deletes it.

This allows Twitter to claim they don’t delete tweets; they simply “reeducate” the user on appropriate behavior, and the problem solves itself! To me, however, this comes off as a forced confession of guilt. (Confess, heretic!)
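As best as outsiders can tell, the lockout flow looks something like the sketch below. The names are hypothetical, not Twitter’s actual internals: the tweet is hidden, the account is locked, and only the owner’s own deletion lifts the lock.

```python
# Hypothetical sketch of the quarantine flow described above.
class Account:
    def __init__(self) -> None:
        self.locked = False
        self.quarantined: set[str] = set()

    def quarantine(self, tweet_id: str) -> None:
        self.quarantined.add(tweet_id)    # hidden from everyone
        self.locked = True                # account unusable meanwhile

    def delete_tweet(self, tweet_id: str) -> None:
        self.quarantined.discard(tweet_id)
        if not self.quarantined:          # the "confession" unlocks it
            self.locked = False
```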

It’d be easier to swallow Twitter’s administration if Jack simply stood up and said, “Look, as benevolent dictator of Twitter, I decide what’s on the platform. You’re a tool, so: goodbye.” Even so, because of the size of Twitter, users wouldn’t be comfortable with that form of government.

It seems that sites like Twitter and reddit have simply hit critical mass. Rule by benevolent dictator does not scale, and the concept of electing a moderation team is ridiculous. While no one worries about spammers getting the axe, bans centered on hurt feelings raise the question: are the admins simply picking favorites, and gaslighting us when they deny it?

1492 All Over Again: The “New World” of Mastodon

The Puritans felt shunned in the Old World and set sail for America seeking a new life. Although a society without a strong centralized government posed significant risk, the freedom was alluring. In the same spirit, Twitter users are leaving for Mastodon, a new Twitter-like service that promises a “decentralized” frontier for social media.

Instead of a single website, Mastodon is split up into multiple websites, called instances. No single person decides what is permitted — each instance decides for itself. However, this doesn’t mean users are isolated to their instance: like email, your messages can be delivered across instances to other users.
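The addressing scheme makes the email analogy concrete. A simplified sketch, not the actual ActivityPub implementation (the inbox URL is illustrative): a handle names both the user and their instance, so a message is either delivered locally or pushed to the other instance.

```python
# Federated routing in the spirit of Mastodon: user@instance handles
# let independently run servers interoperate, like email.
def route(handle: str, home_instance: str) -> str:
    user, instance = handle.lstrip("@").split("@")
    if instance == home_instance:
        return f"deliver locally to {user}"
    return f"push to https://{instance}/inbox for {user}"

print(route("@alice@scholar.social", "mastodon.social"))
# push to https://scholar.social/inbox for alice
print(route("@bob@mastodon.social", "mastodon.social"))
# deliver locally to bob
```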

Mastodon’s Mascot

If you look at a Mastodon instance, it’s reminiscent of the MOTD banner from an IRC server. As an example, the scholar.social instance is dedicated to academic work, and makes special note of being safe for the LGBT community. More than likely, starting antivax debates or praising the Masterpiece Cakeshop Supreme Court decision on that instance will land you in hot water!

Still, there’s less cognitive dissonance in saying, “Certain things aren’t allowed here, based on the arbitrary whims of the administration. Your membership is at our leisure.” There are a number of other instances that would be happy to have you as a member, and you’d still be able to keep in touch with your friends on scholar.social.

This sounds great on the surface, but the Mastodon “fediverse” had a pretty rough PR blunder when Wil Wheaton left Twitter in its favor. His account lasted a short while before the user base revolted. All he did was sign up. The small admin team simply couldn’t handle the situation, and “asked” him to leave.

Despite Mastodon’s decentralized nature, the user base carried over an unrealistic expectation from Twitter that didn’t exist in the old days of IRC. Somewhere along the line, the role of administrators changed. It’s no longer enough to keep the spammers at bay and make sure the server stays online. As tools of “progress”, administrators must arbitrate truth, define morality, and protect our emotional well-being. Declaring what is true and what is good is simply an inhuman task. Management of spam bots can scale with the platform, but management of emotions is something no one but the end user can control.

Scattering the Tower of Babel

I think there’s a future for Mastodon and other federated protocols among users who want more freedom in how their social media is run. However, they should avoid repeating the mistakes that Twitter et al. have made. These points are just as much for end users as they are for administrators and protocol developers.

  • Administrators first and foremost must be concerned with the technical aspects of the server: maintaining availability and security, blocking spam, and optionally enforcing a topic.
  • Users must make use of the block tool when offended or harassed. The only time to involve a system administrator is when a user is abusing multiple accounts to evade bans.
  • Administrators must be careful moderating content. As several articles have observed, there are legal benefits to being considered a platform and avoiding responsibility for user content.
  • Users must remember old adages like “do not feed the trolls”. Racist troll posts don’t require a 60-minute, point-by-point “debunking” video by a fedora-wearing mouthbreather.
  • Users must depolarize their worldviews. Someone may disagree with you, but that doesn’t make them the second coming of Adolf Hitler. You don’t need to seek their destruction to ensure that all is right with the world.

Already, I can hear the cries. “But you don’t understand. Hate speech has real world consequences that hurt real people! If you aren’t protecting them, you’re helping nazis!” If you (or someone you know) are under a real physical threat, you need to contact your local police department, not the administrator of your Mastodon instance.

More likely than not, the “threat” users complain about is vague and specious. It’s insane to try to implicate the user (or the administrator!) in what amounts to the butterfly effect — that a post which never calls for violence will somehow cause a riot or murder somewhere else through unpredictable third-order effects. Bernie Sanders isn’t responsible for the congressional ballpark shooting, and Facebook isn’t responsible for Trump getting elected.

Smaller communities are a wonderful thing. Perhaps this is why people are migrating from Twitter to smaller Mastodon instances. As I’ve written earlier, if you want to try and change things for the better, start small. However, we saw how small communities treated Wil Wheaton. If Mastodon wants to succeed where Twitter has failed, it should focus on free speech — not only from the perspective of the administrators, but also the user base.


Tim K

Tim builds circuit boards in Virginia Beach, and enjoys writing about current events, history, theology, and philosophy.