A better moderation system is possible for the social web
The Fediverse has a bit of a moderation problem. In fact, it’s had one for quite a while.
When I wrote the first draft of the ActivityPump spec back in 2014, it had an Authorization section which began like this:
Authorization
This is a stub, to be expanded. OAuth 2.0 is an open question.
ActivityPump uses authorization for two purposes; first, to authenticate clients to servers, and secondly in federated implementations to authenticate servers to each other. These methods are based upon the OAuth 2.0 authorization framework as specified in RFC6749.
It continued on, but it was broadly a sketch. I was hoping we’d elaborate some more on this as a working group; instead, we spent 4 years going around in circles arguing with each other about minor details. The SocialWG was a working group divided; if you ever wondered why it produced two families of incompatible social networking protocols, this is why! (and also one of the most perfect examples of the failure modes of standardization bodies)
At the end of the process, I’m pretty sure that everyone involved was exhausted. Four years is a long time to spend working on one thing, never mind a thing with such a tumultuous development cycle.
So if there’s something that’s obviously missing from ActivityPub that you think should be there, here’s your rationale.
An aside on working groups and standard bodies
From this you might conclude that the standards process is fundamentally broken; I think I’d say it can be, but it doesn’t have to be. The biggest problem we had is that we effectively had two factions within the working group, each looking for the legitimization of their preferred solution in the form of adoption as a W3C standard, and with some fundamental and irreconcilable differences of opinion with regards to approach.
Despite all of this, I’d be happy to dive back into a working group chartered specifically to make a specific set of improvements to, or more generally maintain, ActivityPub. Having that more defined scope vastly reduces the risk of the kinds of fracturing we had before.
Anyway, back to moderation
This hell we’re in
The basics of ActivityPub are that, to send you something, a person POSTs a message to your Inbox.
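To make that concrete, here’s a minimal sketch of what federated delivery amounts to, using Python and the requests library. The actor, note, and inbox URLs are made-up examples, and I’m deliberately omitting the HTTP Signature that servers like Mastodon require in practice:

```python
import json
import requests  # third-party "requests" library

# A minimal Create activity wrapping a Note. All IDs and URLs here are
# invented examples, not real endpoints.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.social/activities/1",
    "actor": "https://example.social/users/alyssa",
    "to": ["https://other.example/users/ben"],
    "object": {
        "type": "Note",
        "id": "https://example.social/notes/1",
        "attributedTo": "https://example.social/users/alyssa",
        "content": "Hi Ben!",
    },
}

# Delivery is "just" an HTTP POST of that JSON to the recipient's inbox.
# (Real-world servers additionally expect an HTTP Signature header,
# which this sketch leaves out.)
resp = requests.post(
    "https://other.example/users/ben/inbox",
    data=json.dumps(activity),
    headers={"Content-Type": "application/activity+json"},
)
resp.raise_for_status()
```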
This raises the obvious question: Who can do that? And, well, the default answer is anybody. You might think “Wait, isn’t that a bit like E-Mail? Why on earth don’t we have an enormous spam problem?” and my answer to that is: you know, I just don’t know. I guess implementing AP has been too much effort for spammers up until now? Perhaps it has just been easier to sign up for accounts on open Mastodon installs?
Anyway, this means that we’re now dealing with things using the crude tools of instance blocks or allow-list based federation; and these are tools which are necessary, but not sufficient.
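Roughly speaking, that crude tooling boils down to something like the following sketch (the instance lists and the policy flag are invented for illustration; real servers keep these in their database):

```python
from urllib.parse import urlparse

# Invented example data, standing in for a real server's configuration.
BLOCKED_INSTANCES = {"spam.example", "harassment.example"}
ALLOWED_INSTANCES = {"friends.example", "cozy.example"}

def may_deliver(actor_id: str, allowlist_mode: bool = False) -> bool:
    """Crude instance-level gatekeeping: block-list by default,
    or allow-list-only federation when allowlist_mode is set."""
    domain = urlparse(actor_id).hostname or ""
    if allowlist_mode:
        return domain in ALLOWED_INSTANCES
    return domain not in BLOCKED_INSTANCES

# e.g. may_deliver("https://spam.example/users/mallory") -> False
```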
The hell of shared blocklists
Shared blocklists are the most commonly suggested solution; there are even crude tools for implementing them today.
They can help, but they can also cause enormous harm. Back in 2014/15, during the Gamergate harassment campaign, there was a fairly popular Twitter blocklist, widely used by the gaming media, purporting to block the perpetrators of said campaign. It also blocked a bunch of random people - particularly trans women - whom the creator had previously gotten into arguments with, because it was based upon their personal block list. Those caught in the crossfire found their ability to reach out to journalists and e.g. promote games that they were developing substantially curtailed.
I don’t mean to cast aspersions on the creator here; I don’t think this was done with ill intent, and I’m especially willing to give benefit of the doubt with regards to personal blocks made by someone under continual attack. However, the collateral damage was sizable.
The trust one must place in the creator of a blocklist is enormous, because the most dangerous failure mode isn’t that it doesn’t block who it says it does, but that it blocks who it says it doesn’t and they just disappear.
I’m not going to say that you should not implement shared blocklist functionality, but I would say that you should be very careful when doing so. Features I’d consider vitally important to mitigate harms (sketched in code after the list):
- The implementation should track the source of any blocks; and any published reason should also be copied
- Blocklists should be subscription based - i.e. you should be subscribing to a feed of blocks, not doing a one-time import
- They should handle unblocking too - it’s vitally important for a healthy environment that people can correct their mistakes
- Ideally, there would be an option to queue up blocks for manual review before applying them
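As a rough illustration of those properties, here’s a hedged sketch in Python. The data model, the helper names, and the JSON feed format are all invented for this post; they aren’t any existing standard:

```python
from dataclasses import dataclass
import json
import urllib.request

@dataclass
class BlockEntry:
    domain: str   # instance the block applies to
    source: str   # which subscribed list this block came from
    reason: str   # published reason, copied verbatim
    active: bool  # False once the source list retracts the block

def sync_blocklist(feed_url: str, pending_review: list) -> None:
    """Fetch a subscribed blocklist feed and queue changes for manual
    review rather than applying them immediately. The feed is assumed
    (purely for illustration) to be a JSON list of objects with
    "domain", "reason", and "retracted" keys."""
    with urllib.request.urlopen(feed_url) as resp:
        entries = json.load(resp)
    for entry in entries:
        pending_review.append(
            BlockEntry(
                domain=entry["domain"],
                source=feed_url,
                reason=entry.get("reason", ""),
                active=not entry.get("retracted", False),
            )
        )

# A moderator would then review `pending_review` and apply or discard each
# entry. Retractions ("retracted": true) flow through the same queue, so
# unblocking is handled rather than being a one-way door.
```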
That said, shared blocklists will always be a game of whack-a-mole.
Use our superpower
Why are we allowing random people to send us messages, anyway?
The answer, of course, is discovery - we want to meet new people - but perhaps letting any random person send us a message is not ideal.
In real life, we normally meet people either through introduction by mutual friends, or in community spaces that we both inhabit. It’s an inherently social discovery method.
And - hang on - we’re building a social network here! This feels perfect for us!
So here’s a really simple idea: We let trusted friends hand out “Letters of introduction” to people they trust, and we make these letters carry information about who handed them out, so when someone turns up who we really don’t like, we can tell our friend “dude, not cool”.
Now we just need to decide who we trust to introduce new people to us, and whom we are willing to vouch for (these are probably but not necessarily the same lists). This is going to come down to policy, at the end of the day, but some possible options include the following (I sketch a combination of the first two in code a little further down):
- Just an allow-list of people we trust, or
- Any instance that someone on our instance has been following someone from for more than N days, or
- Anything you can imagine
(We could probably even have multiple levels of trust, and pass those along inside our letters of introduction, so people can make informed decisions)
This is effectively an allow list system, but it’s an anarchic one which allows discovery. It’s not a foolproof way of stopping bad actors from getting into our feeds, but it should be a fairly effective one; they need to find somebody that we trust to introduce them. And, well, if someone keeps introducing their racist friends to you, you’re probably not going to stay friends with them for long (or at the very least you’re hopefully going to ban them from introducing new people to you).
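Here’s the promised sketch of such a policy check. The follow-tracking table, the 30-day threshold, and the helper name are assumptions I’m making for illustration; none of this is part of any protocol:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical record of the earliest follow from our instance to each
# remote domain. In a real server this would come from the database.
EARLIEST_FOLLOW = {
    "cozy.example": datetime(2023, 1, 10, tzinfo=timezone.utc),
}

TRUSTED_DOMAINS = {"friends.example"}  # the plain allow-list option
MIN_FOLLOW_AGE = timedelta(days=30)    # "more than N days", with N = 30

def may_introduce(domain: str, now: datetime = None) -> bool:
    """Decide whether actors on `domain` may hand out letters of
    introduction to us, combining the first two policy options above."""
    now = now or datetime.now(timezone.utc)
    if domain in TRUSTED_DOMAINS:
        return True
    first_follow = EARLIEST_FOLLOW.get(domain)
    return first_follow is not None and now - first_follow > MIN_FOLLOW_AGE
```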
Note that in this system we don’t share blocks; blocks become something which flows naturally from the fact that neither we nor any of our friends want to talk to someone. Manual moderation is still required when there isn’t unanimous agreement about someone; but these are the sorts of complicated cases which should be left to human judgement anyway.
For the curious, the technical underpinnings I’d go with for this system are Macaroons.
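To make that slightly more concrete, here’s a hedged sketch of what a letter of introduction could look like using the pymacaroons library. The caveat names, locations, and key handling are my own invention for illustration, not part of any spec:

```python
from pymacaroons import Macaroon, Verifier

# Our instance mints a "letter of introduction" on behalf of a friend it
# trusts. The key is a secret only we hold; the caveats record who vouched
# and what the letter permits. (The caveat names here are invented.)
SECRET_KEY = "replace-with-a-real-secret"

letter = Macaroon(
    location="https://our.example",
    identifier="letter-of-introduction-v1",
    key=SECRET_KEY,
)
letter.add_first_party_caveat("vouched-by = https://our.example/users/alice")
letter.add_first_party_caveat("capability = deliver-to-inbox")

token = letter.serialize()  # this string travels with the newcomer

# Later, when the newcomer presents the letter alongside their first
# delivery, we check that it was minted with our key and that its
# caveats hold.
presented = Macaroon.deserialize(token)
verifier = Verifier()
verifier.satisfy_exact("vouched-by = https://our.example/users/alice")
verifier.satisfy_exact("capability = deliver-to-inbox")
assert verifier.verify(presented, SECRET_KEY)
```

The useful property is that the voucher is baked into the token itself, so when someone unpleasant turns up, the “dude, not cool” conversation has an obvious addressee.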
Closing notes
- I am intentionally biasing systems towards inclusivity here; I’m willing to tolerate some degree of potentially avoidable harassment in order to avoid excluding people unintentionally. Other people may wish to draw the line somewhere else; these are my biases from having seen ostracism used as both an intentional and unintentional weapon.
- This is not the be-all and end-all of protocol evolution discussions we can or should be having around abuse. There are other issues (around e.g. reply threading) which we can and should close independently. A lot of those are simultaneously more straightforward and more technical, however (that is to say: they’re less about social dynamics and more about addressing specific aspects of the protocol).
- Are you an ActivityPub implementer who’d like to implement something along the lines of what’s in this document? Please, let’s talk and flesh this out!