Designing decentralized moderation

Jay Graber
6 min read · Jan 21, 2021

In decentralized social networks, communities can set their own moderation policies, but what tools are available for enforcement? There is no central authority to make network-wide moderation decisions. Instead, content is filtered through an interplay of interfaces, algorithms, social consensus, and protocol constraints. This post looks at technical approaches to decentralized moderation, and points out areas that could use more experimentation and research.

Every social network needs ways to filter out spam and unwanted content

Email is one of the oldest and most successful decentralized protocols to learn from. Nearly 85% of email sent is spam, but the ecosystem has evolved to hide most of this from users. Among newer decentralized social protocols, Mastodon (2.9 million users) and Matrix (18 million users) have reached the scale at which moderation issues begin to emerge in earnest, and as a result they have done significant work to mitigate abuse. Many p2p social networks are tiny by comparison, but in the absence of server-level controls, they’ve experimented with novel moderation methods. (For a comparison of federated and p2p network architectures, see my post on decentralized social networks.)

Some decentralized moderation strategies I’ll cover:

  • Third-party tooling
  • Relative reputation systems
  • Machine learning solutions
  • Economic deterrents
  • Blocklists/Allowlists

Supporting third-party tooling

Decentralized networks can provide a developer experience equivalent to locking open the APIs of a centralized platform, guaranteeing a standard way to access and exchange data. However, applications built on decentralized networks that become popular still need to provide usable APIs to encourage other developers to build for them. The need to support the development of third-party tooling for moderation has emerged in both Mastodon and Matrix.

Mastodon added an API to the admin interface in 2019 so that third-party tools could be built to help server admins deal with harassment and spam. Matrix built Mjolnir, a moderation bot that runs separately from the server implementation, to assist server admins with moderation tasks.

Any new decentralized social application should anticipate this requirement and design its APIs with pluggable moderation tools in mind.
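
As a rough illustration, here is a minimal sketch of what a pluggable moderation hook could look like. The report fields and handler interface are my own assumptions for the example, not the actual Mastodon admin API or Mjolnir’s interface.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical report object a server might hand to third-party moderation tools.
@dataclass
class Report:
    reporter: str   # account that filed the report
    target: str     # account or post being reported
    category: str   # e.g. "spam", "harassment"
    comment: str    # free-text context from the reporter

# A pluggable moderation pipeline: the server exposes a stable hook,
# and third-party tools register handlers against it.
class ModerationHooks:
    def __init__(self) -> None:
        self._handlers: List[Callable[[Report], None]] = []

    def register(self, handler: Callable[[Report], None]) -> None:
        self._handlers.append(handler)

    def dispatch(self, report: Report) -> None:
        for handler in self._handlers:
            handler(report)

# Example third-party tool: forward reports to an external admin dashboard.
hooks = ModerationHooks()
hooks.register(lambda r: print(f"[dashboard] {r.category} report against {r.target}"))
hooks.dispatch(Report("alice@a.example", "spammer@b.example", "spam", "unsolicited ads"))
```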

Relative reputation systems

Centralized social networks can establish a single source of truth for reputation on the platform. In decentralized social networks, a single source of reputation is neither feasible nor particularly desirable. Decentralized networks are more likely to use relative reputation systems, which differ based on the user’s position in the network.

Matrix is currently experimenting with a relative reputation system that allows anyone to produce subjective scores on network entities or content, published as a reputation feed. Users can combine these feeds in any way to produce their own reputation scoring system. A UI is available to visualize and toggle the filtering. Their first pass at this approach, shareable binary banlists, has already been in use for a year.
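
To make the idea concrete, here is a minimal sketch of how a client could combine several subjective reputation feeds into one score per entity using user-chosen weights. The feed format and weighting scheme are assumptions of mine, not Matrix’s actual data model.

```python
from collections import defaultdict
from typing import Dict

# Hypothetical reputation feeds: each publisher assigns subjective scores
# in [-1.0, 1.0] to entities (users, rooms, servers).
feeds: Dict[str, Dict[str, float]] = {
    "@moderator:example.org": {"@spammer:evil.example": -1.0, "@alice:a.example": 0.8},
    "@friend:b.example":      {"@spammer:evil.example": -0.6, "@bob:c.example": 0.5},
}

# User-chosen weights express how much each feed is trusted.
weights = {"@moderator:example.org": 0.7, "@friend:b.example": 0.3}

def combined_scores(feeds, weights):
    """Weighted average of the scores each subscribed feed assigns to an entity."""
    totals, weight_sums = defaultdict(float), defaultdict(float)
    for publisher, scores in feeds.items():
        w = weights.get(publisher, 0.0)
        for entity, score in scores.items():
            totals[entity] += w * score
            weight_sums[entity] += w
    return {e: totals[e] / weight_sums[e] for e in totals if weight_sums[e] > 0}

# Entities scoring below a user-chosen threshold could be hidden or flagged in the UI.
print(combined_scores(feeds, weights))
```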

Web-of-trust systems are another way to filter based on relative reputation. A web-of-trust determines trust from a user’s social graph. For example, interactions may be limited to users within n degrees of separation. Secure Scuttlebutt is built on a web-of-trust: only data from nodes within a specified degree of proximity gets stored on a user’s device. Iris is another p2p social network that uses a web-of-trust to filter content; accounts that a user upvotes become their first-degree web-of-trust connections.
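
The degrees-of-separation idea reduces to a graph traversal. Below is a sketch using breadth-first search over a toy follow graph to find every account within n hops, which a client could then use as its storage or visibility filter; the graph and account names are invented.

```python
from collections import deque

# Toy follow graph: each account maps to the accounts it follows/trusts.
follows = {
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "bob":   ["dave"],
    "carol": ["eve"],
}

def within_hops(graph, root, max_hops):
    """Return all accounts reachable from `root` within `max_hops` degrees of separation."""
    seen = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return set(seen) - {root}

# With a 2-hop web-of-trust, content from "eve" (3 hops away) is neither stored nor shown.
print(within_hops(follows, "me", 2))  # {'alice', 'bob', 'carol', 'dave'}
```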

Centralized platforms such as Twitter can perform user verifications, providing signals such as a blue checkmark. These kinds of verifications are not possible in a decentralized network, since there is no central authority. Instead, user identities can be established by linking together multiple existing identity sources. Mastodon cross-references links a user puts on their profile to confirm that they are the owner of that site, and displays a checkmark if they are. Matrix allows users to link third-party identifiers, such as an email address or phone number, to their Matrix ID. There are not yet services for decentralized social networks that provide verifications for users, but this kind of reputation service could be created. Third-party services could attest to attributes of user identities that they have confirmed, and servers could opt in to using attestations from services they trust.
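
Such an attestation could look something like the sketch below. The record format and trust policy are my own assumptions rather than an existing service, and a real implementation would verify cryptographic signatures instead of trusting issuers by name alone.

```python
from dataclasses import dataclass

# Hypothetical attestation: a third-party service vouches that it has verified
# some attribute of an account (an email address, a phone number, a domain).
@dataclass(frozen=True)
class Attestation:
    issuer: str     # service that performed the verification
    subject: str    # account being attested
    attribute: str  # e.g. "email", "phone", "domain"
    value: str

# A server opts in to attestations only from issuers it trusts.
TRUSTED_ISSUERS = {"verify.example.org"}

def accepted(attestation: Attestation) -> bool:
    return attestation.issuer in TRUSTED_ISSUERS

a = Attestation("verify.example.org", "@alice:a.example", "domain", "alice.dev")
print(accepted(a))  # True: the server could display a verification badge for @alice
```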

Relative reputation systems are still evolving, and there will likely not be one system that works for everything, so designing for multiple reputation sources to be combined is important for forward-looking decentralized networks.

Machine learning solutions

Centralized social platforms, as well as popular email clients such as Gmail, use methods such as Bayesian filters and machine learning to combat spam and filter content. These strategies rely heavily on centralized training data. In decentralized social networks, it is unclear how to produce these results without centralizing around large providers like Gmail. As it stands, spam in federated social networks is still a difficult problem, and machine learning-based solutions are underutilized. Mastodon maintainer Eugen Rochko has offered to pay anyone who comes up with a spam detection implementation for Mastodon.
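
The kind of Bayesian filtering mentioned above is straightforward to sketch. The snippet below is a bare-bones naive Bayes word model with invented training messages, nothing like a production filter, but it shows why these methods depend so heavily on training data.

```python
import math
from collections import Counter

# Tiny naive Bayes spam classifier, the kind of Bayesian filtering email
# clients popularized. Real filters use far more data and features.
spam_msgs = ["buy cheap followers now", "cheap pills buy now"]
ham_msgs  = ["lunch at noon tomorrow", "notes from the meeting"]

def train(messages):
    counts = Counter(word for msg in messages for word in msg.lower().split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_msgs)
ham_counts, ham_total = train(ham_msgs)
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    """Log-odds that a message is spam, with add-one smoothing per word."""
    score = 0.0
    for word in message.lower().split():
        p_spam = (spam_counts[word] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[word] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("buy cheap followers"))  # positive: looks like spam
print(spam_score("notes from lunch"))     # negative: looks legitimate
```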

Some ideas for creating global filters while protecting user privacy:

  • Network-wide content aggregators that curate content across servers, as well as provide spam and moderation signals back to servers as a service
  • Opt-in spam filter services to which users voluntarily submit messages when they move them to trash
  • Training methods that operate over private datasets, such as federated learning (sketched below)
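
Federated learning, very roughly: each server trains on its own users’ spam reports and shares only model parameters, which are averaged into a shared model. The sketch below uses a toy linear model and made-up per-server weights just to show the averaging step; it is not a full training loop.

```python
# Toy federated averaging: servers train locally on private spam reports and
# share only model weights, which are combined into a global spam model.
# The weight vectors here are invented stand-ins for real model parameters.

def federated_average(local_weights, sample_counts):
    """Average per-server model weights, weighted by local dataset size."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Three servers report locally trained weights without revealing user messages.
server_weights = [[0.9, -0.2], [0.7, -0.1], [1.1, -0.3]]
server_samples = [500, 120, 900]

global_model = federated_average(server_weights, server_samples)
print(global_model)  # the new shared spam model, pushed back to every server
```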

A service that aggregates data across a decentralized social network and produces superior content filtering and curation would occupy a position similar to Google/Gmail in the web/email protocol ecosystem. At that point, the challenge for decentralization advocates would be to prevent lock-in and market dominance of such a player.

Economic deterrents

Beyond having a good detection system, another way to prevent spam and unwanted content is to make it computationally or financially expensive to produce.

Adding computational requirements to message sending can create a bottleneck for spam. Hashcash, a proof-of-work algorithm that inspired Bitcoin’s mining system, was originally proposed as a way to limit email spam. It attached “stamps” to emails: proofs that are computationally expensive to produce but cheap to check. Aether, a p2p Reddit-like social network, requires the user’s computer to perform a similar computation with each post to limit mass spamming.
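
Here is a minimal proof-of-work stamp in the hashcash spirit, under the simplifying assumption that a valid stamp is a hash starting with a fixed number of zero hex digits; real hashcash uses a specific header format and counts leading zero bits.

```python
import hashlib
from itertools import count

DIFFICULTY = 4  # leading zero hex digits required; higher means slower to mint

def mint_stamp(message: str) -> int:
    """Find a nonce such that sha256(message:nonce) starts with DIFFICULTY zeros."""
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce

def check_stamp(message: str, nonce: int) -> bool:
    """Verification is a single hash, far cheaper than minting."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce = mint_stamp("hello fediverse")         # costs ~65,000 hash attempts on average
print(check_stamp("hello fediverse", nonce))  # True, verified instantly
```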

Decentralized social networks that have integrated cryptocurrencies can easily add micropayments to user actions. An extreme example of this is Twetch, which charges a few cents each time a user creates a post, likes a post, or follows another user, among other actions. A less heavy-handed version of this mechanism could be to charge users only when they DM people who don’t follow them, or to impose a charge when a post they created is deleted by a moderator.
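
A per-action fee schedule along these lines could be as simple as the sketch below; the amounts and action names are invented for illustration.

```python
# Hypothetical fee schedule: charge small amounts for actions that are cheap
# to automate at scale, and keep ordinary use effectively free.
FEES_USD = {
    "post": 0.02,
    "dm_non_follower": 0.05,      # message someone who doesn't follow you
    "post_removed_by_mod": 0.25,  # charged only if a moderator deletes the post
}

def charge(balance: float, action: str) -> float:
    """Deduct the fee for an action, refusing it if the balance is too low."""
    fee = FEES_USD.get(action, 0.0)
    if balance < fee:
        raise ValueError("insufficient balance for this action")
    return balance - fee

balance = charge(1.00, "dm_non_follower")
print(round(balance, 2))  # 0.95
```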

Imposing costs on spam and unwanted content is a mechanism that is underused in some networks and overused in others. A decentralized social application that strikes the right balance would have a powerful way of disincentivizing bad content.

Blocklists/Allowlists

When all other attempts at implementing moderation in a decentralized network fail, the fallback is often an allowlist. Blocklists, which attempt to screen out bad actors, are a first line of defense, but they lead to cat-and-mouse games that require constant vigilance: bad actors can simply spin up new servers or create new accounts to evade them.

If the goal is to achieve complete network decentralization, ending up with a managed allowlist is an undesirable outcome, since it can be biased against new entrants. But if decentralization is simply a means of achieving a more diverse and competitive ecosystem than a centralized alternative, it can be an acceptable compromise. Nobody can be prevented from participating in a decentralized network, since by design anybody can implement the protocol, but maintaining an allowlist can enforce a social contract around who is officially considered a member of the network.

Mastodon’s Server Covenant is a form of allowlist: servers that want to be listed on the joinmastodon.org site must adhere to certain standards. Individual servers in the Mastodon network may also maintain allowlists that determine which other servers they connect to, and users can choose to block entire domains from their view.
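
In code, a server-level federation policy along these lines might be little more than two sets; the domain names below are placeholders.

```python
# A server-level federation policy combining an allowlist and a blocklist.
ALLOWLIST = {"a.example", "b.example"}  # if non-empty, only these servers federate
BLOCKLIST = {"spam-farm.example"}       # always refused, checked first

def may_federate(domain: str) -> bool:
    if domain in BLOCKLIST:
        return False
    if ALLOWLIST:                       # allowlist mode: unknown servers are refused
        return domain in ALLOWLIST
    return True                         # open federation if no allowlist is set

print(may_federate("a.example"))           # True
print(may_federate("new-server.example"))  # False while an allowlist is in place
```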

For email, there are public blocklists of servers that have been caught sending spam. The most popular blocklists are run by companies such as Spamhaus and SpamCop that specialize in preventing spam. If decentralized social networks get big enough, similar services could emerge.

Allowlists and blocklists almost always end up being used to some degree, so proactively designing them into the network’s governance structure can head off problems down the road.

This is not an exhaustive overview of decentralized moderation strategies, but covers some things I’ve been thinking about. I may add to it over time. Are there decentralized moderation strategies you find particularly interesting or compelling? I’m always happy to discuss, reach out at @arcalinea.
