Harmful, bigoted, or generally distasteful content isn’t as frequent on our site as it is elsewhere, but it still appears often enough that we sometimes get questions about why certain comments are left up. This post is meant to help you understand a bit about how we moderate.

As stated in our sidebar and in tons of other places on our site, Beehaw aspires to be(e) a safe, friendly, and diverse place. Because everyone who participates on Beehaw plays an important role in that mission, we stress our collective responsibility to be inclusive and interact with intention in our communities. Building this space goes deeper than how admins and mods exercise content moderation, though. It also depends on our culture, on our personal duty to one another, and - to some extent - on the community itself.

Leaving Things Up and the “Sanitized Space”

Moderation often sees us balance two things:

  1. people being able to see community support - especially when they’re used to seeing shitty behavior overtly supported, glossed over, or swept under the rug on other sites; and

  2. some people’s desire for a site where they never see unsavory comments at all (a “sanitized space”).

When moderating Beehaw, we prioritize the first; we understand the second, but generally disagree with it.

When browsing Beehaw, you may run across and report material that is not nice. Some comments are clearly hate speech and easy to deal with, but most material that isn’t nice is more ambiguous - often it’s an uninformed person with a bad, bigoted-sounding take. As a team of admins and moderators, a lot of what we do is discuss what crosses the line - what should be left up and what should be removed. Nearly every removal crosses anywhere from a few to dozens of moderator eyes before action is taken (it’s one of our most active chat channels). In many cases our community influences that decision. When the community puts forward a strong response against something, we might be more inclined to let a comment we’d otherwise remove stay up. If no such response happens, we’re much quicker to remove content.

Many of our users probably pay no mind to any of this. They implicitly trust us to make the right call, do not have a strong opinion on this balancing act, or agree with leaving the content alone in the first place. Some users do disagree with this moderation style, however - and we understand why. It can feel like we’re betraying the promise of a safe space, brave space, or whatever you wish to call our website.1 For shorthand purposes, and to fit the theme, we’ll call this position - wanting an online space completely free of bigotry and other distasteful content - a sanitized space.

To be clear: a sanitized space has its place. We are not disputing the overall utility of such spaces, and it’s fine to want one. For our purposes, however, this is neither possible nor desirable - we do not wish Beehaw to be a sanitized space.

In a sanitized space it’s really hard to know whether people are acting in good faith or bad faith. The act of sanitization also necessarily assumes bad faith - painful material is removed with no discussion. For our purposes, this is a problem: without discussion we sometimes cannot tell the difference between a willful bigot and someone with good intent who is just uninformed. This assumption of bad faith also leads to a zero-sum moderation style, where people can be permanently banned for a single comment because there is no room to gauge intent. By giving space for people to ask questions (and presuming those questions are at least reasonably well-intentioned), we are facilitating their growth as people - and hopefully, the growth of others in the community too.

This is not navel-gazing, either. We’ve already seen cases where someone changed their view because of informative comments or thoughtful pushback from our community. In one case, someone went back and edited their comment to say they were wrong and had learned something. We want to encourage scenarios like these! But they simply can’t happen in sanitized spaces, where even the slightest doubt about whether a poster is acting in good faith should be (and often is) cause for removal.

Aggressive sanitization has other drawbacks too. It can - unfortunately - encourage a persecution complex even in cases where that’s not warranted. Meaningful disagreement generally cannot happen in sanitized spaces either, and this makes it very difficult to demonstrate a community’s values. By necessity, these spaces also cannot be reflective of the world we live in - which is frequently very bigoted and hateful - and that can be to their detriment. If every bigoted or questionable take is swept under the rug, there’s no chance for the community to band together and show solidarity for people being marginalized or attacked.

All this is to say that a sanitized space can deter the community, collectively, from potentially beneficial dialogue. That kind of dialogue is part of what we want to encourage on Beehaw, and this is true even if no minds among the original participants are changed in the process. These conversations don’t happen in a vacuum, either: there are also observers to consider - Beehaw users, users from other instances, and people considering joining Beehaw. For these people, comment chains can be spaces for learning, and they can show (rather than tell) our community’s values in practice. They can also deter bigots; people who are Just Asking Questions (often referred to as JAQing) or sealioning; and other bad-faith participants from joining at all. And they can, of course, save us from having to repeat ourselves.

And for what it’s worth, we do recognize it’s really hard to moderate the way we want to - that is, by involving some community discussion in the process - without eventually tipping over into being too permissive of bad behavior and creating an environment which feels aggressive or negative.

User Responsibility

While most moderation obviously falls to us, you as a user have responsibilities in this domain too. Ultimately, you must take some responsibility for curating what you see on this website. We have thousands of active users; it is unrealistic and unfair to expect us to moderate everyone else based purely on one person’s discomfort or expectations. If you ever see questionable, upsetting, or otherwise not-nice content, please report it! Alerting us to material that might warrant discussion, even if no action is taken, helps us keep this place nice.

However, from a pragmatic standpoint: Beehaw is ultimately a public space - curated as it may be - and as a result you will be exposed to content you neither agree with nor feel comfortable with. This is perfectly fine. It is also fine to disagree with some of the things we allow here that you personally would not. These are, respectively, normal parts of existing in public spaces with other people and of using services that are not your own. But that also means you must sometimes be willing to compromise or take personal action to protect yourself - our moderation cannot and will not guarantee a completely agreeable space for you.

From a logistical standpoint: we simply cannot privilege your personal discomfort over anyone else’s, and we cannot always cater specifically to you and what you want. Your personal positions on right and wrong are not inherently more valid than someone else’s when we weigh most questions of how to moderate this space. There are often plenty of people who do not feel the way you do, and we must consider them in our moderation decisions too.

Different spaces exist for different people, and for good reason. If this space is not to your personal standards, you are welcome to leave - we genuinely don’t take it personally! This place is not for everyone, and we freely say so. A key selling point of federation - and part of why we’re on a federated service - is that you don’t have to register on our site to interact with us. You should exercise this option if and when you feel it necessary.

Community-Based Moderation

Building online communities isn’t a perfect science and there’s no single right way to go about it. Our moderation is just one effort towards making Beehaw safe, friendly, and diverse - but it is far from the only one. Community matters here too.

Moderators aren’t cops, and Beehaw’s communities aren’t our personal courtrooms. We aren’t (and can’t be) supreme or ideal authorities over what is and isn’t acceptable speech. We can generally feel when someone isn’t acting in good faith or being nice, and it’s obvious to us when someone is just spewing garden-variety bigotry. But for comments where the distinction between good and bad faith is unclear, community interventions matter just as much as mod action.

What we feel as moderators doesn’t always carry the same weight as strong community support or opposition. Just as an individual user’s sensibilities about moderation aren’t universal or infallible, our individual judgments as moderators are limited. That’s why, even when moderators haven’t been able to engage directly, community responses (through comments, reports, upvotes, etc.) fill the void. If our community and our users weren’t so clear about their attitudes towards certain speech, the modlog would be much more crowded. We can ban users or remove comments, but these are only small, individual actions - tools which don’t have the same thrust as community culture. As things are, we’re limited by Lemmy’s moderation tools, which aren’t (yet) built for greater scale.

Ultimately, our community building revolves around not only mod actions (e.g., removing the worst stuff or nudging people towards being nice) but also our efforts as users to curate our experiences and enact our community ethos across Beehaw. We’ve seen countless examples of an outpouring of community support in reply to questionable content, and we want this to happen regularly. A chorus of supportive voices often does more to demonstrate that ethos than any moderator action could or should. We’re a community, and we both want and need your involvement to make this space work.



  1. There are some overlapping as well as unique intentions behind terms like safe space, brave space, or accountable space, but they’re (mostly) beyond the scope of this philosophical piece. ↩︎

Last updated 10 Sep 2023, 13:37 -0400