Do you think I ever wanted a Trump tweet to exist in my content? Absolutely not. But here we are. Ugh…
In addition to the Justice Department’s recent proposal to reform the law, Trump is also asking the FCC to rewrite Section 230. Uh, what? Can they do that?
So let’s get into it! What I know about Section 230: Republicans think it allows Twitter and Facebook to be biased against conservative political views. Repealing (or revising) Section 230 would greatly change the way communities are moderated.
I wanted to go deeper, so I reached out to Patrick O’Keefe, who has been raising concerns about threats to Section 230 for some time, to help me out. Patrick has been building communities since ’98 and is the host of the podcast Community Signal.
Patrick, help me out, what's up with Section 230 and why should the community community care?
Section 230 is the legal basis for our work in the U.S., enabling us to moderate while ensuring that the liability for what is said remains with the author of the words. Imagine that you have a blog with comments. You write a new post, and I come and make some really stupid comments that prompt legal action. But instead of me facing consequences, you do. That’s the type of thing that Section 230 prevents.
Imagine if you were legally liable, in every conceivable way, for every piece of user-generated content that you did not author on your community, app, social network, website, blog, or any other space you manage. How would that change how you moderate? Instead of moderating to your policies, would you moderate more strictly so that you didn’t offend the powerful?
When I started building communities, I was in my teens. This is not an uncommon origin story for those of us who do this work. Something that is far more common among those who do this work, unfortunately, is that we receive threats. Vague threats, death threats and, yes, legal threats.
You start a community, and you do your best to moderate it. And then someone in your community makes a post that is critical of a person or company that has financial resources that you don’t. And they don’t like it, so they threaten you. They have the money to fool around with, to intimidate you, to stress you out, and you realize that you couldn’t possibly afford to defend yourself legally, even if you are in the right. And so, even though the post was fine, and a perfectly legitimate expression of speech, you remove it.
If Section 230 is repealed, Facebook won’t go away. They have the money to defend themselves and a team of attorneys. Do you?
Even if it’s not repealed, amending it will likely harm you. Regulators are throwing all sorts of things at the wall right now. Should you be able to remove content you feel is “otherwise objectionable,” or should you only remove content for specific reasons they define for you? Can you ban MAGA hats while allowing Biden hats, or must you allow both or neither? Can you provide your members with a clarifying or safety notice, like Twitter has done with its fact-checking?
These are literally all things that are being discussed right now.
What would you say if someone asked, “Shouldn’t a company be responsible for the content that lives in its communities? Does Section 230 give platforms a free pass to ignore bad behavior?”
Section 230 empowers platforms and communities to take action against bad behavior. One concern that I have consistently heard from people over the years, when it comes to communities, is that they should not remove any content because they fear they would then be held liable for whatever they didn't remove.
That's not true, because of Section 230. The law says that you can moderate content without being held liable for what remains. If you remove a post that violates your guidelines, but someone then finds a post you missed that does the same thing, you can't be sued or shut down simply because of that. It is because of Section 230 that you can feel confident moderating.
It's worth noting here that a company is still responsible for the content it creates. That's sort of the point of 230: Liability rests with the creator of the content. Twitter’s fact-checking notices don’t concern Section 230. They’re a First Amendment issue. Twitter is exercising their right to free speech in displaying those. As attorney Anette Beebe recently told me, “They can have a difference of opinion against the president. Thank you for living in the U.S. This is what we can do here.”
Thank you, Patrick! Your passion for community managers and the complex work they face shines through. If any other community experts have a hill they’ll die on, reach out!
What do you think? Protect Section 230 or reform?
And if you want to go deeper:
As Trump Targets Twitter's Legal Shield, Experts Have A Warning
Audio: Threats to Section 230 Threaten the Very Existence of Our Communities
Oh, by the way, Biden also wants to repeal Section 230. 🙃