14 Comments

> a ridiculous cadre of people operating in a fake domain of expertise

Came for the well-thought-out punditry, stayed for the spicy (and true) insults. xD

This article conflates fact checking with a new content moderation policy that explicitly permits users to post various slurs directed at minority groups (which is what Casey Newton is referencing, I believe). If you like that kind of thing, then sure, but there’s a reason Gab, Parler, and Truth Social aren’t wildly popular.

Further, Josh is ignoring the societal misery promoted by grifters at scale on Meta, which the world is vastly better off without, and it’s a shame that they’re probably going to allow it again. Do we need more QAnon? Was it good when the “Plandemic” video went massively viral on Facebook? Is it only fair that Alex Jones gets more space to organize vicious mobs to harass Sandy Hook parents? MAGA conservatives attacked all of the moderation choices to delete these accounts or cut back on their reach as anti-right wing at the time. Meta can do whatever it wants, but this set of decisions is likely to - as Josh said about Trump’s reelection - make the platform dumber and crueler.

This reminds me of the Trump-Kamala debate and the hullabaloo on the right over the fact checking by ABC’s moderators. They fact checked Trump more, but Trump lies prodigiously and shamelessly and did so far more than Kamala that night; even so, I remember talking to people at work the next day who said, “I thought the moderators were biased.”

In my head, I’m thinking, “really?!” Trump talks about Haitians eating pets in Ohio, and you’re concerned that they fact checked Trump a little more?

The point is, people need to perceive the fact checking as fair, and it’s hard to keep human biases in check. I think the community notes method is better because it incorporates a wider audience, but it won’t be perfect either. If we could one day find a way for AI to do it more objectively, that would seem like a good thing.
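
Here is a minimal sketch, in Python, of that “wider audience” idea, purely as an illustration. It is not the actual Community Notes algorithm (which, as I understand the open-sourced version, uses matrix factorization over rating histories rather than simple majorities); the leaning labels, sample data, function name, and 0.5 threshold are all invented for the example.

```python
# Toy sketch only: NOT the real Community Notes algorithm.
# It illustrates the "wider audience" idea: a note is surfaced only if raters
# from different viewpoints independently find it helpful, not if it's merely popular.
from collections import defaultdict

# (note_id, rater_leaning, rated_helpful) -- invented sample data
ratings = [
    ("note1", "left", True), ("note1", "right", True), ("note1", "left", True),
    ("note2", "left", True), ("note2", "left", True), ("note2", "right", False),
]

def cross_viewpoint_notes(ratings, threshold=0.5):
    """Return notes rated helpful by a majority of raters on every side that rated them."""
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # note -> leaning -> [helpful, total]
    for note, leaning, helpful in ratings:
        tallies[note][leaning][0] += int(helpful)
        tallies[note][leaning][1] += 1
    shown = []
    for note, by_leaning in tallies.items():
        # require ratings from at least two viewpoints, each mostly positive
        if len(by_leaning) >= 2 and all(h / t > threshold for h, t in by_leaning.values()):
            shown.append(note)
    return shown

print(cross_viewpoint_notes(ratings))  # ['note1'] -- note2 lacks cross-viewpoint support
```

The design point the toy version captures is that a note gets promoted by cross-viewpoint agreement rather than raw popularity, which is why the method can feel fairer than a single fact checker’s verdict.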

If the fact-checking had been limited to bad stuff like hiding the President's mental state I'd be more inclined to agree with this take, but I suspect we're going to miss some of the moderation. I'm already seeing some pretty ugly brigading going on that's well beyond seeing a few unsavory opinions. But honestly a good number of liberals also seem to want me to stay rapt and unhappy, so maybe it's a wash.

I’m OK with Meta not fact checking, etc., if Congress removes the exemption from litigation for what is posted on Meta. If something posted on Meta causes harm, then the person posting it should face potential consequences like anyone else. And since Meta (and X, etc.) allows anonymity that protects that person, the change in law should also subject Meta to discovery, so we can look behind the curtain and see who “Puppylover” is if Puppylover’s posts were so egregious as to lead to serious harm to another person. Agreed, on the politically oriented examples you gave: if the GOP says Trump prevented an asteroid from hitting the earth, is that a fact? Could be, I mean, who knows, right? But there’s more than the political issues here that matter. People, especially kids, are getting hurt. As the saying goes, you can’t have your cake and eat it too. Like any of our constitutionally guaranteed rights, there could be limits. Time to start thinking more about social media to find a good middle ground. Leaving it up to the “community” alone doesn’t cut it….

That is... exactly how the law currently works. The system you propose is already the system we have. No change in the law is required.

As my non-lawyer brain understands it, under current law, if you can show "real evidentiary basis that [e.g. an anonymous Facebook poster, like "Puppylover"] has engaged in wrongful conduct that has caused real harm [to your interests]", you can get a Court order for Facebook to turn over any information they have about the anonymous account that posted it. Then you can sue whoever posted it for whatever harm they caused.

The case I pulled this quote from is about a Yahoo post, but I think it is still the controlling precedent: https://www.dmlp.org/sites/dmlp.org/files/sites/citmedialaw.org/files/2005-01-18-Order%20Quashing%20Highfields%27%20Subpoena.pdf

There are lots of conflicting rulings in the courts, typically focused on direct liability for the platforms as publishers of information provided by third parties (which Section 230 shields them from). Lawyers are poking at it, looking for ways to establish outside-the-box arguments. There are advocates out there who believe content should be wide open, First Amendment stuff, and that nobody should attempt to censor it, especially the government. Several laws and proposed laws are trying to put guardrails around it (like for sex trafficking). I think we have a long way to go, both in law and in societal norms, to figure out how to manage what is said and done on social media. It is a worthy debate for sure, and hopefully one done in a non-political way. E.g., the GOP wanted guardrails when they thought the Metas out there were biased against them; now that the social media giants are falling in line with Trump, financially and ethically, they seem OK with “community” self-governance…. Now the Dems will start looking more closely at it (beyond where they typically sit: hate speech, protecting minors, etc.) with a political lens if anything goes…. We’ll see how it shakes out.

Until I read this, I had no idea there was such a thing as a misinformation expert. I would prefer to read experts in any field such as physics, immunology, etc. I don’t need a misinformation expert to weigh in on the experts.

What you need is the core function of someone with a different perspective challenging a statement, forcing the speaker claiming authority or expertise away from a fixed point toward one that’s either more nuanced or in some other way closer to the truth. Even “experts” can be wrong, and it isn’t always going to be another expert from the same field who sees that first.

During the dark days of covid censorship (which really was borderline insane in its reach and aggressiveness) I often wanted to articulate a theory along these lines:

There is, almost by definition, no such thing as "misinformation" that needs to be censored in a free society. Every example cited is either (a) so ridiculous that it will be debunked (probably more effectively!) in the normal course of things or (b) not actually misinformation at all but opinion, humor, hypothesis, interpretation, metaphor, etc. - all the categories you enumerate. (Including the all-important "turns out to be true" category.) And of course some appropriately cancellable material falls into entirely separate categories like threats, etc.

What else is there? A statement of fact on a socially consequential matter that is absolutely not humor, hypothesis, etc., and also absolutely certain to be false, but somehow also so alluring or seemingly true that it can’t simply be debunked in the open, and instead people have to be prevented from hearing it in the first place? What kind of Infinite Jest contraption would ever make that cut?

In Nexus, in his examination of the persistent belief in witches in Europe and early European America, Yuval Noah Harari makes what I think is a persuasive case that misinformation can kill.

I just paid $6 to point out the irony of someone lecturing about filter bubbles to an audience of personal subscribers, where only paying members can comment.*

To fully get my money's worth, I'll add: the examples cited seem legitimately stupid, but as someone who has been angered plenty by right-leaning fact checkers, I suspect they are cherry-picked in support of the "fact checkers bias left" argument. Lying in bed, I don't have examples of bad conservative fact checkers at the ready, but maybe the better point is that fact checkers can be biased and wrong in both directions.

I don't know what the best, least flawed option is for managing Facebook community standards, but I agree with Liz A above that there are good reasons to not circulate or algorithmically boost some truly marginal, or sometimes bot-driven content, or allow ersatz Klanspeople to burn their rhetorical crosses on people's Facebook lawns.

Lastly, like Josh's article about WaPo's "correct decision" to pull their endorsement, this article makes some excellent points while studiously undervaluing the fact that both Zuck and Bezos are changing their company policies to curry favor with a corrupt, transactional strongman. Whether it's good business or not, it's morally bankrupt and rather gross.

*I do appreciate hearing opinions outside my bubble, which is why I subscribe here at all, and why I enjoyed Josh's stint at LRC, including the occasionally wrong Center. :)
