Is Design Censorship?

We live in a world where seemingly settled issues are being reframed. A recent, fascinating discussion on the Persuasion podcast focused on the role of social media in spreading both misinformation and what Renee DiResta, the expert being interviewed, labeled “rumors.”

As she explained, using the term “misinformation” (a use to which I plead guilty) isn’t a particularly useful way of framing the problem we face, because so many of the things that raise people’s hackles aren’t statements of fact; they aren’t falsifiable. And even when they are, even when what was posted or asserted was demonstrably untrue, and is labeled untrue, a lot of people simply won’t believe it is false. As she says, “if you’re in Tribe A, you distrust the media of Tribe B and vice versa. And so even the attempt to correct the misinformation, when it is misinformation, is read with a particular kind of partisan valence: ‘Is this coming from somebody in my tribe, or is this more manipulation from the bad guys?’”

If we aren’t dealing simply in factual inaccuracies or even outright lies, how should we describe the problem?

One of the more useful frameworks for what is happening today is rumors: people are spreading information that may never be verified or falsified, within communities of people who really care about an issue. They spread it amongst themselves to inform their friends and neighbors; there is a kind of altruistic motivation. The platforms construct these identity groups for them, based on statistical similarity to other users. Once the network is assembled and people are put into these groups or follower relationships, information is curated like this: when one person sees it, they hit the share button. It’s a rumor, they’re interested, and they want to spread it to the rest of their community. Facts are not really part of the process here. It’s identity engagement: “this is a thing that I care about, that you should care about, too.” This is rewarmed media theory from the 1960s: the structure of the system shapes how the information is going to spread. Social media is just a different kind of channel, one where the audience has real power as participants. That is fundamentally different from all prior media environments. Not only can you share the rumor; millions of people can see, in aggregate, the sharing of that rumor.

Her explanation of how social media algorithms work is worth quoting at length:

When you pull up your Twitter feed, there’s “Trends” on the right-hand side, and they’re personalized for you. And sometimes there’s a very, very small number of participants in the trend, maybe just a few hundred tweets. But it’s a nudge: it says you are going to be interested in this topic. It’s bait: go click this thing that you have engaged with before and are probably going to be interested in, and then you will see all of the other people’s tweets about it. Then you engage. And in the act of engagement, you are perpetuating that trend.
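That nudge-engage-amplify cycle is, at bottom, a feedback loop. Here is a minimal sketch of such a loop in Python; the data, scoring, and function names are hypothetical illustrations of the dynamic she describes, not Twitter’s actual trending system:

```python
from collections import defaultdict

trend_scores = defaultdict(float)   # trend -> running engagement score
user_interests = {                  # user -> topics they engaged with before
    "alice": {"vaccines", "parenting"},
    "bob": {"elections"},
}

def personalized_trends(user, candidates, k=3):
    """Surface only topics this user has engaged with, ranked by score."""
    matches = [t for t in candidates if t in user_interests[user]]
    return sorted(matches, key=lambda t: trend_scores[t], reverse=True)[:k]

def engage(user, trend):
    """Every click or share feeds back into the score that surfaced the trend."""
    trend_scores[trend] += 1.0
    user_interests[user].add(trend)  # and sharpens future personalization

# A few hundred engagements are enough to keep the topic in the panel:
for _ in range(300):
    engage("alice", "vaccines")

print(personalized_trends("alice", ["vaccines", "elections", "parenting"]))
# -> ['vaccines', 'parenting']  ("elections" is filtered out for this user)
```

Nothing in the loop examines what the topic is, only how much it has been engaged with, which is why a few hundred tweets can be enough to seed a personalized trend.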

Early on, I was paying attention to the anti-vaccine movement. I was a new mom, and I was really interested in what people were saying about this on Facebook. I was kind of horrified by it, to be totally candid. I started following some anti-vaccine groups, and then Facebook began to show me Pizzagate, and then QAnon. I had never typed in Pizzagate, and I had never typed in QAnon. But through the power of collaborative filtering, it understood that if you were an active participant in a conspiracy theory community that fundamentally distrusts the government, you are probably similar to these other people who maybe have a different flavor of the conspiracy. And the recommendation engine didn’t understand what it was doing. It was not a conscious effort. It just said: here’s an active community, you have some similarities, you should go join that active community. Let’s give you this nudge. And that is how a lot of these networks were assembled in the early and mid-2010s.
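The “power of collaborative filtering” she mentions has a fairly simple core. The sketch below assumes the most basic user-based variant: represent each user by the set of groups they participate in, measure the similarity of those membership sets, and recommend the groups that similar users have joined. The usernames, group names, and choice of Jaccard similarity are illustrative assumptions, not the platform’s actual engine:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two users' group memberships."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical membership data mirroring the anecdote above.
memberships = {
    "new_mom": {"anti-vaccine"},
    "user_2": {"anti-vaccine", "pizzagate"},
    "user_3": {"anti-vaccine", "pizzagate", "qanon"},
    "user_4": {"gardening"},
}

def recommend_groups(user: str, k: int = 2) -> list[str]:
    """Suggest groups that similar users belong to: no understanding of
    content, just 'people like you also joined...'."""
    mine = memberships[user]
    scores: dict[str, float] = {}
    for other, groups in memberships.items():
        if other == user:
            continue
        sim = jaccard(mine, groups)
        for g in groups - mine:  # only groups the user hasn't joined yet
            scores[g] = scores.get(g, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

# The user never searched for these; similarity alone surfaces them:
print(recommend_groups("new_mom"))  # -> ['pizzagate', 'qanon']
```

Note that nothing in recommend_groups inspects content. As she says, the engine “didn’t understand what it was doing”; it only matched membership patterns.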

Then DiResta posed what we used to call the “sixty-four-thousand-dollar question”: are changes to the design of an algorithm censorship?

Implicit in that question, of course, is another: what about the original design of an algorithm? Those mechanisms have been designed to respond to certain inputs in certain ways, to “nudge” the user to visit X rather than Y. Is that censorship? And if the answer to either of those questions is “yes,” is the First Amendment implicated?

To say that we are in uncharted waters is an understatement.