Is Design Censorship?

We live in a world where seemingly settled issues are being reframed. A recent, fascinating discussion on the Persuasion podcast focused on the role of social media in spreading both misinformation and what Renee DiResta, the expert being interviewed, labeled “rumors.”

As she explained, using the term “misinformation” (a use to which I plead guilty) isn’t a particularly useful way of framing the problem we face, because so many of the things that raise people’s hackles aren’t statements of fact; they aren’t falsifiable. And even when they are, even when what was posted or asserted was demonstrably untrue, and is labeled untrue, a lot of people simply won’t believe it is false. As she says, “if you’re in Tribe A, you distrust the media of Tribe B and vice versa. And so even the attempt to correct the misinformation, when it is misinformation, is read with a particular kind of partisan valence: ‘Is this coming from somebody in my tribe, or is this more manipulation from the bad guys?’”

If we aren’t dealing simply in factual inaccuracies or even outright lies, how should we describe the problem?

One of the more useful frameworks for what is happening today is rumors: people are spreading information that can maybe never be verified or falsified, within communities of people who really care about an issue. They spread it amongst themselves to inform their friends and neighbors. There is a kind of altruistic motivation. The platforms find their identity for them based on statistical similarity to other users. Once the network is assembled and people are put into these groups or these follower relationships, the way that information is curated is that when one person sees it, they hit that share button—it’s a rumor, they’re interested, and they want to spread it to the rest of their community. Facts are not really part of the process here. It’s like identity engagement: “this is a thing that I care about, that you should care about, too.” This is rewarmed media theory from the 1960s: the structure of the system perpetuates how the information is going to spread. Social media is just a different type of trajectory, where the audience has real power as participants. That’s something that is fundamentally different from all prior media environments. Not only can you share the rumor, but millions of people can see in aggregate the sharing of that rumor.
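Her structural point lends itself to a toy model. The sketch below (in Python, with invented network sizes and share probabilities; it models no real platform) shows how a rumor can reach a large audience purely through individual share decisions, with no fact-checking step anywhere in the loop:

```python
import random

# Toy model of share-driven rumor spread in a follower network.
# NUM_USERS, FOLLOWERS_PER_USER, and SHARE_PROBABILITY are invented
# for illustration; nothing here models any real platform.

random.seed(42)

NUM_USERS = 1000
FOLLOWERS_PER_USER = 20
SHARE_PROBABILITY = 0.15  # chance an interested reader hits "share"

# Random follower graph: user -> the users who see that user's shares
followers = {
    user: random.sample(range(NUM_USERS), FOLLOWERS_PER_USER)
    for user in range(NUM_USERS)
}

def spread_rumor(seed_user: int) -> int:
    """Return how many users eventually see the rumor. Note that no
    step in this loop checks whether the rumor is true."""
    seen = {seed_user}
    frontier = [seed_user]
    while frontier:
        sharer = frontier.pop()
        for follower in followers[sharer]:
            if follower not in seen:
                seen.add(follower)
                if random.random() < SHARE_PROBABILITY:
                    frontier.append(follower)
    return len(seen)

print(spread_rumor(seed_user=0))  # with these parameters, most of the network
```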

Her explanation of how social media algorithms work is worth quoting at length:

When you pull up your Twitter feed, there’s “Trends” on the right hand side, and they’re personalized for you. And sometimes there’s a very, very small number of participants in the trend, maybe just a few hundred tweets. But it’s a nudge, it says you are going to be interested in this topic. It’s bait: go click this thing that you have engaged with before that you are probably going to be interested in, and then you will see all of the other people’s tweets about it. Then you engage. And in the act of engagement, you are perpetuating that trend.

Early on, I was paying attention to the anti-vaccine movement. I was a new mom, and I was really interested in what people were saying about this on Facebook. I was kind of horrified by it, to be totally candid. I started following some anti-vaccine groups, and then Facebook began to show me Pizzagate, and then QAnon. I had never typed in Pizzagate, and I had never typed in QAnon. But through the power of collaborative filtering, it understood that if you were an active participant in a conspiracy theory community that fundamentally distrusts the government, you are probably similar to these other people who maybe have a different flavor of the conspiracy. And the recommendation engine didn’t understand what it was doing. It was not a conscious effort. It just said: here’s an active community, you have some similarities, you should go join that active community. Let’s give you this nudge. And that is how a lot of these networks were assembled in the early and mid-2010s.
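The “power of collaborative filtering” she describes is a standard recommendation technique: score users by how much their existing memberships overlap, then suggest whatever the most similar users belong to. Here is a minimal sketch; the group names and memberships are hypothetical, and real engines operate on vastly richer signals:

```python
from math import sqrt

# Minimal user-based collaborative filtering sketch. Group names and
# memberships are hypothetical; real recommendation engines use far
# richer signals than simple set overlap.

memberships = {
    "alice": {"anti_vax_moms", "natural_parenting"},
    "bob":   {"anti_vax_moms", "pizzagate"},
    "carol": {"anti_vax_moms", "pizzagate", "qanon"},
    "dave":  {"gardening", "woodworking"},
}

def similarity(a: set, b: set) -> float:
    """Cosine similarity between two users' group-membership sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / sqrt(len(a) * len(b))

def recommend(user: str, k: int = 3) -> list:
    """Suggest groups joined by statistically similar users."""
    mine = memberships[user]
    scores = {}
    for other, theirs in memberships.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for group in theirs - mine:
            scores[group] = scores.get(group, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A new member of one conspiracy-adjacent group gets nudged toward others:
print(recommend("alice"))  # -> ['pizzagate', 'qanon']
```

The sketch makes her point concrete: nothing in recommend() knows what any group is about. Statistical overlap alone produces the nudge.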

Then DiResta posed what we used to call the “sixty-four thousand dollar question”: are changes to the design of an algorithm censorship?

Implicit in that question, of course, is another: what about the original design of an algorithm? Those mechanisms have been designed to respond to certain inputs in certain ways, to “nudge” the user to visit X rather than Y. Is that censorship? And if the answer to either of those questions is “yes,” is the First Amendment implicated?

To say that we are in uncharted waters is an understatement.


About That War On Education

Evidently, Indiana’s censorious legislature has company–ours aren’t the only lawmakers issuing “gag orders” to educators.

According to a January report from PEN America,

It has been an extraordinary month for educational gag orders. Over the last three weeks, 71 bills have been introduced or prefiled in state legislatures across the country, a rate of roughly three bills per day. For over a year now, PEN America has been tracking these and similar bills…

According to the PEN report, 122 educational gag orders have been filed in 33 states since January 2021. Of those, 12 have become law in 10 states, and another 88 are currently live.

Of those currently live:

84 target K-12 schools
38 target higher education
48 include a mandatory punishment for those found in violation

When PEN looked at the measures that have been introduced so far in 2022, it found “a significant escalation in both scale and severity.”

Forty-six percent of this year’s bills explicitly target speech in higher education (versus 26 percent in 2021) and 55 percent include some kind of mandatory punishment for violators (versus 37 percent in 2021). Fifteen also include a private right of action. This provision, which we analyzed in an earlier post, gives students, parents, or even ordinary citizens the right to sue schools and recover damages in court.

One final feature that is increasingly common to 2022’s bills is how sloppily many are written. Legislators, in their haste to get these bills out the door and into the headlines, are making basic factual errors, introducing contradictory language, and leaving important terms undefined. Given the stakes, the result will be more than mere confusion. It will be fear.

The PEN report then zeroed in on legislation from a single state, in order to help readers “appreciate” the chilling nature of the threat.

That state? Indiana. (I am so not proud.)

With eight bills currently under consideration, only Missouri (at 19) has made a greater contribution. Of the eight in Indiana, all target public K-12 schools, two target private K-12 as well, six would regulate speech in public colleges and universities, four affect various state agencies, and two threaten public libraries. All are sweeping, all are draconian, and few make any kind of sense.

House Bill 1362, sponsored by Bob Behning (because of course it was), prohibits teachers and professors from including in their instruction any “anti-American ideologies.” What this means is never defined (because of course it wasn’t), but violators may be sued in court.

PEN tells us that House Bill 1040 is even more confusing. That bill requires teachers to adopt a “posture of impartiality”–but it also contains the following language:

Socialism, Marxism, communism, totalitarianism, or similar political systems are incompatible with and in conflict with the principles of freedom upon which the United States was founded. In addition, students must be instructed that if any of these political systems were to replace the current form of government, the government of the United States would be overthrown and existing freedoms under the Constitution of the United States would no longer exist. As such, socialism, Marxism, communism, totalitarianism, or similar political systems are detrimental to the people of the United States.

As the report notes, this would be farcical if the consequences of failure to comply weren’t so dire. A teacher or school that failed to navigate the whiplash mandated by this effort to ensure that teachers indoctrinate, rather than educate, would–under this bill–face civil suits, loss of state funding and accreditation, and/or professional discipline up to and including termination.

The linked article describes several other, similar efforts, and I encourage anyone who wants to wallow in despair over Indiana governance to click through.

The none-too-savvy legislators pushing these bills are evidently unaware that kids today can easily access multiple sources of information. (There’s this newfangled thing called the Internet.)

Ironically, these legislative efforts that display our lawmakers’ anti-intellectualism and bigotry also motivate young people to access the information they are trying to suppress. After a Tennessee school board censored a graphic novel about the Holocaust, it soared to the top of Amazon’s best-seller list. Young people (and a number of older ones) have rushed to form banned book clubs.

A few days ago, when I threatened to start an online class in “banned history,” the response was so heavy and positive I’m now seriously considering doing so. (Once I’ve done some research and figured out the logistics, I’ll let you all know.)

What we should be teaching students is how to evaluate the credibility of the sources they consult. Efforts to “shield” them from the uglier realities of the past are likely to spark interest in exploring that past, and it would be helpful to give them the tools to separate sound scholarship from the propaganda produced by both Left and Right.

Several lawmakers could use those lessons too.


Horton Hears A Censor

A number of years ago, when my husband was still practicing architecture, he was presenting a school board with preliminary plans for a school they’d hired him to design. There were a number of decisions on which he wanted their feedback, but the board focused entirely–for an hour!–on arguments over the size of an elevator, and whether it should accommodate one wheelchair or two.

As he left, he ran into a friend, and explained his frustration with the school board’s focus. The friend said something I’ve thought about on multiple occasions since: “people argue about what they understand.” Insightful as that observation was, I think it needs amending to “People argue about what they think they understand.”

Which brings me to censorship, accusations of “cancel culture,” and Dr. Seuss, with a brief side trip to Mr. Potato Head.

The right wing is exercised–even hysterical–and screaming “censorship” about a decision made by the company that controls publication of the Dr. Seuss books. It will suspend publication of six of those sixty-odd books, based upon a determination that they contain racist and insensitive imagery. The decision didn’t affect “Green Eggs and Ham,” “The Cat in the Hat,” “Horton Hears a Who” or numerous other titles.

This is not censorship, not only because they aren’t proposing to collect and destroy the numerous copies that already exist but because, in our constitutional system, only government can censor speech. In fact, a decision by the company that owns the rights to Dr. Seuss’ books is an exercise of that company’s own free speech rights.

Think of it this way: you post something to Twitter, then think better of it and remove that post. You haven’t been censored; you made both the initial decision to post whatever it was and the subsequent decision to remove it.

Or think about that same example in the context of contemporary criticism of so-called “cancel culture.” You post something that other people find offensive. They respond by criticizing you. Your public-sector employer hasn’t punished you and, for that matter, no government entity has taken any action, but many people have expressed disdain or worse. Again–that is neither censorship nor “cancellation.”

The Free Speech clause of the First Amendment protects us from government actions that suppress the free expression of our opinions or our ability to access particular information or ideas. It doesn’t protect us from the disapproval of our fellow-citizens. It doesn’t even protect us from being sanctioned or fired by our private-sector employer, because that employer has its own First-Amendment right to ensure that messages being publicly communicated by its employees are consistent with its own.

When Walmart decides not to carry a particular book, when a local newspaper (remember those?) rejects an advertisement or refuses to print a letter to the editor, when the manufacturer of “Mr. Potato Head” decides to drop the “Mr,” those entities are exercising their First Amendment rights. They aren’t “censoring.” They aren’t even “cancelling.”

You are within your rights to disagree with the decision by those who own the Dr. Seuss catalogue (actually, that “company” is run by the author’s family, aka the Seuss estate.) Disagreement and criticism are your rights under the First Amendment. You are free to argue that the decision was misplaced, that it constituted over-reaction…whatever. But since the government did not require that decision–or participate in it– it wasn’t censorship. And unless the criticism was accompanied by ostracism–unless it was accompanied by removal of the author’s books from bookstores and libraries–it isn’t cancellation, either.

Americans have a right to freedom of expression. We have no right–constitutional or otherwise– to freedom from criticism. The desire of America’s culture warriors to “own the libs” doesn’t trump that reality.

As for the decision to stop printing and circulating six books with unfortunate portrayals, we’d do well to heed Charles Blow. In a column for the New York Times, Blow reminded readers that the images we present to young children can be highly corrosive and racially vicious. A Times article on the controversy noted that a number of other children’s books have been edited to purge what we now recognize as racist stereotypes. Often, those edits were made by the authors themselves, who belatedly recognized that they had engaged in hurtful stereotyping.

Agree or disagree with a given decision–whether by the Dr. Seuss estate or by Hasbro, the Potato Head manufacturer–it was a decision they had the right to make, and one the rest of us have an obligation to respect even when we disagree.


Mandating Fairness

Whenever one of my posts addresses America’s problem with disinformation, at least one commenter will call for re-institution of the Fairness Doctrine–despite the fact that, each time, another commenter (usually a lawyer) will explain why that doctrine wouldn’t apply to social media or most other Internet sites causing contemporary mischief.

The Fairness Doctrine was contractual: government owned the broadcast channels that were being auctioned for use by private media companies, and thus had the right to require certain undertakings from responsive bidders. In other words, in addition to the payments being tendered, bidders had to promise to operate “in the public interest,” and the public interest included an obligation to give contending voices a fair hearing.

The government couldn’t have passed a law requiring newspapers and magazines to be “fair,” and it cannot legally require fair and responsible behavior from cable channels and social media platforms, no matter how much we might wish it could.

So–in this era of QAnon and Fox News and Rush Limbaugh clones–where does that leave us?

The Brookings Institution, among others, has wrestled with the issue.

The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represents a major threat to U.S. democracy, and as the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to counter disinformation and form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.

Brookings says that a non-partisan public-private effort is required because disinformation crosses platforms and transcends political boundaries. It recommends a “public trust” that would provide analysis and policy proposals intended to defend democracy against the constant stream of disinformation and the illiberal forces at work disseminating it. That trust would identify emerging trends and methods of sharing disinformation, and would support data-driven initiatives to improve digital media literacy.

Frankly, I found the Brookings proposal unsatisfactorily vague, but there are other, more concrete proposals for combating online and cable propaganda. Dan Mullendore pointed to one promising tactic in a comment the other day: Fox News’s income isn’t–as we might suppose–mostly dependent on advertising; significant sums come from cable carriage fees. And one reason those fees are so lucrative is that Fox gets bundled with other channels, meaning that many people pay for Fox who wouldn’t if it weren’t part of a package deal. A few days ago, on Twitter, a lawyer named Pam Keith pointed out that a simple regulatory change ending bundling would force Fox and other channels to compete for customers’ eyes, ears and pocketbooks.
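Keith’s point is easy to check with back-of-the-envelope arithmetic. The sketch below uses entirely invented numbers–I have no inside knowledge of actual carriage fees or opt-in rates–and shows only why bundling is so lucrative:

```python
# Back-of-the-envelope comparison of bundled vs. a-la-carte carriage fees.
# Every number below is invented purely for illustration.

total_cable_households = 80_000_000
fee_per_subscriber_per_month = 2.00  # paid by the cable operator per household

# Bundled: every cable household pays the carriage fee, watcher or not.
bundled_annual = total_cable_households * fee_per_subscriber_per_month * 12

# A la carte: only households that would actually choose the channel pay.
opt_in_rate = 0.25
alacarte_annual = bundled_annual * opt_in_rate

print(f"bundled:    ${bundled_annual:,.0f}/yr")   # bundled:    $1,920,000,000/yr
print(f"a la carte: ${alacarte_annual:,.0f}/yr")  # a la carte: $480,000,000/yr
```

Under bundling, every household pays whether or not anyone in it ever watches; a la carte, only opt-in households do, so revenue suddenly depends on actually winning viewers.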

Then there’s the current debate over Section 230 of the Communications Decency Act, with many critics advocating its repeal, and others, like the Electronic Frontier Foundation, defending it.

Section 230 says that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of “interactive computer service providers,” including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.

Most observers believe that an outright repeal of Section 230 would destroy social networks as we know them (the linked article explains why, as do several others), but there is a middle ground between total repeal and naive calls for millions of users to voluntarily leave platforms that fail to block hateful and/or misleading posts.

Fast Company has suggested that middle ground.

One possibility is that the current version of Section 230 could be replaced with a requirement that platforms use a more clearly defined best-efforts approach, requiring them to use the best technology and establishing some kind of industry standard they would be held to for detecting and mediating violating content, fraud, and abuse. That would be analogous to standards already in place in the area of advertising fraud….

Another option could be to limit where Section 230 protections apply. For example, it might be restricted only to content that is unmonetized. In that scenario, you would have platforms displaying ads only next to content that had been sufficiently analyzed that they could take legal responsibility for it. 
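To make that second option concrete: it amounts to a gating rule under which ads run only next to content the platform has actually reviewed, because monetized content would carry legal exposure. Here is a minimal sketch of such a rule; the names and review flags are hypothetical, one possible reading of the proposal rather than anyone’s actual design:

```python
from dataclasses import dataclass

# Sketch of the "Section 230 protection only for unmonetized content"
# idea from the Fast Company piece. The review pipeline is hypothetical.

@dataclass
class Post:
    author: str
    text: str
    reviewed: bool = False   # has the platform analyzed this content?
    approved: bool = False   # is the platform willing to stand behind it?

def may_monetize(post: Post) -> bool:
    """Ads run only next to vetted content, since monetized content
    would carry legal responsibility under this proposal."""
    return post.reviewed and post.approved

def render(post: Post) -> str:
    ad_slot = "[ad]" if may_monetize(post) else "[no ad: unreviewed]"
    return f"{post.text} {ad_slot}"

print(render(Post("user1", "A vetted news story", reviewed=True, approved=True)))
print(render(Post("user2", "An unvetted viral rumor")))
```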

A “one size fits all” reinvention of the Fairness Doctrine isn’t going to happen. But that doesn’t mean we can’t make meaningful, legal improvements that would make a real difference online.


The New Censorship

One of the many causes of increased tribalism and chaos worldwide is the unprecedented nature of the information environment we inhabit. A quote from Yuval Noah Harari’s Homo Deus is instructive:

In the past, censorship worked by blocking the flow of information. In the twenty-first century, censorship works by flooding people with irrelevant information.

We are only dimly beginning to understand the nature of the threat posed by the mountains of “information” with which we are inundated. Various organizations are mounting efforts to fight that threat–to increase news literacy and control disinformation– with results that are thus far imperceptible.

The Brookings Institution has engaged in one of those efforts: it has a series on Cybersecurity and Election Interference, and in a recent report it offered four steps to “stop the spread of disinformation.” The linked report begins by making an important point about the actual targets of such disinformation.

The public discussion of disinformation often focuses on targeted candidates, without recognizing that disinformation actually targets voters. In the case of elections, actors both foreign and domestic are trying to influence whether or not you as an individual vote, and for whom to cast your ballot. The effort goes farther than elections: it is about the information on whether to vaccinate children or boycott the NFL. What started with foreign adversaries now includes domestic groups, all fighting for control over what you believe to be true.

The report also recognizes that the preservation of democratic and economic institutions in the digital era will ultimately depend on efforts to control disinformation by government and the various platforms on which it is disseminated. Since the nature of the necessary action is not yet clear–so far as I can tell, we don’t have a clue how to accomplish this–Brookings says that the general public needs to make itself less susceptible, and its report offers four ways to accomplish that.

You’ll forgive me if I am skeptical of the ability/desire of most Americans to follow its advice, but for what it is worth, here are the steps Brookings advocates (a toy sketch of the feed mechanics involved follows the list):

Know your algorithm
Get to know your own social media feed and algorithm, because disinformation targets us based on our online behavior and our biases. Platforms cater information to you based on what you stop to read, engage with, and send to friends. This information is then accessible to advertisers and can be manipulated by those who know how to do so, in order to target you based on your past behavior. The result is we are only seeing information that an algorithm thinks we want to consume, which could be biased and distorted.

Retrain your newsfeed
Once you have gotten to know your algorithm, you can change it to start seeing other points of view. Repeatedly seek out reputable sources of information that typically cater to viewpoints different than your own, and begin to see that information occur in your newsfeed organically.

Scrutinize your news sources
Start consuming information from social media critically. Social media is more than a news digest—it is social, and it is media. We often scroll through passively, absorbing a combination of personal updates from friends and family—and if you are among the two-thirds of Americans who report consuming news on social media—you are passively scrolling through news stories as well. A more critical eye to the information in your feed and being able to look for key indicators of whether or not news is timely and accurate, such as the source and the publication date, is incredibly important.

Consider not sharing
Finally, think before you share. If you think that a “news” article seems too sensational or extreme to be true, it probably is. By not sharing, you are stopping the flow of disinformation and falsehoods from getting across to your friends and network. While the general public cannot be relied upon to solve this problem alone, it is imperative that we start doing our part to stop this phenomenon. It is time to stop waiting for someone to save us from disinformation, and to start saving ourselves.
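The first two steps rest on a single mechanism: the feed ranks topics by your accumulated engagement, which is why deliberately engaging with other sources really does “retrain” what you see. Here is a toy version; the topics, weights, and scoring below are invented for illustration, and real ranking systems use far more signals:

```python
from collections import defaultdict

# Toy engagement-weighted feed. Topics and weights are invented;
# real ranking systems use far richer signals than this.

engagement = defaultdict(float)  # topic -> accumulated engagement score

def record_engagement(topic: str, weight: float = 1.0) -> None:
    """Clicks, shares, and dwell time all bump a topic's weight."""
    engagement[topic] += weight

def rank_feed(candidate_posts: list) -> list:
    """Order posts by how much the reader has engaged with each topic."""
    return sorted(candidate_posts,
                  key=lambda post: engagement[post["topic"]],
                  reverse=True)

posts = [{"topic": t, "headline": f"Story about {t}"}
         for t in ("tribe_a_news", "tribe_b_news", "local_news")]

# A reader who only clicks Tribe A content sees Tribe A first...
for _ in range(10):
    record_engagement("tribe_a_news")
print([p["topic"] for p in rank_feed(posts)])
# ['tribe_a_news', 'tribe_b_news', 'local_news']

# ...until deliberate engagement elsewhere "retrains" the ranking.
for _ in range(12):
    record_engagement("tribe_b_news")
print([p["topic"] for p in rank_feed(posts)])
# ['tribe_b_news', 'tribe_a_news', 'local_news']
```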

All good advice. Why do I think the people who most need to follow it, won’t?
