
Who’s Talking?

I finally got around to reading an article about Facebook by Professor Scott Galloway, sent to me by a reader. In it, Galloway considered the various “fixes” that have been suggested in the wake of continuing revelations about the degree to which Facebook and other social media platforms have facilitated America’s divisions.

There have been a number of similar articles, but what Galloway did better than most was explain the origin of Section 230 of the Communications Decency Act in language we non-techie people can understand.

In most industries, the most robust regulator is not a government agency, but a plaintiff’s attorney. If your factory dumps toxic chemicals in the river, you get sued. If the tires you make explode at highway speed, you get sued. Yes, it’s inefficient, but ultimately the threat of lawsuits reduces the need for regulation; it’s a cop that covers a broad beat. Liability encourages businesses to make risk/reward calculations in ways that one-size-fits-all regulations don’t. It creates an algebra of deterrence.

Social media, however, is largely immunized from such suits. A 1996 law, known as “Section 230,” erects a fence around content that is online and provided by someone else. It means I’m not liable for the content of comments on the No Mercy website, Yelp isn’t liable for the content of its user reviews, and Facebook, well, Facebook can pretty much do whatever it wants.

There are increasing calls to repeal or reform 230. It’s instructive to understand this law, and why it remains valuable. When Congress passed it — again, in 1996 — it reasoned online companies were like bookstores or old-fashioned bulletin boards. They were mere distribution channels for other people’s content and shouldn’t be liable for it.

Seems reasonable. So–why the calls for its repeal? Galloway points to the multiple ways in which the information and communication environments have changed since 1996.

In 1996, 16% of Americans had access to the Internet, via a computer tethered to a phone cord. There was no Wi-Fi. No Google, Facebook, Twitter, Reddit, or YouTube — not even Friendster or MySpace had been birthed. Amazon sold only books. Section 230 was a fence protecting a garden plot of green shoots and untilled soil.

Today, as he points out, some 3 billion individuals use Facebook, and 57 percent of the world’s population uses some sort of social media. Those are truly astonishing numbers.

I have previously posted about externalities–the ability of manufacturers and other providers to compete more successfully in the market by “offloading” certain of their costs to society at large. When it comes to social media, Galloway tells us that its externalities have grown as fast as the platforms’ revenues–and thanks to Section 230, society has borne the costs.

In sum, behind the law’s liability shield, tech platforms have morphed from Model UN members to Syria and North Korea. Only these Hermit Kingdoms have more warheads and submarines than all other nations combined.

As he points out, today’s social media has the resources to play by the same rules as other powerful media. Bottom line: We need a new fence. We need to redraw Section 230 so that it protects society from the harms of social media companies without destroying their usefulness or economic vitality.

What we have learned since 1996 is that Facebook and other social media companies are not neutral platforms. They aren’t bulletin boards. They are rigorously managed–personalized for each user, and actively boosting or suppressing certain content. Galloway calls that “algorithmic amplification,” and it didn’t exist in 1996.

There are evidently several bills pending in Congress that purport to address the problem–aiming at the ways in which social media platforms weaponize these algorithms. Any such approach will need to be drafted carefully, since regulating amplification raises credible concerns about chilling free expression.

Reading the essay gave me some hope that we can deal–eventually–with the social damage being inflicted by social media. It didn’t, however, suggest a way to counter the propaganda spewed daily by Fox News or Sinclair or their clones…

Increasing Intensity–For Profit

Remember when Donald Rumsfeld talked about “known unknowns”? It was a clunky phrase, but in a weird way, it describes much of today’s world.

Take social media, for example. What we know is that pretty much everyone is on one or another (or many) social media platforms. What we don’t know is how the various algorithms those sites employ are affecting our opinions, our relationships and our politics. (Just one of the many reasons to be nervous about the reach of wacko conspiracies like QAnon, not to mention the upcoming election…)

A recent essay in the “subscriber only” section of Talking Points Memo focused on those algorithms, and especially on the effect of those used by Facebook. The analysis suggested that the algorithms were designed to increase users’ intensities and Facebook’s profits, designs that have contributed mightily to the current polarization of American voters.

The essay referenced recent peer-reviewed research confirming something we probably all could have guessed: the more time people spend on Facebook, the more polarized their beliefs become. What most of us wouldn’t have guessed is the finding that the effect is five times greater for conservatives than for liberals–an effect that was not found for other social media sites.

The study looked at the effect on conservatives of Facebook usage and Reddit usage. The gist is that when conservatives binge on Facebook, the concentration of opinion-affirming content goes up (more consistently conservative content), but on Reddit it goes down significantly. This is basically a measure of an echo chamber. And remember, too, that these are both algorithmic, automated sites. Reddit isn’t curated by editors. It’s another social network in which user actions, both collectively and individually, determine what you see. If you’ve never visited Reddit, let’s just say it’s not for the faint of heart. There’s stuff there every bit as crazy and offensive as anything you’ll find on Facebook.
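That “concentration of opinion-affirming content” can be pictured with a toy metric: the share of a user’s feed that matches the user’s existing leaning. This is an illustrative sketch only–the feeds and labels below are invented, and the actual study used a far more careful methodology:

```python
# Toy "echo chamber" metric: the fraction of feed items that affirm the
# user's existing leaning. Invented for illustration; NOT the study's
# actual measure.

def affirmation_share(feed, user_leaning):
    """Return the fraction of feed items whose leaning matches the user's."""
    if not feed:
        return 0.0
    matches = sum(1 for item in feed if item["leaning"] == user_leaning)
    return matches / len(feed)

# Hypothetical feeds served to a conservative user after heavy use of each site.
facebook_feed = [{"leaning": "conservative"}] * 8 + [{"leaning": "liberal"}] * 2
reddit_feed = [{"leaning": "conservative"}] * 4 + [{"leaning": "liberal"}] * 6

print(affirmation_share(facebook_feed, "conservative"))  # 0.8 -- echo chamber
print(affirmation_share(reddit_feed, "conservative"))    # 0.4 -- more mixed
```

On this toy measure, the study’s finding amounts to the first number rising the more a conservative user binges, while the second number falls.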

The difference is in the algorithms and what the two sites privilege in content. Read the article for the details, but the gist is that Reddit focuses more on interest areas and viewers’ subjective evaluations of quality and interestingness, whereas Facebook focuses on intensity of response.

Why the difference? Reddit is primarily a “social” site; Facebook is an advertising site. Its interest in stoking intensity is in service of that advertising–the more time you spend on the platform, and especially the more intensely you engage with it, the more profit Facebook makes.
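The contrast can be sketched as two different ranking objectives. Everything below is invented for illustration–neither platform publishes its real scoring formula, and the field names and weights are assumptions:

```python
# Two toy ranking functions illustrating the difference described above.
# All weights and field names are invented; neither platform publishes
# its actual formula.

def engagement_rank(post):
    """Intensity-driven objective: strong reactions and shares dominate,
    whether the response is approval or outrage."""
    return (3.0 * post["angry_reactions"]
            + 3.0 * post["love_reactions"]
            + 2.0 * post["shares"]
            + 1.0 * post["comments"])

def interest_rank(post):
    """Quality-driven objective: net community evaluation (upvotes minus
    downvotes) within a topic the user chose to follow."""
    return post["upvotes"] - post["downvotes"]

outrage_bait = {"angry_reactions": 500, "love_reactions": 20, "shares": 300,
                "comments": 400, "upvotes": 80, "downvotes": 600}
useful_post = {"angry_reactions": 2, "love_reactions": 90, "shares": 40,
               "comments": 60, "upvotes": 700, "downvotes": 30}

# Under an intensity objective the outrage bait wins; under a net-quality
# objective it loses badly.
print(engagement_rank(outrage_bait) > engagement_rank(useful_post))  # True
print(interest_rank(outrage_bait) > interest_rank(useful_post))      # False
```

The point of the sketch is that neither function is “curated by editors”–both are automated–but they reward very different behavior, which is the essay’s explanation for the diverging echo-chamber results.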

Facebook argues that the platform is akin to the telephone; no one blames telephones when people use them to spread extremist views. It argues that the site is simply facilitating communication. But–as the essay points out–that’s clearly not true. Facebook’s algorithms are designed to encourage and amplify certain emotions and responses–something your telephone doesn’t do. It’s a “polarization/extremism generating machine.”

The essay ends with an intriguing–and apt–analogy to the economic description of externalities:

Producing nuclear energy is insanely profitable if you sell the energy, take no safety precautions and dump the radioactive waste into the local river. In other words, if the profits remain private and the costs are socialized. What makes nuclear energy an iffy financial proposition is the massive financial costs associated with doing otherwise. Facebook is like a scofflaw nuclear power company that makes insane profits because it runs its reactor in the open and dumps the waste in the bog behind the local high school.

Facebook’s externality is political polarization.

The question–as always–is “what should we do about it?”