Guest Opinion: Internet bias is bad, state ‘neutrality’ is worse

Published 10:43 pm Monday, August 5, 2019

by Dr. Brian Dellinger

Tony Wang, the general manager of Twitter, declared in 2012 that the company was “neutral as to the content” of speech, noting “[we] like to say that we are the free speech wing of the free speech party.” The same year, Facebook CEO Mark Zuckerberg praised international leaders “who are pro-internet and fight for the rights of their people, including the right to share what they want and the right to access all information that people want to share with them.”

Events since then have cast doubt on the sincerity of these claims. In March 2019, Zuckerberg penned an op-ed calling for government regulation of “harmful content” on the internet. For his part, Twitter CEO Jack Dorsey dismissed Wang’s remarks as “a joke” that was “never a mission of the company.” Matching deed to word, in recent months social media companies have banned, suspended, or demonetized multiple accounts out of step with progressivism. Among these are journalist Meghan Murphy, banned from Twitter for referring to transgender persons by their biological sex, the account for the pro-life film Unplanned, YouTube streamer Steven Crowder, and conservative investigative group Project Veritas.


Right-leaning voices have argued that these actions show explicit political bias; other commentators have raised concerns about the social media move towards “curated” information feeds. In the past, a site like Facebook or Twitter might present a user with a chronological list of activity by accounts he follows. Now, such sites algorithmically filter and reorder the data, presenting only some of the activity and strategically selecting which items to present first. Thus, a company could invisibly skew perception of an event simply by consistently suggesting news stories or posts that favor one side of a political issue, while quietly filtering out other perspectives.

Whether social media giants have indeed injected political bias in this way is a subject of debate. Multiple organizations analyzed Facebook’s recent curation changes and concluded that, overall, left-leaning news sites saw increased traffic as a result, while right-leaning ones saw declines. The overt bans are defensible to varying degrees; Crowder’s videos, for instance, repeatedly refer to a Latino journalist as an “anchor baby” and “lispy queer.” Even as it demonetized Crowder, however, YouTube acknowledged that “[his] flagged videos did not violate our Community Guidelines.” One might expect that to preclude financial punishment.

Definitively answering the question of bias requires access to the curation algorithms — access which companies, for whom the algorithms are a competitive advantage, are loath to grant. It is no stretch to suggest that Facebook might deliberately manipulate users by reordering their feeds, as the company did exactly that in 2014, but proving viewpoint discrimination would likely require inside information. (Ominously, Project Veritas’s Twitter ban followed their claims to possess such data about the photo site Pinterest.)

In the absence of definite proof, much of the discussion around social media bias has focused on the legal protections such sites enjoy, particularly those granted by Section 230 of the Communications Decency Act (CDA). Prior to the CDA, online platforms risked legal responsibility for the content of their users’ posts; if a user posted a defamatory message, the platform itself might have been charged with libel. Ironically, the best way for a site to reduce risk was to decline to moderate its users’ posts. A site that allowed users to post anything, without review, could claim to be simply a “distributor,” separate from its users’ speech. Meanwhile, one that made good-faith efforts to remove offensive or illegal material could be categorized as a “publisher” and so made liable for posts it failed to delete. Section 230 removed this legal risk, explicitly declaring that moderation did not make a site responsible for its users’ speech. In so doing, it formed the legal foundation for the modern environment of internet platforms.

As critique of these platforms has intensified, so has discussion of the CDA. Some conservative figures, such as Senator Ted Cruz, have claimed that Section 230 only protects “neutral public forums” without political bias. These claims are false; Section 230 does precisely the opposite, guaranteeing that moderation of whatever sort cannot make a site liable for users’ speech. Other Republicans have called for modifying Section 230 to add a neutrality requirement, or simply to repeal it altogether.

Such changes would be a mistake, albeit an understandable one. Republicans are right to be concerned with the moderation of social media sites. Misbehavior by such sites on a grand scale is nothing new; if a website eliminated certain political voices wholesale, its targets would have no clear legal recourse. To offer protection only under the umbrella of supposed neutrality, however, makes a bad situation worse. Conservatives would rightly balk at attempts to audit newspapers for bias, for the obvious reason: to do so makes control of the press into a political trophy for whichever party enjoys power. The same logic applies to social media. Indeed, it strains credulity to believe that either political party would neglect the opportunity to silence the most strident voices in the opposition under the auspices of “preserving neutrality.”

Unfortunately, these exact concerns also make repeal an alluring prospect. Both Lindsey Graham and Nancy Pelosi recently spoke against Section 230, and 2018’s overwhelmingly bipartisan FOSTA legislation has already weakened its protections. Congress should resist the temptation to weaken them further. The current environment allows at least the possibility of a plurality of social media voices. To pass control to government guarantees only one.

—Dr. Brian Dellinger is an assistant professor of computer science at Grove City College. His research interests are artificial intelligence and models of consciousness.