by Phyllis Raybin Emert
Freedom of speech is a foundational principle of the United States. Social media has changed the landscape of free speech, but essentially the same rules apply.
The First Amendment to the U.S. Constitution states: “Congress shall make no law…abridging the freedom of speech, or of the press…” Free speech on the Internet receives the same First Amendment protection as traditional print and broadcast media.
The U.S. government cannot ban Internet speech through congressional legislation, since doing so would violate the First Amendment. Some European countries regulate online speech through their own laws, but in America, it is largely up to the social media platforms to regulate themselves.
Lata Nott, an attorney and executive director of the First Amendment Center of the Freedom Forum Institute in Washington, D.C., explains that the First Amendment prevents the government from censoring or punishing anyone for speech, and that protected speech is not limited to what someone says out loud or what is printed in books or newspapers. Freedom of speech also includes freedom of expression, which conveys ideas and different points of view through symbolic means, such as artwork or films. Nott points out that the First Amendment’s restrictions do not apply to private organizations or companies.
“Social media platforms [like Facebook, Twitter, Instagram, and YouTube] are private companies, so they don’t have to comply with the First Amendment,” Nott says.
“They get to set their own rules and policies about what speech they’ll allow on their sites. That’s actually their First Amendment right.”
Fighting hate speech
Several violent and racist events have taken place in recent years. In August 2017 in Charlottesville, Virginia, a neo-Nazi supporter deliberately drove his car into peaceful civil rights protesters, killing one and injuring more than a dozen. In October 2018, a gunman killed 11 Jewish worshippers at a synagogue in Pittsburgh. In March 2019, a gunman attacked two mosques in Christchurch, New Zealand, killing 51 people and injuring dozens while streaming the shootings live on Facebook. Weeks after the New Zealand attacks, the U.S. House of Representatives Judiciary Committee held a hearing to explore the spread of white nationalism through social media.
House Judiciary Chairman Jerrold Nadler, who oversaw the hearing, told legislators that online hate speech and the rise of white supremacists are “an urgent crisis in our country,” though one witness who testified at the hearing accused Congress of “fear mongering.”
The hearing was livestreamed on YouTube with a live chat posted alongside it. Approximately 30 minutes into the hearing, YouTube had to disable the comments section because users were posting anti-Semitic comments, including claims that white nationalism is not a form of racism. Some of these comments were read aloud during the hearing as evidence of the problem’s scope. One comment, from someone with the screen name Fight White Genocide, said, “Anti-hate is a code word for anti-white.”
“Hate speech, whether it’s online or out loud, is protected by the First Amendment, unless it’s a truly threatening statement,” Nott says. “That means that the government can’t arrest or otherwise punish someone for making a hateful post online, but the online platform is still free to remove that post or ban the user if it chooses to.”
Tech companies take action
Over the last few years, tech companies and social media sites have slowly begun to curb extremist groups’ access to their platforms. Facebook and its subsidiary, Instagram, banned white supremacist content from their sites. In March 2019, the company expanded that ban to include white nationalist and white separatist content.
In a post titled “Standing Against Hate,” Facebook stated: “…[W]hite nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups. Our own review of hate figures and organizations—as defined by our Dangerous Individuals & Organizations policy—further revealed the overlap between white nationalism and separatism and white supremacy. Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism.”
In May 2019, Facebook and Instagram banned seven of their most divisive and controversial users under the company’s Dangerous Individuals policy, including noted conspiracy theorist Alex Jones and Nation of Islam leader Louis Farrakhan, who is known for his anti-Semitic remarks. In a statement, Facebook said: “We’ve always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology. The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts today.”
In June 2019, YouTube removed thousands of videos and channels from its site that advocated bigoted ideologies and instituted a policy that bans “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion.”
While some have called the bans discriminatory, Paul Barrett, the deputy director of New York University’s Stern Center for Business and Human Rights, told The New York Times, “The social media companies not only have the right but an ethical responsibility to remove disinformation and hate speech and those who spread it from their platforms.”
Section 230 protections
Section 230 of the Communications Decency Act protects social media platforms like Facebook from being sued for what third parties post on their sites. As an example, suppose a customer leaves a scathing, even libelous, restaurant review on Yelp.
According to Nott, the restaurant owner can sue the customer, but according to Section 230, the owner can’t sue Yelp. The rationale behind the protection, Nott says, is that Yelp cannot be expected to fact-check all of the reviews posted on its platform and shouldn’t be responsible for its users’ actions.
“If you took away its Section 230 protection,” Nott says, “the likely outcomes are that: 1) Yelp would be sued out of existence; 2) Yelp would remove any remotely negative reviews from its site to avoid being sued out of existence; and 3) Yelp would only allow a small and specific group of users to write content for its site, like a newspaper or magazine.”
Social media platforms are in the same category as Yelp. For instance, YouTube estimates that more than 500 hours of new content are uploaded to its site every minute. The company uses algorithms to search for offensive videos, but they can’t catch all of them.
“Without Section 230,” explains Nott, “no company could afford to provide a platform where anyone and everyone could freely express their views.” She also notes that Section 230 doesn’t just protect the big media platforms but also small bloggers “from being liable for the comments posted by visitors.”
The First Amendment doesn’t apply to private companies like Facebook, which means they are allowed to set their own rules about who can and can’t use their platforms. That fact does raise concerns for Nott.
“It means that a small group of private companies have a lot of power over what speech gets heard and what speech doesn’t,” Nott says. “While that might not violate the First Amendment, it is something that people who value free speech should pay attention to [in the future].”
Discussion questions
- Should all speech be allowed online, even hate speech? Explain your reasoning.
- The article talks about Section 230 of the Communications Decency Act. Do you agree or disagree with Section 230? Who do you think should be held responsible for comments made on social media platforms?
- Should Facebook and other social media platforms be able to decide what is and is not hate speech? Why or why not?
Glossary
algorithm: a process or set of rules to be followed in calculations or other problem-solving operations.
anti-Semitic: hostile or prejudiced against Jewish people.
ideology: principles or a way of thinking that is characteristic of a political system.
libelous: containing a false written statement that damages a person’s reputation.
This article originally appeared in the spring 2020 edition of Respect.