The Boundless Web: Navigating Legal Implications and Internet Expression
By Lien Phuong Pham
“Congress shall make no law respecting an establishment of religion or prohibiting the
free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the
people peaceably to assemble, and to petition the Government for a redress of grievances.”
(U.S. Constitution Amend I.)
No matter how little one knows about American law or the Constitution, the First Amendment is widely known across the country. The right to free speech is a right to which every American is entitled, regardless of race, gender, or identity. The words of the Founding Fathers, guaranteeing the freedom to express oneself, still bear a profound effect today. The amendment not only established our first rights as citizens but also set a precedent for the rest of American society. However, does freedom of speech have limits? In the growing age of technology, social media is vital to keep up with. Whether for news or entertainment, the media is constantly updating and ever advancing, and nearly everyone uses social platforms to express their thoughts and opinions. Yet the law has struggled to keep pace with the internet, and thus with the question of self-expression on social media. This essay will discuss the legal implications of free speech and its application to the internet.
On January 1st, 2017, Abdulkadir Masharipov carried out a terrorist attack at the Reina nightclub in Istanbul, Turkey, killing 39 people and injuring 69 others. Among the victims was Nawras Alassaf. The day after the incident, the Islamic State of Iraq and Syria (ISIS), a foreign terrorist organization that has accumulated a cult-like following on social platforms through internet algorithms, claimed that Masharipov, the gunman, had carried out the attack on its behalf. Rather than sue ISIS directly, Alassaf's family sued three of the largest social media companies in the world, Facebook, Twitter, and Google (the owner of YouTube), alleging that the companies' failure to remove ISIS content made them liable for the deaths in the attack. The family claimed that ISIS and its adherents had used the platforms' algorithms as tools for recruiting, fundraising, and spreading propaganda, and sought damages under 18 U.S.C. §2333(d)(2), an Anti-Terrorism Act provision that permits U.S. nationals injured by "acts of international terrorism" to sue those who aided and abetted such acts. The district court dismissed the complaint for failure to state a claim, but the Ninth Circuit reversed, finding that the plaintiffs had plausibly alleged that the defendants were secondarily liable for the Reina nightclub attack. The case, Twitter, Inc. v. Taamneh, was then brought before the Supreme Court. On May 18th, 2023, after careful consideration, the nine Justices unanimously held that the plaintiffs had failed to state a claim against the platforms. Writing for the Court, Justice Clarence Thomas reasoned that the companies had treated the ISIS accounts like those of any other users on their platforms. Drawing on the legal framework of Halberstam v. Welch (Casetext, n.d.), the Court noted that aiding-and-abetting liability rests on three core elements: first, "the party whom the defendant aids must perform a wrongful act that causes an injury"; second, "the defendant must be generally aware of his role as part of an overall illegal or tortious activity at the time that he provides the assistance"; and third, "the defendant must knowingly and substantially assist the principal violation." Because the plaintiffs' allegations did not satisfy these elements, the judgment of the Court of Appeals was reversed. (Supreme Court, 2023) On the very same day, the Court disposed of Gonzalez v. Google, a similar case in which the plaintiffs had asked the Court to fundamentally re-examine Section 230 of the Communications Decency Act, which grants computer services immunity from liability for content generated by third parties; the Court declined to address the scope of Section 230 and returned the case to the lower courts in light of its decision in Taamneh.
Whilst it may seem insignificant, the outcome of both cases raises a pressing question: what is the borderline for free speech on social media? Over the past decades, Section 230 of the Communications Decency Act has been the primary legal principle governing the internet. To ensure that individuals can participate in the innovative age of social media, Section 230 declares that platforms and their users will not be held liable for illegal content posted online by other users, while also encouraging content moderation. It has accordingly been widely recognized as one of the most important laws contributing to the success of social media, particularly large-scale recommendation algorithms. In the years since its passage, many other countries have adopted similar liability laws to ensure that the internet lives up to its full potential. (O'Hara, 2023) Despite the significance of the algorithms behind apps like TikTok and Instagram, they can also be a double-edged sword, exposing internet users to disinformation, organized violence, and harassment when public figures decide to abuse them. The controversial media personality Andrew Tate is a concerning example of this problem. Tate, a former kickboxer who rose to popularity for his views on women, has used social media platforms and their algorithms to gather a loyal fanbase of impressionable teenage boys to whom he spreads his outlook. His misogynistic remarks, in particular, rely on openly discriminatory language. After being prominent on popular media services for some time, Tate was finally banned from YouTube, Twitter, Facebook, and Instagram. In a statement, YouTube disclosed: "We have permanently terminated channels associated with Andrew Tate for multiple violations of our Community Guidelines and Terms of Service, including our hate speech policy." (Wilson & Danise, 2022) What is unsettling about this situation is that Tate remained on these platforms for months before being banned, allowing him to accumulate millions of followers and to propagandize unhealthy, toxic gender stereotypes. Tate is not the only one who has gotten away with hate speech; others, such as Sneako, Kanye West, and Pearl Davis, have done the same. (Horowitz, 2023) Although most of the influencers listed here are now banned, it is disturbing how slowly internet platforms remove this kind of discriminatory content. Hate speech has been reported to increase violence and intolerance, threatening peace around the world. (United Nations, n.d.) When users regularly post speech that exceeds the limits platforms themselves set, the result is a negative environment that harms the digital space for everyone.
Moreover, social media plays a pivotal role in organizing violence. On January 6, 2021, a mob of radical supporters of former president Donald J. Trump attacked the Capitol to disrupt the congressional certification of the most recent presidential election. Because the COVID-19 pandemic posed an imminent threat to public health, the 2020 presidential election was conducted with significant changes designed to give American citizens a safe and fair way to vote. Several states postponed primary elections or changed their voting procedures, including extending early voting periods and loosening requirements for casting absentee ballots. These changes infuriated Trump, who falsely claimed that Democrats were using "fraudulent ballots" to "rig" the election by systematically forging, altering, or discarding absentee ballots, among other means. Throughout his 2020 candidacy, Trump maintained the false narrative that the Democratic Party was rigging the election and spread disinformation about the ongoing counting of absentee ballots. After Biden won the presidency, Trump and several Republican members of Congress both directly and indirectly encouraged a large crowd of his supporters near the White House to march to the Capitol. (Duignan, n.d.) At the same time, a similar stance was circulating on social media. In the days before the January 6 assault, violent messages appeared on various internet sites: maps of the U.S. Capitol, bridges into D.C., suspected police checkpoints, and strategies for illegally sneaking firearms into the city were all shared on platforms like Twitter, Instagram, and Facebook, clearly intended to mobilize extremists to riot. (Davies, 2023) The January 6th attack is still widely considered an act of domestic terrorism. The internet played a significant role in organizing the violence at the Capitol that day, endangering the lives of many lawmakers and representatives, and it ultimately shows how media can be used for illegal purposes if not moderated adequately.
Are our media services really doing an adequate job of providing a positive digital space? And what does that mean for us as a society? Today, technology seems to advance at a speed the law cannot match. Andrea Matwyshyn, a professor at the University of Pennsylvania's Wharton School who tracks the intersection of law and technology, estimates that "the law is at least five years behind technology as it is developing." (Tanneeru, 2009) This estimate leaves many people uneasy, making them question the law's ability to adapt. The law is incredibly slow; some critics even call it outdated. It is understandable, then, why it cannot keep pace with the constantly growing internet. The examples above show that, under the current patchwork of platform rules, the development of cyber law has been stagnant. The law has yet to address subjects like harassment or misinformation on social media, so individual platforms must create their own policies to maintain an orderly digital space. Because of this slow moderation, or in some cases outright lack of regulation, users can exploit claimed First Amendment rights on the internet to harass and discriminate against others.
To conclude, although laws have been established to encourage moderation and to grant platforms immunity for content posted by their users, corporations fail to properly regulate misconduct that violates their own terms of service. Without trusted institutions moderating it, the media does not work properly. The internet is a crucial part of everyday life; it is where we spend most of our free time, where we follow global events, and where we turn when we need help. Despite this, our current lawmakers and laws do not treat it with the importance it deserves, and they neglect to set strict rules on what the internet can and cannot do, altogether hindering the healthy development of social media. If we want to foster a healthy and vibrant digital public sphere, the law will need to keep up with technological advancements to progress American society even further.
Works Cited
Casetext. “Halberstam v. Welch, 705 F.2d 472.” n.d., https://casetext.com/case/halberstam-v-welch. Accessed 24
July 2023.
Davies, Dave. “Social media's role in Jan. 6 was left out of the final report.” NPR, 26 January 2023,
https://www.npr.org/2023/01/26/1151360750/social-medias-role-in-jan-6-was-left-out-of-the-final-report. Accessed
26 July 2023.
Duignan, Brian. “January 6 U.S. Capitol Attack | Background, Events, Criminal Charges, & Facts.” Britannica, n.d.,
https://www.britannica.com/event/January-6-U-S-Capitol-attack. Accessed 26 July 2023.
Horowitz, Justin. “Beyond Andrew Tate: Meet the misogynistic "manosphere" influencers proliferating across social
media.” Media Matters for America, 16 March 2023,
https://www.mediamatters.org/diversity-discrimination/beyond-andrew-tate-meet-misogynistic-manosphere-
influencers-proliferating. Accessed 25 July 2023.
O'Hara, Kelly. “Why you should care about Section 230.” Internet Society Organization, 13 June 2023,
https://www.internetsociety.org/blog/2023/02/what-is-section-230-and-why-should-i-care-about-it/. Accessed 24
July 2023.
Supreme Court. “21-1496 Twitter, Inc. v. Taamneh (05/18/2023).” Supreme Court, 18 May 2023,
https://www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf. Accessed 24 July 2023.
Tanneeru, Manav. “Can the law keep up with technology?” CNN, 17 November 2009,
http://www.cnn.com/2009/TECH/11/17/law.technology/index.html. Accessed 24 July 2023.
United Nations. “Say #NoToHate - The impacts of hate speech and actions you can take | United Nations.” The
United Nations, n.d., https://www.un.org/en/hate-speech. Accessed 27 July 2023.
Wilson, Josh, and Amy Danise. “The Downfall Of Andrew Tate And Its Implications.” Forbes, 30 August 2022,
https://www.forbes.com/sites/joshwilson/2022/08/30/the-downfall-of-andrew-tate-and-its-implications/. Accessed
24 July 2023.