Regulating Hate Speech Qua Speech Is Not The Solution To The Epidemic Of Hate On The Internet - Part I

Part II of this article appears in the September 2004 issue of The Metropolitan Corporate Counsel. The following article is adapted from a paper given by Christopher Wolf, Chair of the Internet Task Force of the Anti-Defamation League, at the Organization for Security and Cooperation in Europe Meeting on the Relationship Between Racist, Xenophobic and Anti-Semitic Propaganda on the Internet and Hate Crimes, held in Paris, June 16-17, 2004. Mr. Wolf wishes to acknowledge Marcella Ballard and Cynara Hermes, lawyers at Proskauer Rose LLP, for their assistance in the preparation of this piece. Mr. Wolf's practice focuses on Internet law, privacy and commercial litigation.

I am deeply gratified that this important international conference is taking place. And I applaud the organizers for focusing on the powerful impact and powerful problems presented by the relationship between hate crimes, hate propaganda and the misuse of the Internet by hate groups.

As Chair of the Internet Policy Committee of the Anti-Defamation League, I am well aware of the means by which the Internet may be misused to disseminate messages of hate and violence. As a lawyer specializing in Internet law, I am also well aware of the challenges faced by legislators, law enforcement and national governments to keep up with the daily barrage of hate propaganda on the Internet. As an individual concerned about the growing misuse of the marvelous medium which is the Internet, I am particularly pleased to participate in this conference and discuss the ways in which law, technology, education and guidance may be used to positively affect the Internet without interfering with the free flow of ideas and information.

The Internet is now an entrenched presence in our society. We all know and appreciate that the Internet has transformed the ways in which we communicate, educate, inform and entertain. But there is a dark side to the Internet. Terrorists, anti-Semites, racists, homophobes and other haters have logged on and are online. The Internet has rapidly transformed the way people worldwide communicate. The ability to send information instantaneously, at any time, for little or no cost is truly revolutionary. But the opportunities to use this medium for unlawful activity have grown as well. Unfortunately, the Internet has become the new frontier in spreading hate.

From web sites spewing vitriolic messages of hate and recruiting the young to join their sponsors' organizations, to technology attacks, to the consummation of terrorist conspiracies through e-mail, bulletin boards, extranets and downloadable files containing target coordinates and recipes for bombs, the Internet has become a potent tool for the spread of hate and violence. The Internet is an especially inviting host for the virus of hate. Instead of standing on a street corner handing out mimeographed leaflets, hate mongers may now promote their causes at sites on the World Wide Web and in chat rooms. The Internet facilitates communication among like-minded bigots across borders and oceans and enhances their ability to promote their causes and recruit for them anonymously and cheaply.

What can be done? In the United States, judges have struggled over the electronic dissemination of hate speech because, although it is offensive and hurtful, the First Amendment to the United States Constitution protects such expression. When speech contains a direct, credible threat against an identifiable individual, organization, or institution, however, it crosses the line into criminal conduct. Hate speech containing criminal threats is not protected by the First Amendment. And although criminal cases concerning hate speech on the Internet are few in number, there is much to learn from the first few successful prosecutions of hate speech in the United States. Where do courts draw the line? By looking at what has been done already and what can be prohibited online, we may gain an understanding of the limits on Internet hate speech, and the problems inherent in seeking only legal means to stop it. Just last week, a Federal jury in Idaho deadlocked over these very issues in a fascinating case, which I will discuss in a moment.

Historically, American lawmakers have generally taken a "hands off" approach to hate on the Internet, recognizing the breadth of First Amendment protections. Internet legislation has largely focused on sexually explicit materials deemed harmful to minors, and much of that legislation has been struck down as overly broad and violative of Constitutional free speech protections. Still, pre-Internet era laws prohibiting obscene materials, threats of imminent violence and violations of civil rights have been applied to the Internet, and have withstood challenge. The majority of the reported cases arise from e-mail messages containing threats. In one, a barrage of online anti-Asian epithets directed at Asian college students resulted in the successful prosecution of the sender. In another, a college student who proclaimed his hatred of homosexuals and threatened to "shoot" them in the "back of the ... head" was successfully enjoined from continuing to disseminate such hate speech.

U.S. courts have been reluctant to draw a line limiting expressions of hate speech, as exhibited by the historic case in Skokie, Illinois, which allowed a "Nazi parade" to march through the streets of the predominantly Jewish town.1 The U.S. Supreme Court later confirmed its reluctance to permit regulation of hate speech when it struck down a Minnesota city ordinance banning speech that "arouses anger, alarm or resentment in others on the basis of race, color, creed, religion, or gender." The Court noted that government may not impose special prohibitions on those speakers who express disfavored views.

The dawn of hate on the Internet has wreaked havoc on American society, with a marked increase in hate crimes. Online recruiting has helped many hate groups increase their membership, and those groups have been linked to violence against Jews, African-Americans, and homosexuals. In fact, Don Black, former Grand Dragon of the Ku Klux Klan, noted that, "as far as recruiting, [the internet has] been the biggest breakthrough I've seen in the 30 years I've been involved in [white nationalism]."2 Moreover, perpetrators of Internet hate crimes are not hampered by national or international boundaries, because information can be transmitted worldwide with ease through communications and data networks. Even though connections may be of short duration, most computers are physically located in identifiable places. Of course, computers can be accessed remotely, regardless of the location of the persons who post, send, view, or receive information online.

The leading case that sought to establish the line with respect to hate speech online3 involved Neal Horsley, in conjunction with the American Coalition of Life Activists (the "ACLA"), who created an anti-abortion site known as "The Nuremberg Files." The ACLA Web site offered extensive personal information about abortion providers: pictures; addresses and phone numbers; license-plate numbers; Social Security numbers; names and birth dates of spouses and children. Viewers were exhorted to send photos, videotapes and data on "the abortionist, their car, their house, friends, and anything else of interest."

The ACLA Web site said that the information garnered would be used to prosecute abortion providers when abortion becomes illegal, just as Nazi leaders were prosecuted after the Second World War. The list of abortion providers at The Nuremberg Files site read like a list of targets for assassination. Names listed in plain black lettering were of doctors still "working"; those printed in "Greyed-out" letters were "wounded"; and those names that were crossed out ("Strikethrough") indicated doctors who had been murdered ("fatality").

The "Nuremberg Files" trial court wrestled with the issue of whether the Web site constituted protected speech under the First Amendment or qualified as a "true threat."4 At trial the jury found that the ACLA Web site was a threat to plaintiffs and ordered the Web site owners and operators to pay plaintiffs over $100 million in damages. The court then issued a permanent injunction to prevent the defendants from providing additional information to the "Nuremberg Files" Web site.5

The Appellate Court unanimously reversed, holding that the defendants' Web site was a lawful expression of views protected by the First Amendment. The panel concluded that "unless [the ACLA] threatened that its members would themselves assault the doctors, the First Amendment protects its speech."6 The court later decided to rehear the case en banc and, in an 8-3 decision, held that the Web site constituted a true "threat of force" and was not protected by the First Amendment.7

Thus the appellate court, sitting en banc, held that a true threat, that is, a statement where a reasonable person would foresee that the listener would believe he would be subjected to physical violence, is unprotected under the First Amendment. Responding to arguments made by amicus curiae the ACLU, which asked the court to adopt a subjective test, the court stated that it is not necessary for the defendant to intend to, or be able to, carry out his threat. Rather, "[t]he only intent requirement is that the defendant intentionally or knowingly communicate the threat." The court added: "The Nuremberg Files go beyond merely offensive or provocative speech. . . . As a result, we cannot say that it is clear as a matter of law ... [these] are purely protected, political speech."

Thus, after the Nuremberg Files case, it is clear that hateful content will not be protected when it knowingly and intentionally communicates a credible threat. The few other cases involving hate speech on the Internet in the United States have also wrestled with where to draw the line. The following cases present stark examples of vitriolic hate speech in which the sender was successfully prosecuted or enjoined because of the threatening content of his message. In each instance, a knowing and intentional threat was communicated via the Internet.

United States v. Machado

The case of United States v. Machado is one of the first examples of a successful prosecution of hate online. Richard Machado, a 21-year-old expelled college student, sent a threatening e-mail message to 60 Asian students: "I personally will make it my life career [sic] to find and kill everyone one [sic] of you personally. OK?????? That's how determined I am ..." Machado's first trial ended in a hung jury. A retrial in 1998 resulted in Machado's conviction for interference with federally protected activities in violation of a federal statute.8 He was sentenced to one year in prison, to be followed by a one-year period of supervised release.9 Machado appealed his conviction on grounds relating to peremptory challenges, but the appellate court rejected his argument.10

State v. Belanger

Casey Belanger, a 19-year-old freshman, posted his resume on the university's computer network, including a statement that he "dislike[d] fags." Later that same day, Belanger posted a message to student groups affiliated with gay and lesbian causes, which stated [expletives deleted]: "I hope that you die screaming in hell...you'd [sic] better watch your...back you little...I'm [sic] gonna shoot you in the back of the...head...die screaming [name of student], burn in eternal...hell. I hate gay/lesbian/bisexuals, so...what..."11 The State Attorney General brought an action against Belanger under the Maine Civil Hate Crime Act, seeking an injunction requiring the student to refrain from threatening any person because of the person's sexual orientation, race, color, religion, ancestry, sex, national origin, or physical or mental disability. The court issued a permanent injunction.

Commonwealth of Pennsylvania v. Alpha HQ

A year after the Belanger case, in 1998, Ryan Wilson, a white supremacist, started a Web site for his racist organization, ALPHA, depicting a bomb destroying the office of a fair housing specialist who regularly organized anti-hate activities. Next to her picture, the ALPHA Web site stated, "Traitors like this should beware, for in our day, they will be hung from the neck from the nearest tree or lamp post." Wilson was charged by the Attorney General of the Commonwealth of Pennsylvania with threats, harassment, and ethnic intimidation. Wilson did not contest the State's action under Pennsylvania's Civil Hate Crimes Act; the site was removed from the Internet, and the court issued an injunction barring the defendant and his organizations from displaying certain messages on the Internet.

1 Nat'l Socialist Party v. Village of Skokie, 432 U.S. 43, 44 (1977).
2 Andrew Backover, "Hate Sets up Shop on Internet," Denver Post, Nov. 8, 1999, at E-01.
3 23 F. Supp. 2d 1182 (D. Or. 1999); 41 F. Supp. 2d 1130 (D. Or. 1999) (vacated and remanded); 244 F.3d 1007 (9th Cir. 2001) (reh'g en banc granted); 268 F.3d 908 (9th Cir. 2001) (affirmed in part, vacated in part and remanded); 290 F.3d 1058 (9th Cir. 2002).
4 ACLA, 23 F. Supp. 2d at 1188.
5 41 F. Supp. 2d 1130, 1153 (D. Or. 1999).
6 244 F.3d at 1015.
7 290 F.3d 1058, 1063 (9th Cir. 2002) (Judge Alex Kozinski, author of the panel opinion below, joined in the dissent).
8 United States v. Machado, 195 F.3d 454 (9th Cir. 1999).
9 Id. at 455.
10 Id. at 457.
11 Anti-Defamation League, Investigating Hate Crimes on the Internet, 2003.
