The First Amendment protects speech that advocates a person’s point of view – even if that point of view is based on hatred of a certain group of people or on anger or resentment towards a particular person. While many countries prohibit “hate speech” that attacks or maligns people based on certain characteristics or identities, the Constitution does not generally permit viewpoint- or content-based regulation of speech; U.S. courts have long held that allowing only certain opinions on a topic to be expressed is inherently unfair. Thus, the Supreme Court routinely strikes down state and federal “hate speech” legislation that includes viewpoint-based restrictions on speech. Similarly, the Court has held that laws against “incitement,” “fighting words,” and “true threats” must not curtail protected expression.
But some speech that might otherwise be protected as a point of view can be prohibited if a court determines that it is likely to incite violence. Speech “incites violence” under the Court’s definition if it encourages others to commit violent acts at a particular time and place. The Court differentiates between speech that serves as a general call for violent action and language that calls for violence at a specific time. Likewise, the Court has held that bans on “incitement” and “true threats” may criminalize only those statements where the speaker means to communicate a serious expression of intent to commit violence against a particular individual or group, and the specific threat is likely to be carried out soon.
Many other countries do have regulations that determine whether certain viewpoints can be expressed; these laws typically address speech that disparages a particular race, gender, ethnicity, religious affiliation, or nationality. This may pose an issue if a country can claim that its laws apply to you and your art.

The Basics
Under U.S. law, there are very few circumstances in which the government can regulate speech based on the speaker’s point of view. While many other countries have “hate speech” laws that prohibit speech that threatens or disparages people based on qualities such as race, gender, ethnicity, religious status, and sexual orientation, the U.S. Supreme Court has determined that this type of speech receives protection under the First Amendment, and the Court routinely strikes down such prohibitions. The law does permit regulation of some types of threatening speech, including speech that is likely to incite immediate violence or urges listeners to take specific violent action.
If your art includes criticism or unpopular opinions about a particular group or type of person, you may want to consider the information below. Also, every artist should keep in mind that the website or server that hosts your work may include prohibitions against certain forms of hate speech in its terms of service.
When your art appears to advocate violence against a specific person or group of people, a court might consider it to be “inciting violence.” Such speech is not protected by the First Amendment, and states can make laws prohibiting it. General encouragement of violence is not typically considered “incitement,” but speech with more specific threats or calls to action may be. Check out our section on violence to learn more.
The U.S. Supreme Court has also upheld laws that regulate “fighting words” – speech that tends to immediately breach the peace, or is likely to start a fight. Speech that counts as fighting words can include insults, threats, and derogatory slang based on personal characteristics, but is generally limited to face-to-face communication. Because the main characteristic of fighting words is the likelihood that they will start an immediate fight, this area of law is unlikely to be applied to Internet-based speech.
In many countries, certain types of viewpoint-based restrictions on speech are permitted. Prior to the development of the Internet, international law was rarely a concern for artists working only in the U.S. But now that any website can be accessed from almost any country, artists who choose to post or sell their work using the Internet should be aware of the potential issues.
Usually, a country’s government cannot enforce its laws against an individual unless that person has some kind of relationship with the country. Some nations have successfully argued that selling a product over the Internet to a resident of that nation makes the seller subject to the nation’s laws. In other cases, plaintiffs have obtained judgments against U.S. citizens in their home countries’ courts and then brought those judgments to the U.S. to be enforced. For an illustration of how this might work, and for more discussion of international issues, see the Important Laws and Cases section.
Terms of Service
“Terms of Service” are very common on the Internet. Every time you use a hosting site or service provider, whether you are uploading photos or video, posting your work on a blog or a website, or selling your work online, you are subject to the site or provider’s Terms of Service.
Most of the major content-hosting websites have rules in their terms against “objectionable content,” and it’s up to them to determine what is objectionable. They have a lot of discretion, though most content hosts use a common-sense standard to determine whether your content meets the site’s criteria. Site operators are more likely to consider content “objectionable” and remove it if they receive multiple complaints about it, or if it clearly violates explicit site policies.
If you’re not sure whether your art complies with your host’s terms, you can try posting it anyway and allow your host to determine whether your content violates their policies. You can also search for a content host that will not censor your content. For more information, read our discussion of Terms of Service Violations.
The U.S. Constitution protects speech as a fundamental right; the government may regulate speech only in limited circumstances. In recent years, several states and public universities have attempted to enact laws that prohibit insulting, threatening, or hateful speech related to race, gender, religion, sexuality, sexual orientation, and ethnicity, but the Supreme Court has typically struck down such laws if their main purpose was to suppress this type of expression. If suppression of speech was only an incidental effect of a law, however, the law might be allowed to stand.
Art that poses a specific threat to a person or group can cause problems under the law.
The more specific the threat, the more likely it is that a court will determine that it isn’t the type of expression protected under the First Amendment. Generally, only specific threats that target individuals at a particular time and place fall into this category, and the threat must be likely to be carried out soon. More general threatening language is protected under the First Amendment. See more in the section on violence.
Web hosts and service providers are the primary sources of online speech restriction and regulation.
Since these companies are usually private actors (and not part of the government), the First Amendment doesn’t apply to them in the same way. They are free to include provisions in their terms of service that allow them to censor content, and they have no obligation to post anyone’s work.
Outside the U.S., protection for speech regardless of its viewpoint is considerably weaker.
Canada, England, France, Germany, the Netherlands, South Africa, Brazil, Croatia, Denmark, Finland, Iceland, Ireland, Norway, Sweden, Switzerland, Australia, India, and other countries have laws banning certain types of speech, which may include speech that is considered derogatory to any group based on race, ethnicity, gender, sexual orientation, or religion. These laws have the potential to affect U.S.-based artists when their art is accessible in foreign countries. For more information, see the section on international issues.
The General Rule
In 1992, in R.A.V. v. City of St. Paul, 505 U.S. 377 (1992), the Supreme Court heard a challenge to a St. Paul, Minnesota ordinance that made it criminal disorderly conduct to “place on private property a symbol, object, appellation, character, or graffiti” that the person knew or had reasonable grounds to know “arouses anger, alarm, or resentment in others on the basis of race, color, creed, religion, or gender.” The challenge arose after two men were convicted under this ordinance for burning a cross on a black family’s lawn. The Court held that the ordinance was unconstitutional both as a content-based restriction on speech and as a viewpoint-based restriction. The Court also noted that there were other “content neutral” laws under which the men could be prosecuted.
Because the law did not prohibit fighting words in general, but rather prohibited only fighting words that “arouse anger on the basis of race, color, creed, religion, or gender,” the law regulated racist and sexist fighting words while leaving unregulated, for example, homophobic or politically focused fighting words. The government is not permitted to make this kind of distinction regarding the content of someone’s speech. And, because the law restricted only speech that arouses anger on the basis of the listed categories, it could result in viewpoint discrimination: a speaker who used a racial epithet would be held in violation of the law, while someone who argued “against racial bigotry” would not be. The law effectively assigned criminal penalties to the expression of certain points of view, and the Court struck it down as a violation of the First Amendment.
In Brandenburg v. Ohio, 395 U.S. 444 (1969), a man was convicted under an Ohio law that created criminal and civil penalties for advocating crime, violence, sabotage, or terrorism as a means of political or industrial reform, and for voluntarily assembling with people who gathered to teach or advocate doctrines of criminal syndicalism. The Supreme Court struck down this law, noting that “constitutional guarantees of free speech and free press do not permit a state to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”
This case overruled a prior decision, Whitney v. California, 274 U.S. 357 (1927), which had held that advocacy alone was enough to cause danger, and that states could therefore pass laws forbidding it. The implication of Brandenburg for artists who post their work on the Internet is that they have considerable freedom to express themselves under the First Amendment. As long as the work cannot be construed as directing illegal action in the near future and as likely to actually bring about that action, states cannot regulate it or punish the artist who created it.
Planned Parenthood of the Columbia/Willamette, Inc. v. American Coalition of Life Activists, 290 F.3d 1058 (9th Cir. 2002) (en banc)
This is one of the few recent cases in which a prohibition on speech from a certain viewpoint was upheld. In that case, the American Coalition of Life Activists created posters bearing the names of twelve doctors who performed abortions, and distributed other doctors’ names and home addresses. Sometime after these posters began circulating, three doctors who performed abortions were murdered. Someone connected to the coalition eventually created a website that listed the names of abortion doctors. On that list, the website’s creator grayed out the names of doctors who had been wounded and struck through the names of those who had been murdered.
The Ninth Circuit Court of Appeals determined that, in this context, the posters constituted a serious threat of harm, one that had led doctors who performed abortions to wear bulletproof vests and accept the protection of U.S. Marshals. The Court held that even though the posters contained no explicitly threatening language, in context they presented a reasonably foreseeable threat of an intent to commit bodily harm. This constituted a “true threat,” and the possibility of harm was great enough that the state could permissibly suppress the speech.
This case demonstrates how certain instances of speech can be construed as a serious threat, even if the speech itself does not contain direct threats. If speech targets particular individuals for attention and is associated with other violent or threatening conduct toward those people, it may not be protected under the First Amendment.
At issue in Chaplinsky v. New Hampshire, 315 U.S. 568 (1942), was a statute that prohibited calling others by “offensive or derisive names” in public places. Chaplinsky, a Jehovah’s Witness who was passing out pamphlets and speaking out against organized religion, was arrested after his activity drew a large crowd. On his way to jail, he came across the city marshal, who had earlier warned him against causing a commotion in town. Chaplinsky attacked him verbally, shouting “You are a God-damned racketeer” and “a damned Fascist,” and was convicted under the statute.
The Supreme Court held that this statute’s restriction on speech was permissible because it was a prohibition on fighting words in general, and not on speech from a specific point of view. “Fighting words” are “those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.” A prohibition on fighting words, the court reasoned, was acceptable under the First Amendment because “such utterances are no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived from them is clearly outweighed by the social interest in order and morality.”
The holding of Chaplinsky has narrowed somewhat over the years; for example, the 1969 case Street v. New York held that speech that is merely offensive cannot be regulated as fighting words. Still, the ability of the government to prohibit fighting words persists, and it is possible that such regulation might extend to the Internet. But one of the key characteristics of fighting words is their tendency to bring about an immediate fight or breach of the peace, and because Internet-based communication tends to happen across distance and over time, it is much less likely than face-to-face communication to satisfy this immediacy requirement.
LICRA v. Yahoo!, Inc., Tribunal de grande instance (2000)
In this case, French plaintiffs sued the American company Yahoo! because people had offered to sell Nazi items on Yahoo!’s international and American sites. The sale of such products is illegal in France, and the French non-governmental organization Ligue Contre le Racisme et l’Antisémitisme (LICRA) demanded that Yahoo! stop selling the products anywhere a person in France could access them. Yahoo! refused, and the French court asserted jurisdiction over the corporation because of its contacts with France. LICRA obtained a judgment in its favor in the French court, and Yahoo! then challenged that judgment in U.S. court, arguing that it was unenforceable on First Amendment grounds. The U.S. District Court agreed with Yahoo!, finding that an order enforcing the judgment would violate the U.S. Constitution. The Ninth Circuit Court of Appeals, however, eventually decided that, while the District Court had jurisdiction over the French parties in this case, the question of the French judgment’s enforceability was not ripe for adjudication, because nothing indicated that Yahoo! had violated the order to an extent that would lead any party to seek enforcement. Yahoo! eventually removed the offending materials from its site and no longer permits the sale of such material.
This case illustrates how U.S. free speech protections, which would not have required Yahoo! to remove the content, can come under attack in other countries. As Internet use continues to grow, this type of international conflict is likely to arise more and more frequently.