Sexual Content
Generally, nudity and depictions of sexual activity in art are protected under the First Amendment. But some of the most extreme sexually oriented speech is not protected, and sexual content more broadly is frequently the target of criticism and sometimes of state and federal regulation. First Amendment case law has developed related to sexual content in almost every mainstream communications medium to emerge in the past century, including the Internet.
Regulations of sexually oriented speech cover content that falls into three main categories: obscenity, sexual content that is deemed to be “harmful to minors,” and child pornography. Obscenity and child pornography are both considered illegal non-speech under the law – they are completely prohibited forms of expression that carry criminal penalties. “Harmful to minors” content (sometimes called “indecent” material), on the other hand, is fully protected under the First Amendment as to adult audiences, but the government may regulate minors’ access to it so long as the regulations do not unduly interfere with adults’ ability to access the work. We discuss the definitions of and limitations on these types of speech in the sections below.
The Basics
Obscenity
The production and distribution of obscene material is not protected by the First Amendment. Courts have struggled to define what, precisely, is considered “obscene.” While material must have a sexual component to be considered obscene, it is certainly not the case that all sexually explicit material is obscene. In fact, the vast majority of sexual content – often called pornography – is protected under the First Amendment.
In the 1973 case Miller v. California, the Supreme Court articulated the current three-part test for determining whether material is obscene. First, the work taken as a whole must appeal to the prurient interest, as determined by an average person applying contemporary community standards. Second, the work must depict sexual conduct in a patently offensive way; and third, the work, when taken as a whole, must lack serious literary, artistic, political, or scientific value.
The Court held that the “contemporary community standards” at the heart of the first prong of the test referred to the physically local community in which the dispute arose. When cases involve material sent from one community to another, the Court has generally found that it is the publisher or distributor’s responsibility not to send materials into communities that would consider such materials obscene; this reasoning rests in part on the ability of publishers to review the postal or area codes of the intended recipients of their material, and to determine whether to take the risk of offending those communities’ standards.
The concept of “contemporary community standards” is more difficult to apply to the Internet. Material posted online can potentially be accessed by anyone in the world, making the geographical location of any individual less important to the exchange of speech and ideas. Further, it is much more difficult to accurately pinpoint the geographic source of a request for material, and difficult, if not impossible, to prevent people from particular communities from accessing it.
The issue of the relevant community standards to apply to material available over the Internet remains unsettled: while many courts have followed the traditional Miller test and applied the standards of the community in which the recipient of material is located, a federal appeals court (the Ninth Circuit) recently suggested in US v. Kilbride that Internet obscenity cases require consideration of national community standards, not local (and potentially more conservative) ones. Even if such a national standard were adopted, it is unclear how courts would determine it. These issues have only begun to be considered in the courts, and may come before the U.S. Supreme Court in the next few years.
Indecency and Speech that Is “Harmful to Minors”
In contrast to obscene material – which is illegal for adults – speech that is considered “harmful to minors” or “indecent” is lawful for adults, but may be inappropriate for minors. Harmful-to-minors or indecent speech is protected under the First Amendment for adults, but in certain cases, access to such speech may be limited or regulated to shield minors from it. While such speech cannot be banned in the same way as obscenity or child pornography, it can be segregated, and access to it may be limited, such as by laws that require adult-oriented movies or magazines to be shielded from minors in retail stores and not sold to them.
Much like obscenity, the terms “indecent” and “harmful to minors” are somewhat difficult to define, and the two terms are often merged as a practical matter. Courts have generally found that the government may regulate indecent material to the extent that it is harmful to minors.
To determine whether material is “harmful or obscene as to minors,” courts will apply the same three-part obscenity test discussed above, with the additional consideration of the material’s suitability for minors. Not surprisingly, this definition runs into the same problems as the obscenity definition: courts must determine the relevant community standards to apply, and must assess the value material may have to minors.
Prior to the advent of the Internet, harmful-to-minors material was policed in relatively simple ways: keeping kids away from indecent content could be achieved reasonably easily by putting material in restricted-access sections of stores, prohibiting the broadcast of indecent speech between the hours of 6AM and 10PM, and requiring young people to provide identification when attempting to purchase or access adult content. These strategies, however, do not translate well into the online context, and to date most laws that have sought to restrict harmful-to-minors content online have been struck down as violations of the First Amendment.
Generally, courts have found these laws to be unconstitutional because the laws result in over-blocking of protected speech and unjustly infringe on the rights of adults to access the material. Critically, the courts have found that the use of filtering and other technical tools by parents is a more effective way to protect children online, allowing parents to shield their children from unwanted content without infringing on the rights of others.
Although most laws aimed at regulating speech on the Internet have been struck down, such laws are very popular with legislators and some advocacy groups, and new proposals are introduced frequently.
Contrasting Regulation of Broadcast and Online Speech
There are different standards for different communications mediums regarding what types of regulation are permissible. With broadcast radio and television, for example, the Supreme Court held in FCC v. Pacifica that the government may prohibit the broadcast of indecent material during times of the day when children are more likely to be watching. The Court’s decision was based on two characteristics of broadcast radio and television: it is “uniquely pervasive,” being constantly beamed into people’s homes and accessible at any time, and it is very accessible to children, who can be exposed to indecent content even before they learn to read. In contrast, indecent material receives a higher level of protection when it is conveyed over the phone, as the Court decided in the Sable Communications dial-a-porn case; when individuals must take specific action to access the material, unsuspecting listeners can easily avoid it.
In the Reno v. ACLU case of 1997, the Supreme Court was faced with deciding what standard should be applied to government regulation of speech on the Internet. At issue was the Communications Decency Act, which was aimed at restricting access to indecent material online. The government argued that, like broadcast radio and television, the Internet was pervasive and easily accessible by children, and thus the government should have the same broad power to limit indecent content online, in order to protect children.
Fortunately, the Court rejected this argument, finding that the Internet is not pervasive in the way that broadcast TV and radio are, and that users seldom encounter indecent material online by accident. The Court recognized that the Internet presented a new communications medium that allowed more people than ever before to communicate about all kinds of topics, and held that the Internet should receive full First Amendment protection. The material banned by the CDA was clearly protected, and because the effect of the CDA would be to reduce all content on the Internet to the level appropriate for children, the Court overturned the law.
The foundation of broadcast indecency regulation, Pacifica, has come under increasing attack in light of the much greater level of parental control now available for broadcast television, ranging from the V-Chip to digital video recorders that give parents significant ability to control what their children watch on TV.
Child Pornography
Like obscenity, child pornography is not protected under the First Amendment. The production, possession, or distribution of child pornography is a crime punishable by imprisonment.
Federal law defines child pornography as “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture . . . of sexually explicit conduct, where the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct.” Outside of the United States, the term “child abuse images” is used to refer to what is termed “child pornography” in the U.S.
The rationale for prohibiting child pornography is the unquestioned harm that is inflicted on any child who is used in its creation. The courts have concluded that child pornography is one of the limited circumstances in which the speaker’s rights are vastly outweighed by the harms suffered by the children involved in the production of the material. As one of the most egregious crimes, child pornography, also known as child sexual abuse imagery (CSAI), is understandably a key focus of law enforcement efforts at the federal and state level in the United States, as well as internationally. Its regulation may prohibit the production, importation, purchase, downloading, or possession of child abuse images. Most recently, Congress has moved to target online advertising for adult services because of fears that those ads could be connected to child abuse.
The concern about the direct abuse of children is less applicable in the context of efforts to ban digital images produced without the involvement of actual children, including in art and advertising featuring adult actors or “morphed” or simulated images. In the 2002 case Ashcroft v. Free Speech Coalition, the Supreme Court addressed this issue and found that a federal law prohibiting any depiction that “appears to be … of a minor engaging in sexually explicit conduct” was unconstitutionally overbroad because it would prohibit non-obscene material that was not created using actual minors. In 2003, Congress passed the PROTECT Act in response to this decision, making it illegal to create or distribute an image that “appears virtually indistinguishable” from real child pornography. At least one individual has been successfully prosecuted under this “virtual child pornography” statute for the possession of Japanese anime cartoons depicting minors engaging in sexually explicit conduct. This conviction (of a defendant named Whorley) has been upheld on an initial appeal, but the Supreme Court has not yet decided whether the “virtual child pornography” provisions of the PROTECT Act meet constitutional standards, or whether composite or “morphed images” of real children that have been altered to appear as if nude or sexually explicit in nature can be criminalized.
Current U.S. federal law requires many Internet and online service providers to report any child pornography that they become aware of to the National Center for Missing and Exploited Children (NCMEC). Service providers do not have an affirmative duty to police their networks for child pornography, but if they learn of any child pornography material on their systems, they must file an extensive report with NCMEC, preserve copies of information about the user or subscriber involved, and cooperate with law enforcement requests for the information.
Obscenity
Roth v. United States, 354 U.S. 476 (1957)
The Roth case was the first modern articulation of the standard by which material may be deemed obscene. Though the Court upheld the prior categorization of obscenity as non-speech that receives no First Amendment protection, it restricted the definition of obscenity to material whose “dominant theme taken as a whole appeals to the prurient interest” to the “average person, applying contemporary community standards.” This was a more precise standard than the prior rule, which derived from the 1868 English case Regina v. Hicklin that defined obscenity merely as material that tended to “deprave and corrupt those whose minds are open to such immoral influences.” But the Roth standard still left room for confusion among the Justices as to what, exactly, constituted obscenity, leading to Justice Potter Stewart’s famous assertion in Jacobellis v. Ohio that he could not define obscenity, but “I know it when I see it.”
Miller v. California, 413 U.S. 15 (1973)
The Miller case developed a three-part test that supplanted the earlier Roth standard and widened the scope for what material could be deemed obscene. Under the Miller test, the court must consider:
- whether the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest;
- whether the work depicts or describes, in a patently offensive way, sexual conduct or excretory functions specifically defined by applicable state law; and
- whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.
The first two prongs of the test are considered with respect to contemporary community standards, while the third prong applies a national “reasonable person” test. Material must meet all three prongs of the Miller test to be considered obscene. As discussed above, the law is still unsettled as to what the relevant community should be for cases involving dissemination of material via the Internet.
Indecent and Harmful-to-Minors Material
Federal Communications Commission v. Pacifica Foundation, 438 U.S. 726 (1978)
The Pacifica case established the principle that the Federal Communications Commission had the authority to regulate indecent, but not obscene, content that was broadcast over the airwaves in the interest of protecting minors. The FCC argued that it had a compelling interest in shielding children from material that was patently offensive to them, and in ensuring that unwanted speech did not enter people’s homes. The Court agreed, reasoning that because the broadcast medium is a “uniquely pervasive presence” in people’s lives, reaching them in the privacy of their homes, and is accessible to children in a way that the written word may not be, the government may impose restrictions that aim to limit the broadcast of indecent speech to times when children are less likely to be in the audience.
Sable Communications v. Federal Communications Commission, 492 U.S. 115 (1989)
In Sable, the Court demonstrated that the FCC’s ability to restrict indecent speech truly depended on the unique nature of the broadcast medium. In this case, the Court held that an FCC ban on indecent telephone messages (a.k.a. “dial-a-porn”) violated the First Amendment because it was overbroad and limited adults’ access to protected speech to an extent not justified by the interest in protecting children from such messages. Dial-a-porn messages are not “pervasive” – indeed, one must take affirmative steps to access the indecent material. Further, the FCC’s regulation of indecent broadcasts in Pacifica did not constitute a total ban, and the government was unable to present any record indicating that a total ban on indecent phone messages was justified or required. The Court held that there were less restrictive means of achieving the government’s goal of keeping kids away from indecent phone content, and struck down the ban as a violation of the First Amendment.
Reno v. American Civil Liberties Union, 521 U.S. 844 (1997)
Congress’s first attempt to regulate minors’ access to indecent material over the Internet was the Communications Decency Act (“CDA”). Passed in 1996, the CDA made it a crime to disseminate indecent material online in a way that made it accessible to minors. The law was immediately challenged on constitutional grounds, and the Supreme Court agreed with the challengers, finding that “the CDA lacks the precision that the First Amendment requires when a statute regulates the content of speech.” The Court drew a distinction between Internet-based and broadcast media, and held that the Internet should receive the highest level of First Amendment protection available. Because the indecent material targeted by the CDA was constitutionally protected speech, the government could neither ban it nor drastically restrict adults’ access to it.
The Court also warned of the chilling effects likely to occur when a “local community standards” assessment was used in relation to material available on the Internet: because online text, video, and images are immediately accessible by practically anyone in the world, there is a danger that the most restrictive community standards will be applied, and that, for fear of prosecution, speakers will censor themselves according to what is deemed suitable in the most restrictive communities. The Court concluded that there were less restrictive means, such as user-controlled filters and other user empowerment tools, that could be implemented to protect children from indecent content without reducing all discourse on the Internet to the level appropriate for small children.
Child Online Protection Act of 1998
Following the Supreme Court’s rejection of the Communications Decency Act, Congress passed the Child Online Protection Act, which would have made it illegal to make any commercial communication that was “harmful to minors” unless the provider of the material had restricted minors’ access to it (such as by requiring a credit card number to view the material). COPA was challenged on the same grounds as the CDA: it both imposed an undue burden on protected speech and failed to achieve its goal of protecting children. COPA restricted adults’ access to protected content and would likely have a chilling effect on protected speech, as the age-verification process required by the law would be costly and time-consuming to implement. The government could not claim that COPA would keep all or even most of the indecent content available online away from children, since the law would only apply to U.S.-based content providers, and the Internet is a global network. Nor was COPA the least restrictive means of protecting children online: user empowerment tools remained the best way for parents to decide for their own children what sort of material was or was not appropriate, while not hindering adults’ access to protected material.
Litigation over COPA dragged on for ten years, with two Supreme Court cases (Ashcroft v. American Civil Liberties Union, 535 U.S. 564 (2002); Ashcroft v. American Civil Liberties Union, 542 U.S. 656 (2004)) highlighting the constitutional problems with the statute. Finally, in 2009, the Court declined to hear a final appeal from the Third Circuit’s invalidation of the statute on constitutional grounds, effectively striking down the law.
Children's Internet Protection Act
One federal law aimed at limiting children’s access to indecent content that has survived the Supreme Court’s scrutiny is the Children’s Internet Protection Act (“CIPA”). This law, enacted in 2000, requires public libraries that receive federal funding to install filtering software on each of their computers that can access the Internet. The American Library Association challenged the law in 2002, arguing that this requirement was equivalent to government-mandated censorship in libraries, and that it thus violated the First Amendment. The federal district court in Pennsylvania agreed, finding that the filtering software required by the statute would both fail to block all of the material targeted by the statute and overblock protected speech. It struck down the law as a violation of library patrons’ First Amendment rights.
The Supreme Court reversed this decision on the grounds that the statute contained provisions allowing library staff to disable the filters at an adult patron’s request. While the statute provides that this request must be related to “bona fide research or other lawful purposes”, the Court relied on the Solicitor General’s statements that a patron does not need to provide any explanation at all for why he or she wants the filtering software disabled. Because the statute provides a fairly simple mechanism for adults to gain unfiltered Internet access, the Court ruled that CIPA did not violate the First Amendment.
Child Pornography
New York v. Ferber, 458 U.S. 747 (1982)
The question in this case was whether a legislature could prohibit the production and distribution of “material which shows children engaged in sexual conduct, regardless of whether such material is obscene.” Non-obscene pornography is generally protected by the First Amendment, but this law sought to ban any sexual material produced using children.
The Supreme Court upheld this ban due to the strong interest the state has in protecting children from sexual abuse, finding that “the use of children as subjects of pornographic materials is harmful to the physiological, emotional, and mental health of the child.” While the value of child pornography is “exceedingly modest,” the harm done to the children used in its production is severe, and the material itself serves as a permanent record of the abuse. In order to dry up the market for child pornography (and reduce the number of children abused to produce it), the Court held that it was permissible to prohibit the production and distribution of the material, and it recognized child pornography as a category of non-speech outside the protection of the First Amendment. This rationale was later used to uphold laws that criminalized the possession of child pornography as well (see Osborne v. Ohio, 495 U.S. 103 (1990)).
Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002)
This case challenged the Child Pornography Prevention Act of 1996, which included prohibitions on “any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture” that “is, or appears to be, of a minor engaging in sexually explicit conduct” and on “any sexually explicit image that was advertised, promoted, presented, described, or distributed in such a manner that conveys the impression it depicts a minor engaging in sexually explicit conduct.”
The Free Speech Coalition argued that these two provisions were overbroad and likely to result in the suppression of non-obscene material that did not depict actual minors. The Supreme Court agreed, finding that the rationale for prohibiting child pornography was the significant harm done to children in its production and distribution, but that the virtual child pornography also prohibited by the statute “records no crime and creates no victims by its production.” Because these provisions of the CPPA improperly prohibited protected speech, the Court struck them down as violations of the First Amendment.
PROTECT Act of 2003
In response to the Court’s decision in Ashcroft v. Free Speech Coalition, Congress passed the PROTECT Act, which amended the 1996 Child Pornography Prevention Act to prohibit virtual child pornography images that are “indistinguishable” from true child pornography, as well as virtual depictions (including drawings, paintings, and cartoons) of child pornography that are also obscene. Further, the PROTECT Act revised the “pandering” provisions, which prohibit promoting material “in a manner that reflects the belief, or that is intended to cause another to believe,” that the material contains child pornography.
The “pandering” provision was challenged in United States v. Williams (444 F.3d 1286 (11th Cir. 2006), rev’d 553 U.S. 285 (2008)). The Eleventh Circuit struck down the provision as vague and overbroad, finding that the “First Amendment plainly protects speech advocating or encouraging or approving of otherwise illegal activity, so long as it does not rise to ‘fighting word’ status. Thus, the non-commercial, non-inciteful promotion of illegal child pornography, even if repugnant, is protected speech under the First Amendment.”
The Supreme Court disagreed, however, and overturned the Eleventh Circuit’s decision in 2008, finding in part that “offers to engage in illegal transactions are categorically excluded from First Amendment protection.” The Court drew a distinction between proposals to engage in illegal activity, which may be prohibited, and mere abstract advocacy of illegality, which receives First Amendment protection. Because this provision of the Act only prohibits offers to provide or obtain illegal material, and does not prohibit advocacy of child pornography in general, the Court held that the provision did not violate the Constitution.
Center for Democracy and Technology v. Pappert, 337 F. Supp. 2d 606 (E.D. Pa. 2004)
In 2004, the Center for Democracy and Technology led a challenge against a Pennsylvania child pornography law that required ISPs to block users’ access to websites that contain child pornography. In that case, CDT proved that the ISPs blocked access to more than one million innocent websites in an effort to comply with fewer than 400 child pornography blocking orders. The federal court held that the blocking of substantial amounts of protected speech violated the First Amendment, and that the law was therefore unconstitutional.