Wednesday, December 17, 2008
Content-based restrictions on free speech are unconstitutional, but what about delivery-based restrictions?
In the COPA case, courts at all levels have held that, within any particular paradigm of distribution, legally mandated censorship based on content is unconstitutional; within a given delivery method, the government must use the least restrictive means of shielding minors from harmful content. That should mean that, when the “free entry” Internet is used to argue a particular position about something, all points of view have an equal chance under the law.
But is the “free entry” that we have become accustomed to itself constitutionally guaranteed? First, it exists because the business models that support it have worked so far, but they could be challenged by the economic collapse and by technology that enables visitors to block advertising. Second, it has stayed around for the past dozen years partly because of built-in protections for ISPs against downstream liability (like Section 230 of the 1996 “Communications Decency Act,” a section that survived the Act’s partial invalidation, and later the maligned safe-harbor provision of the DMCA, the Digital Millennium Copyright Act). Another threat, however, comes from the asymmetry inherent in free entry, which in some situations can create unpredictable risks for others. Some examples (a very few of them tragic) have to do with cyberbullying and “reputation” issues associated recently with youthful behavior on social networking sites. That could lead some day to requirements for third-party supervision, bonding, proof of a revenue stream sufficient to cover risks, or various forms of mandatory insurance, any of which could end the opportunity for free expression as we have become accustomed to it. I’m not sure there is any reason it would be patently unconstitutional to “regulate” self-promotion independent of the content. So a law requiring bonding or insurance as a condition of free entry would not be unconstitutional in the way COPA was, because it would not be content-based.
Thursday, December 04, 2008
While we wait to see what the government and Department of Justice does about the latest round of COPA (the appeals court ruling in July), I still wonder this: was the battle over the Child Online Protection Act in practice a cultural battle between those who have kids and those who don’t? Is it somehow a de facto cultural battle between singles and married? Between gays and straights?
The reaction that I have sometimes heard from a few people is something like, “How dare you engage in something that would endanger my child. You don’t know what ‘responsibility’ is. You have no right to be heard.”
But the “burden” of protecting children that falls on parents seems more a matter of economic class and education than of simply having children or not. By and large, parents in reasonable-income homes (harder to come by during the economic crisis) know how to use filters, and by and large their kids know how to use the computer responsibly. I disagree that the “family computer” always needs to be in a public area of the house; older teens who understand the responsibilities and risks and who have earned their parents’ trust should be able to have access to the Internet on their own; they may need it for schoolwork.
Learning to use the Internet responsibly is like learning to drive a car. Public school systems should be doing much more to provide instruction on responsible use of the Internet, but often they don’t have the expertise themselves within their teaching and administrative ranks. In harder economic times, it is up to the software industry to reach out and help them.
Sunday, November 16, 2008
There is a new free-market approach to the problem of properly identifying minors and adults on the Internet. A company called “eGuardian” (“Protecting your child on the Internet”) offers lifetime registration for $29 per child, and then offers commercial websites the opportunity to become partners so that they can recognize registered children as minors. The commercial websites (especially social networking sites) can then prevent age-inappropriate material from reaching the minors. It is also possible to integrate the product into search engine results. Critics say that the commercial websites will use this to feed kids advertisements (for products like sweet cereals, toys, candy, etc.). Somehow the “Alphabit” cereal from the 1970s comes to my own mind.
The New York Times story is by Brad Stone, in the Ping Section, and is actually titled “Online Age Verification for Children Brings Privacy Worries,” on p 4 of Sunday Business, Nov. 16, 2008.
Tuesday, November 11, 2008
A post on a blog called “cyb3rcrim3” examines the question of whether a computer file (or, for that matter, a handwritten or manually typewritten letter or paper) that contains text alone and no pictures could ever be considered child pornography under the law. The blog is this and the relevant entry is dated Oct. 27, 2008.
The answer under the laws of some other countries, including Canada, Britain, and some other Commonwealth countries, is definitely yes, in some circumstances. The blogger discusses in detail a Canadian case, Regina v. Sharpe (2002). Wikipedia also points this out.
She (Susan Brenner, a law professor) quotes the United States statute, 18 U.S.C. § 2256, link here on the Cornell Law School database. The text of the law quite clearly uses the term “visual depiction,” which refers to “visual image,” which itself is not defined. She hints that the possibility could exist that text promoting the sale of such an “illegal” image could come under the penumbra of the statute.
It’s also important to note that the concept of “obscenity” does include text in American law. The concept refers to the lack of redeeming social value (“I know it when I see it”). CP, on the other hand, never offers the defense of “redeeming social value”.
She goes on to discuss Ohio v. Dalton (“793 N.E.2d 509 (Ohio App. 2003)”). There are more details in a book called “Cases in Communications Law” by Paul Siegel, 2007, from Rowman & Littlefield Publishers, Inc . The "books" link is here.
In this situation, someone already convicted of other crimes was charged with and pleaded guilty to a CP offense when private textual writings depicting such acts were found in his room. The facts are quite complicated, but eventually the Ohio appeals court allowed the withdrawal of the plea. The charge essentially amounted to punishing someone for fantasies or private thoughts (although these can become relevant if someone is in a treatment program after conviction for an actual offense). Applying the law this way did not have anything to do with preventing the use of actual minors. Some of the analysis had to do with the Supreme Court’s ruling declaring unconstitutional the 1996 CPPA provision regarding computer-generated images (which use no actual minors).
It’s important to note that had the textual materials been legally obscene, private possession of them would not have been illegal for that reason, though distribution in public (including on the Internet) would be. CP, by contrast, is illegal to possess even in private. The other question is whether the distribution of textual materials on the Internet would be illegal. Wikipedia used to say that in the United States it was not, but that sentence seems to have been removed. It might violate an ISP’s “terms of service,” but that is a private matter, not specifically a matter for criminal law. It could create issues on a public site when viewed from countries where it would be illegal. Such a textual work might well have redeeming social value, by showing the legal consequences following the depicted acts, say in a fictional story or screenplay. But it could run into other novel problems involving “implicit content” or even “enticement.”
These laws (even when viewed just in terms of images in their conventional meaning) could lead to certain exposures to wrongful prosecution, partly because of the “strict liability” doctrine for a possession offense (based on the idea of “conclusive presumption”). Hacking is a possibility, and may have happened in one case in 2006 in Arizona (see particularly Feb. 3, 2007 on my Internet safety blog here). Bloggers could be inadvertently exposed to embedded images when moderating comments (they have to look at the comments first in order to determine whether to reject them), and even ordinary home email users could click on HTML email with embedded illegal images and then illegally possess the images. By and large, frivolous prosecutions have not occurred, but the exposure in the law is disturbing.
Virginia’s statute is here and is fairly specific with the terms “visual material” and “identifiable minor.”
All of this is a body of law somewhat distinct from COPA, although it got mentioned at the 2006 COPA trial.
Another note on Arizona:
This particular state has particularly draconian laws and penalties, and has been accused of “enemy jurisprudence,” ignoring the “harm principle” and trying to use the law to destroy “undesirables.” Consider the 200-year sentence (with no possibility of parole) in the case of Arizona v. Berger (twenty consecutive ten-year terms, one for each of twenty counts, against former schoolteacher Morton Berger), with this entry on the Law & Society Blog, here. “Enemy” thinking is what drove the ideology of Nazi Germany.
Tuesday, November 04, 2008
Australia has created a bit of a flap by trying to require mandatory content filtering throughout the country.
The filtering operates at two levels. One level filters content deemed illegal under Australian law. The second level, which apparently must reside on users’ computers, can be opted out of by adults. The filters reportedly degrade network performance by 20% to 75%.
Electronic Frontiers Australia has a report on “Labor’s Mandatory ISP Blocking Plan” here. Still, two thirds of parents don’t have the filters installed.
Ars Technica has a report by John Timmer titled “Aussie Govt: Don’t Criticize Our Terrible ‘Net’ Filters,” link here. The current Australian government seems determined to implement the program despite evidence developed by local journalists that it is flawed.
In the United States, the voluntary use of filters is the main way that parents prevent objectionable content from being viewed by their children. A more progressive system would be voluntary content labeling, with the cooperation of software developers and ISPs, as proposed and documented by the ICRA and discussed here before. It would appear that Australia is attempting to implement something like “COPA” in its filtering system.
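To make the voluntary-labeling idea concrete, here is a minimal sketch of how a client-side filter might honor a publisher-supplied label. The `content-rating` meta tag name and the `adult`/`mature` values are invented for illustration; the actual ICRA scheme used a richer RDF vocabulary.

```python
# Sketch: a filter that honors a hypothetical voluntary content label.
# The <meta name="content-rating"> tag is an assumed convention, not ICRA's
# real vocabulary, which was an RDF-based labeling scheme.
from html.parser import HTMLParser

class LabelScanner(HTMLParser):
    """Collect the value of a hypothetical content-rating meta tag."""
    def __init__(self):
        super().__init__()
        self.rating = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name") == "content-rating":
                self.rating = d.get("content")

def should_block(page_html, blocked=frozenset({"adult", "mature"})):
    scanner = LabelScanner()
    scanner.feed(page_html)
    # Unlabeled pages pass through here; a stricter parental setting
    # could just as easily block anything without a label.
    return scanner.rating in blocked

page = '<html><head><meta name="content-rating" content="adult"></head></html>'
print(should_block(page))  # True
```

The point of the design is that the decision stays on the parent's machine: the publisher labels voluntarily, and the filter, not the government, decides what a given household blocks.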
Friday, October 10, 2008
A teenage girl in Licking County, Ohio faces juvenile c.p. charges for sending indecent cell phone photographs of herself to other teen classmates, and the teens who received the photos may also face possession charges. It is not clear whether the law would punish the recipient of a cell phone photo who did not know what it was until looking, or who deleted the image. (The same concept could apply with IMs and emails on a computer, or even website visits, if one does not know what the site is or if the domain is spoofed in a phishing email.)
There have been other cases around the country, as with a teenager who engaged in “revenge porn” (previous post) against an ex-girlfriend on Myspace.
Many critics are condemning the “exhibitionism” or attention-getting of social networking site culture.
The ABC News story by Scott Michels is here.
Update: Jan. 18, 2009
Here is a similar case in Pennsylvania reported by the AP and on MSNBC, story by Mike Brunker, dated Jan. 15. The incident occurred at Greensburg Salem High School in Greensburg, Pa. Police Capt. George Seranko was quoted as saying "It's very dangerous. Once it's on a cell phone, that cell phone can be put on the Internet where everyone in the world can get access to that juvenile picture. You don't realize what you are doing until it's already done."
Saturday, October 04, 2008
On other blogs, I’ve talked a lot about online reputation, and the possibility that someone else – a social enemy – can trash it. The November 2008 issue of “Details” has an article by Richard Morgan (photos by Jamie Kripke) on “revenge porn.” Curiously, the printed article starts with a complex photo pattern on p 96 that appears to include part of Coors Field in Denver (for the Colorado Rockies) and the cover of the issue features the “Gossip Boys.” The printed story was hard to find, and started on p 105.
The online blog link is here.
The article discusses the practice of people (mostly straight, sometimes gay) of gaining “revenge” against ex-partners (sometimes spouses) by posting salacious and explicit videos and photos online. Some people have gotten into legal trouble, for identity theft (impersonating someone else – already a legal issue in the “Myspace Case” in Missouri) and in a few cases for child pornography, when teenagers were involved. Some have used sites set up to encourage the practice.
Generally, criminal law does not have specific provisions regarding this problem, apart from what was in the bricks and mortar world.
The obvious “civil” problem is online reputation. Presumably “victims” of this practice could contact a company like Reputation Defender for assistance.
Had COPA been upheld, would this practice have been affected? The motives of the persons posting the material are personal (“revenge”) and not commercial. However, it would seem that the law could have applied if hosted on commercial social networking sites set up themselves to make a profit.
Social networking sites (and some “revenge” sites) say they take down material of this nature if they received complaints or believe that the material is patently illegal.
Friday, September 12, 2008
Senator Lieberman urges YouTube to tighten standards for acceptable use; his concerns are specific, but what about COPA, and "implicit content"?
This morning (Friday Sept. 12, 2008) the Business Section D1 of The Washington Post carries a story by Peter Whoriskey to the effect that YouTube will remove more “inciting” videos from its site and tighten its terms of service regarding certain kinds of enticing or hateful speech. YouTube has taken the action partly because of recent criticism by Senator Joe Lieberman (CT, now effectively “Independent”), who was specifically concerned about videos that appeared to be connected to Al Qaeda or various tribal or sectarian groups in Iraq and possibly Pakistan or Afghanistan, endangering American troops. The link for the story is here.
The print version of the paper includes a copy of YouTube’s terms of service. The online version did not. However, generally publishing services (including “free” services from Blogger or Wordpress and paid or subscribed hosting services from companies like Yahoo!, NetworkSolutions and Verio, as well as AOL) have similar rules in their terms of service.
One problem is that when any publishing service tightens acceptable use policies out of concern over one particular group (here, troops overseas), it must enforce them in a uniform manner with respect to all issues. This is similar to the well-known problem that airport screeners cannot profile individual travelers based on appearance and must enforce the same rules for everyone.
Concepts like “enticement” or “hate speech” are particularly subject to interpretation. Actually, the United States Code has some specific federal statutes regarding “coercion and enticement” of minors. The problem is that both concepts tend to live “in the eye of the beholder” and tend to turn on who the subjects are, who the speakers are, the relationship between speaker and subjects, and the manner of delivery of the speech. Asymmetric speech such as YouTube videos or blog entries may be more provocative than similar or identical speech embedded in a commercial format, such as major motion pictures. (This reverses or contradicts a commonly held perception that the First Amendment protects individual non-commercial speech more thoroughly than corporate commercial speech through establishment channels. The opposite is sometimes true, because a lot of First Amendment protection involves collective action. It's also true that YouTube is a private enterprise and can theoretically restrict speech as it pleases, but in practice YouTube is trying to comply with what it believes the law requires.) A good example of such a situation would be a particularly “offensive” dialogue that occurs halfway through the recent Dreamworks hit film directed by Ben Stiller, “Tropic Thunder.” Had that scene been posted separately on YouTube (and had it been an original scene, assuming “Tropic Thunder” did not even exist, for copyright reasons), it certainly would have violated YouTube’s “terms of service.” (As we know, there were demonstrations against the film and threatened boycotts, but Dreamworks did not pull it. I suspect the sequence would have violated the code for broadcast television, however.)
I discussed this problem in my blog posting here Wednesday, Sept 10. People often want to make religious or “existential” moral “meta” arguments about a problem (say, sexual orientation), but others may feel that the only (“disguised”) point of the speech is to “target” them. Conservatives often make this complaint, particularly in relation to campus speech codes. (John Stossel has pointed this out in his “Give Me a Break” series.) Likewise, a video of a violent or disturbing event, objectively legal and comparable to a sequence that would occur in a Hollywood film, posted for “notoriety” but not for compensation, might be perceived as “enticing” because of external circumstances: the speaker has no believable motive other than to stir unrest in others. Maybe common sense (as with teen “fight club” videos — Hollywood again, with a famous film of that name) applies, but it could be very hard to draw a line. I ran into a problem with a screenplay script (not a video and not a blog) on my own domain when I was substitute teaching, because it was thought to depict a character like me as vulnerable to manipulation by students into illegal activities. I say I posted it to demonstrate a problem in a work of fiction. Others say that if I am an “authority figure or role model,” I have no business suggesting that my own credibility in that responsibility could be compromised. What if I had filmed the screenplay with actors (with no explicit scenes) and posted it as a YouTube video? Theoretically, a particular person could be barred from asymmetric speech altogether because any controversial speech by that person could be construed as deprecatory and therefore potentially enticing.
Taken and applied literally, COPA could not have been violated by such a posting, even if the law had been upheld. But would the posting have violated the “terms of service” as written, given this interpretation?
That also brings to mind still another question. If the government appeals the latest COPA opinion and the Supreme Court somehow upholds COPA, will YouTube and others have to incorporate COPA into their “terms of service”?
As I note, the article did not discuss blogs specifically, but blog entries often have embedded videos (from YouTube or otherwise) or still images. On my blogs, many of my images are just decoration and unrelated to the post; many others obviously relate. I do try to avoid picking an image that, in context, would cause misinterpretation of a particular post.
Wednesday, September 10, 2008
COPA and implicit content: it's going to take real effort to keep "asymmetric" free speech protected
This is a note today just to rehearse the idea that remaining committed to free speech takes real effort.
The Internet has opened up the idea of “asymmetric speech” where one individual can reach a large audience without “permission”. We’ve seen this in business (with the launching of companies like Facebook), but it is also true of speech itself.
The possibility is disorienting in some ways. As we know from the COPA trial, parents have to deal with the possibility that their kids will find “objectionable” material online, sometimes posted by individuals who have not “taken the dive” to have children themselves. Parents have to learn how to use filters, accountability software, and even content labeling, techniques which do work. Parents (along with schools) have to teach their kids safe Internet use in a way comparable to teaching safe operation of an automobile. (Oh, yes, remember folks, Smallville Season 1 shows the gifted Clark Kent driving a car or truck at the legal age of 14; he also surfs the Internet.)
There is another problem, more subtle, that is emerging. That’s “implicit content.” The issue got mentioned in passing at least once from the bench during the COPA trial. Potentially, the concept means that a speaker could be held responsible (criminally or in tort law) for how another party (“the reader”) interprets his “intent” according to the socialization norms of the reader rather than the context intended by the speaker. One particularly troubling way this problem occurs is that the speaker presents himself as “vulnerable” according to the social (or even professional) norms of others but not his own. Generally, in the United States, this sort of idea is supposed to be invoked only when there is a threat of “imminent lawless action.” (That may be less true overseas, as in Britain.) Recently, and especially in the past two or three years (as social networking sites became the norm), the major media have been representing the notion of “documenting one’s life online” as inherently dangerous to the self and to others, especially the family and school. In fact, public school administrators are particularly concerned about this problem, partly as a result of a few sensational tragedies but also because of the way the major media outlets cast the problem. Public school principals and high school history teachers are generally not as versed in the intricate theories of applying the First Amendment as young lawyers at urban happy hours. And they have real practical problems, related to the unequal incomes and circumstantial opportunities of their “customers” (the kids and their parents).
It doesn’t help when a major US publisher refuses to publish a well-written controversial book because of perceived “threats” (there have been several other such problems around the world, mostly in Europe and Britain), and when the legal system invites subtle abuses (like “libel tourism”) among those who would subvert free speech for their own religious or political agendas. Protecting free speech from these more subtle threats is going to take real effort. Free speech (including free "meta-speech") remains an important fundamental right even when the "existential purpose" of the speech seems troubling to some people.
Thursday, September 04, 2008
I wanted to take a moment to note again a potential vulnerability in the legal system with regard to “implicit content,” an issue that got mentioned at least in passing during the COPA trial in Philadelphia in 2006.
It’s common for blogs and websites that deal with sexual subject matter (or with subject matter that seems “adult” to a casual observer or to a robot) to attract spammy comments and emails back to the sender. Practically all ISP’s offer the ability to trap and moderate comments, and some trap some spam in advance. Even so, there could exist cases where the federal government, or particularly prosecutors in some states, might try to claim that a website or blog had been set up “for the purpose of” attracting illegal materials.
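The trapping and moderation described above can be sketched roughly as a triage step between receipt and publication. The spam patterns, field names, and thresholds below are invented examples, not any particular provider's actual rules.

```python
# Illustrative sketch of a blog comment moderation queue: comments are held
# for human review rather than published automatically, and obvious spam
# patterns are trapped in advance. All patterns here are made-up examples.
import re

SPAM_PATTERNS = [re.compile(p, re.I)
                 for p in (r"cheap\s+pills", r"click\s+here\s+now")]

def triage(comment):
    """Return 'spam', 'hold', or 'publish' for an incoming comment."""
    if any(p.search(comment["body"]) for p in SPAM_PATTERNS):
        return "spam"          # trapped before the moderator ever sees it
    if re.search(r"https?://", comment["body"]):
        return "hold"          # anything with a link waits for human review
    # Unknown commenters are held; returning commenters publish directly.
    return "hold" if comment.get("first_time", True) else "publish"

queue = [
    {"body": "Buy cheap pills today!", "first_time": True},
    {"body": "Great post, see http://example.com", "first_time": False},
    {"body": "Thanks for the analysis.", "first_time": False},
]
print([triage(c) for c in queue])  # ['spam', 'hold', 'publish']
```

Note that the "hold" path is exactly where the legal exposure discussed here arises: the moderator must view the trapped material, including any embedded links or images, in order to decide on it.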
Generally, if a user clicks on a link in an email or comment that downloads an illegal image (c.p.) onto his computer, that user has broken the law, in terms of the way “strict liability offenses” work. (There may be no offense if the user does not click; there could sometimes be a problem with embedded images if HTML email options are turned on, depending on the options of the email viewer.) It’s getting easier for law enforcement to detect these events, and there is more political pressure on prosecutors to act “wherever they can get a conviction” than ever before. While prosecutors are generally conservative and cautious in applying existing laws, in a few cases (like the “Myspace case,” or another case about a blogger who made enticing posts about where to go for “illegal” purposes in the LA area) they have reacted with “creative prosecutions.” The problem is that it can be very tempting to pursue a conviction based on images on a person’s computer (even if placed there by another party without that person’s knowledge) if the law makes the technical aspect of proving the offense easy. Conceivably, the attraction of a large volume of emails or comments of an illegal nature could, in the minds of politically ambitious prosecutors in some situations, set up an “easy to prove” case.
Countering these fears is that Section 230 of the 1996 Telecommunications Act appears to protect ISPs, “free service providers”, and individual bloggers from “downstream liability,” either criminal or civil, for wrongful postings made on their spaces by others. Some people think that “brother’s keeper” provisions should be re-introduced into the telecommunications world, however.
Tuesday, August 12, 2008
NBC4 in Washington DC today announced the arrest of Peter North, 54, a computer specialist from the Department of Homeland Security, for visiting child pornography on his computer, apparently at home. The NBC link is here. The DC Examiner story is here.
The Center for Missing and Exploited Children in Alexandria helped, as did his Internet provider, Comcast. It appears that he used P2P networks and had a large volume of images. (Note on the Missing Children's website: the ".org" domain is a parked domain exploiting this name to draw traffic; use .com.)
Nevertheless, civil liberties experts question involving a private company this way in an investigation, because the “First Amendment” defense does not apply with a private company. I’m not sure how that claim makes sense with material that is actually illegal and not constitutionally protected anyway (from government intervention, relative to the protections of the First Amendment).
ISPs nearly always have “terms of service” or “acceptable use policy” provisions that prohibit access of illegal content through their networks. They say that they are required to report material like this to the government by law.
The TOS concept would be more troubling if a law like COPA were in effect.
Civil libertarians also point out the possibility of accidental access of illegal material, such as if clicking on a link in a comment sent to a blog to see if it is appropriate to approve. Theoretically, possession of even one image on one’s hard drive is a crime. In practice, however, to date prosecutions have involved only repeated and massive infractions of the law. The incident in today's NBC story appears to be a massive violation of the law. The possibility of framing of someone with hacking activity would seem to exist, given some of the problems (like domain name manipulation, reported on my identity protection blog) being discussed recently at the Black Hat convention in Las Vegas.
An earlier story occurred in February 2008 with a Republican Maryland General Assembly delegate from the Hagerstown area, Robert A. McKee, who resigned after his home and computer were searched and physical videotapes and printed matter were looked at as well, as in this Washington Post story (by Philip Rucker) from 2/16/2008, link here. Despite the concern that this issue could lead to wrongful searches and even prosecutions, it seems that in the major cases reported in the DC area, there really was plenty of physical evidence, outside of just IP and network tracking.
Tuesday, August 05, 2008
The Third Circuit’s opinion is relatively brief but bears reading. The PDF file (link in the last post) runs 57 pages. There was a three-judge panel comprising Judges Ambro, Chagares, and Greenberg.
The Opinion starts out with a discussion of the “law-of-the-case” doctrine and then reviews the long history of litigation and its two previous opinions. “The Supreme Court’s decision left untouched our conclusion in ACLU II other than our decision that filters are a less restrictive alternative than COPA for advancing the government’s compelling interest at stake in this litigation” (pp 17-18).
The Appeals Court agrees with previous comments that “taken as a whole” can, in a practical world, be reasonably interpreted only in relation to a specific web page that a visitor may encounter randomly, as from a search engine. A minor may not reasonably know the context intended by the author of a particular page; she may not bother to find out, or may not have the cognitive maturity to grasp the intentions of the expression on an isolated page relative to other materials. But the Third Circuit goes on to use this interpretation to build a case for a finding of overbreadth (pp 20-21). The Court also agreed that the language of the statute could allow the concept of “commercial” use to be interpreted broadly by prosecutors, to include selling of ad space to automated algorithms, or to the use of apparently free content to obtain a purchaser later, or even the earnings of small income to defray costs of maintaining the site (p 23). The Court dismisses the government’s claim that the statute applies only to “commercial pornographers” as the concept is usually understood by the public.
The Appeals Court also made important observations regarding the compromise of anonymity (and possible exposure to identity compromise) in using credit cards or adult-id schemes, and suggested that the requirement could end the practice of providing much free content on the web (which is, as we have noted in other blogs, largely possible because of advertising, or sometimes possible because publishers have stable other income) (p 31). The Court wrote an interesting comparison to id schemes in the digital world with “binder racks” in physical stores.
The Appeals Court also agrees with the District Court that the voluntary use of filters available now is more effective in protecting minors than COPA would be.
On p 57, the Appeals Court concludes “In sum, COPA cannot withstand a strict scrutiny, vagueness, or overbreadth analysis and thus is unconstitutional.”
Tuesday, July 22, 2008
The ACLU (in an email from Chris Hansen) has advised plaintiffs that the ACLU has won again this morning. The Third Circuit Court of Appeals in Philadelphia has upheld Judge Lowell Reed’s ruling that COPA, the Child Online Protection Act of 1998, is unconstitutional. I discussed Reed's ruling on this blog March 22, 2007
I do not have any legal analysis yet. The Third Circuit’s website for civil case opinions is this and I expect the PDF document for the opinion in ACLU v. Gonzales (originally ACLU v. Reno) will appear shortly. There is another page for recent opinions, here.
One plaintiff said he would light a candle at Independence Hall tonight.
I’ll provide more analysis when it appears. Watch the major news sites for the story today.
I have a copy of the original Third Circuit Opinion from 2000 here, but the Supreme Court in 2002 overruled its reasoning on the analysis of community standards. There was a second 3rd Circuit Opinion in March 2003, here.
Update (later Tuesday)
The ACLU Press Release is here. The arguments are very much the same as Reed's: it is overbroad, would chill constitutionally protected speech among adults, and is not narrowly tailored, and actually does not close major loopholes in protecting children.
Third Circuit Opinion text (PDF) is here.
ACLU blogger post is here.
The case was originally called ACLU v. Reno, then ACLU v. Gonzales, and finally (now) ACLU v. Mukasey.
Monday, July 21, 2008
The Third Circuit Court of Appeals in Philadelphia has tossed out the Federal Communications Commission’s $550,000 fine against CBS for “indecency” during the 2004 Super Bowl. During the halftime show, Justin Timberlake accidentally caused Janet Jackson’s “wardrobe malfunction” while the two were “dancing” much as in a disco.
The three-judge panel ruled that the FCC had acted “arbitrarily and capriciously” in penalizing the incident. The AP story is by Joann Loviglio and appears here.
Other stories indicate that the Court ruled that Timberlake and Jackson were "independent contractors" and that the network was not responsible for what they said or did, leaving open the possibility that they could be held responsible individually. Elizabeth Jensen covers this in her New York Times Business Day story July 22, here. That holding runs counter to a recent trend of concern about the downstream liability of communications facilitators.
Although not directly related to COPA, the ruling would tend to suggest that the Circuit is sympathetic to the current ruling striking down COPA, although a different set of judges within the Circuit may hear that appeal.
The lyrics of the song Timberlake was singing contradict the idea that the incident was innocuous. It created a sensation and was being reported on AOL before the Super Bowl was over.
Timberlake at one time had a clean-cut “All American” image in the early days of his ‘Nsync career, despite the hilarity of some of the numbers (one video takes place in a toy store and it appears that the band is making fun of “don’t ask don’t tell” by pretending to be the toy soldiers). I attended an ‘Nsync concert in Minneapolis at the Metrodome in June 2001.
Since then, Timberlake’s “image” has changed somewhat, shall we say. See how he looks in “Alpha Dog” and “Southland Tales.”
Wednesday, July 09, 2008
Visitors following the COPA appeal might want to read the “How Stuff Works” discussion of Internet filters here, especially Internet Censorship at Home. The two main techniques employed are blacklists (of URLs) and keyword blocking. The weakness of these approaches, particularly with keywords, is that filters cannot discern context or intent the way human readers can. Sometimes, as with automated screening for advertisers, there is a misplaced attempt at context detection. For example, a discussion of how the Second Amendment should be taught in a high school government class might be blocked because the juxtaposition of particular subjects could be viewed as enticing, until a human being (like a teacher) reads the posting and understands the real point.
There is also some controversy over the practice by many web filtering programs of encrypting their blacklists. A sidebar in this link gives other examples of false context blocks.
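The context problem described above can be illustrated with a minimal sketch of naive keyword blocking. This is not any actual filtering product, and the keyword list is hypothetical, but it shows why a civics-class discussion can trip a filter:

```python
# Minimal sketch of naive keyword blocking. The blocked-word list
# is hypothetical, chosen only to illustrate the false-positive problem.
BLOCKED_KEYWORDS = {"gun", "weapon", "firearm"}

def keyword_filter(text):
    """Return True if the page would be blocked by a naive keyword filter."""
    words = {w.strip(".,;:!?\"'()").lower() for w in text.split()}
    return bool(words & BLOCKED_KEYWORDS)

# A discussion of how the Second Amendment should be taught trips the
# filter, even though a human reader would see the educational context:
essay = ("How should the Second Amendment and gun ownership "
         "be taught in a high school government class?")
print(keyword_filter(essay))  # True: blocked, a false positive
```

The filter sees only the word "gun," not the intent; distinguishing an essay from enticement requires the kind of contextual judgment that, as noted, only a human reader can supply.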
Friday, June 13, 2008
Some companies are offering a supplement or alternative to Internet filters at home, called “accountability software”. This kind of software emails the parent a record of all of the child’s surfing activity. The parent makes an agreement with the child (usually a teen) to perform monitoring and review, in addition to or as a substitute for the usual use of filters. As the teenager gets older and displays more responsibility (and maintains grades in school and has some reasonable level of legitimate “real world” activity) the parent backs off.
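The basic mechanism described above, recording each visit and mailing the parent a periodic digest, can be sketched in a few lines. This is purely illustrative: no actual product's report format is implied, and the URLs are made up.

```python
# Sketch of the "accountability software" idea: log site visits,
# then format a periodic digest for the parent to review.
# Illustrative only; real products and their report formats differ.
from datetime import datetime

visit_log = []  # list of (timestamp, url) pairs

def record_visit(url, when=None):
    """Record one site visit with a timestamp."""
    visit_log.append((when or datetime.now(), url))

def build_digest():
    """Format the visit log as the body of an email to the parent."""
    lines = ["Weekly browsing report:"]
    for when, url in visit_log:
        lines.append(f"  {when:%Y-%m-%d %H:%M}  {url}")
    return "\n".join(lines)

record_visit("http://example.com/homework-help", datetime(2008, 6, 9, 16, 30))
record_visit("http://example.com/games", datetime(2008, 6, 10, 20, 5))
print(build_digest())
```

The point of the design is that nothing is blocked; the teen knows the record exists, and the parent reviews it, which is exactly the monitoring-by-agreement idea described above.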
It’s also possible for a spouse to install such a package on a computer clandestinely and get emails about a spouse’s website habits, such as addiction to pornography.
In opposing COPA now in the appeals court (previous post), it sounds as if the ACLU's arguments about the general effectiveness of properly used filters could be strengthened by mentioning the opportunity to use accountability software.
A typical site offering such a product is this. Another site is “Covenant Eyes” (not connecting this morning). That site was mentioned in the comments to the ACLU blog post (given in yesterday's entry on this blog, about the COPA appeal).
Thursday, June 12, 2008
The ACLU reports that oral arguments on the government’s appeal to the Third Circuit in Philadelphia took place yesterday June 12. This is the third time the ACLU has been before the Third Circuit in the nine year history of the litigation over the Child Online Protection Act, whose enforcement was first enjoined in early 1999. Chris Hansen (not the same person as the Dateline reporter) argued for the ACLU. The original opinion declaring COPA unconstitutional was rendered by Judge Reed March 22, 2007 (after a bench trial in the fall of 2006 in Philadelphia) and reported on this blog that day. The trial progress had been reported on this blog in the previous fall.
The Supreme Court has indicated that it would strike down the law on the merits (outside of the community standards debate in 2002) if filters can be shown to work. The trial showed that they work about 95% of the time. The government is left to argue that it has an “in loco parentis” responsibility to protect the children of parents who do not know how to operate their kids’ computers and install filters. And it seems to believe that it should control domestic “harmful to minors” material before it tackles the problems from overseas servers (the ACLU calls this the “belt and suspenders approach”).
The efforts by New York state to get private communications carriers to remove CP materials from their servers may further back up plaintiff’s claims that private screening techniques work, even if there are other legal objections to depending on “private censorship.”
The ACLU’s blog entry yesterday (June 11) was “Take Three: Appellate Court Hears Challenge to Internet Censorship Again,” link here. I love the ACLU's title "Blog of Rights: Because Freedom Can't Blog Itself."
Wednesday, June 04, 2008
Parental monitoring: Kidzui site may provide a useful tool for parents to lock a kid's computer to specific sites
This morning (June 4), the NBC Today show presented a report on a new kids web service which is supposed to be set up to make it easier for parents to control what their kids may access on the web. The site is called “kidzui.com” ("The Internet for kids"). The parent can install it free (I’m not sure if it's just for Windows), and the site controls what web sites can be visited by the kid on that computer. The sites come from a “white list” made up by a panel of teachers and parents. The site home page has a promotional video with kids that does seem a bit silly. The site appears to be designed to benefit mainly parents of younger (elementary school and earlier) children.
Certainly, any reliable service that makes the job of parental monitoring easier is, given the legal struggles going on, a welcome development.
Tuesday, June 03, 2008
I still have not found any evidence of “progress” in the Third Circuit (in Philadelphia) of the Department of Justice’s appeal of Judge Lowell Reed’s ruling that the Child Online Protection Act of 1998 is unconstitutional (this blog, March 22, 2007). I’m trying to track this with some other news-tracking sites, like Mixx.
I did find some interesting discussion on the “how things work” sites about how filters for children work. There was a lot of testimony about this at the COPA trial in 2006. The main techniques are blacklists and keyword matching. There is a “How Stuff Works” link by Jonathan Strickland that I found on Mixx, link here. But filtering software, like software trying to detect spam, has a hard time placing the words it finds in context, particularly in English, with its “analytic” grammar (Wikipedia explains that pretty well), its heavy use of idioms and slang, and its reliance on context (rather than conjugation) to establish subjunctive mood or “fictive” speculation.
That’s why a cooperative venture among software developers and companies that facilitate user-generated content on the web to develop labeling schemes ought to be a good thing. Unfortunately, as noted before, the bills currently in Congress aiming at labeling look rather Draconian and inflexible (discussed on this blog previously). I checked recently, and both House and Senate versions have been sent to committees, but are not doing a whole lot (thankfully). (Look at the January 2008 entry on this blog for details for S. 1086 and December 2007 entry for H.R. 837; follow them on govtrack.us .)
There is no substitute for parental involvement in a child’s learning to use the Internet properly, any more than there is a substitute for parental supervision of a teenager’s learning to drive a motor vehicle. And for teens, there is no substitute for real-world success (in bricks-and-mortar learning and regular activities) in a school environment.
Monday, May 19, 2008
Supreme Court upholds law that arguably could affect legitimate movies (online, DVD's, theaters) when in commerce
The Supreme Court today upheld a controversial provision of the law colloquially called the PROTECT Act, 18 USC 2252A(a)(3)(B). The Cornell Law School link for the law is here. The case is United States v. Williams and concerns only a portion of the conviction of Mr. Williams. The concern was that possession or exhibition of movies, videos, DVD’s, or even website clips of scenes from films like Steven Soderbergh’s "Traffic" could lead to prosecution under the “pandering and solicitation” provision of the child pornography law, even when youthful-looking but legally adult (18+) actors appear in explicit scenes. The concern has been raised about many other films (like the indie “Mysterious Skin” and maybe “The Deep End”). Some people had suggested that merely talking about a c.p. item could constitute "pandering" it, and it is true that one can be prosecuted for trying to distribute an item not physically in one's possession, or even an item that does not exist but that the speaker and / or message recipient believes to exist. Justice Scalia, writing for the majority in the 7-2 decision, said that the law could not be construed this way. His opinion stresses that the concept "explicit" is clearly written into the law (pp 10 and 11 of the slip opinion, below), and therefore the implied but non-explicit portrayal of teen sex in some "R" rated movies (the rating system is mentioned) is not at risk. In some way, the underlying illegal material (even if only "believed" to exist) must involve images. In dissent, Justice Souter was concerned that the law would allow the pandering of certain images to be prosecuted even when possession of those same images could not be.
The text of the Opinion is here on the Supreme Court's own site (PDF format) as a slip opinion. It could have minor editing later.
The AP story is by Mark Sherman and appears here.
The CNN story is by Bill Mears, the CNN Supreme Court Producer, and appears here.
Linda Greenhouse has a straightforward story on page A17 of the May 20 New York Times, here, and explains the details and circumstances of Mr. Williams's federal criminal convictions.
Fred Barnes has a particularly cogent report on p A01 of the May 20 Washington Post, here. This is the clearest discussion so far of the somewhat confusing issues involved. The Post also published a cogent editorial May 22, p A24, "Safeguarding Children: The Supreme Court upholds a carefully crafted law targeting child pornographers," link here.
There remains a nagging concern that a prosecutor might not agree with Scalia’s analytical comments that legitimate artistic films or videos (or advertisements for them) are “safe,” and might go after them anyway. The Post editorial expresses a concern that prosecutors might not stay within the very narrow implementation given by the justices, who apparently didn't want to send this issue back to Congress (again).
However, in practice, the most common method of enforcement will probably be chat room stings by law enforcement, similar in technique to what we saw on Dateline. Courts have repeatedly upheld such stings (also buttressed by 18 USC 2422, the "coercion and enticement" law).
The Supreme Court had overturned a "similar" law in 2002 dealing with computer simulation of c.p., which, it was said, could have made "Romeo and Juliet" movies illegal.
Thursday, May 01, 2008
As I noted earlier this week on my Major Issues blog (link here), school districts are becoming much more concerned about the “implicit” harm to minors that comes from the social context of much content on the Web, than they are about pornography specifically, or about the supposedly narrower definition of “harmful to minors.”
Recently, there has been a lot of media attention to cyber bullying, which usually takes place from home, but which concerns school systems if the effects come into the school campus. There has also been concern about content posted by students and often enough, teachers, which might be generally acceptable on the Net in an “open society” familiar with other media (like the content of movies) but which could affect a teacher’s or student’s “reputation” when found directly by searches and misinterpreted out of context.
Some teachers, including substitutes, have been disciplined or fired when content was found and generated complaints from parents. The issues have included: use of certain words with double meanings, self-misrepresentation in “fiction” or “role playing”, underage alcohol consumption or drug use or its promotion, semi-nudity that is provocative but does not meet the usual meaning of pornography, and suggestive phrases used by people looking for dates.
Substitutes can often be removed from lists at specific schools without cause or explanation, and Internet speech can definitely lead to this, leading to deadlocked situations where the substitute teacher cannot get a "rational" answer to what was objectionable about his or her speech, or have a normal opportunity to use First Amendment arguments.
School districts could consider written personnel policy changes that could entrench the teachers' role "in loco parentis" in balance against public employee First Amendment claims. Generally, state laws regard teachers as having "custodial" or supervisory relationships with students, but school districts could explicitly say that this relationship extends to public spaces on the Internet (especially those accessible to search engines), even for subs (at least long-term subs). This sounds like the "at all times and in all places" doctrine of the military. They could hold teachers responsible for how students may interpret content found randomly out of context, judging its "purpose" rather than treating it as abstract artistic expression. This could lead to legal complications and in some cases might motivate enticement-based prosecutions. It might drive people away from teaching.
It's also noteworthy that a quiz at the recently opened Newseum in Washington DC specifically reiterates the idea that a principal may censor a student's personal Myspace or Facebook page only when a school function or school safety has been, or is obviously likely to be, endangered. A similar First Amendment interpretation would seem to apply to teachers, although the law could take into consideration the teacher's function as a role model and custodial authority figure.
Again, I wonder how content labeling could be used to help manage this kind of problem.
Monday, April 21, 2008
While we wait for a report on the progress of the DOJ appeal of COPA before the Third Circuit, I think we can contemplate another “practical” feature of the motivation behind laws like the CDA and COPA. Particularly, I wonder if COPA was as much a MMOPA (“Married Men Online Protection Act”) or even an “Adult Online Protection Act” as one for children, despite all the pretense of adult-id’s and proposed verification schemes that the trial court determined could not work reliably (or better than filters).
Indeed, there have been some pretty tragic and disturbing cases of families hurt by the behavior of other adults online in the news recently, as on my Internet Safety Blog -- but that is the point, Internet safety is an ongoing effort.
More to the point is that the availability of online materials for “fantasy” may be perceived as threatening to marriages as couples grow older and need to maintain emotional loyalty and interest, particularly if there are still minor children at home.
The new Newseum in Washington DC has a large First Amendment and free speech exhibit. I didn’t see any specific mention of COPA there yet, but there is a lot of material on student free speech. Perhaps that is coming soon.
Saturday, April 05, 2008
Yesterday, I described a new website safety rating mechanism, “Web of Trust,” here. Relative to this blog (which originated with COPA and an interest in labeling as a solution), the most important issue would be the reputedly “unsafe” nature of many supposed “adult sites.” It’s not clear whether that reflects primarily the risk to minors from those sites, or the fact that some of them may carry malware.
But MYWOT sounds similar in concept to the idea of content labeling. It sounds like a way for the user community to effectively put a “label” on a website. One could imagine further steps to bring such labels into metatags.
One issue in “child safety” goes beyond the legal notion of “harmful to minors” as it was litigated with COPA (and is mentioned in a few other bills now in front of Congress – see earlier posts here). That is, as we have called it, “implicit content,” and that could become a relevant concept to a user-generated rating (or even labeling) process.
That is, some visitors wonder, when they find provocative material, what the “purpose” of the posting was and what the author had to “gain” from it. This could create legal issues with the concept of “enticement,” and raise a safety question. Some people see this sort of problem as analogous to the camping rule of keeping “tempting” food out of reach of bears: you have to take the time-consuming or “expensive” precaution of hanging the food above the ground, to protect both yourself and the “resident population.”
We’re seeing this sort of reaction in the world of social networking profiles, as employers and others have become very concerned about “self-defamation” in profiles. It gets perceived as a way of encouraging others to imitate certain destructive behaviors (like underage drinking and drug use).
I also would become concerned that religious or cultural “anti-gay” notions could cause a perception of “unsafeness.” Some people view openness to procreation as an intrinsic part of “respect for life” and see open “admission” of homosexuality as a deliberate attempt to make others uncomfortable with their own ability to form and raise families. That process is much more important than we have thought for some time, and has become apparent again on the Internet, just as that kind of thinking drove some of the anti-gay fervor of McCarthyism until the early or mid 60s. That is one reason why biological or immutability arguments seem much more socially acceptable to people than presentations about personal choice of “values.” Some people believe (perhaps for religious reasons) that those who “relinquish” their desire to have a lineage open themselves up to expropriation by those who will raise families, and that this is part of a natural moral order, quite separate from our system of law, which has become increasingly individualistic. That sort of thinking makes certain self-portrayal akin to "obscenity" in a psychological view that resembles religious notions of "blasphemy." It reminds one of the concept behind the two "Ring" movies! All of this can track back to concerns about the motives people have for their writings, especially online. I ran into this when working as a substitute teacher with my writings, and some of them were included in the submission to the trial in Philadelphia.
In the early days of COPA litigation, the plaintiffs sometimes advanced the theory that COPA could be used to suppress all commercial gay web material. That may not seem credible given the literal wording of the HTM concept, but if one considers “implicit content” as an issue, it seems more feasible. It will have to be watched in conjunction with new bills in Congress as well.
Monday, March 24, 2008
COPA and implicit content: social networking sites, even P2P, could answer "community standards" questions with affinity groups
In the first go-around before the Supreme Court with COPA in 2002, the Court, recall, ruled that the statute couldn’t be invalidated just on the basis of a community standards argument alone. In general, with troublesome issues, the Court held, it’s important, even with the Internet, to maintain the possibility of using the community standards concept (the Miller and Hamling tests, etc.). It’s been OK to use it with obscenity, so one cannot dismiss the concept out of hand. Justice Breyer had suggested the possibility of a “national community standard” for what “harmful to minors” might mean.
Eventually, as we know, COPA was returned to the district court in 2004, and the District Court in Philadelphia struck it down in March 2007, about a year ago today (see this blog), on other grounds, much of them having to do with the ambiguity of the HTM concepts and the impracticality of the “adult id” affirmative defense. It is now under appeal by the DOJ and we may learn something about how the oral arguments went soon.
But I wanted to note that the emergence of social networking sites, with the idea of affinity groups (a concept particularly promoted by Facebook) at least provides some practical implementation of the notion of “community standards” within online communities.
Consider the issue of the “screenplay” that got me in trouble (as I explain in the July 27 2007 entry on my main "BillBoushka" blog [links from profile]). I had posted it in the "scrplys" directory of my very public doaskdotell.com website in the early spring of 2005, just about the time social networking sites were starting to be noticed. Later, as I was still substitute teaching, the concern surfaced that, although the screenplay was fiction, the main character resembled me and that, when found by students at home, it might be viewed as “enticing.” Although the material probably did not fit the strict definition of HTM in COPA, the ACLU attorneys included the item in the materials submitted to the Court in the 2006 COPA trial.
The emergence of social networking sites and affinity groups raises the opportunity to circulate risqué materials in a closed group. Some cities have in-person screenwriting groups and even table readings, but the possibility exists for carrying out such activities online (even the table readings) with specialized web applications. The question arises, then, why wouldn’t such technology (which could either be web-based or perhaps P2P) provide an adequate opportunity to circulate such materials in places where they will get the constructive attention (leading to possible sale) that the writer really needs? One example would be the three Project Greenlight screenwriting contests held (by Miramax Pictures and Dimension Films) online a few years ago.
The issue here is not so much HTM as “implicit content” – imputing the risk that material will be misinterpreted (as to legal intention) when found by search engines out of context.
Facebook does have a group called “Hollywood Interrupted” which I have just started to explore. You have to join Facebook first to sign in, but anyone can join now and start by affinity just with the city of residence. The visitor may want to check “Fun Joey’s” blog entry on this.
Friday, March 07, 2008
Should public school systems be concerned about COPA? I think so, although it has to be put in perspective.
In the Spring of 2005, when I was substitute teaching, I did have the experience of briefing some social studies and history teachers on the issue in their teachers’ work room during the “planning period,” at a Fairfax County secondary school. We looked up the 2004 Supreme Court Opinion (which called for a “trial on the merits” while leaving the injunction in place) on a work room computer, and then I showed them a link on my own site about my visit to the oral arguments at the Supreme Court on March 2, 2004, link here.
Later that Spring, in Arlington, at the Career Center, I addressed some students on Internet safety, and COPA was mentioned at least briefly.
This turned out to be dangerous. My openness about my material got me in trouble the following fall at another high school, here.
COPA, the government claims (and still does, in its latest appeal to the Third Circuit) was aimed at preventing commercial pornographers from displaying “harmful to minors” material on “teasers.” It was essentially a “son of CDA”, supposedly allowing an adult-id mechanism, which the trial record shows could not be very effective, to provide an affirmative defense.
If the DOJ is to be believed, it hardly seems that these “teasers” are the main threats to kids on the Internet. Anybody who watched NBC Dateline and Chris Hansen’s sensational TCAP series about chatroom stings knows that. Furthermore, one of the biggest problems is kids posting inappropriate personal information about themselves and their families on the Internet, especially on social networking sites, for the future viewing of employers and schools. The ethical issues underlying this whole problem are well beyond the scope of COPA (they invoke the idea of “implicit content”).
It’s interesting how lawyers on both sides are well paid to write detailed briefs on matters that now seem to miss the real points that we should be debating.
Were I to have a similar opportunity to speak at school today, I would have to stress these other matters, but one new wrinkle is also content labeling. That’s a great idea, but already some proposed legislation in Congress would muck it up, with mandatory features affecting even non-adult portions of small business websites. One must proceed very carefully with this. The visitor can keep track of the legislation at my Wordpress blog, here. That link also lists some other laws (regarding filters used in school libraries) that are sometimes confused with COPA.
Saturday, February 02, 2008
The previous posting notes that Congress has taken note of the content labeling concept, proposing regulations that would require commercial sites to be labeled for HTM content. As noted, many questions remain to be answered. For example, many web-authoring software packages don’t yet provide hooks for installing content labels, and there is confusion as to which labels would be recognized by the law (several companies and organizations are working on the issue now; I don’t yet know where Blogger and Wordpress stand). Search engines are starting to work with ICRA and provide filtered searches for labeled content. That could pressure more webmasters to use labels. Another problem is that sometimes every file (even “G rated” files) on a site must be labeled to certify the site, and that is still too cumbersome. And the new federal law, as proposed, would require that files with HTM content not even show up in a search, but be reached only through introductory warning pages. I wonder if the law would also require the webmaster to label every file with something, even the "G-rated" files. For many older sites, this would be too cumbersome, unless more software products are developed.
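One way to see why whole-site labeling is cumbersome is to imagine the audit a webmaster would have to run: every file must be checked for a label. A rough sketch follows; the meta tag name "content-label" is hypothetical, chosen for illustration (ICRA's actual scheme used RDF/PICS metadata with its own vocabulary):

```python
# Rough sketch of auditing a site's pages for a content label in a
# meta tag. The tag name "content-label" is hypothetical; ICRA's real
# labels used an RDF/PICS-based format with a different vocabulary.
import re

LABEL_RE = re.compile(
    r'<meta\s+name="content-label"\s+content="([^"]*)"', re.IGNORECASE)

def find_label(html):
    """Return the page's label string, or None if the page is unlabeled."""
    m = LABEL_RE.search(html)
    return m.group(1) if m else None

# Two toy pages: one labeled, one not.
pages = {
    "index.html": '<html><head>'
                  '<meta name="content-label" content="general">'
                  '</head></html>',
    "essay.html": '<html><head><title>Untagged essay</title></head></html>',
}
unlabeled = [name for name, html in pages.items() if find_label(html) is None]
print(unlabeled)  # ['essay.html']: every such file would need a label added
```

For a large, older site with thousands of hand-authored files, the `unlabeled` list would be long, which is exactly the burden described above when certification requires labeling even the "G-rated" files.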
It’s well to take a deep breath here and recall how we reached this point in our debate of internet censorship and filtering. The “family values” crowd is always pointing out that parents make real sacrifices to raise kids in a stable environment, and are burdened in keeping up with technology, and therefore that individuals without children should be expected to restrain themselves online to make it easier for parents, who should not be expected to keep up with every technological gimmick. Indeed, the wording of the 1998 COPA law tried to recognize this idea of “shared responsibility” with parents. One disturbing feature of this argument is that the Internet has allowed newbies to publish and gain global audiences with little or no capital or formal financial accountability. This, some say, should not be perceived as a fundamental right when it runs up against major social cohesion issues. Indeed, one wonders whether, had all of the issues around search engines, social networking, reputation, and security risks been anticipated and grasped around 1990, the general public would have been turned loose to “have at it” with the Web, at least without more formal requirements for training and capital resources. (One can imagine how a number of other social and political changes might have progressed differently.) The Supreme Court, however, in its rulings on the CDA (1997) and two rulings on COPA (2002, 2004), has come down forcefully on the principle that, in our implementation of the First Amendment (not always universally appreciated by the general public), government may not censor conceptual content even when the means of dissemination is novel. The 1997 CDA opinion mentioned the risk of the “heckler’s veto” but does seem open to the idea that efficient labeling mechanisms might one day exist, and should be used if they actually work.
The labeling issue needs some real leadership and coordination, especially in the private sector.
Friday, January 18, 2008
Parties concerned about the appeal of COPA should also track some new proposed legislation introduced in April 2007. It would be the Cyber Safety for Kids Act of 2007, S. 1086, introduced by Senator Max Baucus (D-MT).
I have a Wordpress blog entry giving more facts here.
The bill uses the same concept of “harmful to minors” as did COPA. It would apparently require owners of commercial websites to shield material behind adult-id mechanisms, an HTM-free introductory page, and use of a content label to be administered by NTIA. The webmaster would also have to communicate with ICANN. It is not clear whether NTIA would work with companies or organizations with already developed content-labeling systems, including ICRA, Safesurf, and even AOL and Microsoft. Most reasonable people believe that private industry should develop and administer the means to do something like this. It is not clear if pages containing HTM content would have to be hidden from search engine robots so that the visitor would have to go through a warning introductory page.
Obviously, this bill will raise many troubling questions already familiar with COPA.
Wednesday, January 09, 2008
On December 27, I wrote on this blog about the issue of “the whole” in COPA. In his March decision, Judge Reed was unwilling (based on previous Supreme Court opinions) to reassure web speakers that under COPA the “whole” of a minor or immature visitor’s experience on the Web could be taken as anything more than the specific file or image that the visitor clicked on.
A troubling experience in my own background could shed light on this, and I’ve decided to say a little more about it. Back on July 27, 2007 I wrote a blog entry (here) about a potentially serious incident, while I was substitute teaching, in which a screenplay that I had written was apparently found by school administrators.
From the viewpoint of COPA, and for purposes of this discussion, I’ll assume that a prosecutor could have claimed that the screenplay could appeal to the “prurient interest” of an immature minor. In fact, the work was mentioned in responses made to the Court during the trial. Some of the judge’s comments during the trial suggested a willingness to take seriously the possibility that COPA could intermingle with “implicit content” questions and could be applied to material like this. (There are other issues, though, like the “serious value prong,” and the range of ages and developmental maturities actually encompassed by the term “minor,” the so-called “Clark Kent Problem” that I’ve mentioned before.)
To review what is relevant: in the screenplay, an older gay man, employed as a substitute teacher, is accused of improper relations with a male student, prodded into a plea bargain, labeled as an S.O., and dies in prison. The student later plays the older man’s music in concert, a twist that I thought of as being like “Dorian Gray”: the man lives on through his music, despite the loss of his own life. The story is complicated by the fact that, early on, the student saves the teacher’s life with a defibrillator when the teacher has a cardiac arrest. (This has not happened in fact; that is one way you know the work is fiction.) Their lives intermingle after somewhat reckless behavior by both parties (it is important to the plot that the elderly man takes prescription medications that could impair his judgment), resulting in the boy’s parents bringing on a prosecution. However, within the fictitious universe of the screenplay, it is not clear that the teacher actually committed a crime, yet he cannot defend himself against the accusations. Although there are scenes of some intimacy in the proposed film, there is nothing explicit; a film like this would probably receive an MPAA rating of “R”.
The problem at the school in the fall of 2005 was that the administrator apparently thought that the fictitious screenplay character was me, and that the screenplay amounted to an indirect “confession.” Had the fictitious incident been heterosexual in nature, the character could not have been me, and there would have been no such problem. (The fictitious story, though, then could not work as an artistic matter.) This is the “Touching” problem discussed in the July blog entry. That precedent comes from California, and would not be legally binding without a similar case in Virginia, although it could be considered “persuasive authority” that Virginia could adopt as a precedent.
But let me come back to the question of the “whole” in Internet literature. The administrator apparently felt that a minor would be titillated by the belief that the character is “me.” But this in itself implies a paradox. The minor visitor would have to view many other files on the website to figure out that the character is “me,” if that is even true. There is the issue that the character is called “Bill” and my name is “John”; but, true, my given nickname was “Bill” (for “William”), and I’ve used it in my books as a pseudo-penname (a double take, to be sure). The administrators actually viewed “about me” files on the site (I can tell from the IP addresses of the searches) to establish that it is me (they could also have looked at WHOIS information, a more prudent thing to do). But, in any case, in viewing the other files, the visitor is supposed to develop a “context” that is larger than the universe of that one file or of the screenplay itself. So therein lies the paradox.
The administrators obviously must have believed that the material could prove enticing to a student who stumbled on it at home after googling my “pen name.” The administrators also probably believed that the circumstances could generate false accusations that in practice would be difficult to defend against (but that is the point of the screenplay!). From a practical viewpoint, there was tremendous media attention at the time to Internet misconduct, but mainly in chat rooms (a different situation than this), some of it involving teachers, much of it from the sensational NBC Dateline TCAP series with Chris Hansen, whose broadcasts had just started. This focused legal attention on the “intent” behind Internet communications, as well as on its otherwise more objective legality or the actual results of the conduct. State and federal “enticement” laws were used, one of the most sensational cases being one involving a rabbi; but the court opinion in that case, at least (U.S. v. Kaye, 2006 — must be downloaded from a Pacer subscription), did place limits on how these laws could be applied outside the narrow situations where misconduct and personal contact had actually “started” (as in chat room cases); the “fantasy defense” quickly became discredited. During the same period, the media reported a continuous stream of teacher misconduct from around the country, making the political climate on the issue dangerous. Little of this had happened when I posted the screenplay (in March 2005). I stopped teaching in December 2005, and removed the screenplay before starting another contract job with the school district in May 2006. There was only one case in which a student made a remark in class that could reasonably be connected to having found the item (shortly before the problem with the administrators, and at a different school). An unrelated problem, involving spotting students who were in bars illegally, occurred a year after the item was taken down (story here).
I see a lot of independent films that propose troubling “existential” questions, and I don’t take them as meaning that I am supposed to “do” anything. However, I live in an individualistic, adult world that accepts the idea of “thought experiments” (as Andrew Sullivan calls them). The world of public schools is one in which the “clients” (the students) live in a world of social hierarchy centered around the nuclear family (hopefully), and then the authority structure set up by the school system with its teachers, who are supposed to represent the (balanced) interests of the parents. In retrospect, I can certainly understand how people who live in that emotional world would see such “works” as activity-provoking. Certainly, permanent teachers with real authority (including grading) over students must accept their responsibility this way and moderate their personal online behavior accordingly. I was, however, “just a sub.” Not more, at least not for now.
Of course, the other subject that this whole story brings to mind is “reputation defense.” Presumably, someone who is concerned with his “reputation” as a teacher wouldn’t voluntarily post material in public that could be misconstrued and undermine his effectiveness in the classroom. But I was an interim, day-to-day “sub” with no authority. I don’t feel that “reputation” raises the same concern for me, though I can see why others see it as a “social” concept. One can see how these concepts about minors and Internet content blur: chat rooms, enticement, ads, “reputation,” as well as concerns about pornography or “adult content” (with the associated concerns about filters and labeling) in the sense of COPA.
It’s interesting that the FCPS school system had defibrillators in all the schools by 2007, although the plan had been announced in 2002. The school system even made a training video with live student subjects.
Visitors may want to compare this to a current case about a "fictitious blog" with respect to a divorce case, discussed recently on my main blog, here.
Saturday, January 05, 2008
There is a Fourth Circuit ruling in PSINet et al and United States Internet Service Provider Association v. Chapman and Cambloss, Commonwealth of Virginia, regarding Virginia Statute 18.2-391, which might be considered “Virginia’s COPA.” The law, in a manner similar to COPA, would penalize the posting of (effectively) “harmful to minors” material for commercial purposes, in addition to covering sales of these materials in the print, video, or “bricks and mortar” world. The Fourth Circuit decided the case on March 25, 2004, and the link is here.
The opinion is quite complicated to read (more complicated than the Supreme Court or Judge Reed’s District Court rulings on COPA), and it mentions COPA only once, in a dissent.
There is a lot of discussion of an earlier case, Virginia Booksellers Association v. Virginia. The Appeals Court disagrees that the earlier law would automatically include regulation of the Internet, and it also reiterates the now familiar arguments that make laws like the CDA and COPA overbroad from a First Amendment perspective.
The text of the law, as written, does not appear to offer the use of an adult ID as an affirmative defense, but the Commonwealth apparently argued that this is implied. The credibility of such defenses in a law like COPA has been successfully challenged.
It is not immediately clear whether the Virginia Supreme Court still has other matters to decide regarding 18.2-391, and I’m still trying to find out. With Judge Reed’s opinion on the books for COPA, it is hard to see how the Virginia law could stand as long as COPA itself is invalid.