Friday, September 12, 2008

Senator Lieberman urges YouTube to tighten standards for acceptable use; his concerns are specific, but what about COPA, and "implicit content"?


This morning (Friday Sept. 12, 2008) the Business section (D1) of The Washington Post carries a story by Peter Whoriskey reporting that YouTube will remove more “inciting” videos from its site and tighten its terms of service regarding certain kinds of enticing or hateful speech. YouTube has taken the action partly because of recent criticism by Senator Joe Lieberman (CT, now effectively “Independent”), who was specifically concerned about videos that appeared to be connected to Al Qaeda or to various tribal or sectarian groups in Iraq and possibly Pakistan or Afghanistan, endangering American troops. The link for the story is here.

The print version of the paper includes a copy of YouTube’s terms of service; the online version did not. However, publishing services generally (including “free” services from Blogger or Wordpress, and paid or subscription hosting services from companies like Yahoo!, Network Solutions and Verio, as well as AOL) have similar rules in their terms of service.

One problem is that if a publishing service tightens its acceptable use policies out of concern for one particular group (here, American troops overseas), it must enforce them uniformly with respect to all issues. This is similar to the well-known problem that airport screeners cannot profile individual travelers based on appearance and must apply the same rules to everyone.

Concepts like “enticement” or “hate speech” are particularly subject to interpretation. The United States Code does contain specific federal statutes regarding “coercion and enticement” of minors (18 U.S.C. § 2422). The problem is that both concepts tend to live “in the eye of the beholder” and turn on who the subjects are, who the speakers are, the relationship between speaker and subjects, and the manner of delivery of the speech. Asymmetric speech such as YouTube videos or blog entries may be more provocative than similar or identical speech embedded in a commercial format, such as a major motion picture. (This reverses or contradicts a commonly held perception that the First Amendment protects individual non-commercial speech more thoroughly than corporate commercial speech through establishment channels. The opposite is sometimes true, because a lot of First Amendment protection involves collective action. It is also true that YouTube is a private enterprise and can theoretically restrict speech as it pleases, but in practice YouTube is trying to comply with what it believes the law requires.)

A good example of such a situation is a particularly “offensive” dialogue that occurs halfway through the recent DreamWorks hit film directed by Ben Stiller, “Tropic Thunder.” Had that scene been posted separately on YouTube (and had it been an original scene, setting aside the copyright problems that would arise if “Tropic Thunder” already existed), it almost certainly would have violated YouTube’s “terms of service.” (As we know, there were demonstrations against the film and threatened boycotts, but DreamWorks did not pull it. I suspect the sequence would have violated the code for broadcast television, however.)

I discussed this problem in my blog posting here Wednesday, Sept. 10. People often want to make religious or “existential” moral “meta” arguments about a problem (say, sexual orientation), but others may feel that the only (“disguised”) point of the speech is to “target” them. Conservatives often make this complaint, particularly in relation to campus speech codes. (John Stossel has pointed this out in his “Give Me a Break” series.) Likewise, a video of a violent or disturbing event, objectively legal and comparable to a sequence that would occur in a Hollywood film, posted for “notoriety” but not for compensation, might be perceived as “enticing” because of external circumstances: the speaker has no believable motive other than to stir unrest in others. Maybe common sense applies (as with teen “fight club” videos; Hollywood again, with a famous film of that name!), but it could be very hard to draw a line.

I ran into a problem with a screenplay script (not a video and not a blog) on my own domain when I was substitute teaching, because it was thought to depict a character like me as vulnerable to manipulation by students into illegal activities. I say I posted it to demonstrate a problem in a work of fiction. Others say that if I am an “authority figure or role model,” I have no business suggesting that my own credibility in that responsibility could be compromised. What if I had filmed the screenplay with actors (with no explicit scenes) and posted it as a YouTube video? Theoretically, a particular person could be barred from asymmetric speech altogether because any controversial speech by that person could be construed as deprecatory and therefore potentially enticing.

Taken and applied literally, COPA could not have made the screenplay a violation, even if COPA had been upheld. But would the screenplay have violated the “terms of service” as written, given this interpretation?

That also brings to mind still another question. If the government appeals the latest COPA opinion and the Supreme Court somehow upholds COPA, will YouTube and others have to incorporate COPA into their “terms of service”?

As I noted, the article did not discuss blogs specifically, but blog entries often have embedded videos (from YouTube or elsewhere) or still images. On my blogs, many of my images are just decoration, unrelated to the post; many others obviously relate. I do try to avoid picking an image that, in context, would invite misinterpretation of a particular post.

Wednesday, September 10, 2008

COPA and implicit content: it's going to take real effort to keep "asymmetric" free speech protected


This is a note today just to rehearse the idea that remaining committed to free speech takes real effort.

The Internet has opened up the idea of “asymmetric speech” where one individual can reach a large audience without “permission”. We’ve seen this in business (with the launching of companies like Facebook), but it is also true of speech itself.

The possibility is disorienting in some ways. As we know from the COPA trial, parents have to deal with the possibility that their kids will find “objectionable” material online, sometimes posted by individuals who have not “taken the dive” to have children themselves. Parents have to learn how to use filters, the newer accountability software, and even content labeling, techniques which do work. Parents (along with schools) have to teach their kids safe Internet use in a way comparable to teaching safe operation of an automobile. (Oh, yes, remember folks, Smallville Season 1 shows the gifted Clark Kent driving a car or truck at the legal age of 14; he also surfs the Internet.)
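To make the filtering and labeling point a bit more concrete, here is a minimal sketch, in Python, of how a simple filter might combine voluntary content labels (in the spirit of the self-labeling schemes discussed in the COPA litigation) with naive keyword matching. Everything here, the function, the keyword list, the label vocabulary, is hypothetical and purely illustrative; real filtering products use far richer signals.

```python
# Hypothetical sketch of a parental-control filter: honor voluntary
# self-labels first, then fall back to crude keyword matching.
# Illustrative only; not any vendor's actual product.

BLOCKED_KEYWORDS = {"casino", "graphic violence"}   # hypothetical block list
BLOCKING_LABELS = {"adult", "mature"}               # hypothetical label vocabulary

def should_block(page_text: str, labels: set) -> bool:
    """Return True if this simple filter would block the page."""
    # 1. Voluntary content labeling: if the publisher self-labeled the
    #    page "adult" or "mature", block it regardless of the text.
    if labels & BLOCKING_LABELS:
        return True
    # 2. Keyword fallback: block if any listed keyword appears.
    text = page_text.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

# Usage:
print(should_block("Review of a new casino resort", set()))   # True (keyword)
print(should_block("Literary fiction excerpt", {"adult"}))    # True (label)
print(should_block("Homework help: algebra", set()))          # False
```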

There is another problem, more subtle, that is emerging: “implicit content.” The issue got mentioned in passing at least once from the bench during the COPA trial. Potentially, the concept means that a speaker could be held responsible (criminally or in tort law) for how another party (“the reader”) interprets his “intent” according to the socialization norms of the reader rather than the context intended by the speaker. One particularly troubling way this problem occurs is when the speaker presents himself as “vulnerable” according to the social (or even professional) norms of others but not his own. Generally, in the United States, this sort of idea is supposed to be invoked only when there is a threat of “imminent lawless action.” (That may be less true overseas, as in Britain.)

Recently, and especially in the past two or three years (as social networking sites became the norm), the major media have been representing the notion of “documenting one’s life online” as inherently dangerous to the self and to others, especially the family and school. In fact, public school administrators are particularly concerned about this problem, partly as a result of a few sensational tragedies but also because of the way the major media outlets cast the problem. Public school principals and high school history teachers are generally not informed on the intricate theories of applying the First Amendment the way young lawyers in urban happy hours are. And they have real practical problems, related to the unequal incomes and circumstantial opportunities of their “customers” (the kids and their parents).

It doesn’t help when a major US publisher refuses to publish a well-written controversial book because of perceived “threats” (there have been several other such problems around the world, mostly in Europe and Britain), or when the legal system invites subtle abuses (like “libel tourism”) by those who would subvert free speech for their own religious or political agendas. Protecting free speech from these more subtle threats is going to take real effort. Free speech (including free “meta-speech”) remains an important fundamental right even when the “existential purpose” of the speech seems troubling to some people.

Thursday, September 04, 2008

Implicit content: is attracting "illegal" comments and emails a problem?


I wanted to take a moment to note again a potential vulnerability in the legal system with regard to “implicit content,” an issue that got mentioned at least in passing during the COPA trial in Philadelphia in 2006.

It’s common for blogs and websites that deal with sexual subject matter (or with subject matter that seems “adult” to a casual observer or to a robot) to attract spammy comments, and spam emails back to the author. Practically all ISPs offer the ability to trap and moderate comments, and some trap spam in advance. Even so, there could exist cases where the federal government, or particularly prosecutors in some states, might try to claim that a website or blog had been set up “for the purpose of” attracting illegal materials.
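To illustrate what “trap and moderate comments” means mechanically, here is a minimal sketch in Python of a moderation queue with a crude spam heuristic. The class names and the marker list are hypothetical; actual platforms such as Blogger or WordPress use much richer signals, and nothing here represents any host’s real system.

```python
# Hypothetical sketch of a comment moderation queue. Comments are held
# until reviewed; a naive heuristic pre-flags likely spam.
from dataclasses import dataclass, field

SPAM_MARKERS = ("http://", "casino", "cheap pills")  # illustrative heuristic only

@dataclass
class Comment:
    author: str
    body: str
    flagged: bool = False

@dataclass
class ModerationQueue:
    pending: list = field(default_factory=list)

    def submit(self, comment: Comment) -> None:
        # Flag anything matching the crude heuristic; a human moderator
        # still makes the final publish/reject decision.
        body = comment.body.lower()
        comment.flagged = any(marker in body for marker in SPAM_MARKERS)
        self.pending.append(comment)

    def approve_clean(self) -> list:
        """Release unflagged comments for publication; hold the rest."""
        clean = [c for c in self.pending if not c.flagged]
        self.pending = [c for c in self.pending if c.flagged]
        return clean

# Usage:
queue = ModerationQueue()
queue.submit(Comment("alice", "Great post, thanks!"))
queue.submit(Comment("bot42", "cheap pills at http://example.com"))
print([c.author for c in queue.approve_clean()])  # ['alice']
print([c.author for c in queue.pending])          # ['bot42']
```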

Generally, if a user clicks on a link in an email or comment that downloads an illegal image (c.p.) onto his computer, that user has broken the law, given the way “strict liability offenses” work. (There may be no offense if the user does not click; there could sometimes be a problem with embedded images if HTML email options are turned on, depending on the settings of the email viewer.) It’s getting easier for law enforcement to detect these events, and there is more political pressure than ever on prosecutors to act “wherever they can get a conviction.” While prosecutors are generally conservative and cautious in the way they apply existing laws, in a few cases (like the “Myspace case,” or another case involving a blogger who made enticing posts about where to go for “illegal” purposes in the LA area) they have reacted with “creative prosecutions.” The problem is that it can be very tempting to pursue a conviction based on images on a person’s computer (even if placed there by another party without that person’s knowledge) if the law makes the technical aspect of proving the offense easy. Conceivably, the attraction of a large volume of emails or comments of an illegal nature could, in the minds of politically ambitious prosecutors in some situations, set up an “easy to prove” case.
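On the parenthetical about embedded images: an HTML email can reference a remote image that the mail program fetches automatically when it renders the message, which is why the viewer’s settings matter. As a hedged sketch (not any real mail client’s code), here is one way a cautious viewer might strip img tags so that nothing is downloaded without an explicit click:

```python
# Hypothetical sketch: neutralize <img> tags in an HTML email body so
# the viewer never fetches remote content automatically.
from html.parser import HTMLParser

class ImageStripper(HTMLParser):
    """Rebuild the HTML, dropping every <img> tag."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            self.parts.append(self.get_starttag_text())

    def handle_startendtag(self, tag, attrs):
        if tag != "img":  # also covers self-closing <img .../> forms
            self.parts.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        self.parts.append("</%s>" % tag)

    def handle_data(self, data):
        self.parts.append(data)

def strip_images(html_body: str) -> str:
    parser = ImageStripper()
    parser.feed(html_body)
    parser.close()
    return "".join(parser.parts)

# Usage:
print(strip_images('<p>Hi <img src="http://tracker.example/x.jpg"></p>'))
# -> <p>Hi </p>
```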

Countering these fears, Section 230 (of the Communications Decency Act, part of the 1996 Telecommunications Act) appears to protect ISPs, “free service providers,” and individual bloggers from civil “downstream liability” for wrongful postings made on their spaces by others (federal criminal law is expressly carved out of Section 230’s protections). Some people think that “brother’s keeper” provisions should be re-introduced into the telecommunications world, however.