Saturday, April 07, 2007
ICRA has provided a link to a new site, Contentlabel.org, that will promote further industry standardization of content labeling. The standardization will be based on the W3C Semantic Web's Resource Description Framework (RDF).
The effort appears likely to lead to industry standards for operating systems, browsers, and webmasters for properly labeling content, so that parents and schools can filter content appropriately for minors at different levels of maturity. There would be special requirements for patently "adult" websites. Recall that in the COPA trial there was a lot of discussion of what the concept of "harmful to minors" could mean, and a fear that it could be much broader than this.
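To give a sense of what an RDF-based label might look like, here is a minimal sketch in Turtle syntax. The property names and namespace below are hypothetical placeholders for illustration only, not ICRA's actual vocabulary:

```turtle
# A content label attached to a site, expressed as RDF (Turtle).
# The "label:" vocabulary here is invented for illustration.
@prefix label: <http://example.org/contentlabel#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .

<http://example.com/>
    label:nudity      "none" ;
    label:violence    "none" ;
    label:language    "mild" ;
    label:minimumAge  "13"^^xsd:integer .
```

Because the label is machine-readable metadata rather than free text, a browser or school filter could in principle parse it and apply different policies for different age groups.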
The new contentlabel site has a blog and message boards. I'll have more about this later.
Sunday, April 01, 2007
Syndicated columnist Jacob Sullum (a senior editor at Reason magazine) furnished The Washington Times with a column, “Filters Better than Laws,” in the Commentary section, p. A13, on March 31, 2007. Sullum summarized Judge Reed's opinion on COPA, hitting particularly hard at the ambiguity of the definition of minors and of the actual meaning of “harmful to minors.” Sullum believes the judge bought the idea (and he agrees) that the HTM definition could reasonably extend to subject matter that must be judged by the standards of any minor, so educational materials about STDs, for example, could fall under it. He concurs that COPA was both too broad and too narrow (it did not apply to overseas sites) and that filters are reasonably effective if used properly.
The whole saga of COPA is a lesson in how quickly technology changes our perception of things. When the original Communications Decency Act was before the Supreme Court in 1997, I actually thought that an adult-ID verification requirement for “adult” materials could be workable. I even said so in discussing my proposed “Bill of Rights 2” in the last chapter of my first book. At the time, I was finishing my first “do ask do tell” book, and I imagined a website as a supplement to the book, mainly to be used by its purchasers. I thought that “word of mouth” would spread the reputation of a book, which would then need to be kept up to date by a footnote website, possibly with a login for owners of the book. At the time (around 1997) the notion seemed credible. I got television exposure in the Twin Cities (MN) from my Hamline lecture (delivered from crutches); I would meet other authors, like Vince Flynn, who promoted self-publishing of books. At the time, e-books and proposals like Softlock were circulating. Personally owned whole domain names still seemed a bit of a novelty; I had purchased one from a coworker who ran his own company as an ISP. That was the nature of the early online world then.
What I didn’t see was how important search engines would become, simply because of the mathematics of binary search: you don’t have to raise the number 2 to a very high power to reach a billion (remember your logarithms from Algebra II? Search engines would make a good critical-thinking exercise for high school math students). By the end of 1998 they had clearly become the main way newcomers on the web got known. You no longer needed to code metatags on your web pages; the search engines picked them up anyway. The trick to being found was to keep pages static (so that they load quickly) and to use plenty of proper names and technical buzzwords; search engines favor less common words as keywords for identifying a source, and they prefer substantial content over gimmicks. I had all of this, so I started getting a lot of hits. By the end of 1998 I realized that COPA could shut down my whole future online.
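The logarithm point above can be made concrete with a few lines of Python, a sketch of the arithmetic rather than anything a real search engine runs. A binary search halves the candidate set at each step, so searching among n items takes about log2(n) comparisons:

```python
import math

def steps_needed(n):
    """Number of halvings needed to narrow n sorted items down to one.

    This is ceil(log2(n)): each comparison in a binary search
    cuts the remaining candidates in half.
    """
    return math.ceil(math.log2(n))

# 2^30 is just over a billion, so about 30 comparisons suffice
# to locate one entry in a sorted index of a billion items.
print(steps_needed(1_000_000_000))  # 30
```

That is the Algebra II exercise in miniature: solving 2^x = 1,000,000,000 gives x = log2(10^9) ≈ 29.9, which is why even a billion-page index is searchable in a handful of steps.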