Thursday, February 26, 2015

Blogger's plan to ban porn raises a question: why not resume the ICRA content-labeling project (once based in the UK) and have Google pick it up and run with it?

The very recent controversy over Google’s plan to ban “sexually explicit” images and videos from public-mode sites on Blogger (apparently including those mapped to custom domain names) on March 23 seems to short-circuit a real debate we should resume: content labeling.
Right now, “adult” blogs are supposed to throw an interstitial web page warning viewers, who are required to sign in to a Google account to show they are adults.  “Adult” YouTube videos don’t throw the page, but do require a sign-in, too. 
One problem with this approach is that visitors tend to presume that the material behind the interstitial is pornography.  But, as explored earlier, many non-pornographic sites should not be seen by less mature minors, and it is possible for a site to be pornographic or adult with words alone and no images (although c.p. laws in the US apply only to images, videos, and possibly drawings or cartoons; overseas they sometimes apply to words as well).
AOL experimented with content labeling with its "Hometown AOL" blogging platform which it shut down in 2007, and provided a way for users to export to Blogger. 
Google could have an opportunity here: to pick up the work abandoned by the former Internet Content Rating Association (ICRA) and later by the Family Online Safety Institute.  Google could develop a metatag or semantic-web application to let bloggers and web publishers label their content (in a number of categories, including violence and age range), and then make seamless changes to the Chrome browser (and its search engine, although the latter largely happens now) so that parents could configure the settings on kids’ computers or phones.  In a typical family (although a low-income family will have more problems with this), the parents could have, say, laptops or their own phones with full access, while kids’ computers could be set up for more restricted access.  Users could reach age-appropriate content without the objectionable “pornography” warning.  Google would have to coordinate with other vendors (Apple for Safari, Mozilla for Firefox, Microsoft for Internet Explorer, Wordpress for other blogging platforms, and even vendors of website-creation software like Microsoft Expression Web) to come up with consistent standards. 
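To make the idea concrete, here is a minimal sketch of how a browser-side filter might read such a label.  The tag name (“content-rating”) and the label vocabulary are hypothetical, invented for illustration; they are not any actual Google, ICRA, or PICS standard, and a real scheme would need a much richer vocabulary and digital signing.

```python
from html.parser import HTMLParser

# Hypothetical label vocabulary; the real ICRA/PICS vocabularies were far richer.
ALLOWED_FOR_CHILD = {"general", "all-ages"}

class ContentLabelParser(HTMLParser):
    """Extracts a hypothetical <meta name="content-rating" content="..."> tag."""
    def __init__(self):
        super().__init__()
        self.rating = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name") == "content-rating":
                self.rating = d.get("content")

def page_allowed(html_text, child_mode=True):
    """Return True if a child-mode browser setting would show this page.
    Unlabeled pages are blocked in child mode (fail-closed) -- a policy
    choice a real standard would have to debate."""
    parser = ContentLabelParser()
    parser.feed(html_text)
    if not child_mode:
        return True
    return parser.rating in ALLOWED_FOR_CHILD

page = '<html><head><meta name="content-rating" content="mature-themes"></head></html>'
print(page_allowed(page))  # → False (blocked in child mode)
```

The fail-closed choice for unlabeled pages is itself debatable; an opt-in labeling scheme only works if parents accept that unlabeled content is treated as restricted.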
This effort was largely carried out in the UK before.  I don’t know why it was stopped. 
The effort would require a project team, a consortium, and hiring both systems analysts (to nail down the requirements) and experienced coders on multiple platforms and hardware.  Yes, it would cost something, and yes, it would create more jobs, in places like Silicon Valley, Texas, North Carolina, and probably Canada and the UK.  It might be managed from Britain because it started there. 
I am retired now, and an independent writer.  But would I help with this?  Yes.  Though at 71, it would amount to a job for me. 

Really, the pressure against service providers regarding terror recruiting from overseas will soon be a much bigger problem than porn. 

Update:  Feb. 27

Blogger has deferred the new policy;  see the Product Forums for Blogger today (link). 

Tuesday, February 24, 2015

Blogger announcement on banning porn from public weblogs reminds us of the battle over COPA

Today, there has been considerable uproar and controversy over Google’s announcement of plans to prohibit nudity and pornography on public blogs from March 23, 2015 on, applied retroactively to existing blogs. Blogs that violate the policy will be marked private and removed from search engine results.  I gave a detailed discussion on my main blog today.
What is remarkable is how this sudden announcement parallels the controversies over COPA, the Child Online Protection Act, when it was litigated. Of course, a private company can set up policies as it wishes.  But the real practical problem has to do with determining exactly what content meets the criteria for redlining (eventual banishment).
There is some confusion, too, over the fact that Blogger allows users to mark blogs as adult content, which results in an interstitial screen requiring sign-on to a Google account to view. Google says it retains the right to mark blogs as “adult” itself, and that it is possible for a blog to be marked adult without actual images containing nudity, based on other considerations.  However, adult-tagged sites without nudity, it has said, will not be marked private or restricted.
During the court trial in Philadelphia in 2007, much of the debate over COPA concerned how “harmful to minors” was defined, and whether adult-verification schemes could be reliable.  Industry entities, like the ICRA, proposed voluntary content-labeling schemes, which could require some form of age verification.
Another problem is that “adult content” might be construed as content that is disturbing to some younger people because of ideas that it presents, not because of what it shows.  For example, discussions of “attractiveness” are disturbing to some people, especially women, because they could be construed as conveying that some people should be “left out”. 

My own experience, by observation, is that Google has accepted sexually explicit content on YouTube if it is marked as adult and throws the interstitial screen.  It says it wants the same policies across all its platforms, but if so, that would imply that many of these YouTube videos should be "private" too.  What am I missing?
Google has already been very pro-active in eliminating child pornography from its services, as documented here previously.

Note: Blogger has deferred the policy;  see posting Feb. 27.

Saturday, February 14, 2015

Man convicted of c.p. possession while "house-sitting" based on detection software that scans routers; could this be a frame-up?

Truthout has a disturbing story by Andrew Extein, about a man from Maryland who was convicted for possession of child pornography based on router tracking software, one image (probably with a watermark matching the NCMEC database).  It was allegedly found when he was house-sitting in Indiana.  He was arrested at an airport. The link here on “Digital Darkness” describes the complicated maze of rules for convicted sex offenders.
But the case is puzzling.  There have been a few other prosecutions based on router evidence, as in New York State and Florida, but in those cases the abuse came from an outsider logging on to the router (in one case, from a building 400 feet away).  Of course there is the issue of router passwords; police would know this by now.  Further, the offending image should have turned up on a computer (possibly deleted) unless it was really “erased”.   There are viruses (like the Moon Virus) that can cause redirection of a site, possibly to an illegal site or one with malware.  (One scam tried to get the user to download malware-laden Adobe Flash updates.)  Normally the user would see the redirection, but “hidden redirection” (a new kind of malware used in phishing attacks) is possible.  Rebooting a router (turning it off and then back on so that it does a firmware update, about a five-minute process) is supposed to clear the Moon Virus. 
It seems that authorities should talk about home user legal responsibilities in this area. 
The Wall Street Journal has a story by Gary Fields and John R. Emshwiller on federal abuse of plea bargaining, here. 
NBC Dateline could look into this subject, given its previous “To Catch a Predator” series with Chris Hansen in 2005-2006. It could look at what the terms of probation for some of the offenders were, and also look at the router issue. 


Tuesday, February 03, 2015

Firefox "selective forgetting" of browser history and searches shows concern over prospective monitoring of borderline illegal behavior

There’s a story from back in November 2014 reporting that Mozilla Firefox offers the ability to selectively forget a time range of browsing history.  And DuckDuckGo will let users control what Internet searches their browser “remembers”; news story in Linux Insider here. 
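As background on what “forgetting a time range” means mechanically: Firefox keeps browsing history in a SQLite database (places.sqlite), where visit timestamps are stored in microseconds since the Unix epoch.  The sketch below is my own simplified illustration of the idea, assuming a stripped-down version of the moz_historyvisits table; real Firefox maintains several related tables this ignores, so this is not how to edit a live profile (and any such edit should only be tried on a copy, with the browser closed).

```python
import sqlite3

def forget_range(db_path, start_us, end_us):
    """Delete visit records whose visit_date falls in [start_us, end_us).

    Assumes a moz_historyvisits-like table whose visit_date column holds
    microseconds since the Unix epoch (Firefox's convention).  A real
    implementation would also update related tables (moz_places, etc.).
    Returns the number of rows deleted.
    """
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        cur = conn.execute(
            "DELETE FROM moz_historyvisits "
            "WHERE visit_date >= ? AND visit_date < ?",
            (start_us, end_us),
        )
    deleted = cur.rowcount
    conn.close()
    return deleted
```

The point of the sketch is simply that a time-ranged delete is a precise, surgical operation, quite different from the blunt “clear all history” button, which is why the feature matters for the monitoring concerns discussed below.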
There is certainly more concern that governments could become more aggressive in monitoring not only possible terrorist connections, but also questionable searches that might be on the “borderline” of child pornography, particularly in the context of ephebophilia.   Police do use searches and browser history retrospectively for evidence when there is already probable cause.  The concern is that even companies could decide to “spy” to reduce even suspicious use of their networks, because of eventual growing concerns about downstream liability. This could, as noted before, lead also to scanning cloud accounts.
There was something indeed to Edward Snowden’s concerns.