Wednesday, December 16, 2015

Can terrorism promotion be screened for, the way child pornography can be?


FBI Director James Comey gave a press conference this morning where, speaking about the Chattanooga shooting, he said “social media is a weapon”.  It may have been less of one in San Bernardino, because it appears the husband and wife were radicalized before ISIL was prominent.

The biggest issues have to do with encryption, and with the use of social media accounts to broadcast messages that reach vulnerable people, including teens.  The problem has been more acute in Europe but is obviously present in the US as well.

It is “illegal” to plan an illegal (violent) act or to recruit others to do so.  It is not illegal to express a particular religious point of view, as the expression of ideas themselves is constitutionally protected in the US.

For this blog, the obvious question is comparison to past debates on “harmful to minors” (COPA) and, more recently, screening traffic for child pornography.

It is possible for service providers to screen posts for images (and sometimes videos) whose digital hashes (often loosely called “watermarks”) match entries in a database maintained by the National Center for Missing and Exploited Children. Gmail and YouTube do this, and there have been a few arrests as a result.  No such database exists for terror-related activity.  But it seems conceivable that one could be created for specific images, like the beheading videos often used in “propaganda”.
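
Mechanically, this kind of screening is simple to sketch.  Below is a minimal, hypothetical illustration in Python: each upload is hashed and checked against a blocklist.  (Real systems reportedly use perceptual hashes, such as Microsoft's PhotoDNA, so that resized or re-encoded copies still match; the plain SHA-256, the placeholder hash values, and the function names here are all illustrative assumptions, not how any provider actually does it.)

    import hashlib
    from pathlib import Path

    # Hypothetical blocklist; in practice this would be a database of
    # perceptual hashes maintained by an organization like NCMEC.
    KNOWN_BAD_HASHES = {
        "placeholder-hash-1",
        "placeholder-hash-2",
    }

    def sha256_of_file(path: Path) -> str:
        """Hash the file's bytes in chunks so large files don't fill memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def screen_upload(path: Path) -> bool:
        """True if the upload matches a known-bad hash.

        An exact hash only catches byte-identical copies; a production
        system would use perceptual hashing to catch altered copies.
        """
        return sha256_of_file(path) in KNOWN_BAD_HASHES

The matching machinery would work for any image category; what is missing on the terror side is the curated database itself, not the code.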

Twitter has been the main platform used for recruiting, and Twitter is getting “better” at closing down terror-sourced (mostly overseas) accounts.  While such accounts may be recreated under other names, it would take perpetrators some time to rebuild follower audiences.  It’s possible that the traffic source (by country) could serve as an additional screening signal.  Donald Trump may be right about that.
It also seems conceivable, though legally mushy, that the “harmful to minors” concept could be extended to include violent materials.

Monday, November 09, 2015

States grapple with extreme legal consequences in sexting cases, with laws written before smart phones


A recent "scandal" in Canon City CO is refocusing attention on how child pornography laws are interpreted, according to a story on the Wall Street Journal by Scott Calvert, link here. .Although a local district attorney is trying to avoid major prosecutions and the possibility of sex offender registration, the law, taken literally and written before smart phones, make the issue legally very dangerous, he says. Such extreme consequences have occurred recently in North Carolina and Tennessee.  Others say serious charges are warranted if coercion is involved, or images are circulated behind a (usually female) subject's back, constituting bullying.


Tuesday, November 03, 2015

Many sites require Facebook or other sign-on to view individual files with adult content; does this raise the issue of user validation again?


I’ve noticed recently that a few sites require the visitor to log in with a Facebook or Google account to view marginally “adult” articles or videos.  YouTube will require this for some adult videos, and expects publishers of adult videos to so mark their content.  And apparently Blogger wants its publishers to avoid embedding content that would require sign-in on public blogs (although it dropped its threatened ban on explicit images last March).  Sometimes YouTube videos get marked as “adult” after being up for some time. 
   
The capacity for sites to do this raises the question of voluntary screening of content, whether through voluntary content labeling (the ICRA idea, which has since disappeared) or renewed discussion of user sign-on protocols, which were seen as inadequate in the 2006–2007 COPA trial.  But technology is changing. Could biometric identification protocols come into play?

Tuesday, October 27, 2015

Amazon Partners requires participants to sign e-documents regarding COPPA compliance


I got an unexpected email from Amazon Partners (“no-reply”) Monday warning that I had not signed an e-form stating that none of my sites listed are intended for minors 13 or under, as apparently required by COPPA, the Children’s Online Privacy Protection Act.  The legal requirement reminds me of a law that requires age verification (18 and older) for actors in “adult” films.

There is a deadline of Oct. 31, 2015 to sign the form, or permanently lose membership in the program. I’m not sure if one could re-apply with a new site.
 
Only my Book Reviews blog had been listed, and I do not have any revenues owed to me.  I went ahead and logged on, found the form, and “signed” it (and filled out an IRS form, too).  But the Amazon Partners widgets have not worked for over a year now, and I have removed them.  My understanding is that they no longer work on Blogger and possibly can be used only with Wordpress or properly hosted content; this is a matter I have not yet had time to look into.  I don’t intend to use the program again until I can resolve the issue, which may not happen until early 2016.  I have considered converting to Wordpress, and condensing to fewer blogs, but that’s a matter for which I’ll give more details around the beginning of 2016.  There is some more fact-finding to be done.

Thursday, October 08, 2015

How pervasive, really, is Internet c.p.?




The question seems relevant because reports of arrests for possession (especially among respected people, often connected to school systems) have increased a lot in the local media over the past ten years, accelerating particularly around 2005 and 2006 (ironically, before COPA was finally struck down). 

In early 2014, The Huffington Post published a long piece by Mary L. Pulido on the issue, here. The findings report that white men were by far the biggest offenders in possession cases, and that consistently about one-third had conviction records for actual sexual assault or misconduct (against children or, usually, adult females). Anecdotally, most of the offenses were heterosexual (as in the cases on NBC’s “To Catch a Predator” a decade ago, with Chris Hansen). 
  
There was a PDF report from NCMEC in Alexandria VA in 2005, which still pretty much holds today, here.  Not considered is the possibility, usually remote, of accidental possession through a virus or malware, or even (in at least two cases) external misuse of a router.  

Monday, September 21, 2015

"User engagement" by companies of social media photos and posts could violate COPPA


A New York Times story (by Sydney Ember and Rachel Abrams) Monday about “user engagement” and social media, especially Instagram, does bring up the issue of child protection.
   
Companies are taking photos from social media accounts and sometimes using them to sell ads to friends, or in other promotional materials.

While this may breach terms of service and cross legal barriers with the FTC for users who use privacy settings (keeping their content “whitelisted”), the practice could run afoul of the Children’s Online Privacy Protection Act, or COPPA, for pictures of minors under 13, where parental permission is required.

It is also obviously sensitive for teenagers, because some mature more rapidly than others.  Some girls might look almost “grown” even by 12, and boys sometimes by 14 or 15.  Our culture has legally extended the time for which minors must be protected (relative to older cultures and past generations) to various ages for various purposes (from 18 to even 25), partly because of the amount of education and adult judgment it takes to function in an individualistic society like ours.

Tuesday, August 18, 2015

Fogle will plead guilty in controversial and legally troubling child pornography possession case


There is more development in the news story of Jared Fogle, who now will plead guilty to possession of child pornography after his home was raided, following the arrest in July of Russell C. Taylor.  CNN has a detailed story by Dana Ford here. That story links to an earlier posting by Mark O’Mara warning computer users that they can be held responsible for child pornography placed on their devices by others (like roommates, spouses, or children) unless they can show they didn’t know.  As noted in some news stories in the summer of 2013, the possibility of hacking (or malware), and of defenses to it, sounds troubling, even if actual incidents are infrequent.  Cases like Fogle’s are sometimes discovered through monitoring for digital hashes on the Internet by NCMEC in Alexandria, VA, but the government could become more aggressive in monitoring “the cloud” in the future.
  
CNN covered the latest story on AC360 Tuesday night.  Jeffrey Toobin reinforced what O’Mara says in his July 2015 op-ed on CNN.
 
Update:  As of Wednesday morning, the charges against Fogle became more "serious".  Follow the media. 

Monday, August 17, 2015

Reviewing the history of CPPA and "virtual child pornography"


When I saw the movie “The Diary of a Teenage Girl” Saturday night (review on the Movies blog Sunday morning), I wondered whether the images in the film, purporting to be of a 15-year-old girl, could be illegal (child pornography).  I presume that since the actress who played her was actually 21, there would be no legal problem (as long as the production company collected evidence of her age, which all reputable US, Canadian, Australian, and European movie studios do to comply with the law).

Previously I’ve covered COPPA and CIPA (as well as COPA) on this blog, but back in 1996 there was also a law called CPPA, the Child Pornography Prevention Act, which was largely overturned by the Supreme Court in 2002 (Ashcroft v. Free Speech Coalition).  Some technicalities regarding expert testimony remained, and then Congress muddied the waters again with the PROTECT Act (Prosecutorial Remedies and Other Tools to End the Exploitation of Children), which made “pandering” material to customers as child pornography a crime even if no real minors were involved.

All of this is explained pretty thoroughly in a somewhat rambling post by the First Amendment Center, “Virtual Child Pornography”, by David L. Hudson, posted first in 2002 and updated in 2009.  The article appears to show that, without the Supreme Court’s overturning of CPPA, movies like “American Beauty” and “Traffic”, let alone “Teenage Girl”, would be illegal.

The controversy over drawings or animations that simulate child pornography is similar, as Congress has tried to outlaw them.  If an actual minor had been the subject of such a drawing or painting, that would still seem to make the work, and possession of it, illegal.

The DOJ actually gives less detail in its own explanation.

 

Tuesday, August 04, 2015

UK activist proposes various child-protection measures like "delete buttons"; India enforces COPA-like measure on adults


A UK “peer” is calling for all websites that allow users to add content to have a “delete” button for those under 18.  She wants rules to make the Internet more suitable and welcoming for “young people”.
  
But most comment systems and social media already allow users to delete their own posts now.  And it’s unclear how she would identify those under 18.
  
The story by Glyn Moody on Ars Technica is here.
  
The “baroness” wants young people to have other digital rights, such as what others can do with their information. But the biggest right should be digital education – to understand that what you put up probably can’t be disappeared.

The story also mentions Google’s polite refusal to extend “the right to be forgotten” to the whole world. One country can’t impose on another one. 
  
Along the lines of memories of COPA, India has suddenly cracked down on porn sites, Washington Post story here.  But the concern isn’t so much “harmful to minors” as “harmful to the unstable”, and the crackdown goes much further with censorship than US law would allow.

Saturday, August 01, 2015

Age verification issue with dating app leading to "sex offender" case in Michigan raises new questions about possible future COPA


A recent case, where a teenager pleaded out as a sex offender (after a brief jail sentence) when his female partner had lied about her age, raises potential questions regarding any possible future COPA-like laws.  The girl had told the boy in messaging that she was 17, when she was actually 14, and she had signed up on the adult section (probably violating terms of service) of the “What’s Hot” dating app.  (I’m not sure which link from Google is correct given the information in news stories; it seems to be a downloadable app for smart phones.) 
  
The natural question is, could the app have verified her age?  We know from all the testimony of the COPA trial (2007) that reliable age verification or adult-id systems were viewed as insufficient at the time.  However, new biometric ID systems are appearing (especially with Windows 10).  It’s logical to ask whether they could be expected for dating sites, and ultimately for sites offering “adult” content and whether this issue could be opened up again. 

Biometric validation could be useful to sites like Facebook, in really ensuring that new users are old enough according to the site's policies.
    
The details of the Michigan case were posted today Aug. 1, 2015 on the TV Reviews blog (with story links).

Update: Sept 21

The judge has vacated the sentence, and it is likely that the mandatory s.o. registration will be removed. 

Monday, July 06, 2015

Could site owners be liable for illegal content placed by (enemy) hackers?


The presence of many old sites on the Internet, infrequently visited and updated by owners, could, over time, present new legal risks to owners.  This would follow from a story on the Internet Safety Blog June 25 that webhosting providers are stepping up services to protect sites from hacking and malware.
  
For example, it’s possible to imagine the government (or NCMEC) running scripts to look for registered images of c.p. (according to digital hashes) and going after owners.  Google can do this already with Gmail attachments, and at least one arrest, in Houston, TX, resulted a few months ago.  But an attacker could “frame” an owner by deliberately posting images that could then be picked up.  An attacker might even add new images that aren’t linked anywhere but that could be found by a linear scan.  Anti-virus packages that use cloud-based security (like Webroot) could eventually scan for these images, also. 
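
To make the “linear scan” point concrete, here is a minimal sketch (in Python) of the kind of self-audit a site owner could run to spot planted files: it walks a web root and flags image files that no local HTML page links to.  The web-root path is a made-up example, and a real audit would also compare file hashes against a trusted inventory of what the owner actually published.

    import re
    from pathlib import Path

    WEB_ROOT = Path("/var/www/html")  # hypothetical web root

    def linked_names(root: Path) -> set:
        """Collect file names referenced by src/href attributes in local HTML."""
        refs = set()
        for page in root.rglob("*.html"):
            text = page.read_text(errors="ignore")
            for m in re.finditer(r'(?:src|href)="([^"]+)"', text):
                refs.add(Path(m.group(1)).name)
        return refs

    def orphan_images(root: Path) -> list:
        """Image files on disk that no page links to -- the kind of planted
        content a 'linear scan' by an outside party could still find."""
        refs = linked_names(root)
        exts = {".jpg", ".jpeg", ".png", ".gif"}
        return [p for p in root.rglob("*")
                if p.suffix.lower() in exts and p.name not in refs]

    if __name__ == "__main__":
        for path in orphan_images(WEB_ROOT):
            print("unlinked image:", path)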
    
A similar problem could develop with promoting terrorism. The legal liability of amateur site owners for content placed by hackers or enemies could become a problem in the future.  

Update: July 10

A piece on CNN by defense attorney Mark O'Mara on the Fogle-Taylor matter, here, is troubling, but doesn't consider the possibility of framing or malware. Technology, he writes, can confer "grave responsibility" -- even for what others do?

Update:  July 20 

There are correlated posts on the International Issues blog July 19, and Internet Safety blog June 25 and July 18, as well as my main blog May 25.  

Wednesday, June 24, 2015

FBI uses drive-by malware to nab child porn servers and users "hiding" behind TOR


The FBI has been going into the malware business itself, delivering dangerous drive-by downloads to anonymous visitors of certain sites reached through TOR, often sites serving child pornography.

The technical process is pretty complicated, and is explained in a long and detailed article by Kevin Poulsen in Wired, Aug. 5, 2014, “Visit the wrong website, and the FBI could wind up in your computer”, link here. 
  
One c.p. server farm in Nebraska was tracked down this way, but the FBI sat on it for a year before making an arrest. 
  
The government has a “love-hate” relationship with TOR and “onion-like” products, realizing they are important for resisting authoritarian governments and a trove for intelligence collection, but also a harbor for crime, most of all c.p.
  
Wikipedia has noted before that people have sometimes been prosecuted for clicking on a single link leading to images with c.p.  The possibility of malware distributing it was discussed here in the summer of 2013, but another source could be spam, where an image loads when the email is opened; this seems to be relatively rare.  Most states (like Florida) have laws requiring users to notify police immediately if they accidentally open such material from spam. A single incident would seem capable of causing someone’s computers and mobile devices to be confiscated.  On two or three occasions, I have marked email as spam because of suspicious titles like this;  it’s also possible to open or preview an email accidentally when intending to mark it as spam.

  

Friday, May 08, 2015

Police in northern Virginia deal with abuse of minors through gaming, unusual social media apps


WJLA7 in the Washington DC area has a disturbing video story, “Stranger in the Console”, on how minors are now contacted wirelessly through gaming portals, as in this particular incident with a Fairfax County family, story here. The gaming breach seems particularly disturbing because kids don’t know the difference between fantasy and reality.

The story also recounts several incidents in which minors were contacted in unusual ways through social media. 

Saturday, April 04, 2015

Parents turn in daughter for sexting in one of youngest cases ever, in Virginia


In Dinwiddie County VA, parents turned in their 13-year-old daughter to the sheriff after they discovered she had been “sexting” selfies to boys from a smart phone.  This sounds like one of the youngest sexting prosecutions ever.  The CNN story is here. 
   
The matter will be handled in juvenile courts and the prosecutor said that at this age it is usually handled with mandatory probationary counseling, classes, and community service as well as banning electronic devices.  But the girl will appear in court.  

Tuesday, March 10, 2015

Teenager arrested in northern Virginia for c.p. contest on Kik


NBC Washington has a story by David Culver about the arrest of a 14-year-old in northern Virginia for sponsoring a child porn contest on the Kik platform, link here.  The tip to Fairfax County Police came from Arizona. 

The spokesperson for Fairfax Police said that kids are turning to Kik (a mobile chat platform) because “their parents are on Facebook”.  I’m not sure how Kik compares to Snapchat in supposedly erasing messages.  Kik (based in Canada) is entirely legal, according to the story, and is normally used for legitimate purposes.  Police said that Kik was cooperating in the investigation, as most Canadian and European companies will.
  
It’s disturbing that teenagers would embark on something like this with no grasp of the consequences, or of the “karma” problem.  

Monday, March 09, 2015

Canadian police seize whole servers apparently hosting C.P. from P2P, raising downstream liability concerns


Police in Ontario (Canada) have seized 1.2 petabytes of data (four times the data in the Library of Congress) from a server hosting company on the belief that it contains a lot of child pornography.  A lot of the material is in “RAR” compressed format, and most of it seems to have been traded over P2P networks.  So this case may not involve conventional shared hosting of websites.

Nevertheless, Ontario is working with the US Department of Homeland Security and with police in many other countries, including the UK, Australia, and other biggies.  “Customers” in over 100 countries might be implicated.

Observers are concerned about the “downstream liability” question, and the possibility of prosecuting the server hosting company, at least under Canadian law.  EFF posted a link to the story on Vice recently.

Thursday, February 26, 2015

Blogger's plan to ban porn raises a question: why not resume the ICRA content-labeling project (formerly based in the UK) and have Google pick it up and run with it?


The very recent controversy over Google’s plan to ban “sexually explicit” images and videos from public-mode sites on Blogger (apparently including those mapped to domain names) on March 23 seems to short-circuit a real debate we should resume: content labeling.
  
Right now, “adult” blogs are supposed to throw an interstitial web page warning viewers, who are required to sign on to Google to show they are adults.  “Adult” YouTube videos don’t throw the page, but do flash a requirement to sign on, too. 
  
One problem with this approach is that visitors tend to presume that this means the material behind the interstitial is pornography.  But, as explored earlier, many non-pornographic sites should not be seen by less mature minors, and it is possible for a site to be pornographic or adult with words alone, and no images (although c.p. laws in the US apply only to images or videos and possibly drawings or cartoons; overseas they sometimes apply to words as well).
 
AOL experimented with content labeling on its "Hometown AOL" blogging platform, which it shut down in 2007, providing a way for users to export to Blogger. 
      
Google could have an opportunity here, to pick up the work abandoned by the former Internet Content Rating Association and later the Family Online Safety Institute.  Google could develop a metatag or semantic-web application to allow bloggers or web publishers to label their content (in a number of categories, including violence, and by age range) and then make seamless changes to the Chrome browser (and its search engine, where something like this largely happens now) so that parents could configure the settings on kids’ computers or phones.  In a typical family (although a low-income family will have more problems with this), the parents could have, say, laptops or their own phones with full access, while kids’ computers could be set up for more restricted access.  Users could reach age-appropriate content without the objectionable “pornography” warning.  Google would have to network with other vendors (Apple for Safari, Mozilla for Firefox, Microsoft for Internet Explorer, Wordpress for other blogging platforms, and even vendors of website-creation software like Microsoft Expression Web) to come up with consistent standards. 
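
To give a rough idea of how simple the consuming side of such labeling could be, here is a hypothetical sketch in Python.  The metatag name "content-rating" and its category vocabulary are inventions for illustration, not an existing standard; a filtering layer (browser feature, extension, or parental-control proxy) would parse the label before rendering the page.

    from html.parser import HTMLParser

    class RatingParser(HTMLParser):
        """Extract a hypothetical <meta name="content-rating" ...> label."""
        def __init__(self):
            super().__init__()
            self.rating = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and attrs.get("name") == "content-rating":
                # e.g. content="violence=mild; nudity=none; min-age=13"
                self.rating = dict(
                    item.strip().split("=")
                    for item in attrs.get("content", "").split(";")
                    if "=" in item)

    def allowed_for(page_html: str, viewer_age: int) -> bool:
        """Let a parental-control layer decide whether to render the page."""
        p = RatingParser()
        p.feed(page_html)
        if p.rating is None:
            return False  # unlabeled pages might default to blocked for minors
        return viewer_age >= int(p.rating.get("min-age", "18"))

    sample = ('<html><head><meta name="content-rating" '
              'content="violence=none; min-age=13"></head></html>')
    print(allowed_for(sample, 12))  # False
    print(allowed_for(sample, 16))  # True

The hard part, of course, is not the parsing but the consortium work: agreeing on the vocabulary, getting publishers to label honestly, and building the parental settings into every browser.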
  
This effort was largely carried out in the UK before.  I don’t know why it was stopped. 
  
The effort would require a project team, a consortium, and hiring both systems analysts (to hammer down the requirements) and experienced coders on multiple platforms and hardware.  Yes, it would cost something, and yes, it would create more jobs, in places like Silicon Valley, Texas, North Carolina, and probably Canada and the UK.  It might be managed from Britain because it started there. 
  
I am retired now, and an independent writer.  But would I help with this?  Yes.  Though it would sound like a job, at 71, for me. 

Really, the pressure against service providers regarding terror recruiting from overseas will soon be a much bigger problem than porn. 



Update:  Feb. 27

Blogger has deferred the new policy;  see the Product Forums for Blogger today (link). 

Tuesday, February 24, 2015

Blogger announcement on banning porn from public weblogs reminds us of the battle over COPA


Today, there has been considerable uproar and controversy over Google’s announcement of plans to prohibit nudity and pornography on public blogs from March 23, 2015 on, applied to blogs retroactively. Blogs that violate the policy will be marked private and removed from search engine results.  I gave a detailed discussion on my main blog today.
  
What is remarkable is how this sudden announcement parallels the controversies over COPA, the Child Online Protection Act, when it was litigated. Of course, a private company can set up policies as it wishes.  But the real practical problem has to do with determining exactly what content meets the criteria for redlining (eventual banishment).
  
There is some confusion, too, over the fact that Blogger allows users to mark blogs as adult content, which results in an interstitial screen requiring sign-on to a Google account to view. Google says it retains the right to mark blogs as “adult” itself, and that it is possible for a blog to be marked adult without actual images containing nudity, based on other considerations.  However, adult-tagged sites without nudity, it has said, will not be marked private or restricted.
  
Through the time of the court trial in Philadelphia in 2007, a lot of the debate over COPA concerned how “harmful to minors” was defined, and whether adult verification schemes could be reliable.  Industry entities, like the ICRA, proposed voluntary content-labeling schemes that could require some sort of age verification.
  
Another problem is that “adult content” might be construed as content that is disturbing to some younger people because of ideas that it presents, not because of what it shows.  For example, discussions of “attractiveness” are disturbing to some people, especially women, because they could be construed as conveying that some people should be “left out”. 

My own experience, by observation, is that Google has accepted sexually explicit content on YouTube if it is marked as adult and throws the interstitial screen.  It says it wants the same policies across all its platforms, but if so, that would imply that many of these YouTube videos should be "private" too.  What am I missing?
      
Google has already been very pro-active in eliminating child pornography from its services, as documented here previously.

Note: Blogger has deferred the policy;  see posting Feb. 27.

Saturday, February 14, 2015

Man convicted of c.p. possession while "house-sitting" based on detection software that scans routers; could this be a frame-up?


Truthout has a disturbing story by Andrew Extein about a man from Maryland who was convicted of possession of child pornography based on router-tracking detection software and one image (probably with a hash matching the NCMEC database).  It was allegedly found while he was house-sitting in Indiana.  He was arrested at an airport. The link here on “Digital Darkness” describes the complicated maze of rules for convicted sex offenders.
   
But the case is puzzling.  There have been a few other prosecutions based on router evidence, as in New York State and Florida, but in those cases the abuse came from an outsider logging on to the router (in one case, from a building 400 feet away).  Of course, there is the issue of router passwords; police would know this by now.  Further, the offending image should have turned up on a computer (possibly deleted) unless it was really “erased”.   There are viruses (like the Moon Virus) that can cause redirection of a site, possibly to an illegal site or one with malware.  (One scam tried to get users to download malware-laden Adobe Flash updates.)  But the user would see the redirection, although “hidden redirection” (a new kind of malware used in phishing attacks) is possible.  Rebooting a router (turning it off and then back on so that it does a firmware update, about a five-minute process) is supposed to clear the Moon Virus. 
  
It seems that authorities should talk about home user legal responsibilities in this area. 
  
The Wall Street Journal has a story by Gary Fields and John R. Emshwiller on federal abuse of plea bargaining, here. 
   
NBC Dateline could look into this subject, given its previous “To Catch a Predator” series with Chris Hansen in 2005-2006. It could look at what the terms of probation for some of the offenders were, and also look at the router issue. 

Bill 

Tuesday, February 03, 2015

Firefox "selective forgetting" or browser history and searches shows concern over prospective monitoring of borderline illegal behavior


There’s a story from November 2014 that Mozilla Firefox offers the ability to selectively forget a time range of browsing history.  And a facility called DuckDuckGo will let users control what Internet searches their browser “remembers”; news story in Linux Insider here. 
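
Mechanically, “forgetting” a time range amounts to deleting rows from the browser’s history store.  As a rough, unofficial sketch in Python (Firefox keeps history in a profile file called places.sqlite, with visit timestamps in microseconds since the Unix epoch; this is not how Firefox implements its own “Forget” button, and it should only ever be run against a copy, with the browser closed):

    import sqlite3
    import time

    def forget_range(db_path: str, start_sec: float, end_sec: float) -> None:
        """Delete history visits in [start, end); times in seconds since epoch."""
        lo, hi = int(start_sec * 1_000_000), int(end_sec * 1_000_000)
        with sqlite3.connect(db_path) as db:
            db.execute(
                "DELETE FROM moz_historyvisits "
                "WHERE visit_date >= ? AND visit_date < ?", (lo, hi))
            # Drop pages left with no remaining visits (and no bookmarks).
            db.execute(
                "DELETE FROM moz_places WHERE id NOT IN "
                "(SELECT place_id FROM moz_historyvisits) "
                "AND foreign_count = 0")

    # Example: forget the last hour, on a copy of the profile database.
    forget_range("places_copy.sqlite", time.time() - 3600, time.time())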
  
There is certainly more concern that governments could become more aggressive in monitoring not only possible terrorist connections, but also questionable searches that might be on the “borderline” of child pornography, particularly in the context of ephebophilia.   Police already use searches and browser history retrospectively as evidence when there is probable cause.  The concern is that companies, too, could decide to “spy” to curb even merely suspicious use of their networks, because of growing concerns about downstream liability. This could, as noted before, also lead to scanning cloud accounts.
     
There was something indeed to Edward Snowden’s concerns. 

Friday, January 23, 2015

Service providers still mention COPA in their AUPs


A few service providers still provide information about the Child Online Protection Act (COPA) in their acceptable use policies, even though COPA was finally overturned at trial in March 2007, as reported here. They typically recommend products such as Net Nanny and CyberSitter (from Solid Oak, here); no argument about that.  But in the past two or three years, there has been little interest in voluntary self-labeling, as had been proposed in the past by the now-defunct ICRA. 

Tuesday, January 06, 2015

"Taken as a whole" could matter to users, too


In the COPA litigation of a few years back, we were concerned over the idea that a particular image or posting might be “harmful to minors” when viewed out of context.  Publishers wanted their material viewed as a whole, not from the viewpoint of the worst interpretation of individual pieces.  (Actually, the original COPA law in 1998 did have a provision accepting material that, “taken as a whole”, had legitimate value for minors.)
   
A similar concept has applied to obscenity, and to deciding what is pornography.  An image or video viewed by itself with no knowledge by the viewer of the intended context might appeal to prurient interest, whereas it would not if viewable only when accompanied at the same time by a lot of other explanatory materials.  Logically, it sounds plausible that such a concept could apply to a user (rather than content creator or publisher) for content that is viewed or found to be possessed on a hard drive or in the Cloud.  It would sound possible that it could apply to child pornography, although not with respect to known watermarked images.   I don’t know whether any such cases have occurred, but they could in the future.


It’s possible, for example, that saving a single photo or unlisted video out of the context of a large gallery could be viewed as a violation, when the content had been intended to be viewed only “as a whole”.