Wednesday, December 16, 2015

Can terrorism promotion be screened for the way child pornography can be?


James Comey of the FBI gave a press conference this morning, where he said that “social media is a weapon” in reference to the Chattanooga shooting.  It may be less of a factor in San Bernardino, because it appears the husband and wife were radicalized before ISIL became prominent.

The biggest issues have to do with encryption, and with the use of social media accounts to broadcast messages that reach vulnerable people, including teens.  The recruitment process has been more dangerous in Europe, but it is obviously a problem in the US as well.

It is illegal to plan a violent act or to recruit others to carry one out.  It is not illegal to express a particular religious point of view, as the expression of ideas itself is constitutionally protected in the US.

For this blog, the obvious question is how this compares to past debates on “harmful to minors” censorship (COPA) and, more recently, on screening traffic for child pornography.

It is possible for service providers to screen posts for images (and sometimes videos) whose digital fingerprints, or hashes, match entries in a database maintained by the National Center for Missing and Exploited Children (NCMEC). Gmail and YouTube do this, and there have been a few arrests as a result.  No such database exists for terror-related activity.  But it seems conceivable that one could be created for specific images, like the beheading footage often used in “propaganda”.
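
To make the idea concrete, here is a minimal sketch of what hash-based screening might look like, using exact SHA-256 matching for simplicity. Real systems (such as Microsoft’s PhotoDNA, which underlies the NCMEC matching) use perceptual hashes that survive resizing and re-encoding; the database contents and function names below are hypothetical.

```python
import hashlib

# Hypothetical set of hashes of known prohibited images, analogous to
# the NCMEC database described above.  A real database would hold
# millions of entries; this placeholder is for illustration only.
KNOWN_BAD_HASHES = {
    "9f2b...",  # placeholder entry
}

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 hex digest of an uploaded file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_flagged(path: str) -> bool:
    """True if the upload's hash matches a known prohibited image.

    Note: an exact hash match fails if the image is cropped or
    re-encoded, which is why production systems use perceptual
    hashing rather than cryptographic hashing.
    """
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```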

Twitter has been the main platform used for recruiting, and Twitter is getting “better” at closing down terror-sourced (mostly overseas) accounts.  While they may get recreated under other names, it would take the perpetrators some time to rebuild their follower audiences.  It’s possible that the traffic source (by country) could serve as an additional screening signal, as sketched below.  Donald Trump may be right about that.
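
As a thought experiment only, a provider might combine an origin-country signal with other account metadata into a review score.  Everything here, the field names, the watchlist, and the threshold, is invented for illustration; this is not any platform’s actual method.

```python
# Hypothetical heuristic for routing accounts to human review.
HIGH_RISK_ORIGINS = {"XX", "YY"}  # placeholder ISO country codes

def review_score(account: dict) -> int:
    """Return a rough risk score from assumed account metadata fields."""
    score = 0
    if account.get("origin_country") in HIGH_RISK_ORIGINS:
        score += 2  # traffic source by country, as suggested above
    if account.get("age_days", 9999) < 7:
        score += 1  # possibly a recreated account
    if account.get("follows_suspended", 0) > 10:
        score += 2  # rebuilding a follower audience from suspended accounts
    return score

def needs_review(account: dict, threshold: int = 3) -> bool:
    """Flag the account for human review, not automatic suspension."""
    return review_score(account) >= threshold
```
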
It also seems conceivable, though legally mushy, that the “harmful to minors” concept could be extended to include violent recruiting materials.
