The immediate task is to detect and delete attacker-produced material.

By Professor Robert Pape*

Mike Schroepfer, Facebook’s chief technology officer, is leading Facebook’s efforts to build automated tools to detect and remove millions of hate-speech posts and other content, according to a New York Times report.
Content, however, is limited only by human imagination and, therefore, keeps evolving.
That’s a good thing when it comes to, say, medical research or community development.
In the world of security, it translates into an ongoing challenge in which we have to be successful all the time and the bad actors successful only once.
Who knows how long it will take Facebook’s massive investment in AI solutions to bear fruit?
Every time an AI system flags and deletes material, new posts pop up that the system has never seen before. In any event, “bad activity” is in the eye of the beholder; getting people to agree on what it means, much less incorporate it in an algorithm, may prove nigh on impossible.
This is not, however, about freedom of speech — it is about a clear and present danger.
The immediate challenge — and the foundational issue of the Christchurch Call — is to detect and delete attacker-produced material.
Confining the issue to attacker-produced material, regardless of the attackers’ ideology or sanity, would be a massive step toward the final goal of removing terrorists and murderers from social media, just as we do from the streets.
At CPOST, we have the world’s most comprehensive and detailed database of terrorist-produced videos, which could go a long way toward refining the parameters needed to take that first step.
We are also currently evaluating tech-partner options to include in our human-factor and content-analysis approach, specifically to address online attacker-produced material.
We have several options, from partnering with existing tech companies to establishing an independent and transparent study of the limits of current AI algorithms to delete terrorist-produced content.
While the latter would be preferable in many ways, the former may be quicker and more palatable, especially to tech companies.
Together we may not cure social media of all hate speech and harmful material. We would, however, be one step closer to making the world safer by denying terrorists, murderers and other attackers the ability to use social media as a terror tactic.

* Professor Robert Pape is Director of the Chicago Project on Security and Threats (CPOST), an international security affairs research institute based at the University of Chicago, best known for creating and maintaining the most comprehensive and transparent suicide attack database available worldwide. At the invitation of New Zealand Prime Minister Jacinda Ardern, Professor Pape participated in the Christchurch Call discussions in Paris.

