Mobilizing against Online Hate Speech

One of the more creative approaches to addressing online hate speech is the Umati project in Kenya, which is dedicated to monitoring online hate speech, educating the public about how such speech can promote violence, and identifying ways that individuals and non-governmental organizations can combat it.

The Umati project is one of many initiatives organized in response to the horrific inter-ethnic violence (over 1,000 people killed) that followed the 2007 presidential election in Kenya. (Another initiative is a series of “Healing and Rebuilding our Communities” workshops organized by Kenyan Quakers and others.)

Findings: The results of Umati’s monitoring of Kenyan cyberspace (blogs, Facebook, Twitter, and the online newspapers and video streams of the major Kenyan media) for the October 2012 through January 2013 period contained several surprises (to me, anyway):

  • About 90% of the examples of dangerous speech found by Umati were made by individuals who identified themselves (as opposed to anonymous commenters).
  • Among the actions promoted by hate speakers, calls to discriminate against a group were the most common – over five times as frequent as calls to kill (the second most common).  Less frequently advocated actions included calls to forcibly evict, beat, riot, or loot.
  • Calls to take violent action were far more prevalent on Facebook than in other online venues – almost 400 instances in the four-month period covered, compared with fewer than 50 in comments on online news articles.

What struck me most about the Umati project was the way its report explained the different kinds of hate speech (giving examples) and how each could contribute to the growth of intolerance and violence.

[Screenshot: Kenya hate speech project report]

The final section of Umati’s report focuses on actions that individuals and organizations can take to reduce or counteract hate speech. One of the useful tactics suggested is the immediate dissemination of facts to correct a rumor or falsehood likely to inflame the audience to violence.

“Such responsible online activity was exemplified during the Mombasa violence that followed the death of Muslim cleric Sheikh Aboud Rogo, when inflammatory tweets were being spread that stated that a Mombasa church was being burned. A responsible social media user took a tweetpic of the church (which was not burning) and stated, “Stop the lies!”. This responsible action helped to quell the propagation of such inflammatory lies on social media.”

The overall context of the Umati project – based on Susan Benesch’s concept of dangerous speech – unites action and speech in a useful manner.

“Dangerous speech: This is a term coined by Professor Susan Benesch to describe incitement to collective violence that has a reasonable chance of succeeding, in other words speech that may help to catalyse violence, due to its content and also the context in which it is made or disseminated. This possibility can be gauged by studying five criteria that may contribute to the dangerousness of speech in context: the speaker (and his/her degree of influence over the audience most likely to react), the audience (and its susceptibility to inflammatory speech), the speech act itself, the historical and social context, and the means of dissemination (which may give greater influence or “force” to the speech).”

The project divides dangerous speech into three distinct categories, providing a framework for distinguishing between immediate threats of violence and comments that might be less likely to be recognized as dangerous by the speaker or the audience. This framework implicitly recognizes that different educational activities and action strategies may be needed to address each type of dangerous speech. (A rough, illustrative sketch of the framework follows the list below.)

The categories of dangerous speech addressed by Umati are:

  • Offensive speech: comments mostly intended to insult a particular group
  • Moderately dangerous speech: comments that are “moderately inflammatory and are usually made by speakers with little to moderate influence over their audience.”
  • Extremely dangerous speech: comments that are “made by speakers with a moderate to high influence over the crowd, are extremely inflammatory” and are likely to include calls to violent action.
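
To make the framework concrete, here is a minimal, purely illustrative Python sketch of how the three categories and Benesch’s five contextual criteria might be represented in a monitoring tool. Umati’s monitors assign categories by human judgement, so every field name and rule below is a hypothetical simplification, not the project’s actual method.

```python
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    """The three categories used in the Umati report."""
    OFFENSIVE = 1
    MODERATELY_DANGEROUS = 2
    EXTREMELY_DANGEROUS = 3


@dataclass
class SpeechAct:
    """One monitored item, annotated loosely against Benesch's five criteria.

    All fields here are illustrative placeholders; Umati records its own
    fields and relies on human monitors rather than automated rules.
    """
    text: str
    speaker_influence: str            # "low", "moderate", or "high"
    audience_susceptibility: str      # how receptive the likely audience is
    contains_call_to_violence: bool   # the speech act itself
    historical_context_tense: bool    # historical and social context
    wide_dissemination: bool          # means of dissemination


def categorize(act: SpeechAct) -> Category:
    """A rough rule-of-thumb mapping onto the three categories.

    This is NOT Umati's procedure; it only illustrates how speaker
    influence and inflammatory content interact in the framework.
    """
    if act.speaker_influence in ("moderate", "high") and act.contains_call_to_violence:
        return Category.EXTREMELY_DANGEROUS
    if act.contains_call_to_violence or act.speaker_influence == "moderate":
        return Category.MODERATELY_DANGEROUS
    return Category.OFFENSIVE


# Example usage with a made-up item
example = SpeechAct(
    text="(example text)",
    speaker_influence="low",
    audience_susceptibility="high",
    contains_call_to_violence=False,
    historical_context_tense=True,
    wide_dissemination=True,
)
print(categorize(example))  # Category.OFFENSIVE
```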

Source: “Monitoring Online Dangerous Speech: October 2012 – January 2013”

For the results of a different online monitoring project in Ukraine, see the Council of Europe’s 2012 report “Mapping Study of Projects against Hate Speech Online” (pp. 30–31).
