Dangerous Speech Amplified: How ‘Big Tech’ can be a responsible stakeholder in mass atrocity prevention

By Emma Milner

Emma completed her BA in History, International and Political Studies (IPS) in 2018 at the University of New South Wales, Australia, where she is currently pursuing a Master’s in Cyber Security Operations. @EmmaMilner_

In the United States and across Europe, ‘Big Tech’1 – social media companies in particular – has engaged in conversations with governments, lawmakers, and users regarding its role in moderating hate speech and/or speech that could incite violence. In May 2020, Twitter made the unprecedented decision to ‘hide’ one of former US President Donald Trump’s tweets on the grounds that it “glorif[ied] violence” (Yglesias, 2020). This decision sparked significant debate on the power of Big Tech over online speech, yet it was not the first conversation of this nature. In 2018, Facebook was under a similar spotlight for its role in waves of violence in Myanmar in 2016/17. The UNHRC Fact-Finding Mission specifically called out the Big Tech company for hosting hate speech which “constitutes incitement to […] violence” (UNHRC, 2019, p. 130). In light of these two cases, it can be argued that social media platforms are not neutral parties in instances of violence. For this reason, the conversation centred on Big Tech’s power over speech should be extended to include the role of social media in instances of grave violence, specifically mass atrocities. Further, it should discuss what responsibility Big Tech holds in the prediction and prevention of mass atrocities in an increasingly digitalised world.

This paper will seek to demonstrate how such a conversation can be shaped. First, the paper will interrogate current methods for prevention, with an emphasis on early prevention and the role of ideology and speech in such prevention. Second, the role of mass media will be briefly discussed within the context of a changing media landscape. The paper will conclude with a discussion on the role of Big Tech in mass atrocity prevention, using Myanmar and Kenya as distinct case studies. These two examples demonstrate both the consequences of Big Tech failing to sufficiently acknowledge – and act upon – its role in mass atrocities, and the opportunities that exist for Big Tech to have a meaningful role in mass atrocity prevention. Where early prevention can be described as a toolbox of methods in an increasingly digitalised world, Big Tech holds a spot within this toolbox as a responsible stakeholder.

The case for early warning models as part of a prevention ‘toolbox’

Failure is often discussed in mass atrocity prevention literature. This is due in part to the difficulty of recognising and appreciating successful prevention (Paris, 2014, pp. 574-5), but also to a myriad of other challenges. Commonly cited failures include the United Nations failing to prioritise prevention (Bellamy and Lupel, 2015, p. 19) or to generate sufficient political will to intervene (MIGS, 2009, p. 2), a lack of agreement within the UN Security Council (Morris and Wheeler, 2016, pp. 227-47), and intervention being deemed too risky (or costly). This is especially the case in instances where the determination to commit mass atrocities is too high for anything short of full-scale war to successfully intervene (Bellamy and Lupel, 2015, p. 18). Many of these issues stem from not taking action early enough: the longer a situation is left without action, the more likely it is that prevention will become intervention, increasing risk (Bellamy, 2015, p. 64). As a consequence of this increased risk, the will to act decreases (Staub, 2010, p. 290).

Based on this logic, early prevention can be seen as the best way of addressing mass atrocities. The use of risk-assessment models and early warning (EW) systems constitutes one tool within the prevention ‘toolbox’ that enables such prevention. These can be described as “complex mode[s] of ascertainment” based on “shifts in social, political, and economic dynamics that might signal the likelihood of a situation escalating into mass atrocities” (Leaning, 2015, p. 353). Prominent examples include the model proposed by Harff (2003) and the model used by the UN Office on Genocide Prevention and the Responsibility to Protect (2014; Maynard, 2015b, p. 67). Among the many early warning models and systems, all employ a variety of inputs to generate their respective evaluations. Those of concern in this paper are those which use either speech or ideology as the key component of their monitoring criteria, such as Benesch’s (2014) ‘dangerous speech’ framework and Maynard’s (2015b) ‘justificatory mechanisms’. These frameworks address only one element of EW and risk-assessment models, yet they require further attention since they are, arguably, overlooked in prevention literature (Maynard and Benesch, 2016, p. 70). Further, they are not given due consideration for their utility both in the early identification of potential mass atrocities and, subsequently, in formulating methods for early prevention.
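
To illustrate how such models turn a variety of inputs into an evaluation, the sketch below shows one common form of structural early warning: a weighted, logistic aggregation of country-level risk indicators into a single probability-like score. The indicator names, weights and example values are hypothetical placeholders only; they are not the coefficients of Harff (2003) or the criteria of the UN Framework of Analysis.

```python
# A minimal, illustrative sketch of how a structural early-warning model can
# combine country-level risk indicators into a single risk estimate.
# The indicator names, weights and example values below are hypothetical
# placeholders, not the coefficients of Harff (2003) or the criteria of the
# UN Framework of Analysis.
import math


def atrocity_risk_score(indicators: dict, weights: dict, intercept: float = -3.0) -> float:
    """Aggregate risk indicators into a 0-1 score via a logistic function."""
    linear = intercept + sum(weights[name] * value for name, value in indicators.items())
    return 1.0 / (1.0 + math.exp(-linear))


# Hypothetical country profile: prior atrocities, an exclusionary elite
# ideology and ongoing internal armed conflict, with low political openness.
indicators = {
    "prior_atrocities": 1.0,
    "exclusionary_ideology": 1.0,
    "internal_conflict": 1.0,
    "political_openness": 0.2,   # scaled 0 (closed) to 1 (open)
}
weights = {
    "prior_atrocities": 1.2,
    "exclusionary_ideology": 1.5,
    "internal_conflict": 1.0,
    "political_openness": -0.8,
}

print(f"Estimated risk: {atrocity_risk_score(indicators, weights):.2f}")
```

Speech- and ideology-based criteria of the kind discussed in the following sections would enter such a model as additional indicators, which is one reason their neglect in the literature matters.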

Ideology as a warning sign

Exploring the ideological roots of mass atrocities can explain why civilians are targeted en masse in some situations and not others (Straus, 2012, p. 549). Straus argues that it is “the ideological vision of the leadership [that] will shape how a state defines strategic enemies and strategic objectives, thus indicating which states are likely to respond […] with mass violence and which [will] not” (Straus, 2012, p. 549). Specifically, theories of ideology examine ideologies that justify mass-violence against civilians (Maynard, 2015b, pp. 67-84) in spite of widely agreed norms of civilian immunity (Bellamy, 2012b, pp. 935-6). These justifications are employed by individuals (usually elites) to make mass-violence in pursuit of their motives appear legitimate, which in turn can catalyse such violence (Maynard, 2015b, pp. 70-71).2 Maynard’s (2015a; 2015b) framework lists six ‘justificatory mechanisms’, the first of which is dehumanising potential victims. This mechanism involves a psychological process of moral evasion (Maynard, 2015a, pp. 189-99), meaning victims are stripped of moral protection so that persons who partake in (or stand by) mass atrocities can persecute without guilt (Maynard, 2015a, pp. 198-200; Bellamy, 2012a, p. 180).

The second mechanism, guilt-attribution, produces a similar effect by asserting that victims “have committed moral or legal crimes” (Maynard, 2015a, p. 199). This “criminal” association moves victims “towards the periphery of the universe of obligations”, where normal moral obligations do not apply (Maynard, 2015a, pp. 199-200). Together, guilt-attribution and threat construction – the third mechanism – generate fear, i.e. the fear that “guilty criminals may commit crimes again if they go unpunished” (Maynard, 2015a, p. 201). This is particularly potent since it reinforces the consequentialist calculus of future-bias (the sixth mechanism), under which future goods are privileged over other moral arguments (i.e. civilian immunity) (Maynard, 2015a, p. 212). Under this logic, “guilty criminals” must be “dealt with” so that the utopic future presented by influential elites can be brought to reality. Deagentification – the fourth mechanism – instead aims to eliminate alternatives to mass-violence. It portrays perpetrators as “lacking meaningful agency” and, consequently, as bearing no responsibility for partaking in mass atrocities (Maynard, 2015a, p. 205). Here, primordial conceptualisations of ethnicity or ethnic grievances become understandably problematic, since the argument could be made that violence is ‘inevitable’ because it is grounded in ‘ancient hatreds’ (and consequently not worth trying to prevent).3

The fifth mechanism, “virtuetalk”, frames mass-violence as “demonstrating praiseworthy character traits [such as] duty, toughness, loyalty, patriotism” (Maynard, 2015b, p. 71). In a sense, this is a form of social learning whereby a society internalises virtuetalk related to mass-violence (Bellamy, 2012b, pp. 935-6; Maynard, 2015a, p. 209). This is particularly potent when embedded in existing socio-cultural norms, such as when virtuetalk “target[s] the insecurities prominent amongst the young men” who seek gendered praiseworthy traits such as toughness (Maynard and Benesch, 2016, p. 84). Bellamy (2012b) interprets these justificatory mechanisms as “anti-civilian ideologies” whereby they provide the normative grounds for the en masse killing of civilians, inconsistent with the norm of civilian immunity. The justifications Maynard presents (2015a; 2015b) can therefore be seen as a contest between existing and constructed norms. The relative strength of these norms shapes behaviour, including whether the members of one “in-group” resort to mass atrocities to achieve their elite’s strategic objectives.4 When this ideology is then incorporated into hate speech (which in itself cannot incite mass atrocities), it becomes dangerous speech.5 The existence of this kind of speech in a state at-risk of mass atrocities can be interpreted as an increase in risk, or an indicator that a mass atrocity event is either occurring or about to occur.

Speech as a warning sign

Benesch’s (2014) framework offers a means of differentiating hate speech from dangerous speech based on intent, capacity and “rhetorical patterns”, or justifications (Davis and Raymond, 2014, p. 11). Combining Benesch’s framework with Maynard’s makes a valuable contribution to an often underdeveloped component of mass atrocity prevention: the integration of the ideology and speech literatures (Maynard and Benesch, 2016, p. 70). As both authors have acknowledged, neither ideology nor hate speech is dangerous alone (Ibid). Further, Maynard’s justificatory mechanisms cannot address how successful an elite using these justifications will be in inciting violence. To incite violence, as Benesch explains, “a speaker must [also] have authority or influence over the audience, and the audience must already be primed, or conditioned, to respond to the speaker’s words” (Benesch, 2008, p. 8). This “dangerousness” can be assessed using five factors: the speaker (whether they have power or influence), the audience (whether there are existing grievances or fears that the speaker can cultivate), the “speech act” (whether it can be understood as “a call to violence”), the historical and/or cultural context (i.e. economic or political competition, or previous instances of violence), and finally the means of dissemination (including the diversity of dissemination types) (Benesch, 2014, p. 8).
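
As a rough illustration of how these five factors might be recorded in practice, the sketch below encodes them as a structured checklist applied to a single speech act. The field names and the checklist form are assumptions made here for illustration; Benesch’s framework is an interpretative guide rather than a scoring algorithm, and a list of ‘factors present’ is no substitute for contextual judgement.

```python
# An illustrative encoding of Benesch's (2014) five "dangerousness" factors as a
# structured checklist for a single speech act. The field names and the idea of
# listing "factors present" are assumptions made for illustration only;
# the framework itself is qualitative and interpretative, not a scoring algorithm.
from dataclasses import dataclass


@dataclass
class SpeechActAssessment:
    influential_speaker: bool   # does the speaker have power or influence over the audience?
    primed_audience: bool       # are there existing grievances or fears the speaker can cultivate?
    call_to_violence: bool      # can the speech act be understood as a call to violence?
    enabling_context: bool      # historical/cultural context, e.g. prior violence or competition
    wide_dissemination: bool    # diverse or far-reaching means of dissemination

    def factors_present(self) -> list:
        """Names of the factors judged to be present for this speech act."""
        return [name for name, present in vars(self).items() if present]


example = SpeechActAssessment(
    influential_speaker=True,
    primed_audience=True,
    call_to_violence=False,
    enabling_context=True,
    wide_dissemination=True,
)
print("Factors present:", example.factors_present())
```

A record of this kind simply keeps the qualitative assessment explicit and comparable across speech acts, without pretending to quantify dangerousness.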

The second aspect of Benesch’s framework focuses on “rhetorical patterns” (Benesch, 2014, p. 8). These align closely with Maynard’s justificatory mechanisms. Compared to Maynard’s framework, Benesch’s is more interpretative; however, it greatly assists in the practical application of Maynard’s (2015a; 2015b) justificatory mechanisms in identifying risks of mass atrocities within the speech of ‘in-group’ elites. If incitement – and, by extension, dangerous speech – is considered a sine qua non in many instances of mass atrocities (Benesch, 2008, p. 498; Adams, 2020), early identification of these should, in theory, be invaluable for prevention. Furthermore, if this type of speech is such an essential ingredient in mass atrocities, it is fair to argue that tackling the proliferation of dangerous speech in at-risk states provides an opportunity to address mass atrocities ‘at the roots’ (Bellamy, 2015, p. 64).

The role of modern media in mass atrocities

When discussing dangerous speech, it is important to appreciate the role of the media in disseminating or hosting said speech, particularly how the media landscape has changed in recent years.6 There is no shortage of literature pertaining to the role of traditional media in mass atrocities (e.g. Wolfsfeld, 2004; Thompson, 2007; Clarke, 2017). In some instances, the media “reflect[s] elite consensus” or indeed “augment[s] it” (Wolfsfeld, 2004, p. 227). Where local media is in effect controlled by ‘in-group’ elites, the usual journalistic principle of “do no harm” no longer applies, since as a norm it has been deprioritised in the process of norm construction and prioritisation described by Bellamy (2012b). The exploitation of traditional media – radio, in these cases – in both the Rwandan genocide (1994) and the election violence in Kenya (2007 and 2013) is a good example of this. Kenya, however, is also a good example of how this issue is further complicated by the emergence of a new (increasingly digital) media landscape. Indeed, whilst radio still played a key role in inciting violence, so too did SMS messages, emails and “online bulletin boards” or blogs (Bellamy et al., 2016, pp. 751-8; Goldstein and Rotich, 2008, p. 5).

It is fair to point out that in many states at risk of mass atrocities the internet is not yet widespread and consequently cannot play a significant role. Kenya is such a state, with only 18% of the population ‘online’ as of 2017 (World Bank Group, n.d.a). But the accessibility of mobile phones and cheap SIM cards is an important shift that should not be underestimated. This is the case in Kenya, as in much of Africa, where SMS messaging has been the most widely used digital application (Goldstein and Rotich, 2008, p. 5). A similar shift can be observed in Myanmar; owing to a sudden increase in the accessibility of cheap SIM cards, Myanmar went from having less than 1% of its population ‘online’ in 2010 to over 30% in 2017 (World Bank Group, n.d.b). It should be noted that ‘online’ in Myanmar commonly refers to Facebook usage (UNHRC, 2018, p. 14).

The advent of SMS messaging and social media in states at-risk of mass atrocities changes the role of media quite significantly. Social media, for example, is populated by ‘influential figures’ and ‘citizen-journalists’, as well as a plethora of ordinary users who can either amplify or suppress a given message. Consequently, the new media landscape has some particularly dangerous characteristics, including, but not limited to, the creation of “ideological monopolies” that replicate ‘in-groups’ online, the facilitation of “radicalisation” within ideological “echo chambers” (Maynard, 2015b, p. 77) and the proliferation of dis- and misinformation. The murder of a Muslim tea-shop owner in Myanmar, after a false accusation that he had raped a Buddhist employee was widely shared on Facebook, is a good example of the very real consequences of these new characteristics (Callahan and Zaw Oo, 2019, pp. 9-10).

Myanmar: dangerous speech amplified

In the mass atrocity literature, Myanmar is a case marked by failure: neither early prevention nor any real intervention was achieved. The case of Myanmar clearly demonstrates the dangers of the new media landscape developing in a state at-risk of mass atrocities. In particular, a number of UN and non-governmental organisation reports have highlighted the role of Facebook in the persecution of the Rohingya minority. Several of their key observations will be discussed here.7 It is worth noting that neither Maynard’s nor Benesch’s framework was utilised in any of these reports to identify dangerous speech,8 despite the large amount of open-source data available to do so.9 It could be argued that this is because it is simply ‘too late’ to do so in Myanmar, or that, even without these frameworks, it is already clear which speech by in-group elites is inciting violence. However, assuming that Myanmar will not see further waves of violence is somewhat optimistic.10 In order to best identify an increased risk of a recurrence of mass atrocities, a nuanced appreciation of how dangerous speech is both constructed and disseminated on Facebook would be of great utility.

In their analysis of narratives in Myanmar, Schissler et al. (2017) identify several “contradictions” that could be categorised using the two aforementioned frameworks, such as “antagonism as primordial fact” (the diminishing of alternative narratives) and existing dissatisfaction with Myanmar’s democratic transition (Schissler et al., 2017, pp. 390-1), which under Benesch’s framework would prime audiences to be more receptive to justifications. A further application of these frameworks, using the data collected and archived by Facebook, could prove invaluable in formulating an EW model specific to the dangerous speech found in Myanmar. This could assist international humanitarian organisations in identifying heightened risk in Myanmar (BSR, 2018, p. 49), as well as assist Facebook in better monitoring and moderating content on its platform (BSR, 2018, p. 21). Lastly, such an application could help demonstrate the crime of incitement using data from Facebook (BSR, 2018, p. 47; Hamza, 2015, p. 190), which could be used in any future legal cases against Myanmar (BSR, 2018, p. 49).
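
As a purely illustrative sketch of what one step of such monitoring might look like, the example below uses small, framework-derived indicator lexicons to triage posts for human review. Every element of it (the indicator terms, the Post structure, the sample posts) is a hypothetical placeholder rather than Facebook’s actual tooling, and real monitoring of Burmese-language content would require linguistic and contextual expertise far beyond keyword matching.

```python
# A purely illustrative triage step: flag posts whose text matches small,
# framework-derived indicator lexicons so that human reviewers can assess them
# against the full frameworks. The lexicons, the Post structure and the sample
# posts are hypothetical placeholders (in English for readability); this is not
# Facebook's tooling, and keyword matching alone cannot identify dangerous speech.
from dataclasses import dataclass

INDICATORS = {
    "dehumanisation": ["vermin", "parasites"],
    "guilt_attribution": ["they are criminals"],
    "threat_construction": ["before it is too late", "they will destroy us"],
}


@dataclass
class Post:
    post_id: str
    text: str


def flag_for_review(posts):
    """Return (post_id, matched mechanisms) for every post matching an indicator."""
    flagged = []
    for post in posts:
        text = post.text.lower()
        matched = [mechanism for mechanism, terms in INDICATORS.items()
                   if any(term in text for term in terms)]
        if matched:
            flagged.append((post.post_id, matched))
    return flagged


sample = [
    Post("1", "They are criminals and must be dealt with before it is too late."),
    Post("2", "Market prices rose again in Yangon this week."),
]
print(flag_for_review(sample))
```

At best, such triage narrows the volume of content that trained human moderators and researchers must then assess against the full frameworks and their context.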

In summary, Facebook could play a role in bridging the gap between warning and response (Mancini and O’Reilly, 2013, p. 92) by tackling dangerous speech “at the roots” through moderation in line with human rights law (Irving, 2019, p. 256; BSR, 2018, p. 42), informed by the two frameworks. This is an example of a targeted coercive “ideological intervention” (Maynard, 2015b, p. 77). There are also systemic persuasive methods that could be employed in Myanmar, such as digital literacy campaigns (BSR, 2018, p. 13; Warofka, 2018); however, for the purposes of this paper, this type of “ideological intervention” is best exemplified by the case of Kenya.

Kenya: digital counterspeech

In the case of Kenya, unlike Myanmar, there was a relatively early intervention to avoid potential mass atrocities. While the role of digital media in bringing relative peace to Kenya can be overstated, it was nonetheless an important one. The catalyst for peace in Kenya was largely an array of domestic reforms that addressed many of the grievances which fuelled violence during and after the 2007 elections (Halakhe, 2013, pp. 8-9). Part of this legislation included laws prohibiting hate speech (Halakhe, 2013, p. 9), a systemic coercive “intervention” which played a significant role in combating dangerous speech. However, the action highlighted here is that undertaken and observed by Benesch’s Dangerous Speech Project and the ‘Umati’ project, a local speech-monitoring initiative, during the 2013 elections (Maynard and Benesch, 2016, p. 86). In her project’s final report, Benesch reflected on the success in identifying, reporting and blocking dangerous speech, but also on the effectiveness of “counterspeech” (Benesch, 2014). This can best be described as a combination of “peace propaganda” (Benesch, 2014, p. 21), such as the peace SMS messages produced by the NGO Sisi ni Amani Kenya (Davis and Raymond, 2014, p. 4); social cohesion projects on social media, such as the efforts of ‘I Have No Tribe’ and ‘I am Kenyan’ (Trujillo et al., 2014, p. 122); and more creative pursuits, such as the popular television drama Vioja Mahakamani, which aired four episodes on dangerous speech “designed to inoculate audiences against such speech” (Maynard and Benesch, 2016, p. 87).

Given the variety and multitude of largely local, as well as international, speech- and ideology-related projects operating in Kenya, it is hard to identify which were more successful than others. Nonetheless, they highlight some key elements of success that should be noted for future projects. First, there must be an appreciation for the “socioeconomic setting” of a given population (Mancini and O’Reilly, 2013, p. 89). For example, many projects in Kenya would likely not work in Myanmar due to the difference in digital literacy levels. Second, the best projects are run at the local level but assisted by the technical expertise of larger international organisations (Mancini and O’Reilly, 2013, p. 91), especially in the use of digital technologies such as artificial intelligence.11 Lastly, the best results are achieved through a diversity of mediums, including both new and old technology (Mancini and O’Reilly, 2013, p. 90), and of demographics – as exemplified by the range of differing voices using digital media for counternarratives in Kenya. “Ideological interventions” or “strategies”, as Maynard describes them, are highlighted in the literature of both frameworks as the ‘solutions’ to the ‘problems’ which the frameworks identify (Benesch, 2014, p. 10; Maynard, 2015b, p. 80; Maynard and Benesch, 2016, p. 87).

As demonstrated by Kenya, there is a role for Big Tech in supporting such strategies, albeit with caution not to infringe on the agency of locally-run projects and organisations. Discussions as to how to achieve this should be topics of priority in places such as the UNHRC’s open-ended intergovernmental working group on transnational corporations and other business enterprises with respect to human rights, UN Global Pulse, and NGOs such as Search for Common Ground. Through the correct organisations and institutions, Big Tech can be an active and responsible stakeholder in states at-risk of mass atrocities. Without this engagement, it may prove to be part of the problem more so than part of the solution.

Conclusion

There is very little compelling Big Tech to assume responsibility for the role it plays in mass atrocities. However, if difficult conversations such as those which followed Trump’s ‘tweets’ continue, these organisations will eventually have to grapple with the uncomfortable reality of their role in inciting violence. When this time comes, it would serve those who work in mass atrocity prevention well to have a plan for including Big Tech in a meaningful and considered way. This paper offers some analytical and, to a limited extent, practical considerations for such cooperation.

It should be noted that speech and ideology are both underdeveloped and underappreciated components of mass atrocity prevention – even more so their intersection with the new media landscape. In acknowledging the increasing digitalisation of states at-risk of mass atrocities, it would be wrong not to address this gap in the literature and, furthermore, not to include Big Tech in this interdisciplinary pursuit. The number of tools that could be added to the prevention toolbox in doing so is limited only by the creativity of the multitude of new stakeholders (local and international) who have joined the field of mass atrocity prevention. Arguably, with a greater number of tools to choose from, more tailored and effective approaches to mass atrocity prevention can be implemented.

Notes

1. ‘Big Tech’ colloquially refers to large and influential technology companies, namely the ‘big five’: Facebook, Microsoft, Google, Apple and Amazon. In this paper, ‘Big Tech’ is expanded to include the owners of major social media platforms (e.g. Twitter and TikTok) not included in the ‘big five’.

2. For more on how individuals such as elite decision-makers are influenced by ideology to form anti-civilian motives, see: Maynard, 2015b.

3. As discussed in ethnic conflict literature and theories of ethnicity, for more see: Kaufman, 2016, p. 92.

4. For more on the construction of “in-groups” and “out-groups”, see: Waltman and Mattheis, 2017.

5. For more on the definitional differences between hate speech, dangerous speech and incitement, see: Davis and Raymond, 2014, pp. 2-9; for how these are interpreted by Facebook and Twitter, see: VanLandingham, 2019.

6. For the purposes of this paper, ‘new media landscape’ is an adaptation (a greater mention of digital media) of that described in Hamilton (2019).

7. Such as A/HRC/39/64; A/HRC/42/CRP; Callahan and Zaw Oo, 2019; Taylor O’Connor, 2018; Adams, 2019.

8. The one notable exception to this is Benesch’s ‘Dangerous Speech Project’ which has been operating within Myanmar for several years, running workshops on dangerous speech and conducting research.

9. As indicated by the extent to which public Facebook posts were utilised as evidence in A/HRC/42/CRP.

10. Since this paper was written, violence “on a massive scale” has occurred in Myanmar, following the 2020 elections and the subsequent Tatmadaw coup (A/HRC/49/72).

11. For more on the use of AI in early warning systems, see: Yankoski, Weninger and Scheirer, 2020 and Paula Hidalgo-Sanchis, 2018.

Bibliography

Adams, S. 2020. Hate Speech and Social Media: Preventing Atrocities and Protecting Human Rights Online. [Accessed 18 June 2020]. Available from: https://www.globalr2p.org/publications/hate-speech-and-social-media-preventing-atrocities-and-protecting-human-rights-online/.

Adams, S. 2019. ‘If Not Now, When?’: The Responsibility to Protect, the Fate of the Rohingya and the Future of Human Rights [Online]. New York: Global Centre for the Responsibility to Protect. [Accessed 21 May 2020]. Available from: https://www.globalr2p.org/publications/if-not-now-when-the-responsibility-to-protect-the-fate-of-the-rohingya-and-the-future-of-human-rights/.

Mancini, F. ed. 2013. New Technology and the Prevention of Violence and Conflict [Online]. New York: International Peace Institute. Available from: https://reliefweb.int/sites/reliefweb.int/files/resources/ipi-e-pub-nw-technology-conflict-prevention-advance.pdf.

Dangerous Speech Project 2017. The Project. [Online]. [Accessed 19 June 2020]. Available from: https://dangerousspeech.org/about-the-dsp/.

Bellamy, A.J. 2012a. Mass Killing and the Politics of Legitimacy: Empire and the Ideology of Selective Extermination. Australian Journal of Politics & History. 58(2), pp.159–180.

Bellamy, A.J. 2012b. Massacres and Morality: Mass Killing in an Age of Civilian Immunity. Human Rights Quarterly. 34(4), pp.927–958.

Bellamy, A.J. 2015. Operationalizing the “Atrocity Prevention Lens”: Making Prevention a Living Reality In: S. P. Rosenberg, T. Galis and A. Zucker, eds. Reconstructing Atrocity Prevention [Online]. Cambridge: Cambridge University Press, pp.61–80. [Accessed 18 June 2020]. Available from: http://ebooks.cambridge.org/ref/id/CBO9781316154632.

Bellamy, A.J., Dunne, T. and Sharma, S.K. 2016. Kenya In: A. J. Bellamy and T. Dunne, eds. The Oxford Handbook of the Responsibility to Protect [Online]. Oxford: Oxford University Press, pp.750–768. [Accessed 18 June 2020]. Available from: http://oxfordhandbooks.com/view/10.1093/oxfordhb/9780198753841.001.0001/oxfordhb-9780198753841-e-13.

Bellamy, A.J. and Lupel, A. 2015. Why We Fail: Obstacles to the Effective Prevention of Mass Atrocities [Online]. New York: International Peace Institute. [Accessed 18 June 2020]. Available from: https://www.jstor.org/stable/resrep09584.8.

Benesch, S. 2014. Countering Dangerous Speech: New Ideas for Genocide Prevention [Online]. Washington: United States Holocaust Memorial Museum. Available from: https://www.ushmm.org/m/pdfs/20140212-benesch-countering-dangerous-speech.pdf.

Benesch, S. 2008. Vile Crime or Inalienable Right: Defining Incitement to Genocide. Virginia Journal of International Law. 48(3), pp.485–528.

Benesch, S., Buerger, C., Glavinic, T. and Manion, S. 2020. Dangerous Speech: A Practical Guide. Dangerous Speech Project. [Online]. [Accessed 18 June 2020]. Available from: https://dangerousspeech.org/guide/.

Boru Halakhe, A. 2013. “R2P in Practice”: Ethnic Violence, Elections and Atrocity Prevention in Kenya [Online]. New York: Global Centre for the Responsibility to Protect. Available from: https://s156658.gridserver.com/media/files/kenya_occasionalpaper_web.pdf.

BSR 2018. Human Rights Impact Assessment: Facebook in Myanmar [Online]. San Francisco: Business for Social Responsibility. Available from: https://fbnewsroomus.files.wordpress.com/2018/11/bsr-facebook-myanmar-hria_final.pdf.

Callahan, M. and Zaw Oo, M. 2019. Myanmar’s 2020 Elections and Conflict Dynamics [Online]. Washington: US Institute of Peace. Available from: https://www.usip.org/sites/default/files/2019-04/pw_146-myanmars_2020_election_and_conflict_dynamics.pdf.

Clarke, J.N. 2017. British Media and the Rwandan Genocide [Online]. Milton Park: Routledge. [Accessed 22 February 2022]. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=nlebk&AN=1554210&site=ehost-live&scope=site.

Davis, L. and Raymond, R. 2014. Preventing Atrocities: Five Key Primers [Online]. Washington: Freedom House. Available from: https://freedomhouse.org/sites/default/files/Preventing%20Atrocities%20Five%20Key%20Primers.pdf.

Goldstein, J. and Rotich, J. 2008. Digitally Networked Technology in Kenya’s 2007–2008 Post-Election Crisis [Online]. Cambridge: Berkman Centre for Internet & Society. Available from: https://cyber.harvard.edu/sites/cyber.harvard.edu/files/Goldstein&Rotich_Digitally_Networked_Technology_Kenyas_Crisis.pdf.pdf.

Hamilton, R. 2019. Atrocity Prevention in the New Media Landscape. AJIL Unbound. 113, pp.262–266.

Hamza, K. 2015. Social Media and the Responsibility to Protect In: D. Fiott and J. Koops, eds. The Responsibility to Protect and the Third Pillar: Legitimacy and Operationalization [Online]. London: Palgrave Macmillan UK, pp.190–207. [Accessed 19 June 2020]. Available from: https://doi.org/10.1057/9781137364401_12.

Harff, B. 2003. No Lessons Learned from the Holocaust? Assessing Risks of Genocide and Political Mass Murder since 1955. American Political Science Review. 97(01), pp.57–73.

Hidalgo-Sanchis, P. 2018. Experimenting with Big Data and Artificial Intelligence to Support Peace and Security [Online]. Kampala: UN Global Pulse. Available from: https://beta.unglobalpulse.org/wp-content/uploads/2018/12/experimentingwithbigdataandaitosupportpeaceandsecurity-print-final-181224205158.pdf.

Irving, E. 2019. Suppressing Atrocity Speech on Social Media. AJIL Unbound. 113, pp.256–261.

Kaufman, S. 2016. Ethnicity as a generator of conflict In: K. Cordell and S. Wolff, eds. The Routledge handbook of ethnic conflict [Online]. London: Routledge, Taylor & Francis Group, pp.91–101. Available from: https://www.routledgehandbooks.com/doi/10.4324/9781315720425.

Leader Maynard, J. 2015a. Combating atrocity-justifying ideologies In: S. K. Sharma and J. M. Welsh, eds. The Responsibility to Prevent: Overcoming the Challenges of Atrocity Prevention [Online]. Oxford: Oxford University Press, pp.189–226. [Accessed 18 June 2020]. Available from: http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780198717782.001.0001/acprof-9780198717782.

Leader Maynard, J. 2015b. Preventing Mass Atrocities: Ideological Strategies and Interventions. Politics and Governance. 3(3), pp.67–84.

Leader Maynard, J. and Benesch, S. 2016. Dangerous Speech and Dangerous Ideology: An Integrated Model for Monitoring and Prevention. Genocide Studies and Prevention. 9(3), pp.70–95.

Leaning, J. 2015. Early Warning for Mass Atrocities: Tracking Escalation Parameters at the Population Level In: S. P. Rosenberg, T. Galis and A. Zucker, eds. Reconstructing Atrocity Prevention [Online]. Cambridge: Cambridge University Press, pp.352–78. [Accessed 18 June 2020]. Available from: http://ebooks.cambridge.org/ref/id/CBO9781316154632.

Mancini, F. and O’Reilly, M. 2013. Conclusion: New Technology in Conflict Prevention In: F. Mancini, ed. New Technology and the Prevention of Violence and Conflict [Online]. New York: International Peace Institute, pp.87–92. Available from: https://reliefweb.int/sites/reliefweb.int/files/resources/ipi-e-pub-nw-technology-conflict-prevention-advance.pdf.

MIGS 2009. Mobilizing the will to intervene: leadership & action to prevent mass atrocities [Online]. Montréal: MIGS, Concordia University. Available from: http://www.concordia.ca/research/migs/projects/will-to-intervene/about-w2i/report.html.

Morris, J. and Wheeler, N. 2016. The Responsibility Not to Veto: A Responsibility Too Far? In: A. J. Bellamy and T. Dunne, eds. The Oxford Handbook of the Responsibility to Protect [Online]. Oxford: Oxford University Press, pp.227–247. [Accessed 18 June 2020]. Available from: http://oxfordhandbooks.com/view/10.1093/oxfordhb/9780198753841.001.0001/oxfordhb-9780198753841-e-13.

Morrison, S. 2021. Facebook and Twitter made special world leader rules for Trump. What happens now? Vox. [Online]. [Accessed 27 February 2022]. Available from: https://www.vox.com/recode/22233450/trump-twitter-facebook-ban-world-leader-rules-exception.

O’Connor, T. 2018. Key Findings and Recommendations: Stakeholder mapping of countering hate speech in Myanmar – External Report [Online]. Washington: Search for Common Ground. Available from: https://www.sfcg.org/wp-content/uploads/2018/01/SFCG-Stakeholder-Mapping-Report-external-20Nov2017-FINAL-for-printing.pdf.

Paris, R. 2014. The ‘Responsibility to Protect’ and the Structural Problems of Preventive Humanitarian Intervention. International Peacekeeping. 21(5), pp.569–603.

Schissler, M., Walton, M.J. and Thi, P.P. 2017. Reconciling Contradictions: Buddhist-Muslim Violence, Narrative Making and Memory in Myanmar. Journal of Contemporary Asia. 47(3), pp.376–395.

SFCG n.d. Our Media. Search for Common Ground. [Online]. [Accessed 19 June 2020]. Available from: https://www.sfcg.org/our-media/.

Staub, E. 2010. Overcoming Evil [Online]. Oxford: Oxford University Press. [Accessed 18 June 2020]. Available from: http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195382044.001.0001/acprof-9780195382044.

Straus, S. 2012. “Destroy Them to Save Us”: Theories of Genocide and the Logics of Political Violence. Terrorism and Political Violence. 24(4), pp.544–560.

Trujillo, H.R., Elam, D., Shapiro, G. and Clayton, M. 2014. The Role of Information and Communication Technology in Preventing Election-Related Violence in Kenya, 2013. Perspectives on Global Development and Technology. 13(1–2), pp.111–128.

UNHRC n.d. Open-ended intergovernmental working group on transnational corporations and other business enterprises with respect to human rights. UNHRC. [Online]. [Accessed 19 June 2020]. Available from: https://www.ohchr.org/EN/HRBodies/HRC/WGTransCorp/Pages/IGWGOnTNC.aspx.

Thompson, A. 2007. The media and the Rwanda genocide. Ottawa: International Development Research Centre.

United Nations 2014. Framework of Analysis for Atrocity Crimes: A tool for prevention [Online]. New York: UN Office on Genocide Prevention and the Responsibility to Protect. Available from: https://www.un.org/en/genocideprevention/documents/our-work/Doc.1_Framework%20of%20Analysis%20for%20Atrocity%20Crimes_EN.pdf.

United Nations n.d. UN Global Pulse – Big data for development and humanitarian action. UN Global Pulse. [Online]. [Accessed 19 June 2020]. Available from: https://www.unglobalpulse.org/.

VanLandingham, R.E. 2019. Words We Fear: Burning Tweets & the Politics of Incitement. Brooklyn Law Review. 85(1), pp.37–84.

Waltman, M.S. and Mattheis, A.A. 2017. Understanding Hate Speech In: J. Nussbaum, ed. Oxford Research Encyclopedia of Communication [Online]. Oxford: Oxford University Press, pp.1–32. [Accessed 18 June 2020]. Available from: http://communication.oxfordre.com/view/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-422.

Warofka, A. 2018. An Independent Assessment of the Human Rights Impact of Facebook in Myanmar. Facebook. [Online]. [Accessed 19 June 2020]. Available from: https://about.fb.com/news/2018/11/myanmar-hria/.

Wolfsfeld, G. 2004. Media and the Path to Peace [Online]. Cambridge: Cambridge University Press. [Accessed 18 June 2020]. Available from: https://www.cambridge.org/core/product/identifier/9780511489105/type/book.

World Bank Group n.d.a. Individuals using the Internet (% of population) – Kenya. The World Bank | Data. [Online]. [Accessed 18 June 2020]. Available from: https://data.worldbank.org/indicator/IT.NET.USER.ZS?locations=KE.

World Bank Group n.d.b. Individuals using the Internet (% of population) – Myanmar. The World Bank | Data. [Online]. [Accessed 18 June 2020]. Available from: https://data.worldbank.org/indicator/IT.NET.USER.ZS?locations=MM.

Yankoski, M., Weninger, T. and Scheirer, W. 2020. An AI early warning system to monitor online disinformation, stop violence, and protect elections. Bulletin of the Atomic Scientists. 76(2), pp.85–90.

Yglesias, M. 2020. Twitter flags Trump for ‘glorifying violence’ in ‘looting starts, shooting starts’ tweet. Vox. [Online]. [Accessed 18 June 2020]. Available from: https://www.vox.com/2020/5/29/21274359/trump-tweet-minneapolis-glorifying-violence.