FOR EDUCATORS

January 8, 2025

The Accelerationism Research Consortium

What is ARC? The Accelerationism Research Consortium (ARC) is a cross-sector collaboration of researchers, practitioners, analysts, and journalists who are dedicated to understanding and mitigating the threat posed by militant accelerationist terrorism.
January 8, 2025

Brotherhood of Blood

January 8, 2025

No Lives Matter Updates Extremist Messaging and Publishes Tactical Guides

No Lives Matter (NLM) will use encrypted messaging platforms to recruit like-minded individuals, partner with white racially motivated extremists (WMRE), and publish tactical guides.
January 8, 2025

The Rise of Nihilistic Accelerationism: From Sextortion to Stabbings in Sweden

August 31, 2024

CSIS Public Report

August 14, 2024

How Do We Know What Works in Preventing Violent Extremism?

May 23, 2024

AI Extremism

Over the past decade, two major phenomena have developed in the digital realm. On the one hand, extremism has grown massively on the Internet, with sprawling online ecosystems hosting a wide range of radical subcultures and communities associated with both ‘stochastic terrorism’ and the ‘mainstreaming of extremism’. On the other hand, Artificial Intelligence (AI) has undergone exponential improvement: from ChatGPT to video deepfakes, from autonomous vehicles to face-recognition CCTV systems, an array of AI technologies has abruptly entered our everyday lives. This report examines ‘AI extremism’, the toxic encounter of these two evolutions – each worrying in its own right. Like past technological progress, AI will indeed be – in fact already is – used in various ways to bolster extremist agendas. Identifying the many opportunities for action that come with a range of AI models, and linking them with different types of extremist actors, we offer a clear overview of the numerous facets of AI extremism. Building on the nascent academic and government literature on the issue as well as on our own empirical and theoretical work, we provide new typologies and concepts to help us organize our understanding of AI extremism, systematically chart its instantiations, and highlight thinking points for stakeholders in countering violent extremism.
May 18, 2024

A Guide for Student Belonging in Public Schools in Manitoba – MASS

Every student deserves to feel a sense of belonging in school; to feel safe, respected, and […]
December 21, 2023

The Evolution of Disinformation: A Deepfake Future

This report is based on the views expressed during, and short papers contributed by speakers at, a workshop organised by the Canadian Security Intelligence Service as part of its Academic Outreach and Stakeholder Engagement (AOSE) and Analysis and Exploitation of Information Sources (AXIS) programs. Offered as a means to support ongoing discussion, the report does not constitute an analytical document, nor does it represent any formal position of the organisations involved. The workshop was conducted under the Chatham House Rule; therefore, no attributions are made and the identities of speakers and participants are not disclosed. You can read the full report at https://drive.google.com/file/d/1Jn9pVjmgbUlBdM98F74qX6GbFSsUsqpt/view?usp=sharing