By Rebecca Herbener, MAGG Student

The Artificial Intelligence and Human Rights Symposium in Ottawa was an incredible opportunity, and we are very grateful to Global Affairs Canada for inviting us, as well as to the University of Waterloo and the Balsillie School of International Affairs for their assistance and guidance. The symposium gave us the chance to share ideas from our ongoing research project with students from many different schools and fields of study across Canada. Throughout the day we listened to expert panels discuss the different dynamics of AI and human rights, presented our own research, and collaborated with other students. Not only were we able to learn from the experts and students, but we were also able to promote the high-quality research being undertaken at the University of Waterloo and the Balsillie School of International Affairs.

Over the past few years, artificial intelligence (AI) has increasingly been called upon to counter violent extremism online. In light of recent terrorist attacks across Western Europe and the United States, government officials in the West have pressured social media firms to create new AI tools to remove extremist content faster. However, there is a delicate balance between removing extremist content and upholding essential human rights such as freedom of speech. Many governments, particularly Germany and the United Kingdom, have chosen to directly or indirectly regulate online speech, failing to achieve that essential balance. Our research therefore centres on understanding the role of government in detecting and countering online violent extremism through the use of AI.

Three basic themes have guided our research and recommendations. First, we have focused on addressing the intersection of rights for Canadians. Through the AI symposium we gained a great deal of insight into the factors to consider as we wrestle with this theme. We are focusing on national security and the right to a safe and secure life, while also examining rights to digital privacy and freedom of expression. This is paramount in our recommendations, as measures to protect one right may often infringe on another. It is especially important given that we are making recommendations to the Government of Canada.

Second, we have been working to understand the possibilities as well as the limitations of applying AI to detect and counter online violent extremism. Although the internet plays a critical role in the path to radicalization and violent extremism, it is only one contributing factor. Research consistently shows that radicalization depends on personal relationships and person-to-person connections. Furthermore, AI is not perfect, often misidentifying satirical images and dialogue as extremist. As a result, AI must be coupled with human oversight to ensure its proper functioning. Given this reality, our recommendations treat AI as one tool in a much larger toolbox. At the same time, we do not want to understate the potential applications of AI in detecting and countering online violent extremism. At the symposium in Ottawa, panels of security experts and government officials supported these findings.

Third, our research increasingly suggests that collaboration between government and private enterprise is the best way forward. Exactly how such partnerships will emerge remains to be seen. Currently, we are examining the role of government as a funding body (hiring private Canadian firms to run AI that collects public information) or as a policy maker (regulating or overseeing social media platforms and Canadian data collection and storage). At the core of this is the desire to protect Canadians and their data while allowing private innovation to flourish. The symposium was extremely helpful in this regard, as we were able to get feedback and insight from representatives of AI firms (Advanced Symbolics and Element AI, to name a few) as well as government officials (from the Department of Justice and Global Affairs Canada).

After a full day, we left with validation of our research findings and recommendations, along with relevant and substantive feedback to help guide our next steps. The comments we received from other students, government officials, and private companies strengthened our multidisciplinary approach and opened our minds to problems and opportunities we had not yet considered. Examining how AI can fight online extremism is a complex and dynamic topic. As such, we appreciate that the knowledge shared at the symposium came from a variety of disciplines and backgrounds, each bringing a unique outlook on the topic. We are truly grateful for this opportunity and for how it will help guide our research going forward.