Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat and domain of application) have been constructed in order to visualise all of these subjects. The taxonomies and related category descriptions have been carefully composed, drawing on existing taxonomies, European and international standards and our own expertise.
In order to identify safety and security related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of each topic; the filters can be used to further select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
While there is great potential in AI, its use by law enforcement raises human rights concerns that can undermine trust in the government
Responsible AI Innovation in Law Enforcement
Irakli Beridze, Head, Centre for Artificial Intelligence and Robotics, UNICRI, United Nations
Artificial Intelligence (AI) is having an impact on many sectors and, if harnessed appropriately, this technology can deliver great benefits for our global society, for instance by helping us to achieve the 17 ambitious goals that world leaders committed to in the 2030 Agenda for Sustainable Development. While there is great potential in AI, the use of this technology by law enforcement raises very real and serious human rights concerns that can be extremely damaging and undermine the trust placed in government by communities. Human rights, civil liberties and the fundamental principles of law may be unacceptably exposed, or even irreparably damaged, if we do not tread this path with great caution.
In times characterised by more limited resources, no significant decrease in global crime rates and an increasingly complex operating environment – one that now includes the COVID-19 pandemic – law enforcement is increasingly being tested and tasked with doing more with less. As in many sectors, AI may present a solution, or at the very least some much needed support. In this regard, we have seen significant growth in the adoption and integration of AI into policing in recent years, as agencies turn to AI to augment operational capacities or simply to facilitate routine administrative tasks. For instance, the Prefecture Police in Tokyo is piloting AI-enabled tools that identify areas of high crime risk, which can help determine optimal patrol routes and crime prevention techniques.

Even more recently, in response to the COVID-19 pandemic, we have seen national authorities, including law enforcement, turn to AI to support them in pushing back against the spread of the virus and preserving social order. Reuters reported a case in which the authorities in China relied on facial recognition cameras to track a Hangzhou man who had travelled through an affected area. Upon his return home, the local police were there to instruct him to self-quarantine or face repercussions.

Ultimately, this expansion of AI underscores the importance of discussions on governance, as well as on ethical and human rights perspectives. These discussions on the responsible use of AI are growing among States and throughout the private sector. A recent study published in Nature Machine Intelligence identified 84 documents containing ethical principles or guidelines for AI. At the same time, more than 30 States have adopted national AI strategies or action plans since 2016, a large percentage of which highlight the importance of ethical considerations in the use of AI.
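To make concrete what place-based tools of the kind piloted in Tokyo typically compute, the minimal sketch below scores grid cells by recency-weighted counts of past incidents and flags the top-scoring cells as patrol priorities. It is purely illustrative, not a description of the Tokyo pilot or any real system: the `Incident` structure, grid size and decay half-life are all hypothetical choices made for this example.

```python
# Illustrative sketch only: a minimal grid-based crime "hotspot" scorer.
# All data, parameters and names are hypothetical.
from collections import Counter
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class Incident:
    lat: float
    lon: float
    day: date  # date the incident was reported


def cell(lat: float, lon: float, size: float = 0.01) -> tuple[int, int]:
    """Map a coordinate onto a square grid cell roughly `size` degrees wide."""
    return (int(lat / size), int(lon / size))


def hotspot_scores(incidents: list[Incident], today: date,
                   half_life_days: float = 30.0) -> Counter:
    """Score each grid cell by a recency-weighted incident count.

    Recent incidents count more than old ones (exponential decay),
    a common simplification in place-based risk scoring.
    """
    scores: Counter = Counter()
    for inc in incidents:
        age = (today - inc.day).days
        scores[cell(inc.lat, inc.lon)] += 0.5 ** (age / half_life_days)
    return scores


# Hypothetical usage: the top-scoring cells would be flagged as patrol priorities.
incidents = [
    Incident(35.689, 139.692, date(2020, 3, 1)),
    Incident(35.690, 139.693, date(2020, 3, 20)),
    Incident(35.710, 139.810, date(2020, 1, 5)),
]
for grid_cell, score in hotspot_scores(incidents, date(2020, 4, 1)).most_common(2):
    print(grid_cell, round(score, 3))
```

Even in this toy form, the sketch makes the human rights stakes discussed above tangible: the output depends entirely on which historical incidents are recorded, so any bias in past reporting is carried directly into where patrols are directed.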
At the United Nations Interregional Crime and Justice Research Institute (UNICRI), we have established a specialized Centre for AI and Robotics in The Hague and are one of the few international actors dedicated specifically to examining AI vis-à-vis crime prevention and control, criminal justice, the rule of law and security. We seek to support and assist national authorities, in particular law enforcement agencies, in understanding both the opportunities and pitfalls associated with these technologies, and we are exploring their use in contributing to a future free of violence and crime.

Together with the International Criminal Police Organization (INTERPOL), we have created a global platform to discuss advancements in, and the impact of, AI for law enforcement. Since 2018 we have organized an annual global meeting on AI for law enforcement – the third edition of which will take place this November in The Hague. The outputs of these meetings, which include a 2019 report on AI for law enforcement, represent a contribution to advancing AI governance in the law enforcement community. At the second global meeting, law enforcement identified the need for support and guidance to facilitate its adoption of AI while avoiding the many pitfalls.

Responding to this request, we will be elaborating a ‘toolkit’ for responsible AI innovation by law enforcement, offering guidance and support for developing, deploying and using AI in a trustworthy and lawful manner. The toolkit will identify and compile major technology domains and possible use cases; best practices for the responsible use of AI when a law enforcement agency intends to develop an AI-enabled project in-house or procure an AI tool or system externally; and a series of recommended good practices that reflect the general principles and seek to build trust and social acceptance. Our main goal with the toolkit is to produce a practical, operationally oriented document that builds upon work already done and avoids being ‘just one more set of guidelines’. The notion of a ‘toolkit’ was identified as the preferred format because, departing from existing approaches based on ‘guidelines’, ‘regulations’ and ‘frameworks’, it seeks to stimulate the positive potential of AI within the law enforcement community to develop, deploy and use AI systems, while providing guidance on preventing harmful effects.

The positive power and potential of AI is real. However, to access it, we must first work towards ensuring its use is responsible, taking into consideration fundamental principles and rights and respect for the rule of law. Soft-law approaches such as this toolkit can make a valuable contribution to AI governance, particularly in the law enforcement domain, where the use of AI is truly an edge case.