Ethical and social issues in AI for law enforcement
As the previous section demonstrated, Artificial Intelligence is becoming an increasingly vital tool for policing across the European Union. At the same time, it raises a multitude of ethical and social challenges that require careful analysis. This chapter examines the critical areas of concern: the potential for data bias and its implications for fairness; the fine line between surveillance for security and the infringement of individual privacy; and the pressing need for accountability and transparency in AI deployments, with an emphasis on the ‘black box’ problem. The chapter also discusses the potential for AI to either exacerbate or mitigate human rights issues and discrimination within the realm of law enforcement.
Data bias and fairness
Data is at the core of any AI system, and its quality directly influences the outcomes the system produces. Any skew in the data can unintentionally lead to unfair or biased outcomes. Fair and unbiased policing is a foundational pillar of democratic societies, and recognising and eliminating bias is therefore of particular concern to law enforcement.
Bias in data can emerge from numerous sources. Historical data used to train AI systems can embed longstanding societal biases, reflecting past prejudices and discriminatory practices. For instance, if a certain neighbourhood was historically over-policed due to
racial or socio-economic biases, an AI system trained on this data might suggest that the area is more prone to criminal activity. Such outcomes can create a feedback loop: law enforcement continues to over-police that area, records a disproportionate number of crimes there, and thereby reinforces the biases present in the data.
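To illustrate this feedback dynamic, the following minimal sketch simulates, with purely invented figures, how allocating patrols in proportion to historically recorded incidents can keep an over-policed area looking more crime-prone than an identical one.

```python
import random

random.seed(0)

# Two areas with the SAME underlying crime rate; area A was historically over-policed,
# so its recorded-incident count starts higher. All figures are invented for illustration.
true_rate = {"A": 0.10, "B": 0.10}
recorded = {"A": 200, "B": 100}   # biased historical record
patrol_budget = 1000              # patrol hours available each year

for year in range(5):
    total = sum(recorded.values())
    # "Predictive" allocation: patrol hours proportional to past recorded incidents.
    allocation = {area: patrol_budget * recorded[area] / total for area in recorded}
    for area, hours in allocation.items():
        # More patrol hours in an area -> more of its (equal) underlying crime gets recorded.
        detections = sum(random.random() < true_rate[area] for _ in range(int(hours)))
        recorded[area] += detections
    print(year, recorded)
# Area A keeps looking roughly twice as crime-prone as area B even though the true rates
# are identical: the biased starting point reproduces itself through the allocation rule.
```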
Beyond historical biases, there is also the challenge of
representational bias. If data does not adequately represent all segments of the population, the AI system can make flawed predictions, and overrepresented groups can be disproportionately affected. For instance, a study by the EU FRA found that offensive speech detection algorithms, such as those for identifying hate speech or harassment, had higher error rates for certain demographic groups. A major contributing factor is the association of certain terms with particular groups (e.g., ‘Muslim’, ‘gay’, ‘Jewish’), which can cause the algorithms to mistakenly classify non-offensive phrases as offensive. Since these terms are used more frequently by members of the respective groups, their content is more likely to be wrongly flagged as offensive and subsequently removed, due to the terms’ overrepresentation in the training data. On the other hand, groups that are underrepresented in the data may not benefit from the same level of policing protection.
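The mechanism can be illustrated with a deliberately simplistic, hypothetical keyword-based flagging rule (not any real system): because identity-related terms co-occur with offensive content in skewed training data, the rule treats the terms themselves as offensive markers and wrongly flags benign phrases that contain them.

```python
# A toy, hypothetical keyword-based flagging rule (not a real system). Because the listed
# identity terms co-occurred with offensive content in skewed training data, the rule has
# absorbed them as if they were offensive markers.
learned_offensive_terms = {"idiot", "hate", "muslim", "gay", "jewish"}

def flag(text: str) -> bool:
    # Flag a post if any "learned" term appears among its words.
    return any(word in learned_offensive_terms for word in text.lower().split())

examples = [
    "I hate this idiot",             # genuinely offensive: correctly flagged
    "Proud to be gay",               # benign self-description: wrongly flagged
    "Muslim community centre opens", # benign news headline: wrongly flagged
]

for text in examples:
    print(f"{text!r} -> flagged: {flag(text)}")
```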
It is worth noting that there is not a universal agreement on the precise definitions of fairness. Various interpretations exist. In some instances, it is justified to use protected categories like gender
and age; for instance, an AI system that infers whether a person is a minor, in order to afford them additional protection, needs to be trained with relevant sensitive data. Such situations should therefore be evaluated individually, and ultimately, humans must always determine how to act on the information provided by the AI.
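To make concrete how fairness definitions can pull in different directions, the sketch below uses invented predictions for two groups and computes two common criteria, demographic parity (equal flag rates) and equality of false-positive rates; the same set of predictions can satisfy one while violating the other.

```python
# Invented predictions for two groups; 1 = flagged as high risk, 0 = not flagged.
# Each tuple is (group, true outcome, predicted outcome). Figures are illustrative only.
data = [
    ("g1", 1, 1), ("g1", 0, 1), ("g1", 0, 0), ("g1", 0, 0),
    ("g2", 1, 1), ("g2", 1, 1), ("g2", 0, 0), ("g2", 0, 0),
]

def flag_rate(group):
    rows = [row for row in data if row[0] == group]
    return sum(pred for _, _, pred in rows) / len(rows)

def false_positive_rate(group):
    rows = [row for row in data if row[0] == group and row[1] == 0]
    return sum(pred for _, _, pred in rows) / len(rows)

for group in ("g1", "g2"):
    print(group, "flag rate:", flag_rate(group),
          "false-positive rate:", round(false_positive_rate(group), 2))
# Here both groups are flagged at the same rate (demographic parity holds), yet innocent
# members of g1 are wrongly flagged more often (equal false-positive rates do not hold):
# two reasonable fairness criteria disagree about the same set of predictions.
```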
Privacy and surveillance
In law enforcement, striking the right balance between public security and individual privacy has always been a challenge. As AI integrates more deeply into policing methods, this balance becomes even more delicate.
Law enforcement agencies across the EU have long operated within a robust legislative and regulatory framework. The introduction of regulations such as the General Data Protection Regulation (GDPR) and the Law Enforcement Directive (LED) underscores the EU’s proactive stance on safeguarding data protection and individual privacy rights. These regulations serve as foundational pillars governing the intersection of technology and citizens’ rights, fortified by robust enforcement mechanisms, human oversight, and avenues for redress. They are not merely legal frameworks but comprehensive measures to ensure the responsible handling of personal data, fostering transparency, accountability, and trust in digital interactions.
While AI offers significant advantages for law enforcement, such as the ability to process vast amounts of data and utilise biometrics for rapid criminal identification and threat assessment, it also
brings with it complex challenges. Advanced technologies like facial recognition systems can dramatically enhance efficiency. However, without sufficient safeguards, such as human oversight to evaluate their outputs, these technologies risk infringing on fundamental rights, such as the right to private life and the right to personal data protection (Art. 7 and 8 of the EU Charter of Fundamental
Rights). Such risks could manifest as disproportionate surveillance of innocent individuals or misuse of the technology against specific groups, raising concerns about privacy and the necessity of such monitoring.
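One possible shape for such a safeguard, sketched below with invented thresholds and case identifiers, is a human-in-the-loop gate in which a biometric match score only determines whether a case is surfaced for review, and never triggers enforcement action on its own.

```python
from dataclasses import dataclass

# Toy sketch of a human-in-the-loop safeguard for a biometric matching pipeline.
# Threshold, scores and case identifiers are invented for illustration only.
SURFACE_THRESHOLD = 0.80  # candidate matches below this are discarded, not shown

@dataclass
class CandidateMatch:
    case_id: str
    score: float  # similarity score produced by the matching system

def handle(match: CandidateMatch) -> str:
    if match.score < SURFACE_THRESHOLD:
        return "discarded: score too low to surface"
    # A strong score never triggers enforcement action by itself; it only places the
    # case in a queue where a human reviewer evaluates the output before any decision.
    return "sent to human reviewer for assessment"

for match in [CandidateMatch("case-001", 0.97), CandidateMatch("case-002", 0.62)]:
    print(match.case_id, "->", handle(match))
```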
As the world copes with the implications of AI and surveillance, the EU, fortified by its stringent regulations, institutional ethos, and a history of prioritising its citizens, is uniquely poised to shape a path where technological advancements strengthen security without compromising individual rights. This coexistence can serve as a global model, ensuring that technology remains a tool for the improvement of society.
Accountability and transparency
Accountability and transparency serve as cornerstone principles in democratic societies, ensuring that power structures remain in service to the community and function with integrity. As AI becomes a prominent tool within law enforcement, these principles must be at the forefront to maintain public trust and ensure justice.
Despite the benefits the technology brings, one of the
primary concerns is the potential for decisions, predictions or recommendations made by AI to remain unexplained or unjustified. When the output of AI is used to support decision-making in law enforcement – be it biometric identification or threat assessment – it is crucial for both police officers and those affected by these decisions to understand the rationale behind them. Without this clarity, the risk of mistrust, misuse, and potential injustices escalates.
In the EU, the demand for accountability and transparency is not new. However, AI’s unique nature, where algorithms often operate with layers of complexity beyond human comprehension,
introduces novel challenges. There is a pressing need for mechanisms that make AI’s decision-making processes interpretable, especially in high-stakes environments like policing and criminal justice: not only in terms of how relevant evidence is collected, processed and presented before a court or tribunal, but also in a broader sense, to ensure that citizens can comprehend, engage with, and challenge the use of AI.
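Post-hoc explanation techniques are one family of such mechanisms. The sketch below, using an invented toy risk-scoring model and made-up feature names, computes a simple permutation-style importance score indicating which inputs most influence the model’s outputs.

```python
import random

random.seed(1)

# Invented feature names and a stand-in "opaque" risk-scoring model, used purely to
# illustrate a post-hoc explanation technique; this is not any real policing system.
feature_names = ["prior_incidents", "time_of_day", "area_code"]

def risk_score(features):
    prior, time_of_day, area = features
    return 0.7 * prior + 0.2 * time_of_day + 0.1 * area

# Small invented evaluation set with one row per case.
dataset = [[random.random() for _ in feature_names] for _ in range(200)]

def permutation_importance(index):
    # How much does the score change, on average, when one feature is shuffled across cases?
    baseline = [risk_score(row) for row in dataset]
    shuffled = [row[index] for row in dataset]
    random.shuffle(shuffled)
    perturbed = []
    for row, value in zip(dataset, shuffled):
        modified = list(row)
        modified[index] = value
        perturbed.append(risk_score(modified))
    return sum(abs(b - p) for b, p in zip(baseline, perturbed)) / len(dataset)

for i, name in enumerate(feature_names):
    print(f"{name}: importance {permutation_importance(i):.3f}")
# Larger values indicate inputs the model relies on more heavily, giving officers and
# affected individuals a starting point for scrutinising an otherwise opaque output.
```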
Ensuring accountability also entails setting clear responsibilities. When an AI tool is used to generate recommendations or make predictions, who is to be held accountable if there is an error or if it results in injustice? Is it the software developers, the law enforcement agency using the tool, or the overarching regulatory body? A clear allocation of responsibility is vital to ensure that AI tools in law enforcement remain both effective and just.
Returning to the broader landscape, it becomes evident that for AI to truly benefit law enforcement in the European Union and maintain public trust, a rigorous commitment to accountability and transparency is essential. The development of frameworks to explain AI’s decision-making processes, together with well-defined regulatory standards and clarity in assigning responsibility, is indispensable for establishing this balance.
Human Rights and Discrimination
In the EU, where human rights are deeply embedded in our foundational values, integrating AI into law enforcement brings forth several challenges. The primary concern is AI’s unintended reinforcement or amplification of societal biases due to reliance on
historical data. As discussed, such biases can lead to the unjustified targeting of particular social groups and, in turn, to disproportionate policing.
Furthermore, AI’s predictive capabilities can mistakenly categorise individuals based on broad data patterns. Such generalisations risk infringing on the fundamental principle of “innocent until proven guilty,” raising valid concerns about the right to a fair trial.
To foster a balanced integration of AI within this critical paradigm, law enforcement has an array of options. Firstly, the significance of undertaking comprehensive audits cannot be overstated. Every AI system, before its active implementation in law enforcement, should
undergo an in-depth assessment. While the technical robustness of these systems is essential, it is equally important to ensure their conformity with relevant frameworks such as the Ethics Guidelines for Trustworthy AI introduced by the High-Level Expert Group on AI. By identifying and addressing any inherent biases at this stage, we can set the foundation for fair and unbiased AI implementations.
Equally crucial is the need to facilitate community engagement. Certain communities frequently find themselves excluded from the mainstream of technological advancements, often facing unintended negative impacts as a result. Through fostering
continuous dialogue with these communities, law enforcement can gather perspectives that purely technical evaluations would miss. Proactive engagement not only improves trust but also ensures that AI systems are deployed in a way that resonates with the broader ideals of fairness, inclusivity, and justice.
Lastly, the dynamic nature of AI necessitates continuous monitoring and evolution. Technologies evolve, societal norms shift, and new challenges arise. In such a landscape, ensuring that AI applications
in law enforcement are subject to ongoing monitoring becomes essential. This iterative scrutiny and feedback enables real-time adjustments, ensuring that AI-driven initiatives in law enforcement consistently mirror and uphold the EU’s dedication to equal rights,
justice, and human dignity.
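As a rough illustration of what such ongoing monitoring could look like in practice, the sketch below (with invented periods, groups and figures) recomputes a simple disparity metric on a system’s outputs each review period and raises an alert when it drifts past an agreed bound.

```python
# Toy sketch of ongoing monitoring: each review period, recompute a simple disparity
# metric on the system's outputs and raise an alert when it drifts past an agreed bound.
# Periods, group labels and figures are invented for illustration only.
ALERT_BOUND = 1.25  # maximum tolerated ratio between per-group flag rates

flag_rates_by_period = {
    "period 1": {"group_a": 0.12, "group_b": 0.11},
    "period 2": {"group_a": 0.15, "group_b": 0.11},
    "period 3": {"group_a": 0.19, "group_b": 0.10},
}

for period, rates in flag_rates_by_period.items():
    ratio = max(rates.values()) / min(rates.values())
    status = "ALERT: review and adjust the system" if ratio > ALERT_BOUND else "within bounds"
    print(f"{period}: disparity ratio {ratio:.2f} -> {status}")
```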