The exceptions and challenges of the EU AI Act for law enforcement agencies
The Act also has significant implications for law enforcement agencies (LEAs), including the challenging tasks of re-evaluating current AI systems and conducting conformity assessments.
LAW ENFORCEMENT EXCEPTIONS TO PROHIBITED PRACTICES
Considering the specificities of law enforcement activities, the co-legislators agreed on some exceptions to the prohibited AI practices, as discussed above. Subject to appropriate safeguards, these exceptions are meant to reflect the need to equip law enforcement with all available tools to act effectively against modern forms of crime, while also respecting the confidentiality of sensitive operational data. For example, according to Article 46(2) of the EU AI Act, law enforcement or civil protection authorities can put a specific high-risk AI system into service urgently for reasons of public security, or in the case of a specific, substantial and imminent threat to the life or physical safety of individuals. This can be done without prior authorisation, provided that an authorisation request is submitted during or immediately after the use of the system. If the authorisation is subsequently rejected, the use of the system must be stopped immediately, and all results and outputs from its use must be discarded.
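To make this urgent-use procedure concrete, the sketch below models it as a minimal state machine in Python. It is an illustration only: the class, field and method names are hypothetical assumptions, not part of the Act or of any official tooling.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    PENDING = auto()      # deployed urgently, authorisation request submitted
    AUTHORISED = auto()   # request granted, use may continue
    STOPPED = auto()      # request rejected, use halted and outputs discarded

@dataclass
class UrgentDeployment:
    """Hypothetical record of an Art. 46(2) urgent deployment."""
    system_id: str
    justification: str    # e.g. imminent threat to life or physical safety
    outputs: list = field(default_factory=list)
    status: Status = Status.PENDING

    def record_output(self, result) -> None:
        if self.status is Status.STOPPED:
            raise RuntimeError("Use was halted; no further outputs permitted.")
        self.outputs.append(result)

    def resolve_authorisation(self, granted: bool) -> None:
        if granted:
            self.status = Status.AUTHORISED
        else:
            # On rejection: stop use immediately and discard all results/outputs.
            self.status = Status.STOPPED
            self.outputs.clear()
```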
Moreover, according to Art. 5(1)(h), the use of real-time remote biometric identification (RBI) systems in publicly accessible spaces is possible only for exhaustively defined law enforcement purposes. These purposes include targeted searches for victims, the prevention of terrorist attacks and threats to life, and the localisation of persons suspected of involvement in serious and organised crime.
The circumstances under which law enforcement agencies are allowed to use real-time RBI systems are subject to specific conditions (Art. 5(2)(a)), summarised here and illustrated in the sketch that follows the list:
- Specifically targeted individuals: The use is limited to confirming the identity of specifically targeted individuals. This implies that real-time RBI should not be used for indiscriminate surveillance or broad identification purposes.
- Limited scope: The use of real-time RBI must be strictly necessary and targeted. This includes limitations on the individuals to be identified, the location and the temporal scope, and being based on a closed dataset of legally acquired video footage.
- Fundamental Rights Impact Assessment (FRIA): Law enforcement authorities are required to complete a fundamental rights impact assessment prior to using these systems. This assessment would evaluate the potential impact on the rights and freedoms of individuals.
- Authorisation requirements: The use of such systems in publicly accessible spaces for law enforcement purposes must be expressly and specifically authorised by a judicial authority or by an independent administrative authority. While the EU AI Act foresees exceptions to this rule, the authorisation should in principle be obtained prior to the use of the system, or at the latest within 24 hours.
- National laws: The exceptions for law enforcement use of real-time RBI will be possible only if there is national law in place explicitly foreseeing this, as outlined in the EU AI Act. As such, Member States have the flexibility to decide whether the exceptions will be applicable in their country, to introduce stricter conditions, or even to impose a horizontal ban on such systems.
- Notification of market surveillance authority: The relevant market surveillance authority and the national data protection authority should be notified of each use of a real-time RBI system.
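To illustrate the cumulative nature of these conditions, the sketch below models them as a simple pre-deployment checklist. All names are hypothetical assumptions chosen for readability; the Act itself prescribes no such data structure.

```python
from dataclasses import dataclass

@dataclass
class RBIDeploymentRequest:
    """Hypothetical record of a planned real-time RBI deployment."""
    targets_specific_individuals: bool  # no indiscriminate or broad identification
    scope_is_limited: bool              # persons, location, time, closed legal dataset
    fria_completed: bool                # fundamental rights impact assessment done
    authorisation_obtained: bool        # judicial or independent administrative authority
    national_law_permits: bool          # Member State law explicitly foresees the use
    authorities_notified: bool          # market surveillance and data protection authority

def may_deploy(request: RBIDeploymentRequest) -> bool:
    # The conditions are cumulative: a single unmet condition blocks deployment.
    return all(vars(request).values())

# Example: one unmet condition is enough to block the deployment.
print(may_deploy(RBIDeploymentRequest(True, True, True, True, True, False)))  # False
```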
The RBI exceptions outlined in the EU AI Act are welcomed from a law enforcement standpoint. These systems enable targeted and effective interventions, while avoiding disproportionate stop-and-search measures based on race, ethnicity or other distinctive physical characteristics. This strategic shift towards a more focused use of technology not only enhances the ability of law enforcement agencies to maintain public safety but also significantly reduces the likelihood of discriminatory practices that have historically marred policing efforts.
However, while these exceptions are seen as a positive development, they also introduce a layer of complexity in the broader context of AI tool adoption within law enforcement. The Act is designed to ensure that relevant technologies are used in a way that upholds fundamental rights and fosters public trust, but this may also slow down adoption, as law enforcement agencies must navigate the additional regulatory requirements and ensure that their AI tools comply with the new standards.
This careful balancing act between leveraging AI for enhanced law enforcement capabilities and adhering to the ethical, legal and regulatory standards set forth by the EU AI Act will likely shape how AI technologies are adopted and implemented by law enforcement agencies across the EU. The success of this endeavour depends on finding a middle ground that allows for the innovative use of AI for policing purposes while safeguarding against misuse of the technology in ways that could infringe upon individual rights and freedoms.
[...]
Implications for law enforcement agencies
The introduction of the EU AI Act poses various challenges for law enforcement agencies' use of AI across the EU, specifically regarding the deployment and utilisation of AI-driven tools. Firstly, the Act's explicit prohibition of certain AI practices means there is an immediate imperative to stop deploying these technologies. Police forces, which may already be utilising certain AI systems, will now face the challenging task of re-evaluating these tools. Should any of these operational technologies fall within the prohibited category set by the Act, they would need to be deactivated, leading to potential challenges in maintaining operational continuity. This raises the question: how will the transition be managed for legacy systems under the new regulations?
Moreover, the process of conformity assessments for systems deemed high-risk by the EU AI Act will undoubtedly be intricate and time-consuming. LEAs will be required to comprehensively assess these systems against the stipulations set by the new regulation. In many instances, this could entail considerable modifications to existing systems to ensure alignment with the new standards.
Consequently, this suggests not only potential changes to software, but also the need to allocate additional financial and staffing resources.
Furthermore, the influence of the Act is not restricted to newly deployed AI systems. Given the dynamic nature of AI and its continuous evolution and updates, systems that are already in operation will also be subject to these regulations. This means that LEAs will be in a perpetual cycle of review and modification, ensuring that their systems, even if previously compliant, remain in line with the regulations, especially if updates alter their functions or associated risks.
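Purely as an illustration of this perpetual review cycle, the sketch below flags a system for re-assessment whenever an update changes its version or risk profile. The inventory record and its fields are assumptions for the example, not terms from the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for a deployed AI system."""
    name: str
    version: str                 # currently deployed version
    assessed_version: str        # version covered by the last conformity review
    risk_profile: str            # e.g. "high-risk" under the EU AI Act
    assessed_risk_profile: str   # risk profile at the time of the last review

def needs_reassessment(record: AISystemRecord) -> bool:
    # A previously compliant system must be reviewed again whenever an
    # update has altered its functions or its associated risk profile.
    return (record.version != record.assessed_version
            or record.risk_profile != record.assessed_risk_profile)
```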
A particularly challenging scenario emerges for LEAs that have taken the initiative to develop AI tools internally. These agencies will confront the dual responsibility of ensuring compliance both as users and as developers. This entails a substantial investment in guaranteeing that every stage of the process, from development and data collection to training and deployment, strictly adheres to the requirements of the EU AI Act.