Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat and domain of application) have been constructed in order to visualize all of these subjects. The taxonomies and related category descriptions have been carefully composed, drawing on other taxonomies, European and international standards and our own expertise.
In order to identify safety and security related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of a topic; the filters can be used to further select the most relevant content for you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Awareness of algorithmic discrimination has not yet led to the introduction of legislation to counter the problem
In the majority of European countries, there is no case law available on specific cases of discrimination or inequality caused by algorithms.
3.3.2.2 Interactions between data protection law and gender equality and non-discrimination law in national scholarship
For most countries (Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Hungary, Iceland, Ireland, Latvia, Liechtenstein, Lithuania, Malta, Netherlands, Romania, Slovakia, Slovenia, Sweden), national experts have reported limited or no legal scholarship analysing the potential interaction between national non-discrimination and data protection regulations in addressing issues of algorithmic discrimination.
. . .
The national reports show that, so far, this awareness has not resulted in strong efforts to introduce legislation to counter such problems. In none of the countries studied has new equality or non-discrimination legislation been adopted, nor has existing legislation been amended to deal with the challenges of algorithmic decision making. Nevertheless, the national experts for Denmark, Germany, Greece, Malta and Norway have suggested that the dynamics of the policy and public debates (discussed in section 3.3.1) are such that some legislative proposals in this field can be expected.519 More often, however, existing legislation in a variety of fields has been mentioned as relevant to help address certain aspects of algorithmic discrimination.
. . .
3.4.2.1 Relevant judgments and decisions by semi-judicial bodies in the European countries
In the vast majority of countries, no case law is yet available on specific cases of gender inequality or discrimination caused by algorithms. To the extent that cases on algorithms are brought before the national courts, they usually relate to specific aspects of algorithmic decision making, such as transparency and data protection. For example, in the Netherlands, the highest administrative court has found that public authorities have a general obligation to ensure the explainability, transparency and accessibility of algorithms, so that individuals can understand how they have been affected by an algorithm and can effectively contest it before a court.542 This position has been embraced by the Supreme Court of the Netherlands.543 In another recent Dutch judgment, the legislation that allowed the use of a predictive profiling algorithm to detect social security fraud (SyRI) was found to be incompatible with the general right to privacy, since individuals were given too little information about the way in which the algorithm operated and used their data.544 Although the parties in the SyRI case raised discrimination complaints, these did not play any significant role in the judgment; the court's reasoning did, however, show explicit awareness of the potential risks of biased and stereotype-based decision making associated with the profiling algorithm in question.