Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat and domain of application) have been constructed in order to visualise all of these subjects. The taxonomies and related category descriptions have been carefully composed on the basis of existing taxonomies, European and international standards and our own expertise.
In order to identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of a topic, and the filters can be used to further select the most relevant content for you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Media has reported an increasing number of cases of discrimination by algorithms
2.1 The scope of EU gender equality and non-discrimination law in light of the problem of algorithmic discrimination

This section discusses the issues of algorithmic discrimination that arise in relation to gender equality and non-discrimination, and reviews the risks and challenges they pose in light of the current personal and material scope of the EU legal framework, offering specific examples and analyses where relevant.

The media have reported an increasing number of cases of gender discrimination by algorithms over recent years. There are numerous examples, many of which relate to algorithmic applications in use in the United States, such as the Apple Card algorithm, which was found to grant higher credit limits to men than to women despite the latter having higher credit scores,159 or Amazon’s algorithmic hiring prototype, which was found to discriminate against women.160 Similarly, numerous examples of algorithmic discrimination have been noted in relation to other protected grounds. A study by Obermeyer and others, for instance, shows how an algorithm used to predict patients’ healthcare needs led to widespread discrimination on grounds of race.161 Because the algorithm used healthcare costs as a proxy for illness risk, and those costs reflected the unequal access to healthcare services of Black and White populations in the US, Black patients were rated as less at risk than White patients for similar levels of actual illness and consequently received a smaller allocation of resources. Scholars have also demonstrated, for example, that the email service Gmail uses protected grounds such as sexual orientation or religious beliefs to expose users to targeted ads and recommendations.162 As will be shown in Chapter 3, many such examples of (potentially) discriminatory uses of algorithms can also be seen in the various European countries.
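The proxy mechanism described in the Obermeyer study can be illustrated with a small simulation. The sketch below is purely hypothetical: the group labels, the illness distribution and the assumed 30% gap in access to care are invented for illustration and are not drawn from the study’s data or model. It only shows how a score trained on recorded healthcare costs can under-rate the needs of a group that, for the same level of illness, generates lower recorded spending.

```python
import numpy as np

# Illustrative sketch only (not the model studied by Obermeyer et al.):
# a risk score based on healthcare *costs* under-rates a group with poorer
# access to care, even though the score never sees the protected attribute.

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical groups with identical underlying illness burden.
group = rng.integers(0, 2, size=n)                  # 0 = group A, 1 = group B
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, same distribution

# Assumed disparity: group B converts illness into recorded costs at a lower
# rate (less access to care), so equal need produces lower observed spending.
access = np.where(group == 1, 0.7, 1.0)
cost = illness * access * 1000 + rng.normal(0, 100, size=n)

# "Risk score" built from cost: here simply the rescaled cost itself,
# standing in for any model that uses spending as its prediction target.
risk_score = cost / cost.max()

# Compare patients with the same high level of true illness across groups.
high_need = illness > np.quantile(illness, 0.9)
for g, label in [(0, "group A"), (1, "group B")]:
    mask = high_need & (group == g)
    print(f"{label}: mean true illness {illness[mask].mean():.2f}, "
          f"mean risk score {risk_score[mask].mean():.3f}")

# Group B shows the same true illness but a lower risk score, so any
# resource allocation cut-off on the score would direct less care to it.
```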