Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments, and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. To visualise all of these subjects, four taxonomies have been constructed: type of threat or opportunity, victim, source of threat, and domain of application. The taxonomies and related category descriptions have been carefully composed, drawing on other taxonomies, European and international standards, and our own expertise.
To identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed, and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insight into safety and security trends. The size of each circle shows the relative weight of its topic, and the filters can be used to select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Algorithms may reproduce and strengthen existing patterns of inequality by reifying discriminatory correlations
1.4.3 The correlation and proxies challenge

Algorithms that are used for pattern recognition (see section 1.2.2.1) are often very good at detecting correlations and patterns in large volumes of data.[120] However, correlations do not always correspond to causal relationships.[121] For example, gender might negatively correlate with level of performance at work, not because of a causal relationship, but because women have historically been evaluated more negatively than men for the same work performance.[122] This example shows that decisions based on correlations found by an algorithm may not always be acceptable from a human perspective, since human thinking is informed by normative or ethical considerations and by causal logic.[123] Moreover, algorithms may reproduce and strengthen existing patterns of inequality by reifying discriminatory correlations. This correlation challenge is exacerbated by the fact that algorithms are very good at detecting ‘proxies’.[124] For example, algorithms may be trained not to base an output on certain personal characteristics, such as gender, ethnic origin or religion, in order to avoid discrimination.
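The point that a pattern learner can absorb historical bias rather than a causal relationship can be illustrated with a minimal sketch. The data and the trivial "learner" below are entirely hypothetical: identical work performance for both groups, but systematically lowered historical ratings for one of them.

```python
from collections import defaultdict

# Hypothetical records: (gender, actual_performance, historical_rating).
# Performance is identical (0.8) across groups, but the "F" group was
# historically rated lower for the same work.
records = [
    ("M", 0.8, "good"), ("M", 0.8, "good"),
    ("F", 0.8, "poor"), ("F", 0.8, "poor"),
]

# A trivial pattern detector: estimate P(rating == "good") per gender.
counts = defaultdict(lambda: [0, 0])  # gender -> [n_good, n_total]
for gender, _performance, rating in records:
    counts[gender][1] += 1
    if rating == "good":
        counts[gender][0] += 1

for gender, (n_good, n_total) in counts.items():
    print(gender, n_good / n_total)
```

The detector "discovers" that gender strongly predicts the rating, even though the underlying performance is identical for every record: it has learned the biased evaluations, not a causal link between gender and performance.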
Nevertheless, they may easily detect other variables and ‘neutral’ data points that are very closely related to those characteristics, ranging from certain types of clicking behaviour to zip codes and preferences for particular types or colours of cars.[125] If algorithms take account of such ‘proxy variables’ in identifying correlations, they can approximate the original prohibited characteristic very closely, producing the same discriminatory outcomes without this being highly visible.[126] This can be coincidental or a result of deeply ingrained, structural discrimination,[127] but it can also be intentional, which is known as ‘masking’: a trivial and non-suspect proxy is used to mask a case of conscious discrimination based on a protected ground.[128] This also makes clear that simply omitting certain personal data in the process of developing an algorithm, such as information about gender or ethnicity, does not guarantee that discrimination is avoided.[129] Although that may help to reduce the possibility of ‘overt’ (as opposed to ‘covert’[130]) direct discrimination, due to the prevalence of proxies there may still be room for indirect discrimination.[131] This is what is called the proxies challenge.
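The proxy mechanism described above can also be sketched with a few lines of code. In this hypothetical example, the protected attribute (gender) is deliberately withheld from the model, but a proxy variable (zip code) happens to encode it almost perfectly, so a model trained only on the proxy reproduces the discriminatory historical decisions anyway.

```python
from collections import Counter

# Hypothetical applicants: (gender, zip_code, historical_decision).
# The model never sees the gender column; zip code acts as a proxy.
applicants = [
    ("F", "1011", "reject"), ("F", "1011", "reject"),
    ("M", "2022", "accept"), ("M", "2022", "accept"),
]

# "Train" on zip code alone: count historical decisions per zip code.
by_zip = {}
for _gender, zip_code, decision in applicants:
    by_zip.setdefault(zip_code, Counter())[decision] += 1

def predict(zip_code):
    # Predict the majority historical decision for this zip code.
    return by_zip[zip_code].most_common(1)[0][0]

print(predict("1011"))  # zip code correlated with the "F" group
print(predict("2022"))  # zip code correlated with the "M" group
```

Although gender was omitted, the predictions track the protected attribute exactly, via the proxy. This is why dropping sensitive columns does not by itself prevent indirect discrimination.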