Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify significant developments and to connect knowledge and organisations. Because the domain encompasses a vast number of subjects, four taxonomies (type of threat or opportunity, victim, source of threat and domain of application) have been constructed to visualize them all. The taxonomies and their category descriptions have been carefully composed, drawing on existing taxonomies, European and international standards and our own expertise.
To identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Combined, multiple Trend Snippets can provide insight into safety and security trends. The size of each circle shows the relative weight of its topic, and the filters can be used to select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Difficulties in detecting and identifying algorithmic discrimination at the national level
3.2.4 Detecting algorithmic discrimination

Even though many algorithms may confirm biases or have discriminatory or stereotyping effects, another major problem perceived at the national level is the difficulty of detecting and identifying algorithmic discrimination. Many national experts note that it is not always obvious whether an algorithm really is discriminatory or generates discriminatory effects. In 2020, for example, the German Conference of the Federal and State Ministers for Equality (GFMK) pointed out that, due to the complexity of the matter, it seemed unrealistic that those affected would be able to detect and pursue algorithmic discrimination.[366]

Concrete examples of the problems that can arise in detecting algorithmic discrimination can be seen in Poland, where algorithms are used to assign pupils to nurseries, kindergartens and schools.[367] Given the type of data collected and the opacity of the underlying algorithm, it has been observed that identifying any discriminatory elements is difficult. Possible bias could result from a municipality’s assumptions about which factors to take into account when enrolling children in educational institutions (and what weights to assign to them), but it could also result from errors in the construction of the algorithm. Moreover, the final decision on the assignment of an individual pupil is based on the combined result of many factors of different weights, which can make it challenging to detect an error or a specific instance of discrimination.

Another illustration is the ‘smiles’ ranking used in Poland, which involves a facial recognition system that counts the number of times a consultant smiles during a meeting with a client.[368] As the expert for Poland has noted, in the absence of permanent recordings of the signals received by the facial recognition system, post factum verification of the adequacy of the collected data is in practice impossible, which makes discrimination very hard to detect.
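To make the detection problem concrete, the following minimal sketch (illustrative only, and not drawn from the cases discussed above) computes a disparate impact ratio, a widely used rule-of-thumb metric that compares favourable-outcome rates between groups; the 0.8 threshold follows the common ‘four-fifths rule’. The function name and the sample data are hypothetical.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions, threshold=0.8):
    """Compare favourable-outcome rates between groups.

    decisions: iterable of (group, favourable) pairs, where `group`
    is any hashable label and `favourable` is a bool. Returns the
    ratio of the lowest group rate to the highest, whether it falls
    below `threshold` (the 'four-fifths rule'), and the per-group
    rates. A low ratio is a signal of possible disparate impact,
    not proof of discrimination.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favourable[group] += bool(ok)
    rates = {g: favourable[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold, rates

# Hypothetical enrolment decisions: (group label, admitted?).
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 55 + [("B", False)] * 45
ratio, flagged, rates = disparate_impact_ratio(sample)
print(f"rates={rates}, ratio={ratio:.2f}, flagged={flagged}")
```

Even this simple check presupposes that decisions and group membership are systematically recorded. Where, as in the Polish examples, the factor weights are opaque or the underlying signals are never stored, such post factum verification has no data to work with.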