Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. To visualise all of these subjects, four taxonomies have been constructed: type of threat or opportunity, victim, source of threat and domain of application. The taxonomies and related category descriptions have been carefully composed, drawing on existing taxonomies, European and international standards and our own expertise.
In order to identify safety and security related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insight into safety and security trends. The size of each circle shows the relative weight of a topic, and the filters can be used to select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
The involvement of human beings bears risk in relation to algorithmic decision making
Humans thus play a crucial role in the programming, training and use of algorithms. Indeed, many people trust algorithms only if and because there is a ‘human in the loop’.[95] At the same time, it is important to understand that the involvement of human beings bears particular risks in relation to algorithmic decision making, especially from the perspective of equality and non-discrimination. It is well known that human reasoning shows flaws, biases, logical errors and fallacies, which may have an impact on the programming of algorithms.[96] Equally, the (personal, societal and therefore human-derived) data fed into algorithms in the training and use stages may be non-neutral and biased, for instance because it reflects patterns of discrimination, as is further explained in section 1.4.2 below.[97] For that reason, the perpetuation of human bias has been typified as one of the key challenges for modern algorithmic societies:[98] algorithmic systems tend simply to ‘reflect the values of their creators’.[99] Consequently, the mechanisms described here may easily lead to the perpetuation of prejudice, overbroad or harmful stereotypes and structural forms of discrimination. In other words, humans’ discriminatory attitudes risk being translated into, and reflected in, the algorithms that humans build.
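To make this mechanism concrete, below is a minimal, hypothetical sketch in plain Python. All data and names are invented for illustration: a toy ‘model’ trained on biased historical loan decisions ends up reproducing the human bias it was trained on, even though applicants in both groups have identical credit scores. It is a sketch of the general pattern the passage describes, not an implementation of any particular system.

# Minimal sketch (hypothetical data): a model trained on biased human
# decisions reproduces that bias in its own predictions.

# Invented historical loan decisions: identical credit scores across
# groups, but group "B" applicants were approved less often by humans.
history = [
    {"group": "A", "score": 700, "approved": True},
    {"group": "A", "score": 650, "approved": True},
    {"group": "A", "score": 600, "approved": True},
    {"group": "B", "score": 700, "approved": True},
    {"group": "B", "score": 650, "approved": False},
    {"group": "B", "score": 600, "approved": False},
]

def train(records):
    # "Training" here just memorises the historical approval rate per
    # group -- a stand-in for any model that learns group membership
    # (or a proxy for it) as a predictive signal.
    counts = {}
    for r in records:
        approved, total = counts.get(r["group"], (0, 0))
        counts[r["group"]] = (approved + r["approved"], total + 1)
    return {g: a / t for g, (a, t) in counts.items()}

def predict(rates, group):
    # Approve whenever the learned approval rate for the group exceeds 50%.
    return rates[group] > 0.5

rates = train(history)
for group in ("A", "B"):
    print(group, "->", "approved" if predict(rates, group) else "rejected")
# Prints: A -> approved, B -> rejected. The credit scores were identical
# across groups; the model simply perpetuates the bias in its training data.

The point of the sketch is that nothing in the code is overtly discriminatory: the bias enters entirely through the human-derived training data, exactly the risk the passage above describes.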