Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments, and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. To visualise all of these subjects, four taxonomies have been constructed: type of threat or opportunity, victim, source of threat, and domain of application. The taxonomies and their category descriptions have been carefully composed with reference to existing taxonomies, European and international standards, and our own expertise.
To identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insight into safety and security trends. The size of each circle shows the relative weight of its topic, and the filters can be used to select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Due to the plurality of actors involved in the design and use of algorithms, assigning responsibility is a challenge
1.4.6 The responsibility challenge

Finally, it is important to note that a variety of different players are involved in the stages of algorithmic decision making discussed in section 1.3. Different people or companies are responsible for setting the objectives, deconstructing decision-making processes, programming and training algorithms, collecting and preparing the training data, using algorithms for decision making, monitoring and supervising their effects, and so on.[143] Consequently, if at some point a discriminatory outcome is detected (for instance, because an algorithm systematically suggests that men rather than women should be promoted to a certain position), it may be very difficult for the victim of discrimination, or for supervisory or monitoring bodies, to know whom among the various players involved (the developers, the sellers, or the end user of the algorithm; in the example above, the HR service) to hold responsible, liable and/or accountable for that discriminatory outcome.[144] This is even more true in situations where different algorithms and enabling technologies work together, as is often the case in AI applications.[145] Identifying the ‘agent’ (person, body, institution, technological application or company) responsible for a case of discrimination therefore poses a particular challenge in relation to algorithms.[146]