Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments, and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat, and domain of application) have been constructed in order to visualise all of these subjects. The taxonomies and related category descriptions have been carefully composed by drawing on existing taxonomies, European and international standards, and our own expertise.
In order to identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed, and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of the topic; the filters can be used to further select the most relevant content for you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
The algorithmic decision-making chain is complicated when discrimination arises from technology that combines algorithms and enabling technologies
In addition, this fragmented algorithmic decision-making chain is further complicated when discrimination arises from a technology that integrates various algorithms and combines them with enabling technologies (e.g. the internet of things). As explained in section 1.2.4, AI applications are often complex and made up of various algorithmic and data-generating components. For example, algorithmic decision making can involve situations where the output of one particular algorithm, which itself relies on the data generated by a given connected object (e.g. voice assistants such as Amazon’s Alexa, children’s toys or connected home appliances such as washing machines, heaters, etc.), is used as input for another algorithm. Such a situation creates manifold risks and problems. If one of the connected systems fails, for example because of a technical failure or a misinterpretation of data, the other systems may fail as well – resulting in a process of cascading failures.287 Moreover, the interconnectedness of technologies and the fragmented nature of algorithmic decision-making processes multiply the number of actors involved and make the distribution of responsibility and liability even more obscure. If discrimination arises at the end of the chain, how can responsibility be traced? Will all those involved face collective liability? Should one particular person or organisation bear the liability burden alone? If so, who should that be? Moreover, when there is a ‘human in the loop’ during the actual decision-making phase, should the human bear responsibility, should it be the machine, or should it be both? This could produce hybrid liability situations that equality and non-discrimination law might not yet be fit to address.288
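The chained architecture described above – the output of one algorithm, fed by a connected object, serving as input to another – can be sketched in code. The following is a minimal illustration with entirely hypothetical component names (none of them correspond to any real product or API); it shows how a fault or misinterpretation in one stage propagates downstream, producing the kind of cascading failure the text describes:

```python
def sensor_reading(raw):
    """First stage: a connected object producing data (hypothetical)."""
    if raw is None:
        raise ValueError("sensor failure: no data")
    return raw.strip().lower()

def intent_classifier(text):
    """Second stage: an algorithm consuming the sensor's output.
    A misinterpretation here (an unrecognised phrase) corrupts the chain."""
    known = {
        "turn on heating": "heating_on",
        "turn off heating": "heating_off",
    }
    return known.get(text, "unknown")

def decision(intent):
    """Third stage: an algorithm acting on the classifier's output."""
    if intent == "unknown":
        raise RuntimeError("downstream failure: cannot act on unclear intent")
    return f"executed:{intent}"

def run_chain(raw):
    """Each stage depends on the previous one, so any upstream fault
    cascades: a failing sensor or a misread phrase brings down the
    decision stage too."""
    try:
        return decision(intent_classifier(sensor_reading(raw)))
    except (ValueError, RuntimeError) as err:
        return f"chain failed: {err}"
```

Note how the sketch also illustrates the attribution problem raised in the text: when `run_chain` reports a failure, nothing in the final output alone reveals whether the sensor, the classifier, or the decision stage was responsible.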