Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. To visualize all of these subjects, four taxonomies have been constructed: type of threat or opportunity, victim, source of threat and domain of application. The taxonomies and related category descriptions have been carefully composed, drawing on existing taxonomies, European and international standards and our own expertise.
In order to identify safety and security related trends, relevant reports and HSD news articles are continuously scanned, analysed and manually classified according to the four taxonomies. This results in a wide array of observations, which we call 'Trend Snippets'. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles indicates the relative weight of a topic; the filters can be used to select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Complexity of algorithms poses a problem for human decision makers
1.4.4 The transparency and explainability challenge
Another common characteristic of (and challenge related to) algorithms is that they are opaque and difficult to explain, especially to non-experts.[132] Even relatively straightforward, rule-based algorithms may be so complex that outsiders cannot easily comprehend their workings.[133] It is even more difficult for people to understand exactly how self-learning algorithms work, in particular deep-learning algorithms.[134] Such algorithms may still be transparent to technical experts, especially if they are given all the necessary information on the relevant source code, input variables, parameters and threshold values,[135] but lay people will find it very difficult to understand how a self-learning algorithmic application identifies an individual risk or a specific pattern.[136] Obviously, this is even more true of intricately interconnected sets of algorithms that function with some degree of autonomy and can almost mimic human intelligence, as may be the case for AI systems.
The lack of transparency for outsiders and lay people, combined with the difficulty of explaining the workings of an algorithm, makes it hard for human decision makers to identify any flaws, biases or ill-qualified correlations that may be part of the algorithmic process.[137] Many people who are subjected to algorithmic decision making will never know exactly how the decisions that affect them daily are made, whether those decisions concern price-setting or an employment offer, influence their insurance premiums or lead to the removal of their social media posts. This opacity and lack of information make discrimination and bias difficult to discover.[138] Hence, in the absence of algorithmic transparency and explainability (a process by which the 'black box' of an algorithm is made intelligible and understandable to human experts), it becomes a challenge for potential victims of discrimination, as well as for monitoring and supervisory bodies and courts, to detect and provide evidence of discrimination.
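To make the 'black box' problem concrete, the sketch below trains a small self-learning model (a gradient-boosted tree ensemble) on synthetic data and then probes it with permutation importance, one common post-hoc explainability technique. Everything here is illustrative and not drawn from the report itself: the data, the feature names and the library choice (Python with scikit-learn) are assumptions. The point it demonstrates is the partial nature of such explanations: they reveal which inputs drive the model's decisions, while the exact way those inputs combine remains opaque.

```python
# Minimal sketch (hypothetical data and feature names) of probing a
# black-box classifier with permutation importance.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "insurance risk" data: four hypothetical input variables.
feature_names = ["age", "claims_history", "postcode_risk", "vehicle_value"]
X = rng.normal(size=(1000, 4))
# Ground truth depends mainly on claims_history and postcode_risk.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The trained ensemble consists of hundreds of decision trees; no single
# tree is individually meaningful to a lay person -- a "black box".
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy drops. This tells us *which* inputs matter,
# but not *how* they interact -- a partial, post-hoc form of transparency.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda pair: -pair[1]):
    print(f"{name:15s} accuracy drop when shuffled: {imp:.3f}")
```

Even with this kind of tooling, the explanation is aimed at technical experts; translating it into something a person affected by the decision can contest is exactly the challenge the report describes.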