Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments, and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. To visualize all of these subjects, four taxonomies have been constructed: type of threat or opportunity, victim, source of threat, and domain of application. The taxonomies and related category descriptions have been carefully composed on the basis of other taxonomies, European and international standards, and our own expertise.
To identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed, and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Combined, multiple Trend Snippets can provide insights into safety and security trends. The size of the circles shows the relative weight of a topic, and the filters can be used to further select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Cognitive biases are at play when humans are assisted by algorithms
Another particular challenge arises from the cognitive biases that are at play when humans are assisted by algorithms.100 It has been shown, for example, that human decision makers tend to trust the outcomes of an algorithm, being convinced that the algorithm probably ‘knows’ or performs better than they would.101 This so-called ‘automation bias’ in favour of the algorithm may lead to ‘commission errors’ or rubber-stamping: trusting the quality and authority of the algorithm, human decision makers tend to embrace the decision it suggests.102 A human decision maker who wants to take a more critical stance and disagrees with the suggested outcome may feel additional pressure to justify her decision to deviate from the computer output.103 This may not be easy, even if the decision maker’s own intuition and experience tell her that a certain decision simply cannot be right.104
Similarly, algorithmic output may lead to ‘anchoring’, for example when judges impose sanctions.105 Based on algorithmic analysis of a large number of previous cases, an application might suggest that, in a particular case of shoplifting or burglary, a particular fine or prison sentence would be indicated. This suggestion then involuntarily forms an anchor for the judge, who will tend to stay relatively close to the indicated level of the sanction, even if she might have arrived at a very different sanction had she made the decision fully on her own.106
When combined with the risks of bias in the data or flaws in the programming of an algorithm, these cognitive phenomena of rubber-stamping, automation bias and anchoring pose an additional challenge in terms of non-discrimination.
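To make the anchoring mechanism concrete, the toy simulation below is a minimal sketch: it models each judge’s final sanction as a weighted average of her own assessment and the algorithm’s suggestion. The anchor value, the weight w, and the spread of the judges’ own assessments are all hypothetical assumptions chosen for illustration, not empirical estimates from the studies cited above.

```python
import random
from typing import Optional

# Purely illustrative simulation of the anchoring effect described above.
# All numbers (the 800-euro anchor, the weight W, the spread of judges'
# own views) are hypothetical assumptions, not empirical estimates.

random.seed(42)

ANCHOR = 800.0   # hypothetical fine (in euros) suggested by the application
W = 0.6          # assumed strength of the anchoring pull (0 = none, 1 = full)

def sanction(own_view: float, anchor: Optional[float], w: float = W) -> float:
    """Final sanction: the judge's own assessment, pulled toward the anchor."""
    if anchor is None:
        return own_view
    return (1 - w) * own_view + w * anchor

# Judges' independent assessments vary around 500 euros.
own_views = [random.gauss(500, 150) for _ in range(1000)]

unanchored = [sanction(v, None) for v in own_views]
anchored = [sanction(v, ANCHOR) for v in own_views]

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean sanction without suggestion: {mean(unanchored):6.1f}")
print(f"mean sanction with suggestion:    {mean(anchored):6.1f}")
# With W = 0.6 the mean shifts from roughly 500 toward the 800-euro anchor,
# and individual sanctions cluster around the suggested level.
```

Under these assumptions, the average sanction moves from about 500 euros to about 680 euros once the suggestion is shown, which is the clustering around the indicated level that the anchoring literature describes.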