Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments, and to connect knowledge and organisations. Because the safety and security domain encompasses a vast number of subjects, four taxonomies (type of threat or opportunity, victim, source of threat, and domain of application) have been constructed to visualise them. The taxonomies and related category descriptions have been carefully composed, drawing on other taxonomies, European and international standards, and our own expertise.
To identify safety and security related trends, relevant reports and HSD news articles are continuously scanned, analysed, and manually classified according to the four taxonomies. This results in a wide array of observations, which we call 'Trend Snippets'. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of a topic, and the filters can be used to further select the most relevant content for you. If you have an addition, question, or remark, drop us a line at info@securitydelta.nl.
Increased use of algorithms poses risk of automation bias and manipulation
Digital division
Digital division comes in many guises, from automated bias that can be exploited for manipulation to gaps in accessibility and capacity.
Automating bias and manipulation
Decisions historically made by humans, such as diagnosing health issues, choosing investments, assessing educational achievement, and resolving legal disputes, are increasingly being made by sophisticated algorithms that apply machine learning to large data sets. In the US criminal justice system, for example, algorithms are being used to predict the risk of recidivism. In the private sector, more businesses are turning to algorithmic management to track employee productivity. Automating these decisions deepens biases when they depend on black-box algorithms developed using skewed historical data sets.

The risks from automating bias are exacerbated by the amount of data now generated, which is predicted to nearly quadruple by 2025. The sheer volume of data drives down the cost and difficulty of using algorithms for malicious or manipulative purposes. Individuals and non-state groups have access to algorithms that can spread dangerous content with unprecedented efficiency, speed, and reach. Malicious actors are also becoming more capable of launching misinformation campaigns on a national and global scale, and because individuals and small groups are difficult to track and prosecute, it is harder for authorities to stop the spread of misinformation. The number of countries experiencing organised social media manipulation campaigns increased by 150% between 2017 and 2019.