Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on a range of subjects in the safety and security domain, to identify relevant developments, and to connect knowledge and organisations. Because the safety and security domain encompasses a vast number of subjects, four taxonomies (type of threat or opportunity, victim, source of threat, and domain of application) have been constructed to visualise them all. The taxonomies and related category descriptions have been carefully composed, drawing on existing taxonomies, European and international standards, and our own expertise.
To identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed, and classified by hand according to the four taxonomies. This yields a wide array of observations, which we call ‘Trend Snippets’. Combined, multiple Trend Snippets can provide insight into safety and security trends. The size of each circle shows the relative weight of its topic, and the filters can be used to narrow the view to the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
1. Compromising geopolitics: New threats emerge from disinformation and technology evolution
Emerging technologies such as artificial intelligence (AI) open new avenues for geopolitical activity, including disinformation. One menacing use of AI is the creation of ‘deepfakes’: high-quality forged images or videos that could be used for anything from discrediting or blackmailing a political opponent, rival company or extortion target, to causing worldwide panic with a video of a head of state purportedly claiming to have launched a nuclear weapon.
The propagation of synthetic media content such as deepfakes is likely to accelerate as fabrication tools become more accessible and widespread. This could spill over into the cyber domain, where both politically and financially motivated actors could leverage deepfakes during target reconnaissance on social networks or in social engineering campaigns, for example.
As threat actors and groups focus more on interfering with AI models, they are likely to deploy adversarial AI: corrupting the ability of machine-learning algorithms to interpret system inputs and exercising control over their behaviour. Adversarial attacks on deep-learning applications in natural-language processing could enable the manipulation of algorithms that determine sentiment, gather intelligence, or filter spam and phishing.
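The mechanics of such an adversarial perturbation can be sketched with a deliberately simplified linear model, a toy stand-in for the deep-learning systems discussed above. All names here (`classify`, `adversarial_shift`) are illustrative, not from any real library, and real attacks operate on far more complex models:

```python
# Toy illustration: an adversarial perturbation that flips the decision of a
# simple linear classifier. Real adversarial-AI attacks target deep NLP models.

def classify(weights, x, bias=0.0):
    """Return 1 (e.g. 'legitimate content') if the linear score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def adversarial_shift(weights, x, eps):
    """Fast-gradient-sign-style step: move each feature by eps against the
    sign of its weight, pushing the score down to flip the decision."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

w = [1.0, -0.5]                            # model weights
x = [0.3, 0.2]                             # original input: score 0.2 -> class 1
x_adv = adversarial_shift(w, x, eps=0.3)   # perturbed input: score -0.25 -> class 0
```

A small, targeted shift in the input is enough to change the model's output, which is exactly the vulnerability that adversarial training (below) tries to close.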
We encourage organisations to combine multiple approaches to help ensure robust, secure AI, in particular rate limiting, input validation, robust model structuring, and adversarial training. Media sources have named various tools that help detect inauthentic videos.