Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on a range of subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. Because the domain encompasses a vast number of subjects, four taxonomies (type of threat or opportunity, victim, source of threat and domain of application) have been constructed to map them. The taxonomies and related category descriptions have been carefully composed with reference to existing taxonomies, European and international standards, and our own expertise.
To identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’; combined, multiple Trend Snippets can provide insight into safety and security trends. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Key transformative technology that will contribute to the changing dynamics of cyberspace: Artificial intelligence/advanced machine learning
The increased pervasiveness of artificial intelligence (AI) across a range of often critical business processes and functions places heavy reliance on the underlying algorithms. However, there is a lack of assurance about how these algorithms are designed, developed and used. AI is already being deployed by both network defenders and those attacking them. It is difficult to tell where the balance of advantage lies.
New tools are required to protect AI-based processes and to enable defenders to collaborate against the whole range of AI-enabled threats. Security principles for AI are needed that cover secure design, life-cycle management and incident management. Such principles can provide the basis of a more robust assurance regime to support the governance of AI-associated cyber risks.
The growing intelligence of autonomous machines
The global race to develop AI technologies is accelerating, with rapid developments in its applications across swathes of the global economy. The field of AI aims to build reasoning systems: technologies that can perform tasks normally requiring human intelligence (such as decision-making, visual perception and speech recognition) and adapt to changing circumstances.
What AI is, and its applications
Machine intelligence (“strong AI”) would be achieved when unaided machines can “think” exactly like humans, creating advantages across reasoning tasks in general. This is unlikely to be achieved in the near future. More immediately relevant is “narrow AI”, which focuses on creating reasoning systems that achieve specific advantages in specific applications.
Substantial investments are being made in AI research and development globally, in particular using machine learning techniques. Global spending on AI was estimated to be $37.5 billion in 2019, and is forecast to reach $97.9 billion in 2023, with China and the US dominating global AI funding. The emerging technologies are capable of faster, more precise analytics and decision-making, and of deriving insights from big data, outperforming traditional digital approaches and some aspects of human capabilities in diverse fields such as transport, manufacturing, finance, commerce and healthcare.
Large corporations in every industry are seeking to create value by taking advantage of new means of data exploitation, business-process improvements (e.g. in sales, production and supply-chain management), cost-efficiency gains and the ability to enhance customer experiences. It is anticipated that within the next 5–10 years, AI systems will play an increasingly critical and unsupervised role within organizations, including the use of robotics in physical manufacturing tasks, for example.
The shifting attacker-defender balance
AI is already being deployed by both network defenders and those attacking them. It is difficult to tell where the balance of advantage will ultimately lie.
Dangerous attackers: speed and scale, precision and stealth
The first generation of AI-enabled offensive tools is emerging. Evidence of AI being used by attackers in the wild is limited but growing. As the technology matures and becomes more widely accessible over the next few years, the malicious use of AI will accelerate and become increasingly sophisticated. Adversaries will take advantage of enhanced capabilities throughout the stages of a cyberattack.
• Speed and scale: By automating attacks or attack components, attackers will be able to speed up and scale up their operations. The range of threats is likely to expand as automation reduces the need for expertise or effort.
• Precision: Attackers will take advantage of the opportunity to craft more precise attacks, by using deep-learning analytics to predict victims’ attack surfaces and game their defence methods.
• Stealth: Attackers will exploit AI in order to evade detection and elimination: to be “stealthy”. A range of evasion attacks, in which malware evolves to bypass security controls, has already been shown to be feasible (a toy illustration of why such evasion works follows this list). In the long term, offensive AI may create completely new ways of attacking (using reinforcement learning, for example) – similar to how AlphaGo found completely new tactics and strategies in the “meta-game” of Go.
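To make this concrete, below is a minimal, self-contained sketch (not taken from the report) of why evasion against machine learning-based detection is feasible: a linear classifier is trained on purely synthetic “benign vs. malicious” feature vectors, and a sequence of small, targeted changes to one input flips its label. The data, feature meanings and step size are invented for illustration only.

```python
# Toy illustration: evading a linear ML detector on synthetic data.
# All data and feature semantics here are invented; no real malware involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic 2-D features, e.g. (entropy-like score, call-frequency-like score).
benign = rng.normal(loc=[0.2, 0.3], scale=0.1, size=(200, 2))
malicious = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 1 = flagged as malicious

clf = LogisticRegression().fit(X, y)

# Take one correctly flagged "malicious" sample and nudge it against the
# model's weight vector until the classifier stops flagging it.
x = malicious[0].copy()
step = -0.05 * clf.coef_[0] / np.linalg.norm(clf.coef_[0])
steps = 0
while clf.predict([x])[0] == 1 and steps < 100:
    x += step
    steps += 1

print(f"label flipped after {steps} small steps; total change: {x - malicious[0]}")
```

The point of the sketch is that a detector's decision boundary is itself learnable: an adversary who can query or approximate the model can search for minimal input changes that cross it.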
Opportunities for defenders
While it creates clear opportunities for attackers, AI also has real potential to enhance the speed, precision and impact of operational defence, and to support organizational resilience. AI-enabled defences are being researched and developed, and AI is also being used to support human defenders by augmenting and automating tasks usually performed by analysts (e.g. threat triage). These approaches are becoming ever more deeply integrated into defensive responses within the cybersecurity ecosystem. The global value of AI in cybersecurity is predicted to reach $46 billion by 2027.
As described, AI could be used by an attacker to predict the defender’s moves. For defenders, an improved analytical ability to predict threat actors and their attack strategies could enable better orchestration of defensive moves. This gamification is part of an accelerating arms race between AI attack and defence methods: despite the promise of AI-based defences, it has already been shown that some can be circumvented by adversarial AI-based attacks. For example, intelligent agents capable of manipulating malware to bypass machine learning-based defences have been developed, and attacks against machine learning-based security systems are becoming more prevalent. In fact, the community should explore the value of automating security policies, detection and mitigation more broadly, using AI.
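As an illustration of the analyst-support use case mentioned above (threat triage), the following sketch uses an off-the-shelf anomaly detector to score synthetic event records so that the most unusual ones surface first for human review. It assumes scikit-learn; the feature set and data are invented and do not describe any specific product.

```python
# Minimal sketch of AI-assisted threat triage: an unsupervised anomaly
# detector ranks events so analysts review the most unusual ones first.
# Features and data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic per-event features: (bytes transferred, distinct ports, failed logins).
normal_events = rng.normal(loc=[500, 3, 1], scale=[100, 1, 1], size=(1000, 3))
odd_events = rng.normal(loc=[5000, 40, 20], scale=[500, 5, 5], size=(5, 3))
events = np.vstack([normal_events, odd_events])

detector = IsolationForest(random_state=0).fit(events)
scores = detector.score_samples(events)  # lower score = more anomalous

# Surface the ten most anomalous events for human review.
for idx in np.argsort(scores)[:10]:
    print(f"event {idx}: score={scores[idx]:.3f}, features={events[idx].round(1)}")
```

Ranking rather than binary alerting matters here: it lets scarce analyst attention go to the strangest events first, which is the triage-augmentation role described above.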
Expanded attack surface and manipulating the algorithm
AI-driven systems and processes are quickly becoming part of the vital assets of major enterprises, performing increasingly critical functions with decreasing human oversight. This is expanding the scale and criticality of the attack surface that could be exploited through adversarial AI. Adversaries will seek to manipulate or disrupt the processes of organizations, and the infrastructure relied on by society, by altering the integrity of algorithms and of the data that feeds them. Some AI algorithms have already been shown to be open to manipulation and data-poisoning by attackers.
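A toy experiment can show what data-poisoning means in practice. In the sketch below, corrupting the labels of one attacker-chosen region of a synthetic training set measurably degrades the resulting classifier; the dataset, poisoning rule and model are assumptions made purely for illustration.

```python
# Toy label-poisoning experiment on synthetic data: consistently mislabelling
# an attacker-chosen region of feature space degrades the trained model,
# the kind of training-data integrity attack described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # simple ground-truth rule
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# Poison the training labels: everything in an attacker-chosen region of
# feature space is consistently relabelled as class 0 ("benign").
poisoned = y_tr.copy()
poisoned[X_tr[:, 0] > 1.0] = 0
poisoned_acc = LogisticRegression().fit(X_tr, poisoned).score(X_te, y_te)

print(f"accuracy trained on clean labels:    {clean_acc:.3f}")
print(f"accuracy trained on poisoned labels: {poisoned_acc:.3f}")
```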
As these algorithms are used in increasingly critical functions, this could have grave consequences (including physical harm, as autonomous cyber-physical systems emerge). Furthermore, there is a risk that decisions made on the basis of complex probabilistic algorithms and huge quantities of data could lack “explainability”, leaving the leaders accountable for those decisions unable to verify or justify their correctness, or to identify when they have been subverted.
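One commonly proposed aid for the explainability problem is to measure which inputs a trained model actually relies on. The sketch below uses permutation importance from scikit-learn for this; the model, data and feature names are invented for illustration and are not a prescription from the source text.

```python
# Sketch of one explainability aid: permutation importance ranks which input
# features a trained model actually relies on, giving decision-makers
# something concrete to inspect and verify. Data and feature names invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 4))
y = (X[:, 2] > 0).astype(int)  # only feature 2 truly drives the outcome

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

feature_names = ["src_reputation", "packet_rate", "payload_score", "time_of_day"]
for i, name in enumerate(feature_names):
    print(f"{name}: importance={result.importances_mean[i]:.3f}")
```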
Attackers will be able to apply AI to get more value from stolen data, and also to create more harm by using it to refine cyberattacks. In a world in which the quality of AI algorithms’ training and accuracy is increasingly important, data becomes an enabler and has much greater economic value because of what it allows its owner to do; data is therefore likely to be increasingly heavily targeted.
What is truth?
As digitally manipulated videos, images and audio (“deepfakes”) become increasingly sophisticated, convincing and difficult to distinguish from reality, and more widespread as the technologies for creating them become more accessible, there is a risk that “the truth” will become increasingly difficult to establish. Actors may take advantage of the opportunity to generate realistic and finely targeted fake news and manipulated messaging, distorting public perception of the truth and altering political or economic outcomes. Uses in disinformation campaigns have already been seen, and deepfakes are likely to become a tool in ransomware attacks aimed at individuals.
Deepfakes may also be exploited to create new cyberattack vectors. For example, voice-mimicking software has allegedly already been used in a major theft. Targeted manipulation of victims to carry out an attacker’s goals may become increasingly convincing and effective as the underlying technologies develop.