Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat and domain of application) have been constructed in order to visualize all of these subjects. The taxonomies and related category descriptions have been carefully composed, drawing on other taxonomies, European and international standards, and our own expertise.
In order to identify safety and security related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of the topic; the filters can be used to further select the most relevant content for you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Criminals are taking steps towards utilising AI-supported cyberattack techniques
AI Malware
The use of AI to improve the effectiveness of malware is still in its infancy. Research is still carried out at the academic level, and attacks are mostly theoretical, crafted as proofs of concept by security researchers. Nonetheless, the AI-supported or AI-enhanced cyberattack techniques that have been studied are proof that criminals are already taking steps to broaden the use of AI. Such attempts therefore warrant close observation, both to counter them now and to prepare for future attacks before these techniques become mainstream.
Currently, malware developers can embed AI in ways that are difficult for researchers and analysts to detect directly. As a consequence, it is only possible to search for the observable signs that would be expected from AI malware activity. In fact, one type of malware-related AI exploit involves AI-based techniques aimed at improving the efficacy of “traditional” cyberattacks.
For example, in 2015, researchers demonstrated how to craft email messages that bypass spam filters.16 The demonstrated system uses a generative grammar capable of creating a large dataset of email texts with a high degree of semantic quality. These texts are then used to fuzz the antispam system, adapting to different spam filters in order to identify content that the filters no longer detect.
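The loop described above can be sketched as follows. The keyword-based filter, the phrase slots and all message text here are invented stand-ins for illustration only; the 2015 system used a far richer generative grammar against real antispam engines:

```python
import itertools

# Toy stand-in for a spam filter: flags messages containing known keywords.
# (Hypothetical; real filters use statistical models, not a fixed blocklist.)
BLOCKLIST = {"free money", "act now", "winner"}

def spam_filter(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

# A tiny generative grammar: each slot offers semantically similar phrasings,
# so every generated message carries roughly the same meaning.
SLOTS = [
    ["You have been selected", "Congratulations"],
    ["to receive a complimentary reward", "for free money", "for a no-cost gift"],
    ["act now", "respond today", "reply at your convenience"],
]

def generate_candidates():
    for parts in itertools.product(*SLOTS):
        yield ", ".join(parts) + "."

def find_undetected():
    """Fuzz the filter with generated texts; keep those that slip through."""
    return [msg for msg in generate_candidates() if not spam_filter(msg)]
```

The attacker's insight is that the filter itself acts as an oracle: any variant it passes is, by definition, deliverable spam.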
In 2017, during Black Hat USA, an information security conference,17 researchers demonstrated how ML techniques can be used to identify potential attack targets by analyzing years’ worth of data related to business email compromise (BEC) attacks, a form of cybercrime that uses email fraud to scam organizations. This system exploits both data leaks and openly available social media information. Notably, based on this historical data, the system can accurately predict whether an attack will be successful.
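A heavily simplified sketch of the idea follows. The feature names and toy records are invented for illustration (the real system mined data leaks and social media), and a small hand-rolled logistic regression stands in for the conference system's ML model:

```python
import math

# Toy historical BEC records: each tuple is (target_is_finance,
# spoofed_exec_domain, prior_data_leak) with label 1 = attack succeeded.
# Entirely hypothetical data, for illustration only.
HISTORY = [
    ((1, 1, 1), 1), ((1, 1, 0), 1), ((1, 0, 1), 1), ((0, 1, 1), 1),
    ((0, 0, 1), 0), ((0, 1, 0), 0), ((1, 0, 0), 0), ((0, 0, 0), 0),
]

def fit_logistic(data, epochs=2000, lr=0.5):
    """Fit a tiny logistic-regression model with plain gradient descent."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, x):
    """Estimated probability that an attack with features x succeeds."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Trained on such history, the model scores prospective targets, which is what makes the approach useful for target selection.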
At the same security conference, researchers introduced AVPASS,18 a tool designed to infer, for any given antivirus engine, its detection features and detection rule chain. The tool then uses this inference to disguise Android malware as a benign application. It should be emphasized that AVPASS achieved a 0% detection rate on the online malware analysis service VirusTotal with more than 5,000 Android malware samples. In other words, AVPASS created operationally undetectable malware.
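AVPASS's querying strategy can be illustrated against a toy black-box engine. The hidden signature set, the feature names and the one-feature-at-a-time probe are assumptions made here for illustration; the real tool infers much more complex detection rule chains:

```python
# Toy stand-in for an antivirus engine: it flags a sample whenever any
# feature from a hidden signature set is present. (Hypothetical rule;
# AVPASS infers a real engine's features and rule chain by querying it.)
HIDden = None  # placeholder removed below

HIDDEN_SIGNATURES = frozenset({"sends_sms", "loads_dex_at_runtime"})

def av_detects(features):
    return bool(set(features) & HIDDEN_SIGNATURES)

def infer_flagged_features(sample, query):
    """Probe the engine one feature at a time to infer which ones it flags."""
    return {f for f in sample if query({f})}

def disguise(sample, query):
    """Strip (in practice: obfuscate) every feature the engine flags."""
    return set(sample) - infer_flagged_features(sample, query)
```

Because the inference uses only the engine's verdicts, the same loop works without any knowledge of the engine's internals, which is what makes the black-box setting realistic.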
At present, antivirus vendors also look at ML as their tool of choice for improving their malware detection techniques, thanks to its ability to generalize to new, previously unseen types of malware. However, it has been proven that ML-based detection systems can be tricked by an AI agent designed to probe and find weak spots.19 Researchers, for instance, have been able to craft malware with features that allow it to remain undetected even by ML-based antivirus engines.
The system uses reinforcement learning to play a competitive game against the antivirus detector. It selects functionality-preserving modifications of a malicious Windows file and introduces variants that increase the chances of a malware sample passing undetected.
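The adversarial loop can be sketched as a toy tabular Q-learning agent playing against a scoring detector. The detector, its threshold, and the three mutation actions are all invented for this sketch; the published system operates on real Windows PE files with a far larger action space:

```python
import random

# Toy detector: a higher score means more suspicious; the verdict is
# "malicious" when the score reaches 1.0. Traits and mutations below are
# illustrative stand-ins, not real PE-file features.
def detector_score(state):
    packed, padded, benign_imports = state
    score = 1.6
    if packed:
        score -= 0.5
    if padded:
        score -= 0.3
    if benign_imports:
        score -= 0.5
    return score

ACTIONS = [0, 1, 2]  # 0: pack code, 1: pad overlay, 2: add benign imports

def mutate(state, action):
    # Functionality-preserving edit: each action only adds or obfuscates.
    s = list(state)
    s[action] = 1
    return tuple(s)

def train_agent(episodes=2000, eps=0.3, lr=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning: reward 1 when a variant slips past the detector."""
    rng = random.Random(seed)
    q = {}
    for _ in range(episodes):
        state = (0, 0, 0)
        for _ in range(3):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q.get((state, x), 0.0))
            nxt = mutate(state, a)
            reward = 1.0 if detector_score(nxt) < 1.0 else 0.0
            best_next = max(q.get((nxt, x), 0.0) for x in ACTIONS)
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + lr * (reward + gamma * best_next - old)
            state = nxt
            if reward:
                break
    return q

def evade(q):
    """Greedy rollout with the learned policy until undetected."""
    state = (0, 0, 0)
    for _ in range(3):
        if detector_score(state) < 1.0:
            break
        a = max(ACTIONS, key=lambda x: q.get((state, x), 0.0))
        state = mutate(state, a)
    return state
```

The key property mirrored here is that the agent never sees the detector's internals, only its verdicts, yet still learns which mutation sequences evade it.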
Finally, AI can also enhance traditional hacking techniques by introducing new ways of performing attacks that would be difficult for humans to predict. At DEF CON 2017, one of the largest underground hacking conventions, participants Dan Petro and Ben Morris presented DeepHack,21 an open-source AI tool aimed at performing web penetration testing without having to rely on any prior knowledge of the target system. DeepHack implements a neural network capable of crafting SQL injection strings with no information other than the target server responses, thereby automating the process of hacking web-based databases.
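DeepHack's core feedback loop, stripped of the neural network, can be illustrated with a toy endpoint. The endpoint, the payload list and the response strings are hypothetical; the real tool generates injection strings itself and judges success purely from server responses:

```python
# Toy vulnerable endpoint: simulates naive string-built SQL over a tiny
# in-memory "table". (Hypothetical stand-in; DeepHack instead trains a
# neural network to craft payloads from the responses of a real server.)
USERS = {"alice": "s3cret"}

def login_endpoint(username: str) -> str:
    query = f"SELECT * FROM users WHERE name = '{username}'"
    # Simulated injectable interpreter: a tautology matches every row.
    if "' OR '1'='1" in query:
        return "200 OK: 1 row(s)"
    return "200 OK: 1 row(s)" if username in USERS else "200 OK: 0 row(s)"

# Candidate payloads the search loop will try (illustrative only).
CANDIDATES = ["admin", "admin'--", "x' OR '1'='1", "x' AND '1'='2"]

def probe(endpoint, candidates):
    """Response-guided search: keep payloads whose response signals success,
    using nothing but the server's replies as feedback."""
    return [p for p in candidates if "1 row" in endpoint(p)]
```

The point of the sketch is the feedback channel: no schema, credentials or source code is needed, only the difference between the server's responses.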
Following a similar approach, DeepExploit22 is a system that is capable of fully automating penetration testing by using ML. The system interfaces directly with Metasploit, a penetration testing platform, for all the usual tasks of information gathering and crafting and testing an exploit. However, it leverages a reinforcement learning algorithm named Asynchronous Advantage Actor-Critic (A3C)23 in order to first learn, from intentionally vulnerable systems such as Metasploitable, which exploit should be used under which conditions, before applying what it has learned to the target server.
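The learn-in-the-lab-then-apply idea can be sketched as a contextual bandit. The service fingerprints and exploit names below are placeholders rather than real Metasploit modules, and A3C is replaced by a much simpler value-update rule:

```python
import random
from collections import defaultdict

# Toy practice lab (a stand-in for intentionally vulnerable systems such as
# Metasploitable): each service fingerprint has exactly one exploit that
# works. All names are hypothetical, not real Metasploit module names.
LAB = {
    "ftp-old-banner": "exploit_a",
    "smb-legacy": "exploit_b",
    "distcc-daemon": "exploit_c",
}
EXPLOITS = ["exploit_a", "exploit_b", "exploit_c"]

def train_in_lab(episodes=1500, eps=0.3, lr=0.1, seed=1):
    """Contextual bandit: per (service, exploit) pair, learn the expected
    success rate from binary success/failure feedback in the lab."""
    rng = random.Random(seed)
    q = defaultdict(float)
    for _ in range(episodes):
        service = rng.choice(sorted(LAB))
        if rng.random() < eps:
            exploit = rng.choice(EXPLOITS)
        else:
            exploit = max(EXPLOITS, key=lambda e: q[(service, e)])
        reward = 1.0 if LAB[service] == exploit else 0.0
        q[(service, exploit)] += lr * (reward - q[(service, exploit)])
    return q

def choose_exploit(q, service):
    """Against a new target, pick the exploit the agent expects to work
    for the observed service fingerprint."""
    return max(EXPLOITS, key=lambda e: q[(service, e)])
```

The separation between the two functions mirrors DeepExploit's workflow: training happens safely against lab systems, and only the learned fingerprint-to-exploit mapping is used against the actual target.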