Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments, and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat, and domain of application) have been constructed in order to visualise all of these subjects. The taxonomies and related category descriptions have been carefully composed with reference to other taxonomies, European and international standards, and our own expertise.
In order to identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed, and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of each topic, and the filters can be used to further select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
AI-based content generation
Content Generation
Content generation refers to the ability of an algorithm to generate arbitrary content that looks human-made. Such an algorithm also makes it possible to set constraints on content generation and to have a system imitate or clone certain aspects of an existing piece of content. There are claims that AI-based content generators are becoming so powerful that their release to the public is considered a risk.49
Such claims refer to the Generative Pretrained Transformer 3 (GPT-3), a text synthesis technology released by OpenAI50 in June 2020 that uses deep learning to produce human-like text and is able to adapt and fine-tune its behavior with the addition of a few domain-specific examples. With over 175 billion machine learning parameters (10 times more than its closest competitor, Microsoft’s Turing-NLG), this technology is capable of synthesizing not only English text but also code in several programming languages, and even guitar tablature, allowing for applications such as:
• Generating a full, human-sounding text from a simple title
• Turning the textual description of an application into working code
• Changing the writing style of a text while maintaining the content
• Passing the Turing test for a human-sounding chatbot
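The few-shot adaptation mentioned above can be sketched in miniature: the model is not retrained; instead, a handful of domain-specific examples are prepended to the input, and the model infers the task from context. The prompt layout and example pairs below are illustrative assumptions, not an actual GPT-3 API call.

```python
def build_few_shot_prompt(examples, query):
    """Build an in-context-learning prompt: labelled domain examples
    are prepended so a generative model infers the task from context."""
    lines = []
    for title, article in examples:
        lines.append(f"Title: {title}")
        lines.append(f"Article: {article}")
        lines.append("")
    lines.append(f"Title: {query}")
    lines.append("Article:")  # the model would continue the text from here
    return "\n".join(lines)

# Hypothetical domain examples for title-to-text generation
examples = [
    ("Port expands container capacity", "The port authority announced plans to..."),
    ("New rail link opens", "Commuters celebrated the opening of..."),
]
prompt = build_few_shot_prompt(examples, "City launches cyber hotline")
```

The same pattern, with different example pairs, covers the other applications listed: code generation from descriptions or style transfer simply use different source/target pairs.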
Criminals could thus employ ML to generate and distribute new content, such as (semi-)automatically created, high-quality (spear-)phishing and spam emails in less popular languages.51 In effect, this would further automate and amplify the scope and scale of malware distribution worldwide.
Moreover, such content-generation techniques could significantly enhance disinformation campaigns by automatically combining legitimate with false information, while also learning which kinds of content work best and which are the most widely shared.52
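Mechanically, "learning which kinds of content work best" is a multi-armed bandit problem, which defenders also need to understand in order to model such campaigns. A minimal epsilon-greedy sketch over hypothetical message variants and invented engagement counts:

```python
import random

def epsilon_greedy_select(stats, epsilon=0.1):
    """Usually pick the variant with the highest observed share rate,
    but explore a random variant with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v]["shares"] / max(stats[v]["shown"], 1))

# Hypothetical engagement counts for two message variants
stats = {
    "variant_a": {"shown": 100, "shares": 7},
    "variant_b": {"shown": 100, "shares": 12},
}
best = epsilon_greedy_select(stats, epsilon=0.0)  # exploitation only
```

Over many rounds, such a loop concentrates distribution on whichever content is most widely shared, which is the amplification effect the passage describes.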
The ability to generate working code from a mere textual description lowers the knowledge barrier required
to become a programmer and could foster a new generation of “script kiddies” — that is, people with low
technical knowledge but malicious intentions who exploit ready-made tools to perform malicious actions.
Text content synthesis can also be employed to generate semantically sound content for fake websites
and more importantly, to reproduce a specific text style. In particular, style-preserving text synthesis is
a technique that employs an AI system to generate a text that imitates the writing style of an individual.
Notably, the AI system does not need a large volume of data for its training, but rather only a few samples of an individual’s writing style. As a result, the technique has particular implications for business email compromise (BEC), as it gives a malicious actor the opportunity, for instance, to imitate the writing style of a company’s CEO in order to trick target recipients inside the company into complying with fraudulent requests.
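Because style-preserving synthesis targets an individual's writing habits, one defensive angle is simple stylometry: compare an incoming message's character n-gram profile against a baseline built from the purported sender's past messages. The sketch below is a toy illustration under that assumption, not a production BEC detector; the sample texts are invented.

```python
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Relative frequencies of character n-grams in a text."""
    text = text.lower()
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

# Hypothetical baseline from past emails vs. an incoming message
baseline = ngram_profile("Please review the attached report before Friday.")
incoming = ngram_profile("Please review the attached invoice before Monday.")
score = cosine_similarity(baseline, incoming)  # closer to 1.0 = more similar
```

A low score against the sender's baseline could then be one weak signal, among others, that a message was not written by the person it claims to be from.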
Indeed, the capabilities of technologies such as GPT-3 lead us to believe that such technology could truly be, in terms of impact, the next deepfake.