Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments, and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat, and domain of application) have been constructed in order to visualise all of these subjects. The taxonomies and related category descriptions have been carefully composed by drawing on existing taxonomies, European and international standards, and our own expertise.
In order to identify safety and security related trends, relevant reports and HSD news articles are continuously scanned, analysed, and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of each circle shows the relative weight of its topic, and the filters can be used to further select the content most relevant to you. If you have an addition, question, or remark, drop us a line at info@securitydelta.nl.
Public concerns surrounding the use of AI
The other side of the Artificial Intelligence coin
As is the case with various other technologies, data quality and privacy are still challenging issues when it comes to AI. Perhaps more than any other innovation, though, AI instils fear in people. There are the more widely expressed concerns – ‘Will we lose our jobs to robots?’ – and then there is also anxiety about losing control over AI, its outcomes, and its impacts. As AI models become increasingly advanced, intelligent, and complex, we as humans tend to understand them less – we need to be wary of possible biases that we don’t immediately recognise and can’t explain or even interpret. It has even been found that AI-driven models can make choices that we humans aren’t capable of understanding – the “Computer says no” scenario is an example. Naturally, for many people, that’s a scary thought.