Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat and domain of application) have been constructed in order to visualise all of these subjects. The taxonomies and related category descriptions have been carefully composed, drawing on other taxonomies, European and international standards and our own expertise.
In order to identify safety- and security-related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of each topic; the filters can be used to further select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Deployment of facial recognition technology (FRT) is on the rise globally
The deployment of facial recognition technology (FRT) is on the rise globally, with its market value expected to grow from $3.2 billion in 2019 to $7 billion in 2024.1 This seemingly inevitable expansion of FRT brings with it the inescapable concern of misuse of facial data. Such concerns stem from the unique and sensitive nature of face data, combined with the ease with which the technology can be deployed and its data misused. Unsurprisingly, countermeasures against FRT have emerged: anti-facial recognition makeup, camouflage, hairstyles,2 masks3 and glasses4 have all materialised in reaction to the threat of FRT being exercised in public. There are also understandable reservations about the legal parameters (or lack thereof) governing the use of FRT. In regard to AI-powered technologies, it has been noted that “[a]ny tendency to put blind faith in what in effect remains largely untrusted technology can lead to misleading and sometimes dangerous conclusions.”5 The same premise applies to FRT. As a technology carrying an increased risk of mass surveillance, FRT can only be trusted on the basis of the reliability and accuracy of the technology itself, together with its responsible, fair and transparent deployment.6