Ethical aspects of biometric identification, categorisation and detection
Ethical Aspects of Biometric Identification
The main ethical issue raised specifically by biometric identification is related to the enrolment phase, i.e. the creation and storage of a unique template that identifies a particular person. The enrolment phase and the deployment phase may overlap where templates are refined during deployment, e.g. through supervised learning in the field. Creating unique templates means transforming unique physical features of a human being into digital data, leading to a ‘datafication’ of humans. Since the features that uniquely identify a person are part of that person's body, their collection and use interfere with a human’s personal autonomy and dignity. Once this template is created and stored, anyone who comes into possession of it in the future has the power to trace and recognise that individual anywhere in the world, potentially for any purpose. The individual has no way to escape this, as ‘strong’ biometric identifiers cannot normally be changed. Together with data security concerns, this gives the collection and storage of biometric templates a significant potential for harm.
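The enrolment-and-matching flow described above can be sketched in a few lines. Everything here is an illustrative assumption, not any real system: the ‘template’ is simply a unit-normalised vector, the feature data is randomly generated, and the decision threshold is arbitrary.

```python
import math
import random

def normalise(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def enrol(features):
    # The stored 'template': here simply a unit-normalised feature vector.
    # Real systems derive templates from learned embeddings; this is an
    # illustrative stand-in, not any particular vendor's method.
    return normalise(features)

def match_score(template, probe):
    # Cosine similarity between the stored template and a fresh capture.
    return sum(a * b for a, b in zip(template, normalise(probe)))

rng = random.Random(0)
# Synthetic 128-dimensional 'biometric' feature vectors.
alice = [rng.gauss(0, 1) for _ in range(128)]
template = enrol(alice)                              # created once, at enrolment
probe_same = [x + rng.gauss(0, 0.1) for x in alice]  # same person, noisy re-capture
probe_other = [rng.gauss(0, 1) for _ in range(128)]  # a different person

THRESHOLD = 0.8  # arbitrary decision threshold for this sketch
print(match_score(template, probe_same) >= THRESHOLD)
print(match_score(template, probe_other) >= THRESHOLD)
```

The sketch also makes the core concern concrete: once `template` is stored, anyone holding it can run `match_score` against any future capture of the same person, and the person cannot revoke the underlying features.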
Apart from this, ethical issues raised by the use of biometric identification methods in public spaces do not relate specifically to biometrics alone, but to large-scale surveillance of individuals as such (i.e., they are similar to issues raised by, for example, large-scale surveillance using mobile device signals), or otherwise to the purposes for which the technology is used and how it is used. The gravity of the ethical issues raised depends, in particular, on:
- the concrete purpose of identification;
- the place, manner or dimension of identification;
- the transparency of the identification measures taking place;
- the reactions (e.g. arrest) triggered by a high matching score;
- the evidentiary force ascribed to a high matching score and the possibilities for the individual to demonstrate error or identity fraud; and
- any storage and further processing of matching data (e.g. for the creation of mobility profiles).
Issues of discrimination or stigmatisation arise mostly as part of a more general deficiency of a system. For instance, facial recognition must not be less accurate for people of colour, and any diminished accuracy must, in any case, be duly taken into account in the context of the last three points mentioned (as must any other lack of accuracy).
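Why diminished accuracy must feed into how matching scores are acted upon can be illustrated on synthetic scores. All numbers below are assumptions for illustration (group B's genuine scores are simply assumed to run lower), not measurements of any real system.

```python
import random

def error_rates(scores_genuine, scores_impostor, threshold):
    # False non-match rate: genuine pairs wrongly rejected.
    fnmr = sum(s < threshold for s in scores_genuine) / len(scores_genuine)
    # False match rate: impostor pairs wrongly accepted.
    fmr = sum(s >= threshold for s in scores_impostor) / len(scores_impostor)
    return fnmr, fmr

rng = random.Random(1)
# Synthetic matching scores; the lower mean for group B encodes the
# assumed accuracy gap -- illustrative, not measured data.
genuine_a = [rng.gauss(0.90, 0.05) for _ in range(1000)]
genuine_b = [rng.gauss(0.82, 0.05) for _ in range(1000)]
impostor = [rng.gauss(0.30, 0.10) for _ in range(1000)]

for name, genuine in (("group A", genuine_a), ("group B", genuine_b)):
    fnmr, fmr = error_rates(genuine, impostor, threshold=0.85)
    print(name, f"FNMR={fnmr:.1%}", f"FMR={fmr:.1%}")
```

A single fixed threshold yields sharply different false rejection rates for the two groups, which is why any accuracy gap has to be reflected in the reactions triggered by a match, its evidentiary force, and any further processing of matching data.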
Ethical Aspects of Biometric Categorisation
The main ethical issues raised by the biometric categorisation of human individuals (e.g. allocation to risk groups within an airport security system, assessment of job applicants) are related to the development and concrete use of categorisation systems. In particular, ethical issues arise in relation to the definition of categories, the associated assumptions and the conclusions or reactions triggered by the system, leading to risks such as discrimination, stigmatisation, and the drawing of inappropriate inferences. Further risks include manipulation and exploitation of vulnerabilities.
Most ethical issues raised by the use of biometric categorisation do not relate specifically to biometrics, but rather, in particular, to:
- the concrete purpose, context and conditions of categorisation;
- the degree of sensitivity of the data collected and of the inferences drawn;
- the accuracy of the system, the appropriateness of inferences drawn, and any control mechanisms, including human oversight;
- the gravity (including potential irreversibility) of consequences triggered by the system;
- the awareness of the individual of the categorisation and the possibility for the individual to challenge the output; and
- any storage and further processing of data for profiling purposes.
It follows that the fundamental rights risk to be addressed in this context is primarily associated with standardised profiling and/or scoring as a means to achieve a given end in a given social context. The fact that categorisation includes biometrics (e.g. that a person’s age is inferred from wrinkles in their face rather than from their shopping history) adds some ethical relevance, as an individual cannot easily change biometric traits, but is not the decisive factor (as compared, e.g., with age-specific targeting that might follow categorisation). Generally speaking, biometric inferences, i.e. inferences drawn with regard to permanent or long-term physical, physiological or behavioural characteristics, may be ethically even more relevant than the use of biometric techniques as such.
Ethical Aspects of Biometric Detection
The main ethical issues raised by the biometric detection of human conditions (e.g. intention to commit a crime, fear, fatigue or illness) follow from its potentially intrusive nature, often analysing very intimate traits, some of them beyond the individual’s consciousness. Further risks include manipulation and exploitation of detected vulnerabilities. In addition, previously unknown conditions, when revealed to the individual, may cause stress or anxiety.
Most ethical issues raised by the use of biometric detection do not relate specifically to the fact that biometric data are used for inferring a condition, but to detection of that condition as such (i.e., they are largely identical to issues raised by, for example, detection on the basis of a shopping or browsing history). Again, the fact that an individual has little control over their physical, physiological or behavioural signals, many of which will be subconscious, may give their use to detect conditions a special ethical dimension.
Fundamental rights risks posed by biometric detection techniques are very similar to those posed by biometric categorisation, which does not come as a surprise, as conditions detected often serve as a basis for biometric categorisation. However, within the field of biometric detection systems, it is systems detecting human emotions, thoughts and intentions that deserve particular attention from an ethical and regulatory perspective, potentially calling for a new set of ‘neuro-rights’ (such as the right to mental privacy and mental integrity).