Trends in Security Information
The HSD Trendmonitor is designed to provide access to relevant content on various subjects in the safety and security domain, to identify relevant developments and to connect knowledge and organisations. The safety and security domain encompasses a vast number of subjects. Four relevant taxonomies (type of threat or opportunity, victim, source of threat and domain of application) have been constructed in order to visualise all of these subjects. The taxonomies and related category descriptions have been carefully composed on the basis of other taxonomies, European and international standards and our own expertise.
In order to identify safety and security related trends, relevant reports and HSD news articles are continuously scanned, analysed and classified by hand according to the four taxonomies. This results in a wide array of observations, which we call ‘Trend Snippets’. Multiple Trend Snippets combined can provide insights into safety and security trends. The size of the circles shows the relative weight of a topic, and the filters can be used to select the content most relevant to you. If you have an addition, question or remark, drop us a line at info@securitydelta.nl.
Algorithms are actively used in many parts of the public sector all over Europe
3.1.2 Examples of the use of algorithms in the public sector

3.1.2.1 Labour market policy

In several countries, predictive profiling algorithms are used (or are projected to be used) by government agencies and other public bodies to support their labour market policies.297 This is often done with the objective of identifying and predicting the job opportunities of certain unemployed persons or estimating their need for training. In Austria, for instance, the Labour Market Service (AMS) has developed an algorithm that, based on previous statistical labour market data, can be used to determine the future labour market chances of job applicants.298 The algorithmic prognosis can help determine the assignment of job applicants to one of three pre-defined groups. More expensive resources, such as one-on-one job counselling and access to job training programmes, are then allocated to a considerably greater extent to persons in the first two groups, who are assumed to have better chances in the labour market. The third group contains persons whose labour market chances are assessed as low by the algorithm, and who will be offered a different type of support in accordance with their profile.299
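To make this grouping logic more concrete, the sketch below maps a predicted employment chance onto three support groups. The features, weights and thresholds are hypothetical assumptions made purely for illustration; the actual AMS model and its cut-off values are not reproduced here.

```python
# Illustrative sketch of a three-group profiling rule of the kind described for the
# Austrian AMS. All feature names, weights and thresholds are assumed, not the real model.

def predicted_reemployment_chance(profile: dict) -> float:
    """Toy score in [0, 1] built from a few assumed features (not the real AMS model)."""
    score = 0.5
    score += 0.1 * min(profile.get("years_experience", 0), 5) / 5
    score -= 0.2 * min(profile.get("months_unemployed", 0), 24) / 24
    score += 0.1 if profile.get("completed_training") else 0.0
    return max(0.0, min(1.0, score))

def assign_group(chance: float) -> str:
    """Map the predicted chance onto one of three support groups (thresholds assumed)."""
    if chance >= 0.66:
        return "group 1: high prospects"    # counselling and training prioritised
    if chance >= 0.25:
        return "group 2: medium prospects"  # counselling and training prioritised
    return "group 3: low prospects"         # offered a different type of support

profile = {"years_experience": 3, "months_unemployed": 10, "completed_training": False}
chance = predicted_reemployment_chance(profile)
print(f"{assign_group(chance)} (predicted chance: {chance:.2f})")
```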
A similar AI program is already in use by the public employment service of Flanders (VDAB) in Belgium; it can help predict the chance that an unemployed job seeker will not find a job within the next six months. The objective is to improve the identification of job seekers in need of personalised support.300 Another example of this type of algorithm can be seen in Poland, where the Ministry of Labour and Social Policy has introduced a system based on profiling the unemployed to decide on how to distribute labour market programmes.301 On the basis of the result, the system can assign respondents to one of three profiles, differing in the assessed degree of readiness to start work and the type of assistance offered by the employment office.

The use of algorithms in employment policy can further be illustrated by the online application ‘My employability’ used by the Croatian Employment Service (CES). This application has been developed to help job seekers calculate the probability of finding employment within the next 12 months, based on their replies to a set of questions that correspond to a number of given parameters.302 The parameters include county of residence, age, sex, age of the youngest child,303 applicability of special measures (e.g. for Croatian war veterans or persons with disabilities), work experience, level of education, field of education, unemployment history, reason for ending the previous job (e.g. dismissal or expiry of a fixed-term contract) and previous occupation. The probability is calculated from this input and is expressed in percentage points in a graphical display of two columns: one for the selected county and one depicting the average probability at the level of Croatia as a whole. The results shown to the user highlight that work experience, level and field of education, unemployment history and the reasons for ending the previous job are the most important indicators for the calculation. The app also invites users to check what happens when some of the input data is changed, e.g. occupation, level of employment or previous work experience. Unlike the other algorithmic applications described above, which are used to decide on resource allocation, the Croatian app is not intended to affect the person’s position on the labour market in any way.
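The calculation performed by such an application can be illustrated with a minimal sketch in the spirit of ‘My employability’. The coefficients, county effects and national baseline below are invented for illustration only; the real CES model and its parameters are not public in this form.

```python
# Toy questionnaire-to-probability calculation; all weights and baselines are assumed.
import math

WEIGHTS = {                       # hypothetical weights for a few reported parameters
    "years_experience": 0.08,
    "higher_education": 0.50,
    "months_unemployed": -0.04,
    "dismissed_from_last_job": -0.30,
}
COUNTY_OFFSETS = {"Zagreb": 0.3, "Lika-Senj": -0.2}  # assumed county effects
NATIONAL_AVERAGE = 0.42                              # assumed Croatia-wide baseline

def probability(answers: dict, county: str) -> float:
    """Logistic-style probability of finding a job within 12 months (toy model)."""
    z = COUNTY_OFFSETS.get(county, 0.0)
    for name, weight in WEIGHTS.items():
        z += weight * answers.get(name, 0)
    return 1 / (1 + math.exp(-z))

answers = {"years_experience": 4, "higher_education": 1,
           "months_unemployed": 6, "dismissed_from_last_job": 0}
p_county = probability(answers, "Zagreb")

# The app displays two columns: the county-specific estimate and the Croatian average,
# and invites the user to change inputs and recalculate ('what if' exploration).
print(f"selected county: {p_county:.0%}  |  Croatia average: {NATIONAL_AVERAGE:.0%}")
```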
3.1.2.2 Social welfare
Predictive algorithms are sometimes applied in relation to social welfare issues, but this use is (still) limited and not many examples have been reported. In Finland, data on all child welfare clients in 2002-2016, combined with health and social data on the whole local population of a large Finnish city, have been analysed in order to find factors that would predict the marginalisation of certain persons or groups.304 The research found 280 factors that predict that a child will be in need of child welfare services. These factors and results can now be used in the allocation of preventive resources. In Spain, a programme for smart social home care has been developed to predict the social aid needs of the elderly.305 According to the developers, the system aggregates data about social services, health, population, economic activity, utility usage, waste management and more, and uses this data to identify and predict groups and areas that will need urgent help.
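As a rough illustration of how such predictive factors might feed into the allocation of preventive resources, the sketch below combines a handful of weighted indicators into a risk score and ranks cases by it. The factor names, weights and ranking rule are assumptions made for this example and do not reflect the actual Finnish or Spanish systems.

```python
# A tiny stand-in for the hundreds of predictive factors found in the Finnish study;
# factor names, weights and the ranking rule are illustrative assumptions only.
RISK_FACTOR_WEIGHTS = {
    "parental_long_term_unemployment": 2.0,
    "prior_social_service_contact": 1.5,
    "frequent_school_absence": 1.0,
}

def risk_score(case: dict) -> float:
    """Sum of weighted indicator variables (1 = factor present, 0 = absent)."""
    return sum(weight * case.get(factor, 0)
               for factor, weight in RISK_FACTOR_WEIGHTS.items())

cases = [
    {"id": "A", "parental_long_term_unemployment": 1, "frequent_school_absence": 1},
    {"id": "B", "prior_social_service_contact": 1},
]

# Rank cases (or areas) so that preventive resources can be targeted at the highest scores.
for case in sorted(cases, key=risk_score, reverse=True):
    print(case["id"], risk_score(case))
```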
3.1.2.3 Education
In some countries, algorithms are used in relation to education. This is exemplified by the Parcoursup system that was introduced in France for university admissions, as already mentioned in section 2.1.3.306 The system was used to allocate places in higher education institutions to incoming students. It raised concerns in relation to the use of certain income and residency data and was also criticised for its lack of transparency.307 Where selective credentials were not required, the system drew candidates for admission to universities at random in order to avoid ranking them. Poland also makes extensive use of algorithms for this purpose, for instance for the assignment of children to nurseries, kindergartens and high schools, as well as for admission to colleges. For example, in assigning children to nurseries, the city of Wrocław has introduced an algorithm that relies on data obtained from the parents’ declarations: the number of children in the family, a disability certificate, whether the parents are employed or in education, the place of residence in Wrocław, and the age of the child.308 Based on this information, it can be calculated whether a child qualifies for access to specific nurseries.309

3.1.2.4 Policing and fraud detection

Notable use of (predictive) algorithms is made in policing and in the detection of fraud, for instance in relation to taxation or social benefits. Sometimes these algorithmic applications are still at a pilot stage.310 In the Netherlands, for example, several pilots are being run in which data derived from sensing technology are combined with predictive algorithmic analyses to detect risk behaviour, e.g. pickpocketing or mobile ‘banditism’.311 In Lithuania, police representatives have expressed a positive attitude towards preventing criminal acts by means of (i) automated analysis of data gathered by surveillance cameras located in different cities and (ii) object recognition algorithms, as well as the use of biometric data in voice, fingerprint, palm print and iris recognition systems.312 The criminality anticipation system (CAS) is already in use in the Netherlands.313 This system can be used by the police to predict the risk of crimes being committed on the basis of an algorithmic analysis of data on crime reports. The analysis can help to identify ‘hot spots’ and ‘hot times’, which can be used to step up police presence and target interventions. In Germany, police authorities in the states of Hesse and North Rhine-Westphalia use a program called ‘Gotham’, which is provided by Palantir, a US-based private company.314 The Gotham program is used for ‘predictive policing’, in particular for the preventive detection of possibly dangerous persons or situations.
The program can, for example, determine whether a suspicious person has connections with so-called ‘endangerers’, based on information indicating, for example, that these persons have stayed in the same house, have had cell phone contact, or have even sat in the same car during a police check.315 Spain is active in this field with applications such as an algorithm used by the police to evaluate the risk faced by women reporting gender violence (VioGén),316 a tool to spot false reports made to the police (VeriPol) and an application that helps to predict recidivism (e-Riscanvi).317 In a similar vein, the Durham Police in the United Kingdom uses a harm assessment risk tool to predict the risk of reoffending, using information such as postcodes (and possibly ethnicity data),318 and the South Wales Police made use of a predictive profiling algorithm based on automatic facial recognition.319

In France, some algorithms are in use to help detect social security fraud,320 as well as to fight tax evasion, as part of the ‘Openfisca’ system.321 In the Netherlands, the system risk indication (SyRI) allows a predictive algorithm to search the data of residents in certain municipalities for patterns that could indicate social security fraud, although the system is currently being revised to make it more privacy-proof.322 The state educational loan fund in Norway uses machine learning in order to discover fraud and cheating on students’ living allowances.323 In Poland, the clearance chamber ICT system (STIR) enables the exchange of information between banks and the National Tax Administration with the objective of combating VAT fraud.324 Financial data are obtained from banks and cooperative savings and credit unions and analysed in order to determine whether account holders perform certain types of actions that indicate that they may be using their bank accounts for illegal activity. The STIR algorithm determines a risk indicator, which constitutes the central premise on which the head of the National Tax Administration may request a block on the bank account of a given entity.325 Finally, local authorities in the United Kingdom are allowed to voluntarily adopt risk-based verification (RBV) in relation to housing benefits and council tax benefits.326 RBV works by assigning a risk rating to each applicant for such benefits, which then determines the level of identity verification required.327 This allows the local authority to target and focus resources on ‘... those cases deemed to be at highest risk of involving fraud and/or error’.328 Someone with a high-risk rating might be subject to additional checks, visits and an increased requirement to provide documentation.329

3.1.2.5 Administration of justice

Algorithms are not often used in courts as yet, although there are some experiments with algorithm-supported judicial decision-making in the Netherlands.330 Sometimes, moreover, algorithms are used to predict how judges will decide their cases. In France, for example, algorithmic applications have been developed to anticipate the future decisions of judges in civil cases and their allocation of remedies according to past behaviour. Using software like ‘Predictice’ or ‘Supralegem’, employers or other decision-makers can try to avoid liability by knowing how judges will decide their future cases.331 In other countries, algorithms are used in the judiciary in a more administrative and organisational manner.
An example of this can be found in Poland, where the algorithm-based system of random allocation of cases (SLPS) assigns cases to the judges of a given court once per day. The system has been implemented in all 364 ordinary courts.332

3.1.2.6 Media regulation

It is well known that many media platforms make use of algorithms to identify and remove hate speech and other forms of discriminatory, insulting or defamatory expressions. France has recently adopted legislation to introduce a system that would allow the tracking of internet users who are responsible for hate speech, although this legislation has now been declared partially unconstitutional.333 In Spain, a tool to detect hate speech on Twitter has been developed with the help of the National Bureau for the Fight against Hate of the Ministry of Home Affairs.334
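As a very simplified illustration of this kind of automated screening, the sketch below flags posts for human review when they contain terms from a lexicon. The placeholder lexicon, threshold and tokenisation are assumptions made for this example; real tools, such as the Spanish Twitter detector, rely on trained classifiers and far richer features.

```python
# Minimal lexicon-based flagger for human review; not any platform's actual system.
import re

FLAGGED_TERMS = {"<slur placeholder>", "<threat placeholder>"}  # stand-in lexicon

def flag_for_review(post: str, threshold: int = 1) -> bool:
    """Return True if the post contains enough flagged terms to warrant human review."""
    tokens = re.findall(r"<[^>]+>|\w+", post.lower())
    hits = sum(1 for token in tokens if token in FLAGGED_TERMS)
    return hits >= threshold

posts = ["A perfectly ordinary message.",
         "A message containing a <slur placeholder>."]
for post in posts:
    print(flag_for_review(post), "-", post)
```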