Connecting the dots in the Area of Freedom, Security and Justice (AFSJ)– Part II

BY Diana Dimitrova - 30 March 2017

The Commission envisages creating interoperability between the different large-scale information systems in the Area of Freedom, Security and Justice (AFSJ). Part I of the blog elaborated on the interoperability concept. This piece will critically discuss several of the proposed interoperability elements through the lens of data protection.

Data Protection Issues

Data protection experts will immediately point out that the interoperability proposals, as presented in Part I of the blog, challenge the purpose limitation principle. It remains to be seen whether and how the proposed interoperability would be implemented in a way that effectively controls access to the data, respects the authorisations of the authorities and their employees to the individual databases, and to what extent the possible changes to the original purposes of these databases would pass the legality, necessity and proportionality test. The major issue is that the different AFSJ databases pursue different purposes: while some implement purely migration policies (an administrative purpose), others implement police and judicial cooperation in criminal matters.

This purpose limitation discussion does not only concern the individual databases; it also concerns the general trend of blurring the lines between security and immigration policies and objectives. An example is the potential interoperability between ETIAS and Europol data. The latter is an information system on individuals of interest to law-enforcement authorities, mostly serious criminals and terrorists. This, together with the prospective establishment of the future Europol watchlist, which will also be fed by the UN and international partners and which will be continually matched against ETIAS applicants and those already granted authorisations, signals the increasing role that Europol, the EU’s law enforcement agency, would play in decisions relating to migration matters. While security is an important concern in migration, the intensity of the checks and the growing number of authorities influencing decisions are noteworthy. As the EDPS has noted, the ETIAS checks could be seen as more intrusive than those for Schengen visa applicants.

Interoperability, which would in effect allow complex data-mining operations, could thus create additional knowledge about data subjects. How? For example, it could allow an official with access to one database to make assumptions about an individual, e.g. a passenger crossing a border, even based only on biometric hit/no-hit information via the common BMS and/or SSI. Such knowledge could conflict with the specific authorisations of border control officers. How the results of such data-mining could impact data subjects should be further explored via proper Data Protection Impact Assessments. One of the major risks is that factors about someone’s past might become visible to border officers which are not relevant for taking a decision concerning a passenger, but which might prejudice that decision in an unfavourable way. For example, if someone’s biometric data are stored in a national law enforcement database, even if the individual was never convicted and the charges against him were dropped, a potential biometric “hit” might lead to unjustified second-line checks and possible refusals of entry. Or, if the SIS II is cross-checked, not all alerts stored there would be relevant for the assessment of the security, immigration or health risks of ETIAS applicants, e.g. alerts on witnesses needed in court procedures, as the EDPS points out.

Lastly, as the HLEG Interim Report acknowledges, a major issue relates to data quality. The report does not propose a definition of data quality, but provides examples such as incomplete or empty fields. I propose that a thorough analysis be carried out of what could qualify as breaches of the data quality principle and of how, and via what procedures, such breaches shall be avoided and rectified, i.e. that adequate safeguards be enshrined. A data quality issue could mean, amongst others, a low-quality enrolment of biometric data, which could lead to a false match and thus to linking the wrong alphanumeric data in the different information systems. As the EDPS has pointed out, due to their probabilistic nature, biometrics cannot “deliver the unambiguous key” required for linking databases. Mismatches could thus create more inconvenience for both the authorities and data subjects. A data subject might often not be in a position to prove that a biometric match was false, whether because of poor enrolment quality or poor matching performance, while a decision affecting him is already being made automatically.

Data quality issues might also arise in respect of alphanumeric data, e.g. names, travel document numbers, vehicle information, etc. One could point here to transliteration cases or cases where information is missing or incomplete, as mentioned in the HLEG Interim Report. Another accuracy issue is caused by context. As the FRA argues, data accuracy should be examined in light of the purpose(s) of the data processing. While data could be accurate in one context, that does not automatically make them accurate in a different context, e.g. when data are collected in an immigration context but re-used in a law-enforcement context. Thus, while a TCN could pose a risk of overstaying and be flagged as “risky” in immigration databases, this fact does not automatically make him a risk for public or national security.

Finally, a point raised by the HLEG at one of its meetings is how end-users can check data quality if they do not know which databases are being searched. This also leads to the question of whether decisions based on wrong hits can be annulled. Interoperability should thus not turn into a black box, where data traceability is illusory and reliable decisions based on accurate data and matching algorithms cannot be guaranteed. The lack of transparency and data traceability could also raise issues of allocation of responsibility: there could be confusion both about the controller responsible for data (in)accuracy and its correction, or for the inaccuracy of a certain result, and about the national and/or European supervisory authority that can remedy the situation.

The operational convenience of interoperability cannot be denied. It is important to point out, however, that interoperability should operate within the limits of the existing legal framework, e.g. on data protection, part of whose raison d’être is to protect data subjects from the negative consequences of unlawful data processing which could result from interoperability.

This blog has been funded by the European Commission FP7 project FastPass (A harmonized, modular reference system for all European automated border crossing points) under Grant Agreement No: 312583. It reflects the personal opinion of the author and does not represent the views of the FastPass consortium.

This article gives the views of the author(s), and does not represent the position of CiTiP, nor of the University of Leuven.
ABOUT THE AUTHOR — Diana Dimitrova

Diana Dimitrova’s main research interests are privacy and data protection, especially in the field of the Area of Freedom, Security and Justice (AFSJ). Diana focuses on topics such as biometric technologies, in particular in the area of border control, large-scale databases (e.g. SIS II, VIS, EURODAC), as well as data transfers. Diana works mainly on the FastPass and eVACUATE projects (FP7) at KU Leuven CiTiP - imec. Diana Dimitrova also works as a researcher at the Leibniz-Institut für Informationsinfrastruktur (FIZ) in Karlsruhe, Germany. At FIZ, Diana researches privacy and data protection topics and is engaged, amongst others, in the STARR project (a Horizon 2020 project).
