At the end of last year, the Court of Justice of the European Union delivered a judgment that has received little attention, even though its implications for the operation of certain businesses are potentially significant. Below is a brief summary of the case and the main points of the judgment.
A German company, SCHUFA Holding AG, provided its clients (typically businesses) with information on the solvency of certain individuals, using mathematical and statistical methods to assess the likelihood of natural persons defaulting on their payments. In other words, SCHUFA profiled the solvency of natural persons and sold the result of that profiling (the probability of default) to its clients, collecting some of the underlying data from public records.
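To make the notion of a “probability value” concrete, here is a minimal sketch in Python of how a statistical scoring model typically produces such a figure. It is purely illustrative: SCHUFA’s actual model, input data and weights are not public, and every feature name and number below is invented for the example.

```python
# Illustrative only: a toy probability-of-default score. The real model,
# features and weights used by any credit agency are unknown; everything
# below is an assumption made for the sake of the example.
import math

def probability_of_default(features: dict[str, float],
                           weights: dict[str, float],
                           bias: float) -> float:
    """Logistic-regression-style score: weighted features mapped to a 0-1 probability."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the raw score into (0, 1)

# Hypothetical applicant data and weights, for illustration only.
applicant = {"late_payments": 2.0, "years_at_address": 5.0, "open_credit_lines": 3.0}
weights = {"late_payments": 0.9, "years_at_address": -0.1, "open_credit_lines": 0.2}

print(f"probability value: {probability_of_default(applicant, weights, bias=-1.5):.2f}")
```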
One of SCHUFA’s clients rejected a person’s loan application on the basis of a forecast (a probability value) provided by SCHUFA. The person concerned requested information from SCHUFA about the data processed about them. In response, SCHUFA disclosed the probability value and, in general terms, how it had been calculated, but refused to reveal the data on which the calculation was based and the weighting used. SCHUFA added that it had not taken any decision itself; it had merely provided the probability value to its client, and the client had made the decision on that basis.
The German court hearing the case referred a question to the Court of Justice of the European Union: can automated decision-making within the meaning of the GDPR exist even where the decision is taken by a third party (here, SCHUFA’s client rather than SCHUFA itself) on the basis of the probability value? In other words, does the client’s use of SCHUFA’s forecast to decide whether to enter into a contract with the natural person concerned qualify as automated decision-making under the GDPR?
The GDPR contains the following provision on automated decision-making (Article 22(1)):
“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
This somewhat unconventionally worded general rule rests on three cumulative conditions: there must be a decision; it must be based solely on automated processing, including profiling; and it must produce legal effects concerning the data subject or similarly significantly affect him or her. Where all three conditions are met, the decision is in principle prohibited. Article 22(2) provides for three exceptions to this general rule; the prohibition does not apply where the decision:
(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
(c) is based on the data subject’s explicit consent.
If none of the exceptions applies, this type of processing is automatically unlawful. If one of the exceptions applies (which must be examined carefully before the processing starts), the controller must comply with additional requirements. It must inform the data subject of the fact of automated decision-making, provide meaningful information about the logic involved, and explain the significance and the envisaged consequences of such processing for the data subject. Likewise, the controller must inform the data subject that they may request human intervention, express their point of view and contest the decision, and these rights must be adequately guaranteed by the controller. (It is worth adding that decisions based on one of the exceptions must not be grounded on special categories of personal data, unless the data subject has given explicit consent to the processing or the processing is necessary for reasons of substantial public interest under Union or Member State law, and appropriate measures have been taken to safeguard the data subject’s rights, freedoms and legitimate interests.)
In its judgment, the Court of Justice of the European Union held:
“In the light of all the foregoing considerations, the answer to the first question is that Article 22(1) of the GDPR must be interpreted as meaning that the automated establishment, by a credit information agency, of a probability value based on personal data relating to a person and concerning his or her ability to meet payment commitments in the future constitutes ‘automated individual decision-making’ within the meaning of that provision, where a third party, to which that probability value is transmitted, draws strongly on that probability value to establish, implement or terminate a contractual relationship with that person.”
In essence, the Court of Justice of the European Union ruled that where SCHUFA’s client “draws strongly” on the probability value received when deciding whether to enter into a contract with the person applying for a loan, the automated establishment of that probability value itself constitutes automated individual decision-making within the meaning of Article 22(1) GDPR.
The question is how exactly the term “draws strongly” is to be interpreted, what it means in practice, and on the basis of which characteristics and features it can be concluded that the business receiving the probability value bases its decision, for example whether to reject a loan application, mainly on that value.
In our opinion, the decision is a reason for caution. If, for example, a bank decides on the creditworthiness of a natural person automatically, or “draws strongly” on a probability value received from a third party, then the bank, whether or not it enters into a contract with that natural person, is making an automated decision within the meaning of Article 22 GDPR, with all the legal consequences that may entail.
Where there is meaningful human intervention in the process, there is no automated decision-making. Thus, if, at the end of the process, a natural person at the bank decides whether or not to grant the loan, the GDPR rules on automated decision-making will not apply. It is important to underline, however, that the human intervention must be substantive, based on genuine reflection and deliberation, and not merely apparent or staged; otherwise automated decision-making may still be found to exist.
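Although the GDPR does not prescribe any particular technical design, the sketch below illustrates one possible way, under assumptions of our own, to structure a process so that the human intervention is substantive rather than formal: the received score is stored only as a recommendation, and no decision exists until an identified reviewer records case-specific reasoning. All class and field names are invented for the illustration.

```python
# A minimal sketch of "meaningful human intervention": the model output is
# treated as a recommendation, and a named reviewer must record a reasoned,
# case-specific decision before anything takes effect.
from dataclasses import dataclass

@dataclass
class Recommendation:
    applicant_id: str
    probability_value: float   # e.g. the score received from a credit agency

@dataclass
class Decision:
    applicant_id: str
    approved: bool
    reviewer: str              # identifiable human decision-maker
    reasoning: str             # documented, case-specific deliberation

def decide(rec: Recommendation, reviewer: str, approved: bool, reasoning: str) -> Decision:
    # Refuse rubber-stamping: an empty or boilerplate justification is rejected,
    # so the human input stays substantive rather than merely apparent.
    if not reasoning.strip() or reasoning.strip().lower() == "per model":
        raise ValueError("A case-specific justification is required; the score alone is not a decision.")
    return Decision(rec.applicant_id, approved, reviewer, reasoning)

rec = Recommendation("A-123", probability_value=0.81)
decision = decide(rec, reviewer="j.smith", approved=False,
                  reasoning="High score driven by two recent defaults; income documents "
                            "reviewed, no offsetting factors found.")
```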
The judgment raises a further question. The European Union’s Artificial Intelligence (AI) Regulation has not yet entered into force (the final text in Hungarian can be found here:
https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_HU.pdf),
and the question arises whether the AI Regulation, as a lex posterior, will or may override the substance of the SCHUFA decision. The answer is clearly no.
According to recital (10) of the AI Regulation, “This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments.”
This means that even if the AI Regulation permits machine profiling, which is an element of automated decision-making, in the risk classification of AI systems, the interpretation of the GDPR set out in the SCHUFA decision remains unaffected. This is explicitly confirmed in the recital cited above, which provides that “It also does not affect the obligations of providers and deployers of AI systems in their role as data controllers or processors stemming from Union or national law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling.”
Under the principle of data protection by design and by default set out in Article 25 of the GDPR, the controller must ask itself a number of questions when designing and using an AI-based software application; these are briefly summarised below:
a) What is the legitimate purpose of the data processing? It must also be ensured that use of the software does not drift into processing for purposes incompatible with the original purpose.
b) What is the legal basis for the processing?
c) How is the accuracy of the data ensured from the data collection phase onwards? Without accurate data, even software with an otherwise excellent algorithm will make wrong decisions, so it is important to take appropriate measures against distortions and to rely on sources of reliable information. In particular, the data fed into the software must be relevant and representative for the purpose for which they are used.
d) How do I comply with the purpose limitation and data minimisation principles? Personal data may only be processed to the extent necessary to achieve the legitimate aim, considering not only the amount of personal data collected but also their type.
e) How do I ensure transparency? Data subjects must be adequately informed about the processing of their personal data, which is essential because only if they are properly informed will they be in a position to exercise their rights.
f) How long will I store the data? The controller must properly determine the storage period and build erasure and anonymisation functions into the system. Given the rapid evolution of technology, it is also essential that the controller pays particular attention to whether supposedly anonymised data can be re-identified and, if so, takes the necessary measures.
g) Have I carried out a data protection impact assessment? The controller should identify the risks that the processing poses to the rights and freedoms of data subjects, analyse them, and take all measures necessary to ensure the lawfulness of the processing in an effective and continuous manner (which necessarily includes continuous monitoring of the software’s operation). The mere availability of the technology does not in itself justify the use of AI, and the necessity and proportionality test cannot be dispensed with. For necessity, it must be justified why the legitimate objective cannot be achieved by less invasive means; for proportionality, the risks of data inaccuracy, model bias and possibly discriminatory operation, among other things, must be assessed.
h) Was there a testing phase? It is essential that testing is carried out before actual deployment, so that the controller can detect and adequately filter out any deficiencies in the system. Depending on the input data and the specificities of the algorithm used, the AI may discriminate against natural persons on the basis of certain characteristics, and such processing cannot be considered lawful, since it breaches the principles of lawfulness and fairness (even where the purpose of the processing is lawful, the system collects only the necessary data, the data are accurate and the retention period is appropriate). A simple illustration of such a pre-deployment check is sketched after this list.
i) Is automated decision-making taking place? If so, the rights granted by Article 22 must also be guaranteed. In other cases, it may still be worthwhile to build substantial human oversight into the operation of the software, so that a designated person can correct a decision that is inappropriate for some reason, for example a discriminatory one (in which case the AI is a tool assisting human decision-making rather than replacing it).
j) Are the rights of the data subjects guaranteed?
k) How can I ensure continuous monitoring of the model’s operation and correct any malfunction? A minimal monitoring sketch also follows after this list.
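As a rough illustration of the testing referred to in item h), the sketch below checks whether approval rates diverge markedly between two groups of applicants in a test set. The “four-fifths” ratio used here is a common rule of thumb borrowed from discrimination-testing practice, not a GDPR requirement; the groups, data and threshold are assumptions made for the example.

```python
# Illustrative pre-deployment check for item h): compare approval rates across
# two groups and flag a large gap. Threshold and data are assumed, not a legal
# standard; what counts as acceptable is context-dependent.
def approval_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical test-set outcomes (True = loan approved) for two groups.
group_a = [True, True, False, True, True, False, True, True]
group_b = [True, False, False, False, True, False, False, True]

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, not a GDPR norm
    print("Warning: outcomes differ markedly between groups; investigate before deployment.")
```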
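For item k), one simple monitoring approach, again only a sketch under assumed baseline, window and tolerance values, is to track the model’s live approval rate against the rate observed during testing and to trigger human review when it drifts beyond a tolerance.

```python
# Illustrative monitoring for item k): compare the live approval rate over a
# sliding window with a baseline from the testing phase. All numbers are
# assumed values for the example.
from collections import deque

class ApprovalRateMonitor:
    def __init__(self, baseline: float, tolerance: float, window: int = 500):
        self.baseline = baseline            # approval rate observed during testing
        self.tolerance = tolerance          # acceptable absolute deviation
        self.recent = deque(maxlen=window)  # sliding window of live outcomes

    def record(self, approved: bool) -> None:
        self.recent.append(approved)

    def drifted(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough live data yet for a meaningful comparison
        live_rate = sum(self.recent) / len(self.recent)
        return abs(live_rate - self.baseline) > self.tolerance

monitor = ApprovalRateMonitor(baseline=0.60, tolerance=0.10, window=500)
# In production, record() would be called for every decision and drifted()
# checked periodically, escalating to human review when it returns True.
```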
With the cooperation of the IT administrators of the AI service or software, our Firm can provide legal support in addressing the issues raised above.