Clarkslegal Law Bites

AI Podcast: AI, Discrimination and Automated Decision-making

January 26, 2024 Clarkslegal

In the first of our three-part ‘AI Podcast’ series, Lucy Densham Brown and Jordan Masters, members of the data protection team at Clarkslegal, discuss how using AI and automated decision-making could conflict with GDPR protections and lead to discrimination. This includes:

  • What is AI?
  • What is GDPR and how does the use of AI relate to it?
  • Examples of how problems can arise from AI learning from historic data.
  • What does Article 22 of the UK GDPR mean for data processors?
  • What are the implications of the Article 22 judgment?

If you have any questions or want to discuss data protection law and how it applies to you in more depth, please contact our data protection team, who would be happy to help.

Lucy Densham Brown 0:03
Hello everyone and welcome to the first in our trilogy of Artificial Intelligence, or ‘AI’, podcasts. I am your host, Lucy Densham Brown. I am a solicitor in the Employment and Data Protection teams. With me I have my co-host, Jordan Masters, a trainee solicitor in our Corporate and Commercial Team.

Jordan Masters 0:21
Hello everyone!

Lucy Densham Brown 0:24
To kick off this new series, we will start by giving a brief overview of AI and the General Data Protection Regulation, better known as GDPR, as well as how the use of AI and automated decision-making may conflict with GDPR protections and lead to discrimination.

So, Jordan, to start us off, can you explain what AI is?

Jordan Masters 0:44
Thanks Lucy. So, AI is technology that enables a computer to process information in a way which is closer to how a human thinks than a conventional computer. AI systems are trained on large amounts of data, which enables them to make decisions based on the information they have been trained on.
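To make Jordan's description concrete, here is a minimal, hypothetical sketch of that pattern in Python: a model is fitted to historical examples and then used to decide about a new case. The figures, feature names and approval threshold are invented purely for illustration, and the scikit-learn model is just one of many possible choices.

```python
# A minimal sketch of "train on historical data, then decide about a new
# case". All figures and feature names are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Historical examples: [income in £k, years in current job] -> repaid loan?
X_train = [[20, 1], [35, 4], [50, 10], [28, 2], [60, 8], [22, 1]]
y_train = [0, 1, 1, 0, 1, 0]  # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)

# The "decision": approve a new applicant if the predicted probability of
# repayment is above an (arbitrary) threshold.
applicant = [[40, 5]]
probability = model.predict_proba(applicant)[0][1]
print("approve" if probability > 0.5 else "reject")
```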

Lucy Densham Brown 1:04
Thanks Jordan. Because of this adaptability, we are seeing AI adopted in more and more areas, including healthcare, video gaming, border control and even the legal profession. One example would be a credit agency using it to predict a person’s willingness and ability to repay a loan.

Jordan, can you explain what GDPR is and how the use of AI relates to it?

Jordan Masters 1:28
So, we should draw a distinction between the UK GDPR and the EU GDPR. The EU GDPR refers to the General Data Protection Regulation, which governs the protection and processing of people’s data and applies in the EU. The UK GDPR refers to the ‘version’ of the EU GDPR which has been retained in UK law following the UK’s withdrawal from the EU. Whilst there are some differences, the main data protection principles, rights and obligations of the EU GDPR remain in UK law under the UK GDPR.

In relation to AI, it could be used to process data and even to make decisions, such as accepting or rejecting an application for something. Where this processing of data takes place in the UK, it will be subject to the UK GDPR.

Lucy Densham Brown 2:15
One area of particular concern in relation to AI decision-making is inherent bias and the resulting discrimination. To function effectively, AI systems need to be trained on a mass of data, and the volume required often necessitates using historic data, which may itself contain discriminatory bias. Where this occurs, the AI system can perpetuate, or even exacerbate, those biases through its learned behaviour.

UK GDPR requires the processing of data to be fair, and to be done in a way which protects individuals’ rights and freedoms, which includes the right not to be discriminated against. The use of AI trained on fundamentally discriminatory data may therefore be at odds with the principles of UK GDPR. 

Jordan Masters 3:07
You mentioned that AI learns from historic data which can cause problems. Could you give an example of this?

Lucy Densham Brown 3:14
Of course. The ICO actually provides a good example of this on their website, in relation to loans from a bank. This example supposes that a bank uses an AI system to calculate the credit risk of potential customers, and to approve or reject loan applications. The AI system is trained on data about previous borrowers, including personal characteristics, and whether or not they repaid their loan. In the example, the bank discovers that the AI system is disproportionately giving women applicants lower credit scores.

Whilst there are a few reasons why this might be, the important one we are considering here is that the data the AI system was trained on was historically biased. This example assumes that, in the past, fewer women applied for loans, and those that did were often rejected because of historic prejudices. The result is that the AI system may favour male applicants, both because it has more statistical data about them and because it has more examples of women being rejected.

The AI system may therefore systematically predict lower loan repayment rates for women, even if women in the training dataset were on average more likely to repay their loans than men.
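As a purely illustrative aside, the effect Lucy describes can be reproduced in a few lines of Python: if historic prejudice is baked into the training labels, a model learns to score women lower even when the underlying repayment behaviour is identical across sexes. All the figures below are invented and the setup is deliberately simplified; it is not the ICO’s own worked example.

```python
# A hypothetical, simplified recreation of the biased-lending scenario.
# All numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Features: sex (1 = female) and income. Historic applicants were mostly men.
sex = (rng.random(n) < 0.2).astype(int)       # only 20% of records are women
income = rng.normal(40, 10, n)

# "True" repayment depends only on income -- identical across sexes.
repaid = (income + rng.normal(0, 5, n) > 38).astype(int)

# Historic bias: prejudiced lending meant many creditworthy women were
# rejected, and those rejections were recorded as bad outcomes.
label = repaid.copy()
label[(sex == 1) & (rng.random(n) < 0.4)] = 0  # force 40% of women's records negative

X = np.column_stack([sex, income])
model = LogisticRegression().fit(X, label)

# Same income, different sex -> systematically different scores.
print("score, man on £40k:  ", model.predict_proba([[0, 40]])[0][1])
print("score, woman on £40k:", model.predict_proba([[1, 40]])[0][1])
```

Running this prints a noticeably lower score for the woman despite identical income, because the model has learned the recorded prejudice as though it were a genuine pattern.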

Jordan Masters 4:30
Thank you for that. So what does Article 22 of the UK GDPR mean for data processors?

Lucy Densham Brown 4:37
Well, in addition to the UK GDPR requiring fair processing of data, Article 22 also limits the circumstances in which you can make solely automated decisions, including those based on profiling, that have a legal or similarly significant effect on individuals. The result is that anyone subject to automated decision-making has a right to understand how those decisions are made, to challenge them if they suspect unfairness, and to request human intervention.

Where AI systems are used for algorithmic decision-making, such as making decisions about promotions and performance evaluations in employment, the need for transparency becomes paramount. Employers and other data controllers utilising these tools need to understand the data the system is presented with, and why it has reached the recommendations it has, in order to justify decisions and respond to any challenges that are raised.

Jordan Masters 5:32
Thanks Lucy. In fact, there was a judgment delivered by the European Court of Justice in December 2023 in relation to Article 22 and automated decision-making.

Lucy Densham Brown 5:44
Oh yes, I remember hearing about this judgment; what happened in this case?

Jordan Masters 5:48
Well, Case C-634/21 is fairly complex and technical, but, in terms of the key facts, the case arose from a dispute in which an individual was denied credit based on a score determined by SCHUFA, a German credit information agency. The individual asked SCHUFA to send her the personal data it held about her, and to erase some of that data, which was allegedly incorrect. SCHUFA informed the individual of her score and outlined, in broad terms, how its scores were calculated. However, citing trade secrecy, SCHUFA refused to tell her which elements were taken into account for the purposes of that calculation and how they were weighted. SCHUFA said that it sent information to its contractual partners, and that it was those contractual partners which made the actual contractual decisions. So the Court had to decide whether SCHUFA’s process of creating a score using automated processing amounted to ‘automated individual decision-making’ under Article 22, even though it was not SCHUFA that refused the credit application based on the score, but the third party, drawing on the information SCHUFA had provided.

Lucy Densham Brown 7:04
Interesting, so what did the Court decide?

Jordan Masters 7:08
The Court ruled that the automated establishment by a credit information agency of a probability value (effectively a ‘score’) based on personal data relating to a person and concerning their ability to meet payment commitments in the future does constitute ‘automated individual decision-making’ under Article 22 of the GDPR, where a third party, to which that probability value is transmitted, draws strongly on that probability value to establish, implement or terminate a contractual relationship with that person.

And this interpretation is important because, as we mentioned earlier, people can request human intervention in relation to, and can challenge, decisions that have otherwise been lawfully made but are based solely on automated processing and which produce legal effects concerning them or similarly significantly affect them.

Lucy Densham Brown 7:57
Interesting, so what are the implications of this judgment?

Jordan Masters 8:03
So, even though this judgment isn’t binding on UK courts, it has persuasive value, meaning that UK courts may choose to interpret the UK GDPR in the same way, so those processing data in the UK should be aware of it. The judgment is binding in the EU, so those who operate and process data in the EU must be aware that this case law applies to them.

Lucy Densham Brown 8:25
So, what does this judgment tell us about use of AI in particular?

Jordan Masters 8:32
So, entities must be aware that even if they are not making the final decision on something, and are instead passing determinations or information to another entity which makes the decision, they may still be found to be making ‘automated decisions’ which people can challenge or request human intervention in relation to. So, if entities are using AI to make determinations or decisions, they should ensure that they have a legal basis for making those determinations and establish safeguards, including enabling people to request human intervention or to challenge those determinations.
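By way of illustration only, and not a pattern taken from the podcast or from any regulator’s guidance, one such safeguard can be as simple as logging every automated determination and allowing a request for human intervention to replace the automated outcome with a reviewer’s decision. All names here are invented.

```python
# A hypothetical sketch (names invented) of one such safeguard: every
# automated determination is logged, and a data subject's request for human
# intervention replaces the automated outcome with a reviewer's decision.
from dataclasses import dataclass

@dataclass
class Determination:
    subject_id: str
    score: float
    outcome: str                 # e.g. "approve" / "reject"
    reviewed_by_human: bool = False

class DecisionLog:
    def __init__(self) -> None:
        self._log: dict[str, Determination] = {}

    def record(self, d: Determination) -> None:
        self._log[d.subject_id] = d

    def request_human_review(self, subject_id: str, reviewer_outcome: str) -> Determination:
        # On request, a human reviewer's outcome overrides the automated one,
        # and the override is recorded so the decision trail stays auditable.
        d = self._log[subject_id]
        d.outcome = reviewer_outcome
        d.reviewed_by_human = True
        return d

log = DecisionLog()
log.record(Determination("applicant-1", score=0.31, outcome="reject"))
print(log.request_human_review("applicant-1", reviewer_outcome="approve"))
```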

Lucy Densham Brown 9:09
While AI provides massive opportunities for businesses to improve their data processing capabilities, those businesses must ensure that they have a lawful basis for processing such data and that people’s rights under the UK GDPR are upheld. This should always be an important consideration for processors, but it is particularly important when AI is used to process data and make determinations based on that data without human intervention. Businesses must understand what data the AI system is basing its decisions on, and be alive to, and wary of, the potential for inherent bias and discrimination.

Jordan Masters 9:46
Yes, businesses and other entities processing and utilising data should have policies and procedures in place to ensure compliance with the UK GDPR.

Lucy Densham Brown 9:57
On that note, if you do need any assistance or want to discuss data protection law and how it applies to you in more depth, please reach out to our data protection team who would be happy to help.

Jordan Masters 10:08
Thank you for listening everyone, and please tune in next month for more information on AI and data protection.