Clarkslegal Law Bites

AI Podcast: AI and Data Security

March 26, 2024 Clarkslegal

In the third and final podcast in our ‘AI Podcast’ trilogy, Lucy Densham Brown and Rebecca Dowle, members of the data protection team at Clarkslegal, will be discussing how to use AI to process data safely. They will be looking closely at the risks for businesses and the types of data security protections you can put in place. This includes:

  • Regulation of AI
  • When using AI, do companies need to update their privacy notices?
  • How to ensure that data inserted into an AI database has the proper safeguards

Useful Links: UK Government’s approach to AI

UK Government’s plans for implementing a pro-innovation approach to AI regulation.

Employmentbuddy FREE HR Resource: Generative AI policy

As a special promotion, we have prepared a free Employmentbuddy Generative AI template policy for businesses. This policy sets out how Generative AI should be used in the workplace, so that businesses can enjoy its benefits without creating risks or compromising high standards. We hope you find it useful!

Download Here: Generative AI policy

Webinar: How do I protect my business in the event of a personal data breach?

A personal data breach can have disastrous consequences for a company, seriously harming its finances and reputation by enabling criminals to use personal information to commit fraud and identity theft. Join our data protection team for a quick overview of how to protect your business.
 
Tuesday 30 April, 11:00 AM - 11:30 AM BST

Visit our events page to register: How do I protect my business in the event of a personal data breach?

If you have any questions or want to discuss data protection law and how it applies to you in more depth, please contact our data protection team, who would be happy to help.

AI Podcast Series

  1. AI, Discrimination and Automated Decision-making
  2. AI and Intellectual Property
  3. AI and Data Security


Lucy Densham Brown 00:05

Welcome back everyone. Thank you for joining us for the third and final podcast in our Artificial Intelligence podcast trilogy. 

My name is Lucy Densham Brown, I am a solicitor in our Employment and Data Protection teams at Clarkslegal. Joining me today is Rebecca Dowle, who recently qualified as a solicitor in our litigation and data protection teams. 

Rebecca Dowle 00:24

Hi everyone!

Lucy Densham Brown 00:27

In today’s episode, we will be talking about how to use AI to process data safely. We have touched on this a little in our previous sessions, where we discussed inherent bias and the need for regulation where IP is being used by AI. But today we will look more closely at the risks for businesses and the types of protections you can put into place. 

We will consider if companies will need to update privacy notices where they are using AI, and how to ensure that data inserted into an AI database has the proper safeguards. 

Rebecca would you mind kicking off the podcast today with a little bit of background on the regulation of AI? 

Rebecca Dowle 01:07

Of course. So as you know, the use of AI in businesses is becoming increasingly popular. This is because AI can offer businesses a number of benefits, including automating routine tasks, which saves time, maximises productivity, reduces the risk of human error, and drives team engagement by enabling employees to spend more time on meaningful work. 

So you must be thinking, well, why not, right? Why wouldn’t I want to use AI to obtain those benefits for my business?

Lucy Densham Brown 01:41

I mean that’s what I am thinking… but I’m assuming you are about to tell me different? 

Rebecca Dowle 01:47

Well… the benefits from AI are amazing and there is no denying that. The use of AI has the potential to drastically alter society. It has the ability to enhance the productivity of our workforces, improve our healthcare systems and increase access to education. 

However…

Lucy Densham Brown 02:05

Ah, I knew this was coming… 

Rebecca Dowle 02:09

Haha, yes, you got me. Currently, AI is being used in areas beyond the scope of many regulators’ remits. So effectively, it is being used but is not being properly controlled. Therefore, in order to fully benefit from AI, the public must have confidence that it is being applied ethically and legally, and that our data is not being unlawfully processed. 

Lucy Densham Brown 02:33

I agree. As we discussed last time, governments across the globe have recently begun drafting legislation and passing laws to regulate AI technology. For example, on 2 February 2024 the EU member states unanimously reached agreement on the text of the rules on AI, which will be known as the EU AI Act. The final draft will be adopted in April and will likely come into force in 2025 with a two-year transition period. However, here in the UK, the government is taking a slightly different approach. The UK government stated in its AI white paper, published in March last year, that it is using a cross-sectoral, principles-based framework to increase public trust in AI and develop capabilities. More recently, the UK government also said that it believes implementing new laws now is not the best approach, and that legislating for AI will only make sense once there is a better understanding of the risks that AI poses.

Rebecca Dowle 03:36

Yes, that’s what they have said. The UK government is taking a more flexible approach to AI regulation than its EU counterparts. Now, I won’t go into too much detail about the government’s approach, but for those of you who are interested, I have included a link in the podcast description to the government’s response to the consultation paper, which was published last month. I have also included the original paper, called “A pro-innovation approach to AI regulation”. So if you wish to read more about that, please follow the link to the papers.

Lucy Densham Brown 04:07

The difficulty, of course, with this flexible approach is that while the use of AI is largely unregulated, there is no framework for businesses and employers to work within to ensure that they are fulfilling their obligations regarding data security. 

Rebecca Dowle 04:21

So Lucy, would you mind now explaining how this relates to data protection and more specifically to data processing?

Lucy Densham Brown 04:28

Yes of course. The development and deployment of AI systems involve processing personal data. Now with any processing of personal data, there must be a lawful basis for the processing, and this is confirmed by article 6 of the UK GDPR. 

The ICO have issued guidance on this, which states you must break down and separate each distinct processing operation, and identify the purpose and an appropriate lawful basis for each one, in order to comply with the principle of lawfulness. It suggests businesses should consider carefully if the AI system is necessary for the task, which AI system is most appropriate and take a risk-based approach to the deployment of any system.

Now that we have the basics of how AI works under UK GDPR, let’s consider some of the critical issues facing AI and data security today. For businesses, data breaches are undoubtedly a significant concern. Whilst this is true outside of the context of AI, with the increasing volume and sensitivity of data being processed by AI systems, the risk of breaches has never been higher.

Rebecca Dowle 05:38

Absolutely. And it's not just external threats we need to worry about. Internal vulnerabilities, such as inadequate access controls or insufficient employee training, can also pose significant risks to data security. For those that are concerned about data breaches in particular, be sure to join me for my upcoming webinar, where Louise Keenan and I will be discussing data breaches and giving some helpful guidance for employers on minimising risks. 

Lucy Densham Brown 06:09

Speaking of internal risks, the unregulated use of AI in the workplace is another pressing issue. Many businesses are adopting AI technologies without fully understanding the implications for data security and privacy, and without providing sufficient training to employees utilising these tools.

Rebecca Dowle 06:28

Indeed. The rapid deployment of AI systems without proper oversight or governance can lead to compliance failures and increased exposure to security threats. It's essential for businesses to establish clear policies and procedures governing the use of AI in the workplace to mitigate these risks.

Lucy Densham Brown 06:47

And let's not forget about the concerns surrounding the use of generative AI. Inserting personal data into generative AI models can have significant privacy implications, as it raises questions about consent, data ownership, and potential misuse. Let's explore this a bit more with a scenario where we encounter generative AI in our professional environment, specifically within a legal context. So Rebecca imagine we're tasked with drafting a legal document, such as a non-disclosure agreement (NDA), for a client. How might we utilise generative AI in this situation?

Rebecca Dowle 07:24

Well, traditionally, drafting legal documents like NDAs can be a time-consuming process, involving a lot of manual work. Generative AI offers a potential solution by automating some aspects of the drafting process. We could input the necessary details into the generative AI system, such as the parties involved, the scope of confidentiality, and the duration of the agreement. The AI could then generate a draft NDA based on these inputs, saving us time and effort.
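The structured inputs Rebecca describes — parties, scope, duration — can be pictured as a simple template-filling step. A minimal Python sketch, standing in for a real generative AI drafting tool; all field names and clause wording here are purely illustrative:

```python
# Illustrative only: a toy template-filling step standing in for a
# generative AI drafting tool. Field names and wording are hypothetical.
NDA_TEMPLATE = (
    "NON-DISCLOSURE AGREEMENT between {disclosing_party} and "
    "{receiving_party}.\n"
    "Scope of confidentiality: {scope}.\n"
    "Duration: {duration}."
)

def draft_nda(disclosing_party: str, receiving_party: str,
              scope: str, duration: str) -> str:
    """Fill the template from the structured inputs the speakers mention."""
    return NDA_TEMPLATE.format(
        disclosing_party=disclosing_party,
        receiving_party=receiving_party,
        scope=scope,
        duration=duration,
    )

print(draft_nda("Acme Ltd", "Example LLP",
                "all technical and commercial information", "two years"))
```

A real generative system would of course produce free-form drafting rather than a fixed template, which is exactly why the data put into it matters.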

Lucy Densham Brown 07:58

However, we need to be cautious when it comes to using generative AI, especially when dealing with sensitive client data. While the convenience of automating document drafting is appealing, there are significant risks associated with putting personal client data into generative AI systems. Client data, particularly sensitive information, must be handled with confidentiality and care. Putting this data into generative AI systems raises questions about how it is being used, stored, and secured. Where the AI system is provided by a third party, it can be difficult to guarantee the security of these systems, and moreover, for UK GDPR purposes you will need to ascertain who the data controller is in these circumstances. 

There is also a risk of unintended consequences. Generative AI systems learn from the data they're trained on, and if personal client data is included in the training data, there's a possibility that the generated documents could inadvertently disclose sensitive information or reveal patterns that compromise client confidentiality. At the moment, unfortunately we do not yet understand enough about how the AI systems learn to be able to properly safeguard against this, and so these systems must be used with appropriate caution. 
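One concrete safeguard against the risk Lucy describes is to strip obvious identifiers from text before it reaches a third-party AI system. A minimal Python sketch, assuming simple pattern-based redaction — a real deployment would need far more than regexes (named-entity detection, contractual controls, access restrictions):

```python
import re

# Illustrative only: pattern-based redaction of obvious identifiers
# before text is sent to a third-party AI system. These patterns are
# deliberately simplistic and will miss many real-world formats.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b0\d{3}[\s-]?\d{3}[\s-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@client.co.uk or 0118 958 5321."))
```

Pseudonymising inputs in this way reduces, but does not eliminate, the risk of personal data ending up in a model's training data.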

Rebecca Dowle 09:17

Exactly. So, while generative AI holds promise for automating certain aspects of legal work, we must proceed with caution, especially when it comes to handling sensitive client data. It's essential to assess the risks carefully and implement robust safeguards to protect client confidentiality and privacy.

Lucy Densham Brown 09:38

So, how can businesses ensure the security of AI systems while complying with UK law and GDPR?

Rebecca Dowle 09:46

Well, it's a multifaceted approach, Lucy. Firstly, organisations must conduct data protection impact assessments (DPIAs) and security risk assessments to identify potential vulnerabilities in their AI systems. This includes assessing the security of the data they collect, store, and process. This might cover the problem areas we discussed above, but depending on how the AI is used in your industry, there might be other risks unique to your business. For example, a graphic design team which utilises generative AI for its design work may face risks of copyright infringement, which would not apply to businesses using AI only for administrative purposes. 

Lucy Densham Brown 10:35

Right. And once vulnerabilities are identified, organisations need to implement appropriate security measures, such as encryption, access controls, and regular security audits. These measures not only enhance data security but also demonstrate compliance with legal requirements.

Additionally, one of the best steps that businesses can take now to address these risks is to implement policies on appropriate use of these AI systems, and to provide training to employees on this. For many, AI systems can seem intimidating and alien. This results in a general reluctance to learn about them properly, and this misunderstanding will result in misuse of the systems. A prudent and risk-averse business should be looking now at how to train up its employees, so they can start using these systems safely and correctly before bad habits develop. 

Rebecca Dowle 11:32

That’s a really good point Lucy, luckily for listeners, we have prepared something special to help with this. Stay tuned and we will tell you all about it at the end of the podcast! Now back to data security, the other step that businesses need to take to improve data security is to update privacy notices. 

Lucy Densham Brown 11:51

That’s right Rebecca, organisations need to ensure transparency and accountability in how they use AI and handle data. This includes providing individuals with clear information about how their data is being used and their rights under GDPR. For most businesses, that means updating privacy notices to take account of the use of AI.

Rebecca Dowle 12:11

Absolutely. Privacy notices serve to inform individuals about how their personal data is collected, processed, and used. When updating privacy notices, businesses should clearly outline the use of AI technologies and how they affect individuals' data. This includes providing information about the types of AI systems used, the purposes for which they're employed, and any potential risks or implications for privacy. Additionally, businesses should explain how AI-driven decisions are made and the extent to which they impact individuals. 

It's an essential step in ensuring that individuals are informed about how their data is being used and empowering them to make informed decisions about their privacy. By keeping privacy notices up to date, businesses can maintain trust with their customers and stakeholders in an increasingly AI-driven world. Now, before we wrap up, do you have any final thoughts on AI and data security?

Lucy Densham Brown 13:10

Just that it's an ongoing journey, Rebecca. As technology evolves, so do the challenges and risks associated with AI and data security. Businesses must remain vigilant, continuously assess and update their security measures, and always prioritise the protection of individuals' privacy and rights.

Rebecca Dowle 13:30

Wise words, indeed. Well, that's almost all the time we have for today, however before we go, we promised earlier that we would have something special for listeners to help them get a head start on AI data security. As a special promotion we have prepared a free template policy for businesses for Generative AI in the workplace. This policy contains the bones you need to regulate use of generative AI. You will find a link to this on the podcast page. We hope you find it useful! And remember, if you would like a tailored policy for your specific needs, or any advice from our team, please do reach out and we would be happy to help. 

Lucy Densham Brown 14:11

Thank you for listening to this series, and keep an eye out for more exciting AI content in the future!