AI and Compliance with Data Protection Laws: A Guide for Organisations

14-08-2024


The use of Artificial Intelligence (AI) tools is becoming increasingly prevalent in both organisational and personal contexts. The range of AI tools and the types of information they process are expanding rapidly. 

This article explores key considerations that organisations should bear in mind when utilising AI tools to ensure compliance with data protection laws. 

UK Data Protection Laws 

The UK’s data protection framework consists of the following: 

  • The UK General Data Protection Regulation (UK GDPR) 
  • The Data Protection Act 2018 
  • The Privacy and Electronic Communications Regulations 2003 (PECR) 

It is important to note that the EU’s General Data Protection Regulation (EU 2016/679) has extraterritorial effect and may also apply to UK data controllers and processors that operate within the EU. 

AI Tools and Data 

In recent years we have witnessed the development of AI tools that rely heavily on personal data. For example, Amazon’s Alexa, widely used by UK residents, routinely collects substantial amounts of data, including: 

  • Payment information 
  • Live location 
  • Records of communication requests, which may include personal data 
  • Purchase history 
  • Shopping habits 

Similarly, Meltwater, an AI-powered media, social, and consumer intelligence tool, generates insights into customer behaviour by processing data from various sources, including live chats, social media accounts, and purchase histories. 

As AI tools evolve rapidly, data collection remains integral to their function. It is almost certain that the data collected by some AI tools will include personal data, special category data, and potentially data about children. 

Given this, it is crucial for organisations to ensure compliance with data protection laws. 

The AI Guidance 

The Information Commissioner’s Office (ICO) has produced guidance on AI and data protection (the “AI Guidance”) to help organisations interpret relevant data protection laws as they apply to AI. The AI Guidance also makes recommendations on good practices for organisational and technical measures to mitigate risks to individuals that AI might pose. 

The ICO defines AI broadly, recognising it as a standard industry term encompassing various technologies. A key area of AI is ‘machine learning,’ which involves using computational techniques to create statistical models from large datasets. These models are then used to make predictions or classifications about new data, and much of the current interest in AI revolves around machine learning. 
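
To make this concrete, here is a toy Python sketch of that pattern: a statistical model is derived from existing data and then used to predict a value for new data. The dataset is invented, and real machine learning operates at far greater scale.

```python
# Toy illustration of machine learning: fit a statistical model to known
# data, then use it to predict a value for unseen data. Figures are invented.
from statistics import linear_regression

hours_studied = [1, 2, 3, 4, 5]
exam_scores = [52, 58, 65, 70, 78]

# "Training": derive a simple statistical model (a line of best fit).
slope, intercept = linear_regression(hours_studied, exam_scores)

# "Prediction": apply the model to a new, unseen input.
predicted_score = slope * 6 + intercept
print(f"Predicted score after 6 hours of study: {predicted_score:.0f}")
```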

The AI Guidance therefore primarily addresses the data protection risks and challenges posed by machine learning-based AI. However, it also acknowledges that other forms of AI may present additional data protection challenges. 

Who Does the AI Guidance Apply To? 

The AI Guidance is intended for two broad audiences: 

  1. Those with a compliance focus, including: 
  • Data protection officers 
  • General counsel 
  • Risk managers 
  • Senior management 
  • The ICO’s own auditors, who will use the AI Guidance to inform their audit functions under data protection legislation. 
  2. Technology specialists, including: 
  • Machine learning developers and data scientists 
  • Software developers/engineers 
  • Cybersecurity and IT risk managers 

This article concentrates on aspects of the AI Guidance relevant to those with a compliance focus. However, organisations are encouraged to consider the AI Guidance in its entirety. 

Key Aspects of the AI Guidance 

The AI Guidance is extensive and detailed. Below, we highlight key points for organisations using AI. 

What are the accountability and governance implications of AI? 

Organisations must: 

  • Align their internal structures, roles and responsibilities, training requirements, policies, and incentives with their AI governance and risk management strategies. 
  • Continuously demonstrate how they have addressed data protection by design and default obligations. 
  • Ensure that their governance and risk management capabilities are proportionate to their use of AI. 
  • Develop a general accountability framework as a baseline for demonstrating their accountability under data protection laws, on which they can build their approach to AI accountability. 

How to ensure transparency in AI? 

  • Organisations are required to be transparent about how they process personal data within AI systems to comply with the transparency principle. 
  • Before commencing any processing of personal data, an organisation must consider its transparency obligations towards the individuals whose personal data it intends to process. 
  • The organisation must include in its privacy information details of the purposes for processing personal data, the retention periods for that data, and the entities with whom it will be shared (a simple representation of these fields is sketched after this list). 
  • If data is collected directly from individuals, the privacy information must be provided at the time of collection, before the data is used to train a model or to apply an existing model to those individuals. If the data is obtained from other sources, the privacy information must be provided within a reasonable period and, at the latest, within one month. 
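
Purely as an illustration, the sketch below shows how a technical team might capture that privacy information in code. The PrivacyInformation class and its field names are hypothetical, not an ICO template.

```python
# Hypothetical structure for the privacy information described above.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class PrivacyInformation:
    purposes: list[str]          # why the personal data is processed
    retention_period: timedelta  # how long the data will be kept
    shared_with: list[str]       # entities the data will be shared with

notice = PrivacyInformation(
    purposes=["training a customer-service chatbot"],
    retention_period=timedelta(days=365),
    shared_with=["cloud hosting provider"],
)
```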

How to ensure lawfulness in AI? 

  • To comply with the lawfulness principle, an organisation must break down each distinct processing operation, identifying the purpose and an appropriate lawful basis for each; a simple register of this kind is sketched after this list. 
  • Whenever personal data is processed, whether for training a new AI system or making predictions using an existing one, the organisation must have an appropriate basis for doing so. 
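
A minimal sketch of such a breakdown, assuming a simple in-code register; the operations, purposes, and lawful bases shown are invented examples.

```python
# Illustrative register mapping each distinct processing operation to its
# purpose and lawful basis. All entries are invented examples.
from dataclasses import dataclass

@dataclass
class ProcessingOperation:
    name: str          # the distinct processing operation
    purpose: str       # why the personal data is processed
    lawful_basis: str  # e.g. "consent", "contract", "legitimate interests"

register = [
    ProcessingOperation(
        name="training the recommendation model",
        purpose="improving product suggestions",
        lawful_basis="legitimate interests",
    ),
    ProcessingOperation(
        name="generating predictions for live customers",
        purpose="personalising the service",
        lawful_basis="contract",
    ),
]
```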

What do organisations need to know about fairness and statistical accuracy? 

  • Statistical accuracy refers to the proportion of correct and incorrect answers produced by an AI system. 
  • AI systems must be sufficiently statistically accurate to ensure that the processing of personal data complies with the fairness principle. 
  • To comply with the fairness principle, organisations must ensure that personal data is handled in ways that individuals would reasonably expect and not in ways that have unjustified adverse effects on them. 
  • Organisations should avoid processing personal data in ways that are unduly detrimental, unexpected, or misleading to the individuals concerned. Consistently improving the statistical accuracy of an AI system’s output is one way to ensure compliance with the fairness principle; a minimal way of measuring statistical accuracy is sketched after this list. 
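
For illustration only, here is a minimal Python sketch of statistical accuracy as the proportion of correct answers. The MINIMUM_ACCURACY threshold is a hypothetical internal target, not a figure from the AI Guidance.

```python
# Minimal sketch: statistical accuracy as the proportion of an AI system's
# answers that match the actual outcomes. Threshold is illustrative only.
from typing import Sequence

def statistical_accuracy(predictions: Sequence[str], actuals: Sequence[str]) -> float:
    """Proportion of predictions that matched the actual outcome."""
    if len(predictions) != len(actuals):
        raise ValueError("predictions and actuals must be the same length")
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(predictions)

MINIMUM_ACCURACY = 0.95  # hypothetical internal target, not a legal standard

accuracy = statistical_accuracy(
    predictions=["approve", "reject", "approve"],
    actuals=["approve", "approve", "approve"],
)
if accuracy < MINIMUM_ACCURACY:
    print(f"Accuracy {accuracy:.0%} is below target - review before deployment")
```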

How should organisations assess security and data minimisation in AI? 

  • When processing personal data, organisations must ensure appropriate levels of security against unauthorised or unlawful processing, accidental loss, destruction, or damage. 
  • The security measures an organisation adopts should be proportionate to the level and type of risks arising from specific processing activities. 
  • To secure training data, technical teams should record and document all movements and storage of personal data. Additionally, any intermediate files containing personal data that are no longer required should be deleted; both practices are sketched after this list. 
  • Regardless of whether AI systems are developed in-house, externally, or through a combination of both, organisations must assess them for security risks. 
  • Staff must be equipped with the necessary skills and knowledge to address any security risks. 
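
A minimal sketch, assuming personal data is held in files on disk, of the two practices mentioned above: recording every movement of personal data and deleting intermediate files once they are no longer required. The file layout and naming convention are hypothetical.

```python
# Minimal sketch: audit-log movements of personal data and delete
# intermediate files that are no longer required. Paths are hypothetical.
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("data-movement-audit")

def move_training_data(source: Path, destination: Path) -> None:
    """Record every movement of personal data in an audit log."""
    log.info("Moving personal data: %s -> %s", source, destination)
    destination.parent.mkdir(parents=True, exist_ok=True)
    source.replace(destination)

def delete_intermediate_files(workdir: Path) -> None:
    """Delete intermediate files containing personal data once no longer needed."""
    for path in workdir.glob("*.intermediate.csv"):  # hypothetical naming scheme
        log.info("Deleting intermediate personal data file: %s", path)
        path.unlink()
```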

How do organisations ensure individual rights in their AI systems? 

  • Data protection laws afford individuals several rights concerning their personal data. In the context of AI, these rights apply whenever personal data is used at any stage in the development and deployment of an AI system. 
  • The right to rectification, for instance, may apply to personal data used for training an AI system. The greater the importance of accuracy in this data, the more effort organisations should invest in verifying its accuracy and rectifying it where necessary. 
  • Organisations may also receive requests for the erasure of personal data within training data. Although the right to erasure is not absolute, such requests must be considered unless the data is being processed under a legal obligation or in the performance of a public task; a minimal illustration of handling such requests follows this list. 
  • To ensure fairness and transparency, organisations must inform individuals if their personal data is being used to train an AI system. This information should be provided at the point of data collection. 
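
Purely as an illustration, a minimal sketch of honouring rectification and erasure requests against training data, assuming records are dictionaries keyed by a hypothetical subject_id field.

```python
# Minimal sketch: applying rectification and erasure requests to training
# data. The record schema and subject_id field are hypothetical.
def erase_subject(records: list[dict], subject_id: str) -> list[dict]:
    """Remove all records for one data subject after an erasure request."""
    return [r for r in records if r.get("subject_id") != subject_id]

def rectify_subject(records: list[dict], subject_id: str, corrections: dict) -> None:
    """Apply corrections to a subject's records after a rectification request."""
    for record in records:
        if record.get("subject_id") == subject_id:
            record.update(corrections)

training_data = [
    {"subject_id": "u123", "postcode": "SW1A 1AA"},
    {"subject_id": "u456", "postcode": "EC1A 1BB"},
]
rectify_subject(training_data, "u123", {"postcode": "SW1A 2AA"})
training_data = erase_subject(training_data, "u456")
```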

Contact Us – AI and Data Protection Solicitors 

Sali Zaher, Commercial Litigation Associate, can assist with disputes related to AI and data protection. Should you have any queries on this topic, please contact Sali Zaher via email at S.Zaher@rfblegal.co.uk or by phone on 020 7467 5766.

Author


Sali Zaher

Associate Solicitor
