
7 data privacy considerations in AI adoption


How AI raises data privacy risks

This is the second article in our series on cybersecurity risks in AI adoption, based on the whitepaper developed by our colleagues in Grant Thornton US, which you can download in full.

Companies worldwide are adopting and implementing AI in solutions that are reshaping industries through improved efficiency, productivity and decision-making. However, the meteoric rise of AI can overshadow some valid concerns around security and privacy.

The use of personal data to train AI models has given privacy and security professionals a reasonable cause for concern. By incorporating personal data into the training process, developers risk creating models that inadvertently reveal sensitive data about individuals or groups. As AI models become more powerful and adaptable, they might also learn to extract sensitive information from users in the course of conversations. A failure to protect personal data sufficiently could lead to privacy breaches, phishing and other social engineering attacks.

Download Grant Thornton's whitepaper 'Control AI cybersecurity risks'

To mitigate these privacy risks, organisations must consider a range of factors and potential issues in AI technology, including:

1. Loss of sensitive information

One of the most pressing concerns is the potential exposure of sensitive information that end users input into AI systems. This could lead to a serious breach of the individual’s data privacy and could allow attackers to build a detailed profile of the individual for use in social engineering attacks, identity theft and other crimes.
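One practical mitigation is to screen user inputs before they leave the organisation’s control. The Python sketch below is a minimal illustration, assuming a simple regex-based approach; the pattern set, placeholder labels and the `redact` helper are all hypothetical, and a production system would use a dedicated PII-detection service covering far more categories.

```python
import re

# Illustrative patterns for common PII; a production deployment would use
# a dedicated detection service and cover far more categories.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before the prompt
    is sent to an external AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or +353 1 234 5678."))
# -> Contact [EMAIL REDACTED] or [PHONE REDACTED].
```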

2. Ability to explain the AI model

Many advanced AI models are so complex that even their developers see them as a “black box”. That makes it challenging for organisations to explain the models and their results to key stakeholders, such as regulators and shareholders. In heavily regulated industries such as financial services and healthcare, regulators often require clear explanations of a model’s outputs and decision-making processes. Inadequate understanding of a model can lead to an inability to identify, diagnose and resolve issues such as undetected biases and ethical implications. It also complicates governance, for example around ownership of, and accountability for, the solution and its decisions.

These risks can be minimised by adopting ethical AI development principles, introducing strict testing, promoting transparency, and improving user awareness and vigilance. However, mitigations must continue to evolve as AI use expands in scale and complexity.
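As one illustration of the transparency mitigations mentioned above, the sketch below uses the open-source SHAP library to attribute a single prediction to its input features. The model and dataset are stand-ins chosen for brevity, not a recommendation for any particular solution.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# A small model stands in for a more opaque production "black box".
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes an individual prediction to the input features,
# producing a per-decision explanation that can be shown to reviewers.
explainer = shap.TreeExplainer(model)
values = explainer.shap_values(X.iloc[[0]])  # one score per feature

ranked = np.argsort(np.abs(values[0]))[::-1]
print("Features driving this prediction:", list(X.columns[ranked[:3]]))
```

Per-decision attributions of this kind do not make a model fully transparent, but they give reviewers and regulators a concrete artefact to examine.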

3. Data sharing and third-party access

AI platforms can involve collaboration between multiple parties or rely on third-party tools and services, increasing the risk of unauthorised access to, and misuse of, data. Organisations must pay particular attention to sensitive and personal data that is transferred outside of the EU or to jurisdictions with different privacy regulations.

4. Data retention and deletion

Some AI solutions store data for extended periods so that they can continue referencing, analysing and comparing it as part of informing their machine learning, predictive and other capabilities. This long-term data storage increases the risk of unauthorised access or misuse. The context and complexity of AI solutions can also make it challenging to ensure that data is deleted when it is no longer required or when a data subject exercises their right to erasure (‘right to be forgotten’).
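A minimal sketch of how retention and erasure rules might be enforced at the data-store level is shown below, assuming a simple SQLite table of user inputs; the database name, schema and 90-day window are all hypothetical.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # hypothetical policy; set according to the legal basis

conn = sqlite3.connect("ai_inputs.db")  # illustrative store of user prompts
conn.execute("""
    CREATE TABLE IF NOT EXISTS user_inputs (
        id INTEGER PRIMARY KEY,
        subject_id TEXT,   -- pseudonymous data-subject reference
        payload TEXT,
        created_at TEXT    -- ISO 8601 UTC timestamp
    )""")

def purge_expired(conn: sqlite3.Connection) -> None:
    """Delete records older than the retention window."""
    cutoff = (datetime.now(timezone.utc)
              - timedelta(days=RETENTION_DAYS)).isoformat()
    conn.execute("DELETE FROM user_inputs WHERE created_at < ?", (cutoff,))
    conn.commit()

def erase_subject(conn: sqlite3.Connection, subject_id: str) -> None:
    """Honour a data subject's right-to-erasure request."""
    conn.execute("DELETE FROM user_inputs WHERE subject_id = ?", (subject_id,))
    conn.commit()

# Note: deleting stored rows does not undo any learning already derived
# from them; model retraining or unlearning procedures may also be needed.
```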

5. Impact assessment requirements

When AI is used to process personal data, organisations will be required under GDPR to complete Data Protection Impact Assessments (DPIAs), alongside Fundamental Rights Impact Assessments (FRIAs) and Human Rights Impact Assessments (HRIAs). Due to the complex nature of AI, this will prove very difficult for many organisations. Without a comprehensive understanding of their AI models, organisations will struggle to produce impact assessments that meet regulators’ standards.

6. Inference of sensitive information

Increasingly sophisticated and pervasive AI capabilities can connect and infer sensitive information about users from inputs that seem innocuous on their own. For instance, inferences could combine inputs to identify political beliefs, sexual orientation or health conditions, posing a layer of risk that is hard to identify without comprehensive analysis of potential data connections. Even when data is pseudonymised, AI could use advanced pattern recognition or combine datasets to re-identify individuals without permission.
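To make the re-identification risk concrete, the sketch below runs a simple k-anonymity check over a pseudonymised dataset: any combination of quasi-identifiers shared by fewer than k individuals leaves those records exposed to linkage attacks. The dataset, column names and threshold are illustrative.

```python
import pandas as pd

# Pseudonymised records: direct identifiers removed, but quasi-identifiers
# (age, postcode, occupation) remain and can be linked to other datasets.
df = pd.DataFrame({
    "pseudonym":  ["a91", "b22", "c13", "d44", "e55"],
    "age":        [34, 34, 51, 51, 29],
    "postcode":   ["D04", "D04", "T12", "T12", "D04"],
    "occupation": ["nurse", "nurse", "teacher", "teacher", "pilot"],
})

QUASI_IDENTIFIERS = ["age", "postcode", "occupation"]
K = 2  # each combination of quasi-identifiers should cover at least K people

group_sizes = df.groupby(QUASI_IDENTIFIERS)["pseudonym"].transform("size")
at_risk = df[group_sizes < K]
print(at_risk)  # the 29-year-old pilot in D04 is unique, hence re-identifiable
```

Here the pseudonym offers no protection: the unique combination of age, postcode and occupation is enough to single the individual out.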

7. Surveillance and profiling

AI technologies like facial recognition and social media monitoring can enable invasive surveillance and profiling of individuals, endangering rights to privacy, anonymity and autonomy.

Regulatory considerations for AI use

In addition to complying with current and upcoming cybersecurity and data privacy regulations such as the Network and Information Systems Directive 2 (NIS2) and the General Data Protection Regulation (GDPR), organisations will also need to consider AI-specific regulations.

As AI use and adoption grows, regulations will continue to evolve to help ensure ethical use, data protection and data privacy. In 2021[1], the European Union (EU) proposed the Artificial Intelligence Act (‘AI Act’), which aims to introduce obligations for AI technology that depend on the potential risks it poses to individuals and their right to privacy. The proposed regulation will require organisations to conduct risk assessments of their AI solutions and determine their risk categorisation. To do this, organisations will need a thorough understanding of the AI models that underpin their solutions.

Under the proposed AI Act, solutions where the potential risk to individuals and their privacy outweighs any perceived benefit are classified as ‘unacceptable risk’ and their use is strictly prohibited; social scoring systems are one example. High-risk AI solutions, including those designed for recruitment, education, medical assessment or creditworthiness evaluation, are subject to requirements covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness and cybersecurity.

The EU is not the only jurisdiction to propose new regulations to govern the use of AI. Other territories, such as the USA, have proposed regulations to mitigate AI bias, increase transparency and introduce AI auditing. Organisations that fail to comply with these regulations could face strict penalties, including monetary fines.


How Grant Thornton can help

Grant Thornton works closely with regulators and is at the forefront of helping organisations prepare for and comply with current and upcoming regulations. By working with Grant Thornton, organisations can reduce their exposure to AI privacy risks while staying on the right side of current and upcoming regulations. For more information on our service offering, get in touch with our cybersecurity and data protection leaders.

[1] As of 14 June 2023, the European Parliament has approved its negotiating position on the proposed Artificial Intelligence Act.