
This is the third article in our series on cybersecurity risks in AI adoption, based on the whitepaper developed by our colleagues in Grant Thornton US, which you can download in full.
The use of artificial intelligence continues to spread at a staggering speed. Companies worldwide have adopted and implemented AI in solutions that are reshaping industries through improved efficiency, productivity and decision-making. However, many organisations have integrated AI into their business processes more quickly than they have updated their security strategies and protocols. Your risk, technology and cybersecurity leaders must find, understand and mitigate these exposures.
Download Grant Thornton's whitepaper 'Control AI cybersecurity risks'
To mitigate the cybersecurity risks in new AI solutions, organisations should review and update their existing cybersecurity programme to safeguard data and systems from inadvertent mistakes and malicious attempts.
At a minimum, organisations should consider the following when building security and privacy practices in the age of AI:
Effective data governance, which ensures that data is properly classified, protected and managed throughout its life cycle, is critical to the effective use of AI technologies. Implement secure data management and governance practices to help prevent model-poisoning attacks, protect data security, maintain data hygiene and ensure accurate outputs.
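As a minimal illustration of classification feeding into governance, a pipeline might tag each record with a sensitivity label and exclude anything too sensitive from model training. The labels and detection rules below are hypothetical assumptions for demonstration, not recommendations from the whitepaper:

```python
# Sketch: tag records with a sensitivity label before AI training.
# The labels and pattern checks are illustrative assumptions only.
import re

CLASSIFICATIONS = ("public", "internal", "confidential")

def classify_record(record: dict) -> str:
    """Assign the most restrictive label triggered by the record's fields."""
    text = " ".join(str(v) for v in record.values())
    # Crude pattern checks standing in for a real data-discovery tool.
    if re.search(r"\b\d{16}\b", text) or "@" in text:  # card number or email
        return "confidential"
    if record.get("source") == "internal_system":
        return "internal"
    return "public"

def filter_for_training(records: list[dict], max_label: str = "internal") -> list[dict]:
    """Keep only records at or below the allowed sensitivity for training."""
    allowed = CLASSIFICATIONS[: CLASSIFICATIONS.index(max_label) + 1]
    return [r for r in records if classify_record(r) in allowed]

records = [
    {"source": "web", "note": "press release"},
    {"source": "internal_system", "note": "sales summary"},
    {"source": "crm", "note": "contact alice@example.com"},
]
print([r["note"] for r in filter_for_training(records)])
```

In this sketch the customer-contact record is labelled confidential and excluded from the training set, while the public and internal records pass through.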
Conduct threat-modelling exercises to help identify potential security threats to AI systems and assess their impact. Common threats to models include data breaches, unauthorised access to systems and data, adversarial attacks and AI model bias. Modelling threats and their impacts gives you a structured approach, with proactive measures to mitigate risks.
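One common way to turn a threat-modelling exercise into a prioritised worklist is a simple impact-times-likelihood risk matrix. The threat entries and scores below are illustrative assumptions, not an assessment of any real system:

```python
# Sketch: record threats from a modelling exercise and rank them by risk.
# Entries, scales and scores are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    impact: int      # 1 (low) .. 5 (severe)
    likelihood: int  # 1 (rare) .. 5 (frequent)

    @property
    def risk_score(self) -> int:
        # Simple impact x likelihood heuristic for prioritisation.
        return self.impact * self.likelihood

threats = [
    Threat("Training-data breach", impact=5, likelihood=3),
    Threat("Adversarial input attack", impact=4, likelihood=4),
    Threat("Model bias in outputs", impact=3, likelihood=4),
    Threat("Unauthorised model access", impact=5, likelihood=2),
]

# Highest-risk threats first, so mitigation effort goes where it matters most.
for t in sorted(threats, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.risk_score:2d}  {t.name}")
```

The ranking, not the absolute numbers, is what drives the mitigation roadmap.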
To control access to your AI infrastructure, including your data and models, establish appropriate identity and access management policies with technical controls such as authentication and authorisation mechanisms.
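The authorisation side of such a policy can be sketched as a role-to-permission mapping checked after authentication. The roles, permissions and resource names here are hypothetical, not a recommended policy:

```python
# Sketch: role-based authorisation for AI resources.
# Roles and permission strings are illustrative assumptions.
ROLE_PERMISSIONS = {
    "data_scientist": {"model:read", "dataset:read"},
    "ml_engineer": {"model:read", "model:write", "dataset:read"},
    "auditor": {"model:read"},
}

def is_authorised(role: str, action: str) -> bool:
    """Authorisation check applied after the user has been authenticated."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorised("ml_engineer", "model:write"))  # engineers may update models
print(is_authorised("auditor", "dataset:read"))     # auditors may not read data
```

Unknown roles fall through to an empty permission set, so access defaults to denied.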
Reassess and update policies and technical controls periodically, to align with the evolving AI landscape and emerging threat types, ensuring that your security posture remains robust and adaptable.
Encryption is a technique that can help protect the confidentiality and integrity of AI training data, source code and models. You might need to encrypt input data or training data, in transit and at rest, depending on the source. Encryption and version control can also help mitigate the risk of unauthorised changes to AI source code. Source code encryption is especially important when AI solutions can make decisions with potentially significant impact. Steganographic techniques can also help protect and track AI models or training data.
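The encryption mechanisms themselves depend on your tooling, but the related goal of detecting unauthorised changes to source code or model artefacts can be sketched with a standard-library integrity check. The artefact contents below are hypothetical:

```python
# Sketch: detect unauthorised changes to a model artefact via a SHA-256 digest.
# A real pipeline would store digests in version control or a signed manifest
# alongside the encrypted artefact. Contents here are illustrative.
import hashlib

def digest(artefact: bytes) -> str:
    return hashlib.sha256(artefact).hexdigest()

released = b"model-weights-v1"   # hypothetical released artefact
baseline = digest(released)      # digest recorded at release time

print(digest(b"model-weights-v1") == baseline)  # untouched copy verifies
print(digest(b"tampered-weights") == baseline)  # unauthorised change detected
```

Comparing the stored digest against a freshly computed one makes any tampering with the artefact immediately visible.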
End points (like laptops, workstations and mobile devices) act as primary gateways for accessing and interacting with AI systems. Historically, they have been a principal attack vector for malicious actors seeking to exploit vulnerabilities. With AI-augmented attacks on the horizon, end-point devices warrant special consideration as part of the potential attack surface.
User and Entity Behaviour Analytics (UEBA)-enabled end-point security solutions can help detect early signs of AI misuse and abuse by malicious actors. UEBA is known for its capability to detect suspicious activity by using an observed baseline of behaviour, rather than a set of predefined patterns or rules.
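A toy version of the UEBA idea: flag activity that deviates sharply from an observed baseline rather than matching a predefined rule. The activity metric and numbers below are illustrative assumptions:

```python
# Sketch: baseline-deviation detection in the spirit of UEBA.
# The metric (daily AI-model queries) and figures are hypothetical.
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float, threshold: float = 3.0) -> bool:
    """Flag the observation if it sits more than `threshold` standard
    deviations from the historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observed - mu) > threshold * sigma

# Daily count of AI-model queries from one workstation over two weeks.
baseline = [41, 38, 45, 40, 43, 39, 44, 42, 37, 46, 40, 41, 43, 39]
print(is_anomalous(baseline, 44))    # a normal day
print(is_anomalous(baseline, 300))   # a sudden spike worth investigating
```

No rule ever named "300 queries" as suspicious; the spike is flagged purely because it breaks the observed pattern, which is the property that makes UEBA useful against novel AI misuse.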
AI systems can be vulnerable at many levels, like the infrastructure running the AI, the components used to build the AI or the coded logic of the AI itself. These vulnerabilities can pose significant risks to the security, privacy and integrity of the AI systems, and you need to address them through appropriate measures.
Ensure that patches on infrastructure are working as intended, that access controls are operating effectively and that there is no exploitable logic within the AI itself.
To improve resilience against threats and safeguard sensitive data, you need to foster a culture of security awareness. As with the advent of any new technology, you must ensure that all users understand the appropriate uses and the risks posed by AI technologies, and that security training materials are kept up to date with the rapidly evolving threat landscape. Read Grant Thornton's whitepaper for more information on what board members and executives, users, system engineers and developers need to know about the risks posed by AI.
Your security team needs to select and design the right mitigation strategies to define a clear roadmap, with prioritised milestones and timelines for execution. The whitepaper discusses in detail some important security questions to help your team take the next steps in updating their cybersecurity strategies to mitigate AI risks.
In today's increasingly complex cybersecurity landscape, organisations must prioritise a proactive approach to risk mitigation. By fostering a culture of security awareness and regularly updating security training materials, organisations can empower their employees to identify and prevent potential threats.
By partnering with Grant Thornton, organisations can gain access to a team of experienced cybersecurity professionals who are dedicated to helping organisations protect their valuable assets and navigate the rapidly evolving cybersecurity landscape with confidence, while staying compliant with current and upcoming regulations. For more information on our service offerings, get in touch with our cybersecurity leaders and data protection leaders.