
- AI is moving from pilots to practice across banking, with rising confidence but fragile trust.
- Regulators including the BIS, FSB and ECB are warning of systemic risks and tightening oversight.
- Adoption is accelerating, driven by advances in technology and demand for efficiency and personalised services.
- Key risks include model opacity, cyber threats, algorithmic bias, and reliance on third-party providers.
- Banks must balance innovation with governance, explainability, and robust risk management frameworks.
- Success depends on aligning finance, risk, and operations to deliver accountable, sustainable value.
- Institutions that embed transparency, resilience, and oversight will be best positioned to compete.
AI is rapidly transforming banking
From credit and fraud detection to onboarding and growth, AI is moving from pilots to practice across banking. Confidence is rising, but trust remains fragile.
In Ireland, the share of business executives calling AI “over-rated” has halved in six months. Over half of medium-sized businesses now require generative AI policies, yet six in ten leaders still avoid uploading sensitive data to AI tools.
These concerns are especially pertinent for the financial services sector, and regulators have taken notice. The Bank for International Settlements (BIS) recently warned of systemic risks and urged central banks to build internal AI capacity. The Financial Stability Board (FSB) has identified vulnerabilities, including model opacity and third-party dependencies. Meanwhile, the European Central Bank (ECB) has tightened supervisory expectations for machine learning explainability and oversight.
As customers and investors demand faster adoption, executives must balance supervisory demands with value creation. The institutions that succeed treat AI as a strategic capability, anchored by robust risk management and governance.
Why adoption is accelerating
Businesses are becoming more adept at identifying value from AI, according to Grant Thornton’s International Business Report. Only 22% now struggle to identify productive AI use cases, down from 48% six months ago.
The FSB frames the growth of AI as the result of a dynamic interplay between supply-side innovation and demand-side pressures.
- On the supply side: advances in computing power, the explosion of available data, and breakthroughs in generative AI research are lowering barriers to adoption. Open-source models and AI-as-a-service offerings are enabling even smaller firms to experiment at scale.
- On the demand side: institutions seek efficiency, sharper risk management, and ways to meet customers’ needs for personalised digital services. Competitive and regulatory pressures add momentum.

But this acceleration brings systemic risks.
U.S. research points in the same direction. Grant Thornton’s recent Digital Transformation Survey reveals that most banks are increasing their AI investment by at least 11% this year. Our U.S. colleagues found that scaling these tools is rarely a question of technical capability. The challenge lies in governance: aligning finance, risk and marketing on shared profitability metrics, integrating fragmented data across silos, and ensuring lead-scoring models are explainable and defensible.
Without that discipline, pilots often stall because no one can prove the models are generating sustainable, risk-adjusted value.
Systemic risks regulators are watching
Banks are also balancing evolving regulatory expectations. Recent publications by the FSB and BIS offer two complementary perspectives: the FSB maps the systemic drivers and vulnerabilities, while the BIS drills down into sector-specific risks and the supervisory response required.
The FSB warns of heavy dependencies on dominant cloud and AI providers, alongside market correlations that could lead to “herding” behaviour. It also warns of cyber threats (including model poisoning), opaque model governance, and the growing risk of AI-enabled fraud. It has recommended enhanced data monitoring, updated regulations specific to AI, and strengthened supervisory tools.
The BIS takes a different lens, examining AI’s impact across core financial sectors:
- Payments: AI supports back-end automation, fraud detection, and AML/KYC compliance, but also raises concerns about liquidity shocks and sophisticated cyber-attacks.
- Lending: Machine learning improves credit risk analysis and expands financial inclusion, yet creates risks of algorithmic bias and privacy breaches.
- Insurance: Smarter pricing and claims automation unlock efficiency, while systemic risks emerge from algorithmic coordination and competitive “arms races.”
- Asset management: AI-driven portfolio allocation and trading broaden investor tools, but also heighten risks of herding, volatility, and network interconnectedness.
It also highlights broader financial stability challenges, including systemic herding, procyclicality, single points of failure, and the risk of decisions made based on narrow or non-representative data.
In July 2025, the European Central Bank also published a guide that establishes explicit expectations for banks utilising machine learning in their internal models. Our colleagues in Cyprus have published a more detailed update on that topic.
Different lenses, same message
For institutions, prioritising governance, transparency, and resilience is essential. Central banks face a dual role: they must monitor the macroeconomic and financial impacts of AI while also adopting the technology in their own operations, from forecasting to supervisory analytics.
That dual role brings new demands:
- Data governance and infrastructure must be strengthened through secure IT investments and robust data lakes.
- Talent pipelines of data scientists and AI-literate economists must be built in a highly competitive market.
- Model transparency and risk management require careful trade-offs between outsourced models (cost-efficient but opaque) and in-house development (transparent but resource-intensive).
- Cross-border collaboration is essential, with the BIS encouraging a “community of practice” to share standards and tools.
A framework for responsible AI
Understanding and strengthening AI practices
Regulators’ expectations will quickly set new requirements for all stakeholders. Commercial banks must therefore prepare for stricter demands for explainability, resilience, and governance across all business lines. These are no longer just supervisory issues. They are operational imperatives for every institution leveraging AI.
The first step to responsible use is to develop a clear understanding of the current state of AI across the institution. That requires an end-to-end evaluation of use cases, infrastructure, and risk management practices, with the aim of creating a targeted roadmap for improvement. This review should include policies, capabilities, and the identification of new opportunities where AI could deliver business value.
Once the baseline is established, attention should turn to strengthening the framework itself. That means embedding AI and machine learning into business and risk processes in a way that is risk-aware by design. Clear policies, standards, and lines of accountability are essential. Robust model risk management and continuous monitoring should be treated as foundational, not optional.
Validation, assurance, and ongoing improvement
A third priority is validation and assurance. AI processes and models should be explainable, defensible, and aligned with regulatory expectations. This requires structured testing, bias mitigation, and monitoring that can withstand supervisory scrutiny.
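As a purely illustrative sketch (not drawn from any of the regulatory guidance cited above), one of the simplest bias checks in that testing toolkit compares a model’s approval rates across two groups. The function below computes a disparate impact ratio for a hypothetical lead-scoring or credit model; the data, group labels, and the 0.8 review threshold (a common rule-of-thumb, not a regulatory requirement) are all assumptions for the example.

```python
def disparate_impact_ratio(approved, group):
    """Ratio of approval rates between two groups.

    approved: list of 0/1 model decisions (1 = approved).
    group:    list of 0/1 group labels (e.g. a protected attribute).
    Returns min(rate) / max(rate); values well below 1.0 suggest
    one group is approved far less often and warrants review.
    """
    rates = {}
    for g in (0, 1):
        decisions = [a for a, gg in zip(approved, group) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions from an illustrative scoring model
approved = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
group    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

ratio = disparate_impact_ratio(approved, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative review threshold, not a legal standard
    print("flagged for bias review")
```

A check like this is only a starting point; in practice it would sit inside a broader monitoring suite, run continuously, and be documented so the results can withstand supervisory scrutiny.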
Finally, a forward-looking framework should support continuous automation and improvement. By treating AI as a living system, one that is subject to ongoing enhancement, assurance, and governance, banks can maximise efficiency while maintaining resilience.
The destination is a robust, risk-based AI environment: one that strengthens trust with regulators, protects against systemic vulnerabilities, and positions institutions to compete in a financial sector increasingly shaped by intelligent automation.
What must leaders do now?
Many banks see AI-driven lead generation as a powerful growth tool. Yet, as recent research by our Grant Thornton U.S. colleagues shows, success depends less on algorithms than on governance. It depends on aligning finance, risk and marketing metrics, integrating data, and embedding oversight.
The BIS, FSB, and ECB are aligned: the opportunities are significant, but so are the vulnerabilities. The challenge for leaders is not to choose between innovation and supervision, but to advance both at once within a robust risk management framework.
This means investing in explainability, resilience, and oversight alongside advancing use cases that drive growth, efficiency, and inclusion. The institutions that thrive will be those that treat AI as a strategic capability underpinned by rigorous governance.
In banking, the winners won’t be those who adopt AI fastest. They will be those who prove it can be both profitable and accountable.