Ireland Update: AI Insights for Irish Regulated Entities
There has been rapid adoption of artificial intelligence (“AI”) technologies in Ireland. According to the Irish Central Statistics Office, the number of Irish businesses using AI increased from 8% in 2023 to 15% in 2024.1 The Irish financial sector has adopted AI to enhance efficiency, increase innovation and improve customer satisfaction.
However, this increased use also brings significant regulatory risk, driven by the adoption of the European Union's ("EU") Artificial Intelligence Act ("AI Act") and increased scrutiny from industry regulators.
ESMA Report
In February 2025, the European Securities and Markets Authority ("ESMA") published its Trends, Risks and Vulnerabilities risk report on AI in EU investment funds ("Report"), which evaluates the impact of AI and recent advancements such as large language models and generative AI.2
Use of AI by Investment Funds
In the Report, ESMA observes that there has been an increase in the development of general-purpose and finance-tailored natural language models and generative AI-based tools. While there has been an increase in asset managers using AI for investment decisions, the focus is still mainly on enhancing existing capabilities and supporting, rather than making, final investment decisions.
A study conducted by ESMA found that most investment funds do not explicitly promote their use of AI, and that there is, at present, limited data to indicate that the use of AI confers a significant competitive advantage.
It did, however, find that Irish and EU funds have significantly increased their portfolio exposure to AI-related companies. Since 2021, the value of shares held in AI companies has doubled.
AI Risks
The Report also highlights two concerns arising from reliance on third-party AI providers. Firstly, growing dependence on outsourced expertise creates a risk of disruption to service delivery. Secondly, the concentration of service delivery among the same service providers creates exposure to systemic and operational vulnerabilities.
The Financial Stability Board sees service provider concentration in the large language model and generative AI market as a growing concern from an operational vulnerability perspective.3
ESMA Public Statement
ESMA’s public statement on the use of AI in the provision of retail investment services (“Statement”)4 provides the following guidance for investment firms using AI in light of their obligations under MiFID II:
- Client Best Interests and Transparency: Investment firms must prioritise clients’ best interests and clearly disclose how AI is used in decision-making and client interactions.
- Governance: Management should oversee AI with strong governance and risk management, including thorough testing, monitoring, documentation, due diligence on AI providers and staff training on risks and regulations.
- Conduct of Business: Investment firms should ensure robust quality assurance and regular stress testing of AI tools, while strictly complying with data protection laws.
- Record Keeping: Investment firms must keep detailed records on AI use, related client complaints, decision processes, data sources, algorithms and any changes made.
ESMA notes that the Statement also covers staff use of third-party AI, whether or not senior management is aware of or has approved such use, and reminds investment firms to implement appropriate controls governing employees' use of AI systems.
European Central Bank (“ECB”)5
The ECB has highlighted the potential for AI to boost productivity and improve risk assessment and planning in banking but has cautioned that there are significant risks relating to privacy, misinformation, cyberattacks and groupthink. It stresses the importance of human oversight, transparency, record-keeping and maintaining critical thinking and diversity of thought.
Central Bank Focus
In February 2025, the Central Bank of Ireland ("Central Bank") published its Regulatory and Supervisory Outlook ("Outlook")6 and, as it did in 2024,7 dedicated an entire spotlight section to AI.
The Central Bank notes that in the financial sector, AI adoption is generally confined to the following uses: anti-money laundering and fraud prevention, cybersecurity, customer service delivery, market trading, insurance underwriting and reserving, credit scoring and computer code development and testing.
In the Outlook, the Central Bank sets out its approach to AI as a regulator and the risks associated with the use of AI.
Central Bank’s Approach
One of the Central Bank's key objectives is to stay up to date on the development and impact of AI, and it will continue to engage with regulated entities through its regular supervisory engagement and its Innovation Hub. This indicates that the Central Bank is still seeking to gather information and learn more about the use of AI in the financial sector.
The Irish government has officially appointed the Central Bank as a market surveillance authority under the AI Act pursuant to the European Union (Artificial Intelligence) (Designation) Regulations 2025 (S.I. No. 366 of 2025).
As a market surveillance authority, the Central Bank will be responsible for implementing, supervising and enforcing the AI Act. In the Outlook, the Central Bank emphasises that Irish and EU supervision of the AI Act will be a multilateral and interdependent system, and that close cooperation and collaboration, in particular with the EU's AI Office, will be required to ensure proper implementation of the AI Act.
A harmonised approach will be needed to ensure a level playing field for regulated entities.
AI Risks
In the Outlook, the Central Bank identifies the following types of risks arising from the use of AI throughout an AI system’s lifecycle:
- Input Risks: This includes risks related to the origin and quality of the data, bias within the data and data protection.
- Algorithm Selection and Implementation Risks: This includes risks such as the inappropriate use of black-box AI in high-stakes settings and incorrect parameter selection.
- Output Risks: This includes risks that decisions made or informed by AI cause harm, such as bias resulting in financial exclusion.
- Overarching Risks: This relates to cyber resilience, operational resilience and governance.
Appropriateness, Transparency and Accountability
The Central Bank notes in its Outlook that, while AI may offer potential solutions to certain challenges, its use is not always appropriate, particularly in light of the risk classifications set out under the AI Act. Regulated entities should always analyse (i) whether a proposed use of AI is permitted under the AI Act and (ii) if so, what risk category it falls under, which will in turn determine the obligations to which the regulated entity is subject.
In line with the principles under the AI Act and the General Data Protection Regulation ("GDPR"), the Central Bank emphasises that regulated entities must be transparent about their use of AI, in particular where AI is used in decision-making processes, and must ensure clarity of accountability and responsibility for each use of AI.
Guidance from Other Regulators
European Insurance and Occupational Pensions Authority ("EIOPA")
In August 2025, EIOPA published its Opinion on AI Governance and Risk Management ("Opinion").8 The Opinion is addressed to the relevant EU member state competent authorities (such as the Irish Pensions Authority and the Central Bank) and covers the activities of both insurance undertakings and intermediaries.
The Opinion aligns with much of the guidance published by other regulators and focuses on the key themes of risk management, fairness and ethics, data governance, transparency and explainability, human oversight, accuracy and robustness, and cybersecurity.
EIOPA also notes that, under other legislation applicable to the insurance industry, insurance undertakings and intermediaries are already obliged to implement the key themes discussed in the Opinion (in particular in relation to fairness and ethics, transparency and cybersecurity).
EIOPA is currently engaging with the AI Office on the use of AI in the insurance industry. In two years' time, EIOPA will review the supervisory practices of competent authorities and will develop more detailed analysis of specific AI systems or of issues arising from their use.
Next Steps
By now, regulated entities within scope of the AI Act should:
- have a process in place for assessing each use of AI to determine its risk category under the AI Act;
- be rolling out AI literacy training for their personnel;
- be conducting due diligence on current and new providers of AI systems to ensure those providers comply with the requirements of the AI Act; and
- have appropriate AI policies and governance procedures in place.
We expect the EU’s AI Office, the Central Bank, ESMA and other regulators to issue further guidance as the phased implementation of the AI Act continues.
Further Information
For more on our AI updates, please see our knowledge page9 and our AI advisory page.10
If you would like to discuss the topics considered here or require any further information, please liaise with your usual Maples Group contact or any of the persons listed below.
1 CSO: Statistical Release on AI
2 ESMA: Trends, Risks and Vulnerabilities Risk Report
3 Financial Stability Board: The Financial Stability Implications of Artificial Intelligence
4 ESMA PR: ESMA provides guidance to firms using artificial intelligence in investment services
5 ECB: Artificial Intelligence – A Central Bank’s View
6 CBI: Regulatory & Supervisory Outlook
7 Maples Group Legal Update: AI – Risk and Regulatory Considerations for Irish Regulated Firms
8 EIOPA: Opinion on AI Governance and Risk Management
9 Maples Group: AI Knowledge Page
10 Maples Group: AI Advisory Page