AI technologies are being adopted rapidly in Ireland. The Irish financial sector has embraced AI to enhance efficiency, drive innovation and improve customer satisfaction. However, this increased use also carries significant regulatory risk, given the adoption of the AI Act and heightened focus from industry regulators.
The AI Act came into force on 1 August 2024 and is being implemented in phases. The roll-out is scheduled to complete by 2 August 2027, when the law will apply in full across the EU.
The AI Act regulates AI systems placed on the market or deployed in the EU. The Act sets out clear definitions for the different actors involved in AI: providers, deployers, importers, distributors, and product manufacturers. This means all parties involved in the development, use, import, distribution, or manufacturing of AI models can be held accountable. Obligations are imposed depending on the actor’s role and the risk classification of the AI system.
The first phase, which took effect on 2 February 2025, introduced rules banning certain types of AI systems – “prohibited AI systems”. The AI Act takes a risk-based approach: systems posing an unacceptable risk threaten fundamental rights and safety and are banned outright; the high-risk category is where the majority of the AI Act’s obligations apply; and limited-risk systems carry specific transparency obligations. Understanding these risk categories and satisfying the corresponding obligations is crucial to ensuring compliance.
The AI Act also imposes AI literacy obligations, requiring that employees using AI systems have the requisite skills and training to do so in an appropriate and ethical manner.
General-purpose AI (GPAI) models are defined broadly as AI models capable of carrying out many different tasks, trained on large amounts of data, often using self-supervision. These models can be integrated into a wide range of other software and applications. Well-known large-scale AI tools, such as ChatGPT, fall within this category. Under the AI Act, providers of GPAI models have specific responsibilities, which are set out in detail in the Act. Additionally, if the European Commission deems that a GPAI model poses a “systemic risk”, extra obligations will apply.
To ensure compliance with the AI Act, businesses should identify the AI systems they use and determine which risk categories apply. AI governance frameworks will be valuable in ensuring compliance and should be proportionate to the relevant risk categories. Focus should also be placed on developing AI policies, staff training and vendor AI due diligence.