Rise of the machines

13 Oct 2025 | Technology

The rapid advancement and widespread adoption of generative artificial intelligence (AI) present a broad range of considerations for the financial-services sector. Clodagh Ruigrok and Barry Scannell input the prompts

Regulated financial-services entities, including investment funds and their service providers, must adapt to a rapidly evolving regulatory and supervisory environment surrounding the use of AI.

To remain compliant and foster responsible innovation, firms should proactively assess emerging obligations, strengthen governance frameworks, and ensure the ethical and effective integration of AI technologies.

The Central Bank of Ireland has been designated as the market-surveillance authority for the financial-services sector under the EU Artificial Intelligence Act.

In this role, the bank will oversee the implementation and enforcement of the AI Act in areas such as algorithmic trading, credit-risk assessment, and other AI applications within financial services.

As part of this responsibility, the bank is preparing to enhance its supervisory approach to ensure that the adoption of AI supports investor protection and does not introduce new systemic risks or vulnerabilities.

The Central Bank is actively enhancing its AI expertise, including through a research partnership with the University of Limerick, indicating its commitment to understanding and regulating AI applications in financial services.

For example, when the bank launched its ‘Innovation Sandbox Programme’ in December 2024, it identified artificial intelligence and its application in financial services as one of the programme’s main areas of focus.

Affirmative

The AI Act establishes a comprehensive regulatory framework for AI across the EU and imposes new obligations on both AI system developers and users, such as financial-service providers.

The act provides a structured foundation for organisations to develop and align their internal AI governance and compliance policies.

While the provisions of the AI Act are being phased in gradually, key elements came into force on 2 February 2025, including the obligation to provide AI-literacy training to personnel, the legal definition of what constitutes an AI system under the act (enabling firms to determine whether their technologies fall within scope), and the ban on prohibited AI practices.

In February 2025, the European Commission published guidelines on the definition of an AI system to facilitate the application of the AI Act’s first rules. The guidelines explain the practical application of the legal concept as anchored in the AI Act.

By issuing guidelines on the AI-system definition, the commission aims to assist providers in determining whether a software system constitutes an AI system, thereby facilitating the effective application of the rules.

Firms will need to be aware of their responsibilities and obligations under the act, including evaluating the risk level of their AI systems, conducting risk assessments, and considering transparency, accountability and robustness measures to meet the act’s obligations.

The AI Act’s comprehensive framework creates multiple potential compliance touchpoints for investment funds, ranging from universal literacy requirements to specific restrictions on certain AI applications.

Computer says ‘no’

The key prohibitions include subliminal manipulation causing harm to individuals, exploitation of vulnerabilities due to age or disability, social scoring by public authorities, and real-time biometric identification in public spaces.

Firms are more likely to encounter classification under high-risk AI systems, which pose significant risks to health, safety, or fundamental rights.

The AI Act specifically classifies as ‘high risk’ AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, as well as AI systems used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

Significantly, article 4 of the AI Act establishes universal AI-literacy requirements: providers and deployers of AI systems must ensure an adequate level of AI literacy among their staff.

The requirement applies to all AI systems, meaning that investment funds using any AI systems must ensure their staff possess adequate AI literacy, extending beyond technical knowledge to include the broader implications and risks of AI deployment.

Additional considerations include, for example, transparency obligations, and potential impacts from general-purpose AI provisions.

File not found

The Central Bank’s Regulatory and Supervisory Outlook Report, published in February 2025, recognises that while AI tools and technologies can deliver significant benefits for consumers and the financial sector, risks arise that have the potential to adversely affect firms, their customers, and wider society in various ways.

The report highlights that the potential use of outputs from AI-based tools in fund-management decision-making processes – for example, for stock selection or the application of portfolio-diversification rules under the UCITS Directive – can lead to unwanted bias and poor investment decisions that could harm both investors and firms.

The bank further highlights that human oversight of AI systems will remain crucial, given the risk of such scenarios occurring.

While the bank accepts that AI has the potential to enhance all areas of securities markets, it stresses that the use of AI technologies could increase market volatility, facilitate market abuse, introduce systemic risk, increase cybersecurity exposure, and reduce market transparency.

In June 2025, the Central Bank invited certain financial-services firms to participate in a survey released by the European Securities and Markets Authority (ESMA) on the topic of AI.

The objective of the survey is for ESMA to gain a better understanding of how financial entities are using AI, including their AI strategies and policies, levels of investment, and details of specific use cases.

In February 2025, ESMA published its Trends, Risks and Vulnerabilities Report, which assessed the integration and impact of AI within EU investment funds.

The report highlighted that AI presents new forms of risk to investor protection and financial stability, tied to third-party dependencies and service provider concentration, cyber-threats, model and data governance, and increased market correlations.

Notably, the report observed a sharp increase in investment in AI-focused companies. Since 2023, actively managed equity funds have increased their exposure to AI-driven firms by over 50%, resulting in a doubling of the market value of these positions.

This growing concentration raises concerns about market volatility and shifts in fund-risk profiles, underscoring the need for robust oversight and risk management as AI adoption accelerates across the industry.

ESMA emphasised that the growing interconnectedness of AI-related firms with broader economic activity further elevates AI risks. Ongoing monitoring of investment trends in AI-related companies is therefore needed, as the sector’s rapid growth continues to reshape the composition and risk profile of equity-fund portfolios.

Resistance is futile

While AI technologies present opportunities to enhance automation, operational efficiency, and productivity, they also introduce a range of emerging risks.

Under the AI Act, firms will be expected to make best efforts to ensure that AI systems are developed and used in line with core principles, such as human oversight, data privacy, sound governance, and social and environmental responsibility.

For Irish-regulated funds and their management companies, this means that establishing a responsible AI framework and embedding AI-related risks into existing risk registers and governance structures will be essential to demonstrate ethical and compliant use of AI in the interests of investors.

AI-related risks can be particularly complex to assess, especially given the unpredictability of machine-learning models.

These may include algorithmic risks (where AI systems behave unexpectedly or generate flawed outputs) and data risks (such as bias in training data or improper data handling).

Conducting AI impact assessments will be a key part of meeting risk-management obligations under Irish and EU regulatory frameworks, including UCITS and AIFMD, when deploying AI tools in investment processes or operational functions.

The Central Bank report highlights a number of critical risks that must be addressed, stressing that firms remain responsible for managing these risks appropriately:

  • Input risks – involving the data an AI model uses,
  • Algorithm selection and implementation risks – including inappropriate use of black-box AI in high-stakes settings and incorrect parameter selection,
  • Output risks – relating to decisions taken on the basis of, or informed by, AI, leading to individual or collective harm (for example, bias leading to financial exclusion),
  • Overarching risks – linked to the use of AI, such as cyber-resilience, operational resilience, and governance.

The Central Bank expects that, where a firm is using AI, it should be clear what business challenge is being addressed, and why the use of specific types of AI is an appropriate response to that challenge.

Moreover, the use of AI may require more consideration of accountability and recourse, as well as issues like interpretability, explainability, fairness, and the ethical usage of data.

To help address AI risks, firms should implement robust AI governance frameworks, formally integrate AI-related risks into governance structures and risk registers, and develop systems such that AI can operate safely, transparently, and in alignment with the evolving regulatory landscape.

You must comply

Under the AI Act, firms must implement strong data-governance and managerial controls before deploying AI.

For financial services, this raises several operational and compliance considerations:

  • MiFID – AI used in trading or investment advice must comply with MiFID, including transparency, record-keeping, and best execution. Such systems may be classified as high-risk under the AI Act, triggering stricter requirements.
  • UCITS/AIFMD – fund managers must uphold fiduciary and depositary duties, manage conflicts, and ensure clear investor disclosures.
  • Outsourcing – when AI is provided by third parties, firms must ensure oversight of performance, data integrity, and compliance with both the AI Act and sector-specific rules.

These requirements reinforce the need for robust governance frameworks when integrating AI into regulated financial activities.

Financial-services firms, including fund managers, face environmental, social, and governance (ESG)-related obligations under regulations like the Sustainable Finance Disclosure Regulation (SFDR).

With the AI Act, firms must now assess how AI use may impact their sustainability activities and disclosures.

The SFDR requires disclosure of how sustainability risks are integrated into investment decisions and the potential adverse effects on ESG factors.

AI introduces key challenges here, including:

  • Disclosure accuracy – AI tools assessing ESG risks may produce biased or inaccurate outputs if based on incomplete or unverified data,
  • Transparency – complex or opaque AI models can make it difficult to meet SFDR transparency requirements, raising concerns for investors and regulators.

Firms must evaluate how AI affects their ability to comply with sustainability regulations, particularly the SFDR.

Does not compute

The new AI regulatory obligations build on existing frameworks, including data-protection and AML laws.

For instance, AI systems processing personal data (whether for marketing, profiling, or investment purposes) must comply with the GDPR.

This includes conducting a data-protection impact assessment where AI use poses high risks to individuals’ rights – a scenario likely relevant for fund managers using AI in client-facing applications.

Under the EU’s Fifth Anti-Money-Laundering Directive, AI can enhance transaction monitoring and detect suspicious activity, but must still meet core obligations, such as reporting and record-keeping.

Where such systems are deemed high-risk under the AI Act, they must also satisfy enhanced requirements on transparency, robustness, accountability, and human oversight.

These obligations reflect the AI Act’s broader aim of fostering trustworthy AI in sensitive areas like financial-crime prevention, while aligning with existing supervisory expectations.

I’ll be back

As AI becomes increasingly embedded in the operations of investment funds and broader financial services, the regulatory landscape is evolving at pace, and with complexity.

The dual imperative facing financial institutions is clear: innovate responsibly, while navigating a fragmented and intensifying regulatory environment. Innovate – but govern.

Clodagh Ruigrok is a partner in William Fry, specialising in the regulatory framework affecting Irish fund managers and investment funds. Dr Barry Scannell is a partner in William Fry’s Technology Department and a member of Ireland’s AI Advisory Council. 

LOOK IT UP

LEGISLATION:

  • Central Bank (Supervision and Enforcement) Act 2013 (Section 48(1)) (Undertakings for Collective Investment in Transferable Securities) Regulations 2019
  • European Communities (Undertakings for Collective Investment in Transferable Securities) Regulations 2011 (as amended)

LITERATURE:

  • Central Bank of Ireland, Regulatory and Supervisory Outlook Report (2025)
  • ESMA, Trends, Risks and Vulnerabilities Report (2025)
  • EU Commission, Regulation (EU) 2024/1689 (the AI Act)
