Further Light on Generative AI and UK Financial Services Regulation | Goodwin


In our previous alert, we noted the speech made by Nikhil Rathi, CEO of the UK Financial Conduct Authority (FCA), which built on the points made in a joint paper published by the Bank of England (BoE) and FCA on artificial intelligence (AI).

Like the Rathi speech and BoE/FCA AI paper, the October 2023 response by the International Regulatory Strategy Group (IRSG)1 to the March 2023 AI white paper published by the Department for Science, Innovation and Technology and the Office for Artificial Intelligence (the DSITOAI paper) shines a further light on emerging themes for regulating the use of generative AI in financial services.

The October 5 speech (the Rusu speech) by Jessica Rusu, the chief data, information, and intelligence officer at the FCA, also highlights emerging themes. As an aside, the Rusu speech confirmed that in late October 2023, the FCA will publish a feedback statement on the joint discussion paper that it issued with the Prudential Regulation Authority (PRA) in 2022 (joint DP).

Addressing the Role of Generative AI

The IRSG response does not focus specifically on generative AI, but the Rusu speech notes its use. As we explained in our previous alert, generative AI can create apparently original content, such as text, in response to requests or prompts. Rapid advances in generative AI systems, such as ChatGPT, have led to particular interest in their application to all industries, including financial services. But the enthusiasm for generative AI and its potential uses is tempered by voices calling for restraint.

Other Papers Issued by the Government and Regulators: A Reminder

As noted in our previous alert, the DSITOAI paper is a white paper outlining the UK’s proposed approach to regulating AI. It proposes a principles-based approach, supervised by sector regulators, such as the FCA, that are tasked with enforcing AI regulation developed for their respective sectors.

In addition to the white paper and as noted above, the PRA and FCA issued a joint DP that, as we mentioned in our previous alert, highlighted:

  • The important role of privacy and data protection in the context of AI
  • Risks in the data, models, and governance layers of AI systems within categories based on the FCA’s objectives, including consumer protection, competition, and financial stability
  • The view that the use of AI in financial services may amplify existing risks and introduce novel challenges

The IRSG Response

The IRSG response to the DSITOAI paper contains various comments and recommendations, including:

  • Agreement with the government’s proposal to implement a principles-based framework for regulators, such as the FCA and PRA, to interpret and apply to AI within their existing remits, allowing regulatory agility and proportionality
  • The view that the financial-services sector is not currently in need of additional AI-specific legislation
  • The warning that, while an outcomes-based approach is likely to be the most appropriate in practice, process-focused regulation of AI may stifle innovation
  • The view, relating to the above, that technological neutrality in regulation avoids either constraining or altering the innovative and beneficial ways in which AI is, or may be, used in financial services
  • An emphasis on the need for regulators to issue clear, consistent, and interoperable guidance that is regularly reviewed, given the speed of technological change, in regard to how AI regulatory principles will interact with existing legislation and the regulators’ future approaches to enforcement
  • The view that, while the need for effective coordination across regulators will be critical to achieve the objectives of the white paper, an AI-specific regulator is unnecessary at this stage

The Rusu Speech

The Rusu speech focuses on key questions shaping the future of AI in the financial-services industry. Instead of focusing narrowly on AI, it discusses the following broader issues: digital infrastructure, the growing reliance on the cloud and other third-party tech providers, and the vital role good-quality data plays in the adoption of AI and consumer safety.

The Rusu speech highlights:

  • The importance of addressing the systemic risks posed by a firm’s reliance on certain critical third-party providers (CTPs) and the associated risks to stability, resilience, and confidence in the UK financial system. (See our alert Too Important to Fail – Part 2: The Coming Regulation of Providers of Critical Technology Services to UK Financial Institutions.)
  • The fact that FCA-authorized firms (which we will simply call “firms”) remain responsible for their own operational resilience, including for services outsourced to third parties, and that safety and security are important considerations when it comes to “frontier technology”
  • The FCA’s expectation that firms will address AI risk in full compliance with existing regulatory frameworks, including the Senior Managers and Certification Regime and the Consumer Duty
  • The need for firms to be aware of the risks of tailored and sophisticated AI-powered cyberattacks, reiterating a point made in the Rathi speech
  • The role of data in AI and the question of ethical data usage as examples of important data considerations to ensure the safe and responsible adoption of AI
  • Examples of the responsible adoption of AI, which include:
    • A firm’s development and use of generative AI and large language models to create more-tailored advice offerings for those excluded from the insurance market
    • The FCA’s use of AI to develop web-scraping and social-media-monitoring tools that can detect, review, and triage potential scam websites

Emerging Regulatory Trends

The Rusu speech also confirms that the FCA will consult in 2024 on its requirements for critical third-party providers once it has considered responses to its joint discussion paper with the PRA (FCA DP22/3 / PRA DP3/22).

The likely ultimate effect of each of the publications above on the final content of binding legal standards in the UK is difficult to forecast, but common ground is emerging, connected with both generative AI and AI more generally:

  • It is not clear that AI necessarily creates material new risks in the context of financial services, although the rapid rate of technological change may create new risks. It remains too early to tell.
  • Instead, AI may amplify and accelerate existing financial-sector risks — i.e., those connected with financial stability, consumer protection, and market integrity — which the financial services and markets regime is designed to reduce. An example in the consumer context, noted in the joint DP (and considered in the EU AI Act), is the risk of insurance and lending decisions that are discriminatory or that fail to recognize the needs of vulnerable customers.
  • AI will also have a role in firms’ control of financial-sector risks and in FCA and PRA regulation of the sector (although questions may arise about the justification for AI-generated administrative decisions and their compliance with statutory and common-law principles of good administration).
  • In keeping with the concerns about amplifying and accelerating existing risks, it is appropriate for the FCA and PRA, as the current financial-sector regulators, to be charged with regulating AI.
  • The role of the FCA and PRA in regulating AI reinforces the importance of using and developing existing financial-sector regulatory frameworks that enhance continuity and legal certainty and make proportionate regulation more likely (although not inevitable).
  • AI needs effective governance to ensure that it is properly understood, not only by the technology experts who design it but also by the firms that use it — a “Know-Your-Tech Duty” — so that firms can respond effectively to prevent harm from materializing out of any amplified and accelerated risks.
  • Staying with the theme of existing frameworks, the rise of the importance of technology and currently unregulated CTPs, noted above, has resulted in an extension of powers for the FCA and PRA under the recently enacted Financial Services and Markets Act 2023 (FSMA 2023), as noted in our related alert and addressed on our dedicated microsite.
    • Providers of generative AI models that are used by many financial institutions — or by a small number of large or important financial institutions — may become subject to the jurisdiction of the FCA or PRA under the new powers that FSMA 2023 introduces.

[1] The IRSG is a joint venture between the City of London Corporation and TheCityUK.
