UK Financial Regulators Publish Response to AI Consultation—Seven Takeaways


Key Takeaways:

  • On 26 October 2023, the Bank of England, Prudential Regulation Authority and Financial Conduct Authority (collectively, the “UK Financial Authorities”) published FS2/23 on Artificial Intelligence and Machine Learning (the “Response Paper”). The Response Paper summarises industry feedback on the UK Financial Authorities’ proposed approach to AI, and gives an indication of how the Authorities may approach this subject area.
  • This blog post summarises seven key takeaways from the Response Paper, and includes steps firms can take now in anticipation of incoming AI regulations.

On 26 October 2023, the Bank of England, Prudential Regulation Authority (“PRA”) and Financial Conduct Authority (“FCA”, collectively the “UK Financial Authorities”) published FS2/23 on Artificial Intelligence and Machine Learning (the “Response Paper”). It summarises participants’ responses to the October 2022 AI discussion paper (DP5/22, the “Discussion Paper”), which outlined the UK Financial Authorities’ proposed approach to AI regulation.

The UK’s Approach to AI Regulation

As set out in the government’s white paper on AI, the UK, unlike the EU, does not intend to implement AI-specific laws or regulations. Rather, the government plans to issue non-statutory guiding principles that existing UK regulators can adapt to, and implement within, their respective sectors. The UK Financial Authorities are, therefore, amongst the forerunners in establishing what their AI regulatory approach may look like.

The Response Paper does not represent the UK Financial Authorities’ views, nor include any specific policy proposals; it is a summary of industry feedback on their proposals. However, it does give an indication of how the UK Financial Authorities may approach AI regulation in the future.

Seven Takeaways from the Response Paper

  1. The Definition of AI Is a Key Gating Question. The definition of “artificial intelligence” has been a contentious point in multiple AI legislative processes—including the draft EU AI Act—and it appears that the UK Financial Authorities could face similar challenges. The Discussion Paper gave a potential AI definition of “the theory and development of computer systems able to perform tasks which previously required human intelligence”. Firms were asked to comment on how the UK Financial Authorities should approach an AI definition, including whether they should pursue a financial services sector-specific definition.
    In the Response Paper, most respondents were not in favour of a sector-specific definition, with some even suggesting that the UK Financial Authorities could forgo AI-specific regulation altogether. Respondents gave a range of reasons, including concerns that AI-specific regulation could quickly become outdated due to the pace of technological development, could easily be both too broad and too narrow in terms of the technology it intends to capture, and could create incentives for regulatory arbitrage. Instead, most respondents advocated for technology-neutral frameworks that adopt outcomes- and principles-based approaches, consistent with the UK Financial Authorities’ technology-neutral approach to other areas of regulation. It remains to be seen how this would operate in practice but, if such an approach is adopted, certain AI tools would presumably still fall within the scope of existing regulation.
  2. Any Regulation Should Be Risk-Based—but with Potential Divergences from the EU AI Act’s Criteria for Assessing Risk. In the Discussion Paper, the UK Financial Authorities identified several non-exhaustive AI-related risks that could affect their areas of responsibility, including consumer protection, competition, financial safety and soundness, insurance policyholder protection, financial stability and market integrity. They solicited comments on which risks should be prioritised and how they should be evaluated. In the Response Paper, respondents generally agreed that AI regulation should be risk-focused, with a particular focus on consumer and financial market risks. However, unlike the EU AI Act, which focuses on impacts to individuals, some respondents suggested that the UK Financial Authorities may want to include different or additional risks, such as financial stability. This could affect financial firms that are considering using the EU AI Act as their ‘high watermark’ for AI regulatory and governance compliance, as they will have to accommodate any UK-specific requirements in their compliance programmes.
  3. Cross-Functional Oversight of AI Tools and Use-Cases—by a Team with Sufficient Expertise to Identify and Mitigate Risks—Is an Important Aspect of Effective AI Governance. The Discussion Paper highlighted the importance of good governance in effectively identifying and managing risks stemming from AI tools and use cases. The Response Paper shows a divergence in views on how this should be achieved in practice. Some respondents thought that existing governance structures are sufficient to cover AI, while others advocated for the adoption of specific AI oversight committees (either at a central or local business area level). Most respondents did not favour creating an AI-specific prescribed responsibility for a Senior Management Function, but acknowledged that some form of board or senior management-level oversight of AI is necessary. Nonetheless, respondents generally agreed that the team responsible for AI oversight needs sufficient expertise to spot and address new forms of AI-related systemic risk. For example, if an AI tool ceases to function, the team should be able to quickly assess whether that is due to an (active) cybersecurity incident.
  4. AI Regulation Should Include Oversight of Third-Party Providers. The Discussion Paper noted that a key challenge for firms is their ability to monitor the AI-related operations and associated risks of their third parties. This challenge is particularly acute given that many financial firms are either relying entirely on existing AI tools from external providers, or developing their own products on the back of them. The Response Paper explores, at a high level, different options for managing these risks. Some respondents suggested that third-party providers should be required to provide certain information to firms regarding their AI tools – including evidence of responsible development and risk information – so firms can better understand and mitigate the associated risks. It is unclear how this would be achieved in practice, especially given that many AI tool developers likely fall outside the UK Financial Authorities’ regulatory purview. One possibility is for the UK Financial Authorities to introduce standardised AI due diligence requirements that firms must satisfy before they can adopt third-party tools.
  5. There Is Strong Appetite for Any Future Regulations to Align with Existing Domestic and International Laws and Regulations. The Response Paper strongly advocates that any future AI regulation be consistent, and not unnecessarily overlap, with existing domestic laws (including the Equality Act 2010) and financial services regulations (including operational resilience and third-party risk management requirements). There was also considerable support in the Response Paper for consistency with international AI laws and frameworks (such as the EU AI Act and the NIST AI Risk Management Framework), in particular because any divergences could undermine the UK’s competitiveness. This point will likely be of interest to the FCA and PRA, given their new secondary statutory objective to support the UK’s growth and international competitiveness.
  6. Some View the (UK) GDPR as Creating Particular Challenges for AI Adoption. The Discussion Paper requested feedback on whether there are “any regulatory barriers to the safe and responsible adoption of AI in UK financial services”. In the Response Paper, a number of respondents flagged the UK GDPR as, in their opinion, creating particular challenges. The complexity of achieving GDPR compliance when adopting AI tools is not a new topic—see our blog post. For example, the French data protection authority, the CNIL, recently published a series of fact sheets aimed at helping companies achieve GDPR compliance when developing and adopting AI tools. The UK Financial Authorities could look to such existing resources when developing future guidance.
  7. Regulations Should Not Just Focus on the Impact of Financial Firms Using AI, but on How AI Use Can Impact Financial Firms. In the Response Paper, several respondents also flagged that the UK Financial Authorities could consider how the use of AI tools by malicious actors could impact financial firms. This includes AI’s potential use as a tool for fraud and money laundering (e.g., through deepfakes) and for cyber-attacks (e.g., using generative AI to craft more convincing phishing emails). While these are not new risks for financial firms – banks have long been required to employ robust anti-fraud measures, for example – widespread access to AI tools could result in a proliferation of such attacks, which financial firms need to be equipped to deal with.

What Firms Can Do Now

While there is still uncertainty over the content of future AI regulation from the UK Financial Authorities, it is clear that further regulation in this area is coming. Given the difficulty of retrofitting governance requirements onto organisations and their existing AI tools, there are several hallmarks of good AI governance that firms can start implementing now.

For example, firms should consider how they can establish ongoing cross-functional oversight of AI. Typically, companies start by creating a committee (which could be AI-specific) to oversee and guide the company’s use of AI tools and use cases more broadly, including those procured through vendors.

Firms may then consider developing policies and procedures relating to this committee’s oversight of the firm’s AI framework. This may include: creating an inventory of the AI tools the firm has access to and the use cases in production; developing a risk-rating framework for those tools and use cases; and determining how the firm will identify and mitigate high-risk AI use cases. A minimal sketch of such an inventory appears below.
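By way of illustration only, the following Python sketch shows one way a firm might record AI tools and use cases in a simple inventory with risk ratings, and surface the high-risk, in-production entries for committee review. The field names, risk tiers and vendor name are assumptions made for this example; they are not taken from the Response Paper and do not reflect any prescribed regulatory format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class RiskRating(Enum):
    # Illustrative tiers only; a firm's own risk-rating framework would define these.
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AIUseCase:
    # One entry in a hypothetical AI tool/use-case inventory.
    name: str
    business_area: str
    vendor: Optional[str]          # None for tools developed in-house
    in_production: bool
    risk_rating: RiskRating
    mitigations: List[str] = field(default_factory=list)


# Hypothetical example entries.
inventory = [
    AIUseCase(
        name="Chat assistant for customer queries",
        business_area="Retail banking",
        vendor="ExampleVendor Ltd",  # hypothetical third-party provider
        in_production=True,
        risk_rating=RiskRating.HIGH,
        mitigations=["human review of responses", "output filtering"],
    ),
    AIUseCase(
        name="Internal document search",
        business_area="Legal and compliance",
        vendor=None,
        in_production=False,
        risk_rating=RiskRating.LOW,
    ),
]


def high_risk_in_production(entries: List[AIUseCase]) -> List[AIUseCase]:
    # Return live use cases rated HIGH, i.e. those an oversight committee
    # might prioritise for review and mitigation.
    return [e for e in entries if e.in_production and e.risk_rating is RiskRating.HIGH]


if __name__ == "__main__":
    for use_case in high_risk_in_production(inventory):
        print(f"Review required: {use_case.name} ({use_case.business_area})")
```

In practice, an inventory of this kind would more likely sit within a firm’s existing model-risk or third-party-risk systems; the sketch simply illustrates the kind of structured record (business area, vendor, production status, risk rating, mitigations) that makes committee oversight workable.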

In addition, many components of the coming AI governance obligations could require firms to significantly increase their compliance budgets and secure additional resources, which some firms may want to address now as 2024 budgets are being considered.

Finally, firms should also consider how AI may impact their operational resilience, and update their business continuity plans to account for any novel or increased disruptions that could be caused by their increased reliance on AI for core business functions. For firms also regulated in the EU, this may be particularly important given the requirements of the EU Digital Operational Resilience Act, which will apply from January 2025.


