
Help Or Hindrance To Financial Services?


By Vikas Krishan, Chief Digital Business Officer at Altimetrik

The explosion of Artificial Intelligence (AI) and its capabilities has taken the world by storm. In fact, it would not be too much of a stretch to suggest that most people have now heard of AI and its business impact, thanks to coverage across mainstream media outlets as well as the potential doomsday scenarios predicted by Elon Musk. We have also seen the rise of ‘deep fakes’, in which a celebrity’s voice and/or image is replicated by AI (Taylor Swift being perhaps the most notable example), fooling many into believing they are seeing or hearing the real person. This has provoked serious concerns about AI and its potential use by fraudsters. It is against this backdrop that the EU AI Act has come into being, with the aim of helping to regulate the use of AI technology in the European Union. The Act in its current form requires producers of AI to meet strict standards for transparency, accountability and human supervision. Although yet to become law, the Act intends to set clear legal requirements for the use and creation of AI tools.

The Financial Services industry in the EU and across the world is already strictly regulated, and compliance is a fundamental tenet of operating. Critics of the Act, however, have suggested it is too open to interpretation. So, will the Act be a help or a hindrance to the financial services sector?

Expected to come into force in the summer of 2024, the EU AI Act has wide-reaching consequences for AI use within Financial Services organisations. The Act aims to standardise the rules for AI usage, development, market spread and adoption. Its wide scope has the potential to affect not only developers and deployers of AI systems based in the EU, but also businesses with an office within the EU and international companies that produce systems used within the EU.

Within the Financial Services industry, multiple models are used across use cases ranging from assessing individuals applying for loans and checking their suitability for a financial product, through to complex pricing of financial instruments. These models require internal review and validation, and must have the appropriate documentation that can be shared with the regulator where required. It is in these scenarios that the use of AI is viewed as particularly problematic unless properly regulated, due to the potential for bias in the technology’s development as well as the explainability of model outputs.
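While the Act does not prescribe any particular tooling, a minimal sketch may help illustrate the kind of explainability artefact a model validation team could attach to its documentation. The loan-approval model, feature names and data below are hypothetical assumptions, not a prescribed method:

```python
# Minimal sketch of an explainability artefact for a hypothetical
# loan-approval model. The features, data and model choice are
# illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_ratio", "years_at_address"]

# Toy records standing in for a real, governed training dataset.
X = np.array([
    [55.0, 0.20, 8],
    [32.0, 0.55, 1],
    [78.0, 0.10, 12],
    [24.0, 0.70, 2],
])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# A crude "reason report": the sign and size of each coefficient is the
# sort of evidence a reviewer might attach to model documentation.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

A real validation exercise would go much further, but even a simple report like this makes the direction of each input’s influence reviewable, which is the spirit of the transparency the Act demands.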

Striking the right balance

One challenge for European lawmakers lies in the very definition of AI. In fact, draft versions of the EU AI Act have already seen this definition amended several times. Clarity on what counts as AI will be central to the Act’s effectiveness.

The well-documented potential for institutional and societal biases to be embedded in the creation of AI is another key issue that the EU AI Act aims to address. After all, AI is only as good as the data with which it is fed, and if that data contains ingrained biases against gender, disability, age or other social demographics, then the output the AI creates will carry those biases too, unless carefully constructed safeguards are put in place.
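As a purely illustrative example of such a safeguard, one common check is demographic parity: comparing a model’s approval rates across groups before its decisions are relied upon. The groups, decisions and the 0.8 threshold below (a version of the widely used ‘four-fifths rule’) are hypothetical assumptions:

```python
# Illustrative safeguard: a demographic-parity check on model decisions.
# The group labels, decisions and 0.8 threshold are hypothetical; a real
# check would follow the institution's own fairness policy.
from collections import defaultdict

decisions = [  # (group, approved?) pairs from a hypothetical model
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"parity ratio = {ratio:.2f}")

# Flag for review if the least-favoured group's approval rate falls
# below 80% of the most-favoured group's rate.
if ratio < 0.8:
    print("Potential disparate impact - escalate to model risk review")
```

Checks like this do not remove bias by themselves, but they surface it early enough for a human reviewer to intervene.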

These valid concerns are countered by the desire to take advantage of all that AI-driven innovation has to offer. We are yet to see compelling evidence that governance frameworks and conformity assessments will limit innovation, and there is a strong case for requiring creators and users of AI to state how it will be used, as the Act demands. The challenge will be the reporting burden placed on those Financial Services organisations using AI.

Evaluation of risk

For those working in Financial Services, it is critical to understand that the Act sorts AI systems into several categories based on their level of risk of societal harm. ‘Unacceptable risk’ systems will be prohibited under the Act. These include systems that deploy subliminal techniques beyond a person’s consciousness to influence behaviour, and systems that exploit the vulnerabilities of a specific group of people, discriminating on the basis of age, disability or socio-economic background.

‘High risk’ uses of AI are those scenarios that could put the life or health of individuals at risk if something goes wrong. These scenarios will require a conformity assessment under the Act.

‘Lower risk’ systems carry transparency requirements, such as the need to let users know that part of a screening process is undertaken using AI; for these systems, disclosure of AI use is the central obligation.

Finally, the ‘minimal’ or ‘no risk’ category of AI use includes video games and spam filters, which will be subject to voluntary codes of conduct under the Act. The key challenge for lawmakers, and indeed for financial services, is the potentially subjective nature of which activities or scenarios fall within each risk category.
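To make such a triage concrete, here is a minimal sketch of how a compliance team might encode the Act’s tiers against an internal inventory of AI use cases. The use cases and tier assignments below are illustrative assumptions rather than legal classifications, and any real mapping would need legal and compliance review:

```python
# Illustrative triage of an AI inventory against the Act's four tiers.
# Tier assignments are hypothetical examples, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LOWER = "transparency/disclosure required"
    MINIMAL = "voluntary code of conduct"

# A hypothetical bank's AI inventory, each entry triaged to a tier.
inventory = {
    "real-time facial recognition in branches": RiskTier.UNACCEPTABLE,
    "credit-scoring model for loan approvals": RiskTier.HIGH,
    "AI-assisted applicant screening with disclosure": RiskTier.LOWER,
    "spam filter on customer mailboxes": RiskTier.MINIMAL,
}

for use_case, tier in inventory.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```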

As an example, the use of real-time AI facial recognition at ATMs or in high street banks would be considered an unacceptable risk: it would be perceived as too far-reaching, given the public nature of the deployment and its impact on people’s right to privacy. Facial recognition to access a banking app on a personal mobile phone, classified as private usage, would not be considered unacceptable under the Act, however.

Whilst self-regulation has been proposed by some AI producers as an alternative to the Act, this comes with its own issues. A particular concern is that a self-regulatory body would not have the teeth needed to effectively tackle unethical AI use and creation; this has been a common complaint about press self-regulation in the UK, for example. There is also concern about bias towards AI producers, and a self-preservation mindset that would allow the rules to be bent, leaving the end consumer unprotected.

With the EU AI Act, there may be some merit in creating a separate regulatory body to oversee AI production. This would need to be a coordinated effort across the EU, and one requiring strong harmonisation with UK as well as US regulators. That matters not only so that the general principles around what constitutes unacceptable, high, minimal and no risk are commonly understood, but also so that the regulatory burden on individual downstream providers of AI systems is not so onerous as to prevent them from innovating and developing more effective AI solutions for the marketplace.

In its current form, the EU AI Act places significant obligations on providers. It also classifies clients who use large language models as providers if they modify the AI programme they are using. This can prove problematic, as such clients become subject to the same regulatory burden as providers, changing the dynamic as well as the economic case for AI.

How should the finance sector prepare for the EU AI Act?

Financial Services institutions must start thinking about AI in a co-ordinated way, much as we saw with the introduction of GDPR. Unlike with GDPR, however, organisations are not currently reviewing their operations in such a stringent manner. I would certainly expect to see the role of Chief AI Officer introduced to help ensure compliance with the Act, and with frameworks that will undoubtedly become more complete over time.

There is a very clear need to establish a formal AI governance structure within businesses, one that includes ownership of the comprehensive risk management framework the Act will require. It is important to raise awareness and communicate effectively about the Act, not just internally but throughout your ecosystem of providers and clients.

Financial Services institutions need to begin assessing their technical landscape and thinking about their future technology, AI and data roadmap, to understand how the introduction of the EU AI Act will affect them. They will need to identify which areas of their operating model will be most impacted, and so require prioritisation and focus, along with the remedial actions needed to comply with the Act.

The introduction of the EU AI Act represents a crucial step towards regulating the burgeoning field of AI within the Financial Services sector and beyond. As the Act approaches implementation, it is essential that financial institutions take note of its requirements and adapt, ensuring compliance while also taking advantage of the innovations AI-driven solutions have to offer. The Act not only sets out to mitigate the risks associated with AI, such as bias and privacy concerns, but also encourages transparency and accountability. Financial institutions must therefore establish robust governance frameworks that can manage these new regulations effectively, positioning themselves to adhere to the new regulatory regime with as little friction as possible.


