By Huw Jones
LONDON (Reuters) – As criminals make “unfettered” use of artificial intelligence (AI) to disrupt markets and scam consumers, Britain has a cocktail of rules already in place to tackle them, the Financial Conduct Authority (FCA) said on Tuesday.
The European Union has just provisionally approved a landmark law to regulate AI, the first of its kind globally, piling pressure on other jurisdictions to follow.
FCA CEO Nikhil Rathi said it was important not to “jump in” to regulate every facet of a technology whose implications are yet to be fully understood.
Rathi told parliament’s Treasury Select Committee that AI was a “hugely accelerating topic”, and that there was a need to approach it with “humility” as financial firms adopt it rapidly.
“The serious organised criminals don’t have anyone regulating them, and they are making unfettered use of AI to manipulate markets,” Rathi said.
“We do need firms to make sure that as they roll out use of AI they think about anti-fraud and cyber risk,” Rathi added.
Britain already has rules to ensure market integrity, to hold senior managers at firms accountable for monitoring risks, and a new consumer duty to ensure fair outcomes, Rathi said.
“In that sense we are differently placed to a number of our major competitors around the world in having that framework,” Rathi said.
It means that financial firms should already have the systems and controls to “pull back and stop” if any AI-related problems emerge, Rathi said.
The Bank of England’s Financial Policy Committee said last week it would further consider the implications of AI next year.