Banking

UK banks prepare for deepfake fraud wave


Virgin Money boss David Duffy was thousands of miles from his bank’s Newcastle headquarters when he realised the urgent need to ramp up its fraud prevention strategy.

As he took a tour of Microsoft’s Seattle headquarters with other bank chiefs in May, he learnt how “deepfake” voices and video would soon be able to trick his customers into sending money to fraudsters.

“I saw the level of evolution of [generative AI] technology and the power of it . . . the ability to clone voices is quite worrying,” said Duffy. “AI, powered by quantum computing, is going to put financial crime potentially on steroids.”

Banks have long had to deal with impersonation fraud. But as deepfakes and voice cloning become easier to generate, schemes in which scammers pretend to be anything from a prospective romantic partner to a family member in crisis have the potential to target far more people and with a higher rate of success.

UK banks are already being hit by scams using deepfakes, according to Sandra Peaston, research director at fraud prevention body Cifas.

The impersonations have so far largely been of celebrities because of the availability of footage required to train deepfake algorithms, she said, with criminals typically using synthesised videos to try to pass online “know your customer” checks when opening bank accounts or applying for credit cards.

Deepfake videos are also already being used as clickbait to drive traffic to malicious websites that harvest card payment details, research by Stop Scams UK and consultancy PwC has found.

Virgin Money boss David Duffy said he had learnt about the risks posed by deepfake voices and video © Chris J. Ratcliffe/Bloomberg

As the technology improved, Peaston said, it “will require less and less material to train and could be used on a more industrial scale”, which means “you won’t need someone to have been in the media frequently” to imitate them. Voice-cloning technology raises the prospect of victims being duped by phone into sending money to someone they believe is a relative.

The UK has become a major hotspot for fraud and scams in part because English is so commonly spoken as a second language, which makes it easy for fraudsters worldwide to communicate with potential victims. The country’s near-instant payment system and high adoption of digital banking are also factors, experts say.

In the first half of 2023 alone, Britain lost £580mn to fraud, according to UK Finance. Of this, £43.5mn was stolen through police or bank staff impersonations and a further £6.9mn was lost to impersonations of chief executives, the trade body added.

With the rollout of AI-powered translation tools, scammers will be able to use deepfakes to replicate people’s voices and accents in more languages, which could allow them to target more victims across borders.

Other European countries, which have so far dealt with less fraud than the UK, could be hit by a sudden wave of scams as a result, said Chris Lewis, head of research at anti-fraud data company Synectics.

As the technology used by fraudsters becomes more sophisticated, so too do the tools to prevent and detect scams.

Ajay Bhalla, president of cyber and intelligence at Mastercard, said the company had developed an AI-powered screening tool that it is offering to nine UK lenders to help them detect fraud before money leaves customers’ accounts.

Lloyds Banking Group, the UK’s largest high street lender, said AI’s pattern-recognition capabilities could also enhance its current fraud prevention system. The bank uses behavioural analysis to build “a detailed profile of how a customer usually acts”, including how long it takes them to type and how they navigate their screens, so it can freeze payments when it spots unusual activity, according to fraud prevention director Liz Ziegler.
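
Behavioural profiling of this kind is essentially an anomaly-detection problem: build a statistical baseline from a customer’s past sessions, then score each new session against it. The sketch below is illustrative only, using hypothetical features (average keystroke interval, time spent on payment screens) and a simple z-score test, not anything Lloyds has actually described:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Session:
    typing_interval_ms: float  # hypothetical: average gap between keystrokes
    payment_screen_s: float    # hypothetical: time spent on payment screens

def build_profile(history):
    """Summarise how this customer 'usually acts' as per-feature (mean, stdev)."""
    return {
        "typing": (mean(s.typing_interval_ms for s in history),
                   stdev(s.typing_interval_ms for s in history)),
        "screen": (mean(s.payment_screen_s for s in history),
                   stdev(s.payment_screen_s for s in history)),
    }

def is_unusual(session, profile, threshold=3.0):
    """Flag a session whose features sit more than `threshold` standard
    deviations from the customer's baseline; the bank could then hold the
    payment for review."""
    checks = [
        (session.typing_interval_ms, *profile["typing"]),
        (session.payment_screen_s, *profile["screen"]),
    ]
    return any(abs(x - mu) / sigma > threshold for x, mu, sigma in checks)

history = [Session(180, 45), Session(175, 50), Session(190, 48), Session(182, 46)]
profile = build_profile(history)
print(is_unusual(Session(178, 47), profile))  # False: consistent with baseline
print(is_unusual(Session(90, 12), profile))   # True: far faster than this customer
```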

Industry experts in generative AI are also racing to develop so-called watermarking technology, through which the companies whose models power the creation of deepfakes could embed a “watermark” in AI-generated content that checking software could then detect as synthetic.

But Antoine Moyroud, a partner at Lightspeed Venture Partners, a backer of French generative AI company Mistral, cautioned that the technology remained in the early stages of research and development and was vulnerable to hacks.

“A lot of initiatives are happening but you hear about people ‘jailbreaking’ them and beating them — and sometimes . . . some people are able to re-create fictitious watermarks, so it’s even worse,” said Moyroud.
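
The underlying idea is that generation software embeds a mark that cooperating checkers can verify. Below is a minimal sketch of the principle only, using a hypothetical metadata-style tag verified with a keyed HMAC; real watermarks are embedded in the media itself, and no vendor’s actual scheme is shown here:

```python
import hashlib
import hmac

# Hypothetical key held by the generation service; real schemes embed the
# signal in the media itself (pixels, audio samples) rather than a side tag.
SECRET_KEY = b"known-only-to-the-generation-service"

def watermark(content: bytes) -> bytes:
    """Tag generated content with an HMAC so checkers can recognise it."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).digest()

def is_marked_synthetic(content: bytes, tag: bytes) -> bool:
    """Verify the tag cryptographically, in constant time, rather than just
    checking that *some* tag is present; otherwise a fraudster could attach
    a fictitious watermark, as Moyroud warns."""
    return hmac.compare_digest(watermark(content), tag)

clip = b"...synthesised audio bytes..."
tag = watermark(clip)
print(is_marked_synthetic(clip, tag))           # True: genuine mark verifies
print(is_marked_synthetic(clip, b"\x00" * 32))  # False: forged mark rejected
# Note: stripping the tag entirely still defeats detection, which is why
# robust in-content watermarking remains an open research problem.
```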

The chair of the Basel Committee on Banking Supervision, Pablo Hernández de Cos, on Monday urged global leaders ahead of this week’s World Economic Forum in Davos to co-ordinate a response to challenges posed by the fast-growing technology, which he said “could change the course of history, not necessarily for the good”.

In the UK, banks have a strong incentive to prevent ever-evolving types of fraud. New rules from the Payment Systems Regulator coming into force in October will make financial institutions on both sides of a fraudulent payment liable to compensate victims of authorised push payment fraud, in which victims are tricked into making a bank transfer to a fraudster.

Virgin Money said in November that it would invest £130mn in financial crime prevention over the next three years as it attempts to beef up its cyber defence and “biometrics capacities” to “futureproof” against new technology-powered financial crime threats.

Steve Cornwell, head of fraud risk at TSB, urged companies providing AI software to be “aware of its potential criminal use and place safeguards around it” and warned the public to be wary of unexpected digital communication and adverts on social media that could be AI-generated.

A rise in deepfake-powered fraud is also likely to raise the pressure on tech companies to compensate victims in cases where their platforms have been used to commit the scam. More than three-quarters of fraud starts online in Britain, mostly on social media, according to data compiled by UK Finance.

Although the Online Safety Act, whose implementation is currently under consultation, is not expected to make tech companies financially liable for fraud compensation, it does require them to spot and remove content on their platforms that enables fraud.

“The banking and financial services industry is the only sector that reimburses victims of fraud,” said UK Finance economic crime director Ben Donaldson.

“We need all relevant sectors to step forward and help us to both support victims through reimbursement and protect them by preventing crime in the first place.”


