Omar Green has been trying to realise a similar vision since 2012, when he founded Wallet.AI. He is cagey about the details of his product, and declines to discuss the company’s finances or investors, saying they have asked to remain anonymous.
But he is open and thoughtful about his principles, describing how he hopes to build personalised AI that can coach and coax people through the difficult long-term grind of reforming their spending habits and achieving financial independence.
“It turns out that’s a really hard problem, because it is about… bringing a certain degree of reality into the sort of delusional set of reality that we’d like to live in,” he says.
That could mean an AI that nudges users, via a smartphone screen or a synthesised voice in their ear, with insights and suggestions drawn from their own behaviour and that of others like them. But it could also mean an AI that is capable of listening with simulated empathy to a dire situation – your daughter needs private medical treatment, and you can’t afford it – and then frankly discussing your options, and what you might have to do to make it happen.
This, too, could go deeply wrong. Investment or spending advice from an AI could be skewed to its creator’s commercial advantage, pushing people into bad decisions that benefit the company.
O’Neill points out that it could also be deliberately “gamed” by unscrupulous financiers who find out hidden ways of tricking the AI into favouring their products, just as today unscrupulous web designers trick Google into putting useless nonsense at the top of its search results.
Küsters fears that large numbers of people getting personal advice from the same set of AIs could have wider effects, creating “a new type of herding behaviour” that amplifies market volatility.
Green is far from ignorant of such dangers. He explains how, in the late 2010s, he hoped to work with big banks to make Wallet.AI’s insights part of their mainstream products. But these partnerships mostly collapsed because, he claims, banks actually profit from their customers making bad financial decisions – such as getting into debt they can’t afford – and do not want them to make better ones.
“They couldn’t figure out how to turn [Wallet.AI] into a growth mechanic without doing things that would be predatory,” he says. One credit executive asked him: “You do understand that sometimes we produce programmes for our customers that we don’t want them to take advantage of? That we can’t afford for them to take advantage of?”
That experience underscores the reality that AI does not spring into being from nowhere, free of earthly bonds. It is trained by, shaped by, and ultimately serves the interest of the institutions that create it. Whether you trust AI to manage your money will depend on how far you trust existing corporations, financial markets, and capitalism itself.
A 2021 survey by McKinsey of 1,843 firms does not inspire confidence, finding that most respondents were not regularly monitoring their AI-based programmes after launching them.
Green is deeply concerned about what will happen if Big Tech incumbents such as Meta, with its historic culture of “moving fast and breaking things”, or Apple, with its controversial dominion over the iPhone app ecosystem, define the shape of future AI finance.
Popson and Rooney argue that the financial industry is highly regulated and will not get away with behaving like Big Tech, while Küsters says we need more specific regulations similar to, but more robust than, the European Union’s proposed AI Act.
That does not mean that the AI industry can simply leave it to the politicians and wash its hands of the problem.
“I am a cautious optimist,” says Green. “I think that if [AI makers] can be taught to believe that there is an incentive to building systems that are helpful, that avoid harm, that represent the angels of our better natures, then they’ll build them… Let’s show some discipline as makers, and try to build the world we want to exist.”