The Bank of England issued a warning this week that AI systems, left to their own devices, could act in ways that amount to market manipulation, purposely causing chaos in order to drive profit for their unwitting owners.
The bank’s financial policy committee pointed out several ways in which malicious humans could hijack automated trading, from cybercrime to “poisoning” datasets at the training stage to get a model to behave the way they want.
In the fall, the International Monetary Fund observed that more automated trading would mean more trades processed more rapidly, which could increase volatility.
But another of the bank’s concerns was what happens when AI models are deployed and given more power to act on their own. That’s because AI could learn that market volatility has the potential to be highly profitable, and make moves to trigger those events, like selling or buying large amounts of stock at once to influence the price or to trigger moves from competing financial firms.
Goal-oriented: AI is just a bunch of code, so we can’t really ascribe human qualities like good, bad, or nefarious intent to it. Models simply act in accordance with how they are trained, but the way they pursue those goals can sometimes lead to “the ends justify the means” style actions.
When people talk about “AI training,” they generally mean reinforcement learning: the system is “rewarded” (literally just a number that gets increased in its code) when it connects pieces of information and uses them to complete a task. A large language model learns that certain words make sense together in a sentence, uses those connections to write an email for a user, and gets rewarded.
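To make that concrete, here is a minimal sketch of the reward idea, using a classic two-armed bandit toy rather than any real trading or language model. The payoff probabilities and update rule are illustrative assumptions, not anything from the Bank of England report; the point is simply that the “reward” is a stored number, and the agent drifts toward whatever action makes that number go up.

```python
import random

def run_bandit(steps=5000, seed=0):
    """Toy reinforcement learner: two actions, one pays off more often."""
    rng = random.Random(seed)
    # Hypothetical payoffs: action 1 is rewarded more often than action 0.
    reward_prob = [0.2, 0.8]
    values = [0.0, 0.0]   # the agent's running estimate of each action's reward
    counts = [0, 0]
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if rng.random() < 0.1:
            action = rng.randrange(2)
        else:
            action = max(range(2), key=lambda a: values[a])
        reward = 1.0 if rng.random() < reward_prob[action] else 0.0
        counts[action] += 1
        # The "reward" step: nudge a stored number toward what was just observed.
        values[action] += (reward - values[action]) / counts[action]
    return values

values = run_bandit()
print(values)  # the estimate for action 1 ends up well above action 0's
```

Nothing here "wants" anything; the agent just repeats whichever action has historically made its reward number larger, which is the same dynamic that could, in principle, steer a trading model toward volatility if volatility pays.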
A lot of AI errors, or hallucinations, arise because the model is laser-focused on completing its goal. Sometimes it is more “rewarding” for a system to deliver an answer to a user’s question (even when it doesn’t have the information to get the facts right), or to blatantly disregard the rules of chess in order to win a game.
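A tiny toy model can show why a badly designed reward encourages guessing. In this sketch (an illustration of the incentive, not any real training setup), the reward pays for giving an answer rather than for being right, so a policy that always answers, even when it doesn’t know, out-earns an honest one that stays silent when unsure. All names and numbers here are hypothetical.

```python
import random

def reward_for(answered, correct):
    # Flawed reward: any answer earns 1, staying silent earns 0.
    # Correctness is ignored entirely -- that's the design mistake.
    return 1.0 if answered else 0.0

def honest(knows, rng):
    # Answers only when it actually knows the fact; otherwise stays silent.
    return (True, True) if knows else (False, False)

def always_answer(knows, rng):
    # Answers every time, wrongly when it doesn't know.
    return (True, knows)

def evaluate(policy, trials=1000, seed=1):
    """Average reward a policy collects under the flawed reward."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        knows = rng.random() < 0.5  # half the time the model "knows" the fact
        answered, correct = policy(knows, rng)
        total += reward_for(answered, correct)
    return total / trials

print(evaluate(honest), evaluate(always_answer))
```

The honest policy averages about 0.5 reward while the always-answer policy collects the maximum, so optimizing this reward pushes toward confidently answering regardless of knowledge, the same shape of incentive behind hallucinated answers or rule-breaking chess moves.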
Researchers at Anthropic published a paper showing that reasoning models, like DeepSeek’s R1 and Anthropic’s own Claude, can even misrepresent the facts when asked to show their reasoning process.
It is easy to see how an AI could engage in some light market manipulation if doing so serves its goal of maximizing returns.
Big picture: An AI doesn’t have to be designed to trade stocks to cause market turmoil. For a brief period during Monday’s market drop, there was a big upward spike when financial data platform Benzinga falsely reported that the White House was considering a break on tariffs, seemingly based on unsubstantiated X posts. Not only could AI social media bots be trained to spam those kinds of claims, but the increased use of AI to gather data or even generate full articles could also lead to more market-moving falsehoods being propagated.