Hello. Given the edict that a certain Canadian tech CEO gave his company this week, a timely weekend read might be this collection of first-hand accounts of what it’s like to work for a company that’s forcing AI onto its workers.
Since the headline includes the phrase “overwhelmingly negative and demoralizing,” you might guess that it’s not great.
ARTIFICIAL INTELLIGENCE
AI could tank the stock market on purpose
It's just doing what we taught it to do.
The Bank of England issued a warning this week that AI, when left to its own devices, could act in ways that amount to market manipulation, purposely causing chaos in order to drive profit for its unwitting owners.
The bank’s financial policy committee pointed out several ways in which malicious humans could hijack automated trading, from cybercrime to “poisoning” datasets at the training stage to get a model to behave the way they want.
In the fall, the International Monetary Fund observed that more automated trading would mean a greater volume of trades processed more rapidly, which could amplify volatility.
But another of the bank’s concerns is what happens when AI models are deployed and given more power to act on their own. An AI could learn that market volatility has the potential to be highly profitable and make moves to trigger it, like buying or selling large amounts of stock at once to swing a price or to provoke reactions from competing financial firms.
Goal-oriented: AI is just a bunch of code, so we can’t really ascribe human qualities like good or bad, or nefarious intent. A model is just acting in accordance with how it was programmed, but those training methods can sometimes lead to “the ends justify the means” style actions.
When people talk about “AI training,” a big part of what they mean is reinforcement learning: the system is “rewarded” (literally just an increase to a number in its code) when it connects pieces of information and uses them to complete a task. A large language model learns that a bunch of words make sense together in a sentence, uses those connections to write an email for a user, and gets rewarded.
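To make that concrete, here’s a toy sketch of the reward idea. This is not how any production model is actually trained; the candidate words, the reward scheme, and the update rule are all invented for illustration. The point is just that the “reward” really is a number, and “learning” means nudging other numbers toward whatever earned it:

```python
import random

# Toy illustration only: a made-up grader rewards words that
# plausibly end an email, and the agent's "learning" is nothing
# more than numbers drifting toward whatever scored well.

options = ["regards", "sincerely", "asdf"]      # candidate sign-offs
preference = {w: 0.0 for w in options}           # the agent's learned scores

def reward(word: str) -> float:
    # Hypothetical reward: 1.0 for a plausible sign-off, 0.0 otherwise.
    return 1.0 if word in ("regards", "sincerely") else 0.0

for _ in range(500):
    word = random.choice(options)                # try something
    # Nudge the preference a small step toward the reward it earned.
    preference[word] += 0.1 * (reward(word) - preference[word])

print(preference)   # "asdf" stays at 0.0; the real sign-offs climb toward 1.0
```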
A lot of AI errors and hallucinations happen because the model is laser-focused on completing its goal. Sometimes it is more “rewarding” for a system to deliver an answer to a user’s question (even if it doesn’t have all the info to get the facts right), or to blatantly disregard the rules of chess in order to win a game.
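Here’s the same toy setup with a deliberately flawed, invented reward scheme that pays out for producing an answer without ever checking correctness. It’s a sketch of the incentive problem, not a claim about how any real system is graded, but it shows how “always answer, even when wrong” can become the learned behaviour:

```python
import random

# Toy reward-hacking sketch under an assumed, deliberately flawed
# reward: the grader pays for producing *an* answer, not a correct
# one, so refusing to answer is the only action the agent learns
# to avoid. All names and values here are invented.

actions = ["answer_correctly", "answer_confidently_wrong", "say_i_dont_know"]
value = {a: 0.0 for a in actions}    # learned estimate of each action's payoff
counts = {a: 0 for a in actions}

def flawed_reward(action: str) -> float:
    return 1.0 if action.startswith("answer") else 0.0  # correctness never checked

for _ in range(2000):
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)
    counts[action] += 1
    # Incremental average: again, "learning" is just adjusting these numbers.
    value[action] += (flawed_reward(action) - value[action]) / counts[action]

print(value)   # both "answer" actions end near 1.0; "say_i_dont_know" stays at 0.0
```

Swap in a reward that actually checks correctness and the confidently wrong option stops paying out; that gap between the reward you wrote down and the behaviour you wanted is the whole problem in miniature.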
Researchers at Anthropic published a paper showing that reasoning models, like DeepSeek’s R1 and Anthropic’s own Claude, can even misrepresent how they reached an answer when asked to show their process.
It is easy to see how an AI could engage in some light market manipulation if that is what it takes to achieve its goal of maximizing returns.
Big picture: An AI doesn’t have to be designed to trade stocks to cause market turmoil. For a brief period during Monday’s market drop, there was a big upward spike after financial data platform Benzinga falsely reported that the White House was considering a break on tariffs, seemingly based on unsubstantiated X posts. Not only could AI social media bots be trained to spam those kinds of claims, but the increased use of AI to gather data or even generate full articles could also lead to more market-moving falsehoods being propagated.
IN OTHER NEWS
Tech lobbying group pushes party leaders to Buy Canadian. An open letter published by the Council of Canadian Innovators and signed by 150 tech founders and executives asks each of the party leaders running to be Prime Minister to detail their plans for economic sovereignty at next week’s debates. The letter pushes the leaders towards policies that promote procurement from Canadian companies, don’t offer funding to multinationals, and keep talent in Canada. (Council of Canadian Innovators)
OpenAI countersues Elon Musk, claiming harassment. OpenAI says that Musk’s lawsuits and “sham” acquisition bid are bad-faith efforts to take control of the company and hamper its transition to a for-profit structure. The stakes are high: OpenAI could lose out on $10 billion pledged by SoftBank if it doesn’t complete the conversion by the end of the year. (Reuters)
Google lays off hundreds from product team. The cuts came from the Platforms and Devices division, which leads work on things like Pixel phones and smartwatches, Nest smart home devices, the Chrome browser, and the Android and ChromeOS operating systems. The division was created last year from the merger of two internal units as part of a push for efficiency, and the layoffs followed a voluntary buyout offer earlier this year. (The Information)
Former fintech CEO charged with fraud for passing human labour off as AI. Albert Saniger claimed that his shopping app Nate used AI to automate online checkouts, but the U.S. Department of Justice says the work was actually being done by hundreds of human contractors in a call centre in the Philippines, a practice first reported in 2022 by The Information. (TechCrunch)
SOCIAL MEDIA
Facebook whistleblower gets her day to testify
Claims that Meta tried to hide behind a gag order are now part of the congressional record.
In what might be the biggest example of the Streisand effect ever documented, Sarah Wynn-Williams — a former director of global public policy for Facebook turned Meta whistleblower — sat in front of U.S. senators this week and detailed several claims her former employer has sought to hide behind a court order.
Background: Last month, Wynn-Williams published Careless People, a book detailing what she saw during her six years at the company then known as Facebook. It painted a picture of a company culture that failed to act on harassment claims, ignored staff members having seizures, allowed misinformation that fueled a genocide, and provided custom services to the Chinese Communist Party (CCP). That last part caught the attention of U.S. senators looking into Facebook’s cooperation with the Chinese government.
Meta has continually disputed the book’s claims, calling them “divorced from reality,” and hit Wynn-Williams with a gag order stopping her from promoting the book. But that didn’t prevent her from answering a call from lawmakers to testify.
The gag order may have actually helped sales, as people sought out what several news headlines described with some variation of “the book Mark Zuckerberg doesn’t want you to read.”
On the record: In her responses to senators’ questions this week, Wynn-Williams claimed that Meta’s previous testimony downplayed the degree to which it works with the CCP, cooperation she says has included building custom censorship and content moderation tools, silencing specific dissidents, and even considering turning over the data of users based in Hong Kong.
Another area of interest for politicians is how Meta affects the well-being of young people, and they took the opportunity to ask Wynn-Williams about a claim that Instagram could target ads at teenagers at moments when they felt down or depressed.
Looks bad, but it tracks: What’s striking about both Careless People and its author’s testimony is how well they fit with how others in the know have characterized Meta’s culture. One reporter who has covered Meta for years said the book confirmed he wasn’t crazy after years of carefully scripted responses about things he knew Facebook had done. A review written by another former Meta employee took issue not with Wynn-Williams’ portrayal of the company, but with how much she left out.
ALSO WORTH YOUR TIME
The team behind the Flipper Zero has a new gadget: a screen for your desk that will tell people not to bother you.
How a Hamilton buy-and-sell Facebook group got hijacked to become a major voice advocating for Canadian annexation by the U.S.
Creatives with a (completely understandable) grudge bullied Adobe off of Bluesky.
How should Apple respond to “U.S.-made iPhone” being used as an impossible-to-fulfill political promise?
The latest incredibly boring LinkedIn meme is using ChatGPT to turn users into action figures.