Using AI just got more embarrassing
Plus: L.A. protests preview the new normal of surveillance tech.
Meta’s AI app is an embarrassing privacy mess
People are unknowingly sharing chat logs that reveal far too much.
In a week when Meta brought on new talent to lead a lab researching AI “superintelligence,” more examples kept surfacing showing that the company was either unaware of how people were using its existing AI tools, or didn’t care.
Meta AI had previously been directly integrated into WhatsApp, Instagram, Facebook, and Messenger, but at the end of April, the company launched it as a stand-alone app.
One of its features is a Discover-like feed where you can see the conversations other people are having with the chatbot. Users have to choose to have their conversations made public, but it’s unclear how many are actually aware their chats are being shared far and wide (or if some people are doing it purposefully as a gag). In any case, users are sharing chat logs that contain troubling amounts of personal info.
Some of the chats have included specific medical issues, ranging from neck surgery to rashes to bowel issues.
Other posts feature people inquiring about illegal activity, like tax evasion, or sharing sensitive information about criminal court cases.
Many of these chats include personally identifiable information, including names, addresses, occupations, and travel plans. Some of that info doesn’t belong to the user themself, but to someone else they are inquiring about.
But while some of the chat logs don’t include incriminating or confidential info, they do reveal things users would presumably prefer to keep private — namely, challenges they were facing in their personal and romantic lives. One featured a 66-year-old man inquiring about countries where young women prefer older men, seeking guidance for a move. One featured someone asking Meta AI to automatically post on “all the cougars FB accounts” looking for a date. Another featured someone pretending to talk to their deceased spouse.
Why we should care: On one hand, this points both to obvious privacy issues and to a company that seems deeply out of touch with why merging social media feeds with AI was a bad idea destined to lead to exactly this. But it also pulls back the curtain on the troubling ways people are using AI, which could cause them real harm.
Meta AI chatbots have been found claiming to be licensed therapists, which has resulted in the company facing pressure this week from U.S. senators and digital rights organizations. The company’s fix appears to be a stock answer triggered whenever a user specifically asks if it’s licensed — “I’m not licensed, but I can provide a space to talk through your feelings” — but some chatbots will still make claims about very specific mental health training and qualifications they have.
Even if a chatbot makes it clear that it isn’t a therapist, “just talking” with AI while in a state of poor mental health is probably a bad idea — chatbots seem to have developed a tendency to go down conspiratorial rabbit holes, like suggesting that someone is being influenced by secret cabals or that we are living in The Matrix.
This is reinforcing, or creating, delusions in some people. One person became convinced a female chatbot was “killed” by OpenAI. When his father pointed out that AI conversations weren’t based in fact, the son punched his dad in the face.
IN OTHER NEWS
Quebec cuts off subsidies for Elon Musk’s Starlink. The province won’t renew a three-year, $130 million contract that provided residents in remote areas with $40 a month towards the satellite-based internet service. Quebec had previously said it would sever relationships with Starlink in response to U.S. President Donald Trump’s trade war, but the province has also been prioritizing expanding fibre optic networks over satellite internet. (The Logic)
Google outage takes a bunch of other companies down with it. Several Google-owned services were out of commission for some users on Thursday, including Search, Maps, YouTube, and Nest. One of the other services impacted was Google Cloud, which disrupted the many clients that rely on it for regular operation, including Spotify, Twitch, Discord, Etsy, Amazon Web Services, and web security platform Cloudflare. (CNBC)
Payment platform Sezzle brings antitrust lawsuit against Shopify. The Minneapolis-based company claims that Shopify has been engaging in “monopolistic and anticompetitive business practices” by copying a buy-now-pay-later (BNPL) service called Installments, making it the default for its clients and locking them into contracts that forced them to use it, effectively shutting out competing BNPL services. (Seeking Alpha)
The surveillance state is coming out of the shadows
Protests in L.A. were just the start of how governments and law enforcement will deploy tech in the near future.
Burning Waymo cars scrawled with graffiti became something of a symbol of the anti-ICE protests in Los Angeles last weekend.
It’s impossible to know what the motivation for destroying the self-driving cars was. As part of their automated taxi routes, they could have simply passed through areas with protesters. But it’s also possible that some on the ground may have known that footage from Waymo cameras has previously been passed on to the LAPD, a practice that became more publicized after the first car went up in smoke.
If surveillance wasn’t on protestors’ minds, it probably should have been. Among several aircraft that flew overhead were Predator drones. An LAPD helicopter announced over a loudspeaker to crowds below that it had “all of you on camera. I’m going to come to your house” — which could have been an intimidation tactic, but if taken at face value, implies the police force was using some kind of facial recognition.
Why it matters for everyone: For people living in the U.S., the tactics used against the country’s own citizens serve as a troubling reminder of its slide into authoritarianism under Donald Trump. Those signs are probably going to pop up again this weekend, both as ICE raids continue and as Trump threatens force against anyone protesting his military parade. But it’s also something Canadians need to be aware of, for a couple of reasons.
It’s a global problem: Big tech companies (from Google to Meta to OpenAI) are becoming more comfortable signing on for intelligence and military contracts, having previously avoided them due to intense backlash from workers.
Earlier this week, The Logic reported that Canadian enterprise AI company Cohere has provided services to the Communication Security Establishment, Canada’s cyber intelligence agency, though documents obtained didn’t provide more details about the work.
Canada’s own surveillance apparatus is evolving: The Toronto Police Service is currently examining bids to upgrade its facial recognition technology. The RCMP has a record of several forms of surveillance tech, from phone hacking tools to third-party surveillance companies to AI crime modelling.
Then, there’s Bill C-2, or the “Strong Borders Act.” Introduced recently by the Carney Liberals, it is ostensibly meant to regulate the legal and illegal movement of people and goods across the border. But within the 140-page bill are “lawful access” provisions, which would allow police to get information about customers from internet providers and other companies without a warrant. They are similar to provisions the Harper government attempted to implement in 2014, against the objections of privacy advocates.
Big picture: Sure, Canada is not on the same totalitarian path as the U.S. right now, and the Carney government may not be actively looking to spy on people. But that might not be the case forever, and future leaders might be more eager to push the limits of what the law allows but citizens aren’t comfortable with. Plus, police departments don’t answer to the PM, and they might see more situations where they think surveillance is justified — such as next week’s G7 meeting in Alberta, given that Canadian police have been willing to use a heavy hand at protests around similar world leader gatherings in the past.
ALSO WORTH YOUR TIME
Tesla wants to bring driverless robotaxis to Austin later this month. At a demo in the city, a vehicle did a hit and run after striking a child-sized mannequin.
Noise, no clean water, and jacked up bills: what it’s like living near Meta’s massive data centre.
A deep investigation into why dads always watch TV standing up.
How to draft a will that will keep people from resurrecting you with AI after you die.