You Didn't Lose Your Privacy. Something Worse Happened.
AI doesn't need you to share anything anymore. It can infer what you never told it, from data you gave away years ago.
I work with people who build AI systems. Not the consumer stuff - the infrastructure underneath it. The pipes. The models that run quietly behind products you use every day without thinking about it.
And for the last two years, every time a friend asks me about privacy, I give them the clean version. The version that sounds like a TED talk. “Yeah, companies collect a lot of data, you should probably check your settings.” I nod. They nod. We move on.
I’m done giving the clean version.
Because what I’ve been watching happen over the last eighteen months is not what most people think is happening. It’s not that companies are collecting more data. That story is twenty years old. It’s that AI has changed what your existing data can say about you. Data you already gave away, years ago, for free, in exchange for convenience - that data just got a thousand times more powerful. And almost nobody is talking about it in a way that matters.
The Vitamins, the Wine, and the Ad
My friend Alessandro is a financial planner in Milan. Smart guy. Reads the Economist. Votes. Has opinions about things. A few months ago, his wife started getting ads on Instagram for fertility clinics. They hadn’t told anyone they were trying to have a baby. They hadn’t searched for it. She hadn’t visited any fertility websites.
What had happened, as far as Alessandro could piece together, was this: she’d stopped buying wine at their usual grocery delivery app. She’d started buying prenatal vitamins at a drugstore - with a credit card. Her phone’s location data showed she’d visited an OB-GYN office, once, for twelve minutes. None of those data points mean anything on their own. But an AI model, trained on the purchasing and behavioral patterns of millions of other women, connected them. The model didn’t know she was trying to get pregnant. It predicted it. And it sold that prediction to an advertiser before she’d even told her mother.
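If it helps to see the mechanism with the mystery stripped out, here is a toy sketch of that kind of pattern-based prediction. Everything in it is made up - the feature names, the numbers, the data - and it uses a generic off-the-shelf classifier, not anyone's real ad-targeting model. The point is only the shape of the thing: signals that mean nothing alone become a confident prediction once a model has seen the same combination in enough other people's histories.

```python
# Toy illustration only (not any vendor's real system). Synthetic data,
# made-up feature names, a generic classifier from scikit-learn.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Three weak signals per "household": stopped buying wine, bought prenatal
# vitamins, visited an OB-GYN office (from location pings).
stopped_wine    = rng.random(n) < 0.15
bought_vitamins = rng.random(n) < 0.05
visited_obgyn   = rng.random(n) < 0.10

# Synthetic ground truth: the outcome is far more likely when signals co-occur.
rate  = 0.02 + 0.25 * stopped_wine + 0.45 * bought_vitamins + 0.30 * visited_obgyn
label = rng.random(n) < rate

X = np.column_stack([stopped_wine, bought_vitamins, visited_obgyn]).astype(float)
model = LogisticRegression().fit(X, label)

# One household that never "told" anyone anything:
none_of_it = np.array([[0.0, 0.0, 0.0]])
all_three  = np.array([[1.0, 1.0, 1.0]])
print(model.predict_proba(none_of_it)[0, 1])  # low: background rate
print(model.predict_proba(all_three)[0, 1])   # high: this is the "segment" that gets sold
```

Run it and the second number comes out far higher than the first. That probability, attached to an ad segment, is the product being sold.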
Alessandro told me this story over dinner, and I watched his face do the thing that everyone’s face does when they realize what’s actually going on. It’s not anger. It’s not even surprise. It’s this slow, sinking recognition - like finding out someone has been standing in your living room for years and you just now noticed.
Here’s the thing I want you to hold onto: Alessandro’s wife didn’t make a mistake. She didn’t overshare. She didn’t fail to read a terms of service. She bought vitamins and skipped wine and went to a doctor. That used to be private. It isn’t anymore, and it wasn’t a policy change that made it this way. It was a capability change. AI can now infer what you never chose to share.
And Alessandro’s daughter had just turned eight. She’d had her own tablet since she was five. I think about that a lot.
The Old Problem vs. the New One
Let me make this concrete, because the way most people think about privacy is about ten years out of date.
The old privacy problem was surveillance. Someone reads your emails. Someone listens to your calls. Someone looks at your browsing history. It’s creepy, it’s invasive, but it’s direct. The information someone gets about you is the information you actually produced. You typed it, you said it, you clicked on it.
That model of privacy is almost quaint now.
The new privacy problem is inference. And inference changes everything, because it means companies don’t need you to tell them anything. They can figure it out.
Here’s what AI inference actually looks like in practice. A research team showed in 2023 that GPT-4 could accurately predict a Reddit user’s age, location, income bracket, and relationship status from nothing but their comment history - posts about cooking, video games, commuting. Not confessions. Not personal details shared on purpose. Just the texture of how someone talks and what they talk about. The model was more accurate than human guessers by a significant margin.
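To be concrete about what that kind of study does, here is roughly the shape of it in code. This is not the researchers' actual prompt or setup - just an illustrative sketch using the standard OpenAI Python client, with a made-up handful of comments and an arbitrary model name - but it is the whole technique: ordinary text in, guessed attributes out.

```python
# Illustrative sketch only: NOT the researchers' code or prompt.
# Assumes the official `openai` Python package and an API key in the environment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

comments = """
- "Anyone else's commute on the M3 getting worse? Forty minutes door to door now."
- "Finally nailed a carbonara without cream. Guanciale makes all the difference."
- "Can't justify the new console this year, maybe after the summer."
"""

prompt = (
    "Below are public comments written by one person. "
    "Estimate their likely age range, city or region, income bracket, "
    "and relationship status. Give your best guess and a confidence for each.\n"
    + comments
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model; the name here is illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```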
This isn’t theoretical. This is operational. A woman I know - Lara, works in HR at a midsize company in Florence - told me she started getting ads for rheumatoid arthritis medication. She’s thirty-four and healthy. No diagnosis, no symptoms, no searches. She panicked. Went to her doctor, ran blood work, everything came back normal. What she eventually pieced together: her mother, who lives with her and uses the household wifi, had been researching the condition on Lara’s laptop while Lara was at work. The model didn’t distinguish between them. It saw the search behavior, cross-referenced it with the purchasing account, and decided someone at that address needed medication ads. Lara spent three weeks convinced she was sick because an algorithm couldn’t tell her apart from her mother.
And this is just what’s visible. The ads are the part you can see. The inferences that determine your insurance quotes, your loan approvals, your hiring scores - those happen in the background, and you never get an ad that tips you off.
Right now, data brokers are running inference models on datasets they’ve been accumulating for a decade. Your old data - the stuff you gave Uber in 2016, the loyalty card you signed up for in 2019, the free wifi you connected to at that hotel - is being re-analyzed by systems that didn’t exist when you gave it away. You consented to data collection. You never consented to AI-powered data interpretation, because it didn’t exist yet.
Think about that for a second. The terms of service you clicked “Accept” on five years ago gave a company the right to collect your location data. Fine. But nobody told you - because nobody could have told you - that in 2026 an AI model would combine that location data with your purchase history and your social media activity and infer your political affiliation, your mental health status, and whether you’re likely to be involved in a lawsuit in the next twelve months.
You agreed to give someone your ingredients. You didn’t agree to let a machine that didn’t exist yet cook with them.
You Can’t Delete a Prediction
There’s a layer beneath the inference problem that’s even harder to see, and it’s the one that keeps me up at night.
When a company collects your data directly - your name, your email, your search history - you can at least imagine requesting it, deleting it, opting out. Europe’s GDPR was built around this idea. An Austrian law student named Max Schrems once requested his personal data from Facebook and received a document that ran over a thousand pages. That request helped spark a regulatory movement. It was built on a simple principle: you should be able to see what they know about you.
But AI inference breaks that principle at the root.
Because what do you request the deletion of when the “data” about you isn’t data you produced - it’s a prediction a model made? If an AI infers from your Spotify listening patterns, your sleep schedule (tracked by your phone’s screen-on time), and your grocery purchases that you’re showing early signs of depression, where does that inference live? It’s not in a database with your name on it. It’s a probability score in a model that was trained on ten million people who share your behavioral patterns. You can’t FOIA a statistical correlation. You can’t opt out of a pattern.
It gets worse. Even if you delete your accounts, go off-grid, and scrub every database you can find, the models have already learned from your data. Your behavioral patterns have been absorbed into training sets that improve predictions about everyone else. You can remove yourself from the inputs, but you can’t remove your contribution to the model’s understanding. It’s like trying to take your voice out of a choir recording. The song has already been sung.
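If you want the toy version of why deletion doesn't help: train a model, throw away the training data, and the model predicts exactly as it did before. The sketch below uses synthetic data and a generic classifier, but the point it makes is general - the pattern lives in the weights, not in your rows.

```python
# Toy continuation of the earlier sketch: deleting the rows after training
# does not un-teach the model.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Two synthetic behavioral features; the outcome occurs only when both are present.
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1]] * 500, dtype=float)
y = np.array([0, 1, 0, 0] * 500)

model = LogisticRegression().fit(X, y)

del X, y                                   # "delete your account"
print(model.predict_proba([[1.0, 1.0]]))   # the prediction survives unchanged
```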
And here’s where it gets personal in a way Lara didn’t expect. After the arthritis ad incident, she tried to “fix” things. She spent an entire weekend clearing her cookies, resetting her ad preferences, opting out of data broker sites. Two weeks later, she started getting ads for stress management apps and therapy services. The system had noticed her sudden burst of privacy-seeking behavior, and it inferred - correctly, as it turned out - that she was anxious.
This is the part that Alessandro couldn’t get past. “So what do I even do?” he asked me. And I watched him do the other thing that everyone does, which is the shrug. Not a dismissive shrug. A defeated one. The shrug that says: it’s too big, it’s already happened, I’ll just keep living my life.
I understand that shrug. I’ve done it myself. But the shrug is exactly what makes the system work. And I want to explain why, because understanding the psychology of this is the only thing that’s ever changed anyone’s behavior when I’ve had this conversation.
Why Smart People Do Nothing
The reason most smart people do nothing about privacy isn’t that they don’t care. It’s that the cost of caring feels infinite and the reward feels invisible.
You could spend a weekend degoogling your life. You could switch browsers, install a VPN, audit your apps, read terms of service. And at the end of it, you’d have... what? You can’t feel the difference. Nothing changes in your daily experience. Your phone still works. Your ads might be slightly less creepy for a month before the system re-learns you through some other channel. The feedback loop is broken. Exercising gives you endorphins. Eating well gives you energy. Protecting your privacy gives you nothing you can perceive.
And on the other side, the cost is designed to be high. Every privacy-respecting alternative is slightly worse. The degoogled phone is clunkier. The privacy browser breaks half the websites you use. The encrypted messaging app is the one where none of your friends are. The system doesn’t just fail to reward you for protecting yourself - it actively punishes you with friction.
This isn’t an accident. It’s a business model. The ad-driven internet is built on the premise that your data is the product, and every design choice flows from that premise. The “Accept All” button is big and blue. The “Manage Preferences” button is gray text, three clicks deep, and resets every time you clear your cookies. You’re not lazy for clicking Accept. You’re responding rationally to a system that has spent billions of dollars making the alternative as painful as possible.
So here’s what actually happens inside your head: you learn that caring is expensive and useless, so you stop caring. Psychologists call this learned helplessness. You call it “I have nothing to hide.” But “I have nothing to hide” was never really a belief. It’s a coping mechanism. It’s the story you tell yourself so the shrug doesn’t feel like surrender.
I know this because I told myself the same story for years. And then I watched what happened with Alessandro’s family, and I watched it happen to other people I care about, and at some point the story stopped working.
I want to be careful here, because I’m not writing this to make you feel bad. I’m writing it because I think something has changed, and the people I care about don’t know about the change yet.
You are not broken for clicking Accept. You are not stupid for using Gmail. The system was designed by some of the most well-funded engineering teams in human history to make sure you would do exactly what you did. The guilt is misplaced. If you feel it, put it down. It’s not yours to carry.
But here’s what has changed, and it’s the reason I stopped giving the polite version:
AI inference means the deal has been renegotiated without your input. The data you gave away under the old terms - when “collection” meant a company had a record of what you bought - is now being used under new terms, where “collection” means a machine can predict your future behavior, your health risks, your relationship stability, and your political vulnerabilities. You said yes to a filing cabinet. You got a psychologist with perfect memory who works for whoever pays.
And the window to do something about this - not to fix it perfectly, but to make it meaningfully better for yourself and for the people in your life - that window is still open. But I don’t think it stays open forever.
I think about Alessandro’s daughter, and I think about all the kids who are eight or ten or thirteen right now, who have never known a version of the internet that wasn’t tracking them, who have already generated years of behavioral data that AI models are learning to interpret in ways we can’t fully predict yet. Those kids never had the “before” to compare to. They won’t feel the loss, because they never had the thing that was lost. And that’s the part that moved me from the polite version to the honest one.
What I Actually Tell My Friends to Do
Here’s what I actually tell the people in my life to do. Not the theoretical stuff. The stuff that works, that takes real time and real effort, but that meaningfully changes your exposure.
The first thing - and this takes three minutes, so do it right now if you can - is to switch your default browser to Firefox and install an extension called uBlock Origin. Not just an ad blocker. A content blocker. It stops trackers from loading in the first place. This single change blocks a large share of the third-party tracking that follows you across the web. Three minutes. Done.
The second thing is to go into your phone’s privacy settings - both iPhone and Android have them now - and turn off the advertising identifier. On iPhone, it’s Settings, then Privacy & Security, then Tracking, and you toggle off “Allow Apps to Request to Track.” On Android, it’s Settings, then Privacy, then Ads, and you delete your advertising ID. This doesn’t stop all tracking. But it removes the single easiest mechanism companies use to follow you across apps. It’s the digital equivalent of taking your name tag off at a conference. People can still figure out who you are, but you’re not making it trivially easy.
The third thing takes a Saturday morning. Go through your phone and delete every app you haven’t used in the last thirty days. Not because those apps are necessarily doing something evil right now, but because every app is a surface. It’s a potential channel for data collection, and more importantly, for AI inference. An app you downloaded three years ago for a trip to Spain might still have background location access. A game your kid installed might be sharing usage patterns with a data broker. The principle is simple: reduce your surface area. Every app you remove is one less input into the models that are building your profile.
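For what it's worth, if you - or whoever in your family is the designated nerd - are comfortable with a terminal, you can do a rough version of this audit programmatically on Android. The sketch below assumes you have Android's adb tool installed and USB debugging turned on; it only lists which user-installed apps request a few sensitive permissions, and you would still confirm what's actually granted in Settings. On iPhone, you do this by hand in Settings.

```python
# Rough audit helper for the technically inclined. Assumes `adb` is installed
# and the phone is connected with USB debugging enabled.

import subprocess

def adb(*args: str) -> str:
    return subprocess.run(["adb", "shell", *args],
                          capture_output=True, text=True).stdout

# User-installed (third-party) packages only.
packages = [line.split(":", 1)[1].strip()
            for line in adb("pm", "list", "packages", "-3").splitlines()
            if ":" in line]

SENSITIVE = ("ACCESS_FINE_LOCATION", "RECORD_AUDIO", "CAMERA", "READ_CONTACTS")

for pkg in packages:
    dump = adb("dumpsys", "package", pkg)
    requested = [p for p in SENSITIVE if p in dump]
    if requested:
        # "Requests" is not the same as "granted" - check grant status in Settings.
        print(pkg, "requests:", ", ".join(requested))
```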
The fourth thing is the hardest, and it’s the one most people push back on: switch your messaging to Signal for the conversations that matter. Not all conversations. You don’t have to become a cryptography enthusiast. But the conversations with your partner about finances, health, family decisions - those conversations are training data for inference models if they happen on platforms that can read them. Signal is end-to-end encrypted, open source, and free. The interface is nearly identical to iMessage. The friction is getting your people to switch, and I won’t pretend that’s easy. But you only need four or five people. Your inner circle. Start there.
The fifth thing is for your kids, if you have them. Sit down with your child’s tablet or phone and audit the permissions. Check which apps have access to location, microphone, camera, and contacts. Then ask yourself: does a coloring app need to know my child’s location? The answer is always no. For kids under thirteen, I’d go further. Use the parental controls built into iOS and Android to disable ad tracking entirely and restrict background app refresh. Your child cannot consent to data collection. They don’t understand what inference means. You are the only person standing between them and a system that will build a behavioral profile on them before they’re old enough to understand what that sentence means.
None of this makes you invisible. That’s not the goal. The goal is to make the inferences about you less accurate, less complete, and less valuable. You’re not building a fortress. You’re closing windows.
The Window
I want to zoom out for a second, because this isn’t just about you and me and our browser settings.
There is a version of the next twenty years where AI inference gets better - and it will get better, that’s the nature of the technology - and we simply accept it. We accept that every human behavior generates data, that every data point feeds a model, and that every model output is available to whoever will pay for it. We accept that privacy is a twentieth-century concept, like landlines and handwritten letters, and we move on.
I’ve met people who believe this is inevitable. Some of them are very smart. Some of them build the systems I’m describing.
I don’t believe it’s inevitable. But I believe it’s the default. It’s what happens if nobody does anything, and right now, not enough people are doing anything, because not enough people understand what has actually changed.
Remember Alessandro? After the fertility clinic ads, he did something I didn’t expect. He didn’t just change his own settings. He spent a Sunday afternoon helping his parents - both in their sixties, both on Facebook, both with health data scattered across a dozen apps they barely use - go through the same process I just described. It took four hours. His mom kept asking “but why would anyone care about my grocery list?” And he kept explaining, patiently, that it’s not about the grocery list. It’s about what an AI can infer from the grocery list.
His daughter watched them do it. She didn’t understand all of it. But she understood that her dad thought it was important enough to spend a Sunday on.
That’s the thing about this particular problem. The technical solutions matter. The policy fights matter. But the thing that actually changes the trajectory is regular people deciding that this matters enough to act on it and to talk about it with the people they love. Not with fear. Not with paranoia. With the same calm urgency you’d use to tell a friend that the lock on their back door doesn’t work.
I don’t know exactly where the line is - the line where inference gets so good and so pervasive that individual action stops mattering. I don’t think we’re there yet. But I think we’re closer than most people realize, and I think the gap between what the public understands and what the systems can actually do is growing every month.
You didn’t lose your privacy because you were careless. You lost it because the rules changed after you’d already played your hand. But the game isn’t over. The window is still open, and the people in your life - your parents, your partner, your kids, the friends who still think this is about someone reading their emails - they need you to explain what actually happened.
Not the polite version. The real one.
Because right now, the best thing you can do for someone you care about is send them something that makes them understand what changed and what to do about it.
This is that something.