Why Everything Online Feels the Same
On the logic of recommendation, the aesthetics of legibility, and the politics of prediction
The Model of Me
It started when TikTok decided I was a “clean girl”. I’d liked one video, a pantry organisation clip with soft jazz and a neutral-toned label printer, and suddenly my For You Page was full of sleek ponytails, beige yoga sets, and “hot girl morning routine” vlogs. I hadn’t asked for this aesthetic. But it fit, a little, and so I watched.
The algorithm doesn’t know you. But it models you, not as a self, but as a statistical projection. Every click, pause, and scroll becomes part of a feedback loop. What others like you liked, you are likely to like. And so it shows you more: of them, of yourself. “The goal”, writes Blaise Agüera y Arcas, “isn’t to understand who you are, but to predict what you’ll do”.
Personalisation isn’t personal. It’s predictive averaging, wrapped in aesthetic packaging. The result isn’t a mirror but a suggestion: an invitation to become more like what you’re already behaving like, though the version of you it proposes is never quite stable. You are rendered as a shifting cluster of probabilities, continuously refined. This isn’t dystopian in the obvious sense. It’s subtler than that. As Byung-Chul Han writes, control today is exercised not by force but by self-optimisation. The algorithm doesn’t coerce you; it seduces you, softly, into legibility.
This post is a field guide to that seduction. How machine learning turns desire into data. How trends turn people into profiles. And what it means to live as a projected self.
How the Algorithm Works (And Doesn’t)
First: there is no single algorithm. There are many; each platform has its own black box, trained on different objectives. TikTok optimises for watch time and rewatch rate. YouTube blends click-through with long-term engagement. Instagram, for now, still leans more on your social graph (who you follow, who follows you), though machine learning is eating that too.
The term “algorithm” is a bit of a red herring. It makes it sound like a fixed formula, but in reality what drives these systems is machine learning, a process less like a script and more like an evolving statistical organism. It doesn’t “know” you; it learns to model you.
I. Machine Learning 101
At its core, machine learning is function approximation. You feed a model a vast quantity of input data (your taps, pauses, likes, comments) and label it with an outcome (did you rewatch the video, or did you scroll away?). The model “learns” patterns by adjusting internal parameters—billions of tiny weighted guesses—to better predict that outcome in the future. That’s it. You never told it what you liked. You just behaved, and it correlated. “Machine learning doesn’t explain”, notes Kate Crawford in Atlas of AI, “it predicts.” It doesn’t think in terms of intention, meaning, or even personhood. You are rendered not as a subject, but as a vector in a multidimensional feature space. This is what you feel when your feed suddenly shifts: you’ve been sorted into a new similarity class. Your data isn’t used to understand you; it’s used to compare you.
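If that sounds abstract, here is the shape of it in code: a toy model that learns, from nothing but fabricated behavioural signals, to predict whether a viewer rewatches. Everything in it, the three features, the hidden rule, the training loop, is an illustrative assumption, not any platform’s actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row of "behaviour": [fraction of video watched, paused?, liked?]
X = rng.random((500, 3))
# The label: did the viewer rewatch? A hidden rule the model must recover
# purely from correlation -- it is never told why anyone rewatches.
y = (0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 500) > 0.7).astype(float)

w, b = np.zeros(3), 0.0                    # the "tiny weighted guesses"
for _ in range(2000):                      # nudge the weights toward
    p = 1 / (1 + np.exp(-(X @ w + b)))     # better predictions of the label
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

print("learned weight per signal:", w.round(2))
```

Behaviour goes in, a prediction comes out, and nowhere in between does anything resembling interpretation take place.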
II. Training on You
Most recommender systems use reinforcement learning: the model is rewarded when you engage, punished when you don’t. Over time, it gets better at keeping you there. TikTok in particular excels here; its “For You” algorithm is trained not just on what you watch, but on how you watch it: how long you linger before swiping, whether you rewatch, whether you hesitate before liking.
This kind of training creates a tight feedback loop (sketched in code just after the list below):
You scroll.
The model guesses what you want.
You respond (click, ignore, share, pause).
It updates its guess.
Repeat.
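Sketched as code, with a single simulated viewer and four invented genre labels standing in for the real thing, the loop looks roughly like this:

```python
import random

random.seed(1)
genres = ["clean girl", "dark academia", "gym bro", "cottagecore"]
scores = {g: 1.0 for g in genres}        # the model's current guess about you

def viewer_engages(genre):
    # The simulated viewer has only a mild preference for one aesthetic.
    return random.random() < (0.6 if genre == "clean girl" else 0.4)

for _ in range(500):
    # 1. The model guesses what you want, in proportion to its scores.
    total = sum(scores.values())
    pick = random.choices(genres, weights=[scores[g] / total for g in genres])[0]
    # 2. You respond. 3. It updates its guess. 4. Repeat.
    scores[pick] *= 1.05 if viewer_engages(pick) else 0.97

shares = {g: round(s / sum(scores.values()), 2) for g, s in scores.items()}
print(shares)   # the mild preference tends to harden into a dominant genre
```

Nothing in the loop asks what engagement means; a slight statistical preference is enough for the system to converge on a genre and keep feeding it back.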
The loop is recursive: the system trains on your reaction to its own output, so over time the model doesn’t just reflect your taste; it shapes it. This is what Zeynep Tufekci calls the “radicalisation engine” problem: a system optimised purely for engagement tends to steer toward extremity, because intensity = retention. But it doesn’t need to be political extremism. It can just be aesthetic convergence. Beige girls. Feral girls. ADHD-core. The algorithm is not biased toward truth. It is biased toward stickiness.
III. The Predictive Self
This is where identity gets weird. Because what the algorithm gives you isn’t what you want, it’s what people like you tend to want. You’re not being treated as an individual, but as part of a statistical cloud. TikTok doesn’t say: you’re a student. It says: people who watched these five videos also watched these fifteen others. A new identity is softly implied. Sometimes it’s right. Sometimes it’s eerily right. But even when it’s wrong, it’s still a form of world-building. It suggests a version of you that you might not have invented, but now that you’ve seen it, it’s available. As Taina Bucher argues in If...Then: Algorithmic Power and Politics, the logic of machine learning is performative: it doesn’t just sort users; it proposes subject positions. In this way, the algorithm becomes a theory of you — not a mirror, but a model. Not your self, but your likely next self. And that distinction changes everything.
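That “people like you” logic has a name, collaborative filtering, and in miniature it can be sketched with a few invented watch histories and simple co-occurrence counts; nothing like the scale of a real platform, but the same shape of inference:

```python
from collections import defaultdict
from itertools import combinations

# Invented watch histories standing in for millions of users.
histories = [
    {"pantry restock", "sleek ponytail tutorial", "pilates vlog"},
    {"pantry restock", "sleek ponytail tutorial", "beige haul"},
    {"pantry restock", "pilates vlog", "beige haul"},
    {"essay film", "library tour", "annotating poetry"},
]

# Count how often two videos appear in the same person's history.
cooccur = defaultdict(int)
for history in histories:
    for a, b in combinations(sorted(history), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def recommend(seen, k=3):
    # Score unseen videos by how often they co-occur with what you watched.
    scores = defaultdict(int)
    for watched in seen:
        for (a, b), n in cooccur.items():
            if a == watched and b not in seen:
                scores[b] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

# One pantry video is enough to place you in the "clean girl" cluster.
print(recommend({"pantry restock"}))
```

One video is enough to assign you a neighbourhood; the rest of the cluster arrives on its own.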
Viral Realism
Online, everything is a genre before it is a fact. You don’t just post a picture of your dinner; you post a girl dinner. You’re not just sad; you’re in your feral era. Your bookshelf isn’t just messy; it’s dark academia. Each trend arrives with a set of props, colours, fonts, and vocal inflections. Each is a pocket-sized aesthetic ontology.
The algorithm did not invent this. But it supercharged it, because genre is compressible. It’s easy to detect, easy to tag, and, most importantly, easy to recommend. If enough people like videos with a pink hue, lo-fi jazz, and a Hydro Flask, the system learns to keep showing them. Not because it understands why, but because it works. “The neural net doesn’t know it’s recommending ‘coastal grandmother’”, as AI researcher Janelle Shane quips; “it just knows that beige linen + beach scenes + Norah Jones = attention”.
In a traditional media ecosystem, trends were seasonal, editorial, top-down. On algorithmic platforms, they’re emergent: born from repetition, refined by machine learning, and fed back into the loop. Virality is not a spike; it’s a statistical convergence, where a content form becomes optimised enough to outcompete its neighbours. This leads to what writer Kyle Chayka calls “algorithmic monoculture”: feeds full of eerily similar content, different faces saying the same thing in the same voice, all generated independently. Trends aren’t just things we follow; they become parameters that define what content is even seen.
I. Compressed Identities
To succeed in this environment is to be legible to the algorithm. You have to be indexable, easy to file, and easy to sell. And so people increasingly present themselves not as messy, contradictory selves, but as aesthetic packages: the granola girl, the finance bro, the art hoe, the tradwife, the mushroom boy. These are not styles; they are identities optimised for recommendation. “We are no longer real”, writes Mark Fisher, “but instead become simulations of ourselves”.
This is why so much viral content feels uncanny. It’s not fake, exactly. But it’s not lived, either. It is performed realism, designed to appear authentic within the constraints of what the algorithm will surface. This is not always a bad thing. Sometimes these templates make life more navigable. You try out “clean girl” for a week and learn something about what order feels like. But as more people conform to these legible forms, the algorithm rewards them more, which leads to more conformity, until, perhaps, identity itself becomes a probability distribution.
II. The Irony of Personalisation
In theory, personalisation means diversity: everyone gets their own feed. But in practice, recommendation systems optimise for sameness. Not absolute sameness, but near neighbours, content that is just different enough to feel new, but familiar enough to click. This creates a strange effect: everything begins to blur. Subcultures flatten into vibes. Aesthetic categories once linked to politics or subversion—cottagecore, coquette, bimboism, even punk—are smoothed into looks, stripped of context, and offered back to you like seasonal menus. And this, too, the algorithm learns.
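The “near neighbour” effect can be pictured with a toy taste vector and a handful of invented embeddings, ranked by cosine similarity (the coordinates here are made up for illustration; real systems learn theirs from behaviour):

```python
import numpy as np

def cosine(a, b):
    # Similarity of two vectors, ignoring their magnitude.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

taste = np.array([0.9, 0.1, 0.2])           # roughly: beige, bookish, loud

candidates = {
    "beige morning routine":  np.array([0.88, 0.12, 0.18]),
    "beige apartment tour":   np.array([0.91, 0.08, 0.22]),
    "experimental noise set": np.array([0.10, 0.20, 0.95]),
    "local council meeting":  np.array([0.30, 0.60, 0.40]),
}

ranked = sorted(candidates, key=lambda k: cosine(taste, candidates[k]), reverse=True)
print(ranked)   # the feed fills with slight variations on what you already like
```

By construction, the top of the ranking is a set of mild variations on what you already like.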
The Predictive Self
If the algorithm is a theory of you, what happens when you begin to believe it?
This is the quiet violence of machine learning: it doesn’t just show you a world, it shows you your world, tailored, trimmed, and continuously reinforced. Over time, that world becomes a kind of mirror. And like all mirrors, it flatters and distorts in equal measure. “To be seen is not the same as being known, and in the digital economy, to be seen is often to be categorised”, writes Jenny Odell in How to Do Nothing.
The problem is not that the system is wrong. It’s that it’s close enough to feel true. You do like that video. You are kind of a clean girl. But in being nudged toward what you are most likely to want, you begin to shed what is unlikely. The weird, the difficult, the inchoate parts of the self, the ones that don’t convert well into clicks, are quietly backgrounded.
I. The Death of the Undecidable
The algorithm struggles with ambiguity. It can’t model ambivalence, hesitation, or contradiction, because these things don’t behave consistently. So it models around them. Over time, it learns to reward not nuance but coherence: clear genre, clear label, clear behaviour.
This pressure toward consistency bleeds into our inner lives. You start to self-surveil, to curate your tastes for legibility; you internalise the logic of the recommender system and begin to act like one. As media theorist Rob Horning writes, “we become pre-emptive versions of ourselves, behaving in ways that will make sense to the algorithm”.
This is not new, exactly. Goffman wrote about the performance of self in the 1950s. But what’s different now is the granularity and intensity of the stage. You’re not performing for a room. You’re performing for a machine that remembers everything and is always adjusting the script.
II. Politics and Intimacy Under Recommendation
Even politics bends under this logic. Radical ideas get repackaged as personal aesthetics: mutual aid becomes a carousel post; systemic critique becomes a vibe. The algorithm doesn’t prefer left or right; it prefers clarity and virality. And so political identity, too, becomes a brand.
The same goes for intimacy. Dating apps learn which faces get swiped on. Platforms learn which photos get attention. You begin to optimise, not just for attraction, but for visibility. Your romantic self becomes a profile: a proxy designed to trigger engagement.
This isn’t surveillance in the Orwellian sense. It’s something gentler, and stranger: a system that knows you as a pattern, and rewards you for staying within it.
Living with the Machine That Thinks You Are You
You cannot opt out. Not really. You can log off, throw your phone in the sea, but the infrastructures of prediction remain. Credit scores, insurance risk models, and job application filters all operate by a similar logic. You are already being guessed.
The question is not how to avoid this. The question is: how to live in the knowledge of it.
This begins, simply, with understanding. Recommendation systems do not “know” you; they operate on inference, not insight. They do not interpret your content. They track what other people did with things like it, and compare you to them. “There is no algorithmic you”, as James Bridle writes in New Dark Age; there is only a shadow of you, a projection assembled from behavioural traces.
These shadows are actionable. They determine what you see, what you are offered, and what is withheld from you. The danger is not that the machine misunderstands; it’s that it becomes too easy to let its predictions stand in for your possibilities.
The skill, then, is not to reject the system, but to develop epistemic hygiene around it, the same way you learn to read advertising critically or navigate bureaucracy. A few practical principles:
Treat the feed as suggestion, not reflection. It shows you what works, not what matters.
Resist flattening. Let your interests exceed what can be captured. Be specific. Be contradictory. Be boring.
Follow dead ends. Seek things with no apparent aesthetic value. Value what doesn’t trend. Let some interests fail to become content.
Note when your self-description begins to mirror genre.
None of this breaks the machine, but it reclaims interiority and preserves space for incoherence, for non-performative interest, for selves that don’t index cleanly.
There’s a reason so much great literature obsesses over being misread (Kafka, Woolf, Ellison, Lispector). There’s something fundamentally human about resisting legibility. We are not pure chaos. But neither are we compressible. The most real parts of us often emerge in the margins and in moments that cannot be profiled because they were not designed.
The algorithm cannot find these. It is not built to.
Which means you still have the capacity to surprise it.
And maybe, in doing so, surprise yourself.