What the Algorithm Rewrote
How a new Nature study proves what we’ve long suspected about algorithmic influence
In the last piece in this series, I argued that the values embedded in AI systems don’t stay where they’re built. They propagate outward, shaping what billions of people see, think, and believe. The question I left hanging — the one I couldn’t fully answer — was whether this was something we could actually measure.
A study published this month in Nature answers that question more directly than anything I’ve seen. Researchers ran a rigorous, randomized experiment with nearly 5,000 X users, assigning some to the platform’s algorithmic “For You” feed — the default, which selects content based on predicted engagement — and others to a simple chronological feed showing only posts from accounts they already followed, in order. Seven weeks later, the algorithmic group had meaningfully different political beliefs. And when the experiment ended and the algorithm was turned off, the beliefs didn’t change back.
This is not a theoretical concern. It’s a measured outcome. Here’s what happened.
The experiment
The study, “The political effects of X’s feed algorithm,” was conducted by economists from Bocconi University, the University of St. Gallen, and the Paris School of Economics. They recruited 4,965 active US-based X users in summer 2023 and randomly assigned them to one of two conditions: the standard algorithmic “For You” feed, or a chronological “Following” feed showing only posts from accounts they already followed.
The setup is intentionally minimal. Users didn’t change their following lists. They didn’t change their behavior. Only the feed changed. A browser extension installed on participants’ computers tracked exactly what content each person was shown throughout the study, while surveys before and after measured their political views across a range of issues.
The results were substantial by any measure. Users assigned to the algorithmic feed shifted 0.11 standard deviations toward conservative positions on policy priorities — an effect large enough that, projected across a population, it would be decisive in a close election. They were 4.7 percentage points more likely to rank issues favored by the Republican Party — crime, inflation, and immigration — as their top concerns. They were 5.5 percentage points more likely to view the criminal investigations into Donald Trump as unacceptable and anti-democratic. They were 7.4 percentage points less likely to view Ukrainian President Zelensky positively and more likely to hold pro-Russia positions on the war.
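To get an intuition for the size of a 0.11 standard deviation shift, a quick back-of-envelope calculation helps. The sketch below is not from the paper; it assumes, purely for illustration, that attitudes on the underlying index are roughly normally distributed, and asks where a respondent who started at the median would land after such a shift.

```python
from statistics import NormalDist

# Illustrative only: treat attitudes on a policy-priority index as roughly
# standard normal. This is an assumption for the sketch, not a claim from
# the study itself.
shift_sd = 0.11  # reported treatment effect, in standard deviations

# Where would a respondent who started exactly at the median end up,
# measured against the original distribution?
new_percentile = NormalDist().cdf(shift_sd) * 100
print(f"Median respondent moves to roughly the {new_percentile:.1f}th percentile")
# Prints roughly 54.4: a small move for any one person, but applied across
# millions of users it can plausibly swing close aggregate outcomes.
```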
The content analysis explains the mechanism. The algorithmic feed systematically promoted conservative political influencers and activist accounts while demoting traditional news outlets — the newspapers and broadcasters that users in the chronological group continued to see. The algorithm wasn’t presenting users with different facts about the same events. It was reshaping whose voices dominated their information environment entirely.
The part that should change how you think about this
The findings above are striking. But they’re not the most important result of the study.
Here’s what is: when the experiment ended and users in the algorithmic group were switched back to the chronological feed, their political views didn’t move. The beliefs that had shifted remained shifted.
This might seem to contradict the logic of the experiment. If the algorithm caused the shift, shouldn’t removing the algorithm reverse it? The researchers found the answer in the behavior data. During the seven weeks on the algorithmic feed, users had followed new accounts — conservative influencers and activists that the algorithm surfaced and they chose to engage with. When the algorithm stopped, those accounts didn’t disappear. The users kept following them. The information environment the algorithm had built persisted as a structural change to their permanent social graph on the platform.
The most important sentence in this study is this one: switching off the algorithmic feed had no detectable effect on attitudes or behavior, because the algorithm had already reshaped who users followed — changes that continued to influence what they saw long after the feed setting changed.
This asymmetry is what makes the finding so significant. It’s not that the algorithm subtly nudged people and the nudge faded when the nudge stopped. It’s that the algorithm restructured the information environment users inhabited — and that restructuring outlasted the algorithm itself. Turning off X’s algorithmic feed doesn’t undo what the algorithmic feed did. The algorithm already wrote its changes into something more durable: the network of human connections users carry with them.
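One way to see why the effect outlasts the algorithm is a deliberately simplified sketch of the mechanism. The accounts and posts below are invented; the point is only that a chronological feed is a function of the follow list, so any follows acquired during the algorithmic period keep shaping the feed after the ranking is switched off.

```python
# Toy illustration (invented accounts, not data from the study): a
# chronological feed depends only on who the user follows, so follows
# acquired while the algorithm was on persist after it is turned off.

original_follows = {"local_newspaper", "friend_a", "friend_b"}

# During the algorithmic-feed period, the ranking surfaces new accounts
# and the user chooses to follow some of them.
algorithm_surfaced = {"partisan_influencer", "activist_account"}
follows_after_experiment = original_follows | algorithm_surfaced

def chronological_feed(posts, follows):
    """A chronological feed: posts from followed accounts, newest first."""
    return sorted(
        (p for p in posts if p["author"] in follows),
        key=lambda p: p["timestamp"],
        reverse=True,
    )

posts = [
    {"author": "local_newspaper", "timestamp": 1, "text": "City budget passes"},
    {"author": "partisan_influencer", "timestamp": 2, "text": "Outrage of the day"},
    {"author": "friend_a", "timestamp": 3, "text": "Lunch photo"},
]

# Even with the algorithm off, the newly followed account is still in the feed.
for post in chronological_feed(posts, follows_after_experiment):
    print(post["author"], "-", post["text"])
```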
A different kind of AI
It’s worth being precise about what kind of AI we’re discussing, because it’s different from what usually dominates this conversation in 2026.
Most current coverage of “AI’s impact on society” focuses on large language models — ChatGPT, Claude, Gemini, and their descendants. These systems generate text, code, and images in response to prompts. They embed values through their training data, fine-tuning decisions, and system-level constraints, which is what I wrote about last time.
Recommendation AI works differently. It doesn’t create content. It selects from existing content, deciding in real time what to surface to each user and what to bury. The algorithm’s influence is invisible in a particular way: it doesn’t put words in anyone’s mouth. It determines whose words you encounter in the first place.
These systems are also much older than the current AI moment. Amazon’s recommendation engine launched in 1998. Netflix’s system was reshaping viewing behavior by the mid-2000s. Facebook’s News Feed algorithm has been making consequential decisions about political discourse since 2006. YouTube’s recommendation system has been studied for years for its potential role in driving users toward increasingly extreme content — with mixed results, but consistent methodological concern.
These systems were already doing something consequential — shaping what information people encountered, whose voices were amplified, how political narratives spread — long before ChatGPT became a household name. The current conversation about AI’s social impact is, in many ways, arriving late to a party that started decades ago.
What the full picture looks like
Reading the X study in isolation, it would be easy to conclude that recommendation algorithms systematically radicalize everyone who uses them. That would be an overreach.
A set of four coordinated studies published simultaneously in Science and Nature in 2023 examined Facebook’s algorithm during the 2020 US presidential election and found limited direct effects on political attitudes — suggesting the relationship between algorithms and belief change is not universal. But those studies faced significant criticism: Facebook had implemented what researchers called “break glass” measures, reducing political content on the platform during the study period in ways that may not have reflected normal operations. The experimental conditions may have been less representative of typical Facebook use than they appeared.
Research on YouTube’s recommendation algorithm has produced genuinely mixed findings. Some work has documented pathways toward increasingly extreme content; other research has challenged these findings and found the effects more limited than early reports suggested. The methodological challenges are substantial — the algorithm changes constantly, viewing patterns are heavily self-selected, and distinguishing causation from correlation in observational data is difficult.
What distinguishes the X study is its design. This is a randomized controlled trial, not an observational study inferring causation from patterns in existing data. Users were randomly assigned to conditions — not self-selected based on their preferences. The browser extension tracked actual feed content rather than relying on platform-provided data or user recall. And the study identified a specific mechanism — the persistent following of new accounts — that explains why the effects lasted, giving it more explanatory power than studies that report correlations without identifying the mechanism behind them.
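For readers less familiar with why random assignment carries so much weight, the sketch below shows the basic logic of estimating a treatment effect in a design like this. It is not the authors' analysis code; the data are simulated and the effect size is plugged in by hand.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data, purely to illustrate the randomized-comparison logic.
n = 5000
treated = rng.integers(0, 2, size=n)   # random assignment: 1 = algorithmic feed
true_effect = 0.11                      # hypothetical effect, in SD units
outcome = rng.normal(0, 1, size=n) + true_effect * treated  # attitude index after treatment

# Because assignment is random, a simple difference in group means is an
# unbiased estimate of the causal effect; no model of self-selection is needed.
diff = outcome[treated == 1].mean() - outcome[treated == 0].mean()
t_stat, p_value = stats.ttest_ind(outcome[treated == 1], outcome[treated == 0])

print(f"Estimated effect: {diff:.3f} SD (p = {p_value:.4f})")
```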
The X study doesn’t prove that all recommendation algorithms radicalize all users. It proves that this algorithm, in this context, produced measurable and persistent political shifts — and that we now have the research tools to detect and explain such effects when they occur.
Who writes the algorithm
The researchers don’t claim that X’s engineers designed their recommendation algorithm to shift users toward conservative positions. What the algorithm was designed to do was maximize engagement — to surface content that would keep users on the platform longer, interacting more.
But “we just optimized for engagement” doesn’t fully close the question. No one necessarily decided to promote activist accounts over traditional journalism. What happened, most likely, is that the algorithm discovered this on its own: partisan content, outrage, conflict — these drive more interaction than measured reporting. The algorithm did what it was built to do. The political consequences emerged from what engagement actually looks like in a charged information environment.
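To make that concrete, here is a minimal, hypothetical sketch of engagement-only ranking. The scoring numbers and posts are invented, and real systems use learned models over far richer signals, but the structural point holds: nothing in the objective mentions politics, yet whatever reliably provokes interaction rises to the top.

```python
# Hypothetical illustration: rank posts purely by predicted engagement.
# The objective never mentions politics, accuracy, or civility; it only
# rewards whatever keeps people interacting.

posts = [
    {"text": "Measured report on the city budget",   "predicted_engagement": 0.02},
    {"text": "Outraged thread about the other side", "predicted_engagement": 0.09},
    {"text": "Friend's vacation photo",               "predicted_engagement": 0.04},
]

def rank_for_you(posts):
    """Engagement-only ranking: the target is time on platform, nothing else."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_for_you(posts):
    print(f'{post["predicted_engagement"]:.2f}  {post["text"]}')
```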
This distinction matters, but it doesn’t eliminate accountability. “Optimize for engagement” is still a design choice — a choice about what to value and how to measure success. And the connection between engagement optimization and political polarization has appeared in enough platform research to be treated as a known consequence, not an unknowable surprise. Building a system that maximizes engagement in a political information environment and then being surprised that it amplifies extreme voices is like building a system that rewards speed and being surprised it causes accidents.
There is, however, an additional layer specific to X and its current owner. As I wrote in the previous piece, Grok — Musk’s AI chatbot — was explicitly tuned to reflect its owner’s political sympathies, and changes to X’s platform more broadly have moved in a consistent direction. Whether the recommendation algorithm was also directly shaped by ownership decisions is harder to audit from outside. What the study shows is the outcome. How much of it reflects engagement optimization working as designed, and how much reflects deliberate choices made above the algorithm, remains opaque. That opacity is itself a problem.
This is the thread connecting recommendation AI to the larger argument about who shapes AI systems. In the generative AI context, the question is whose values get embedded in models that draft emails and tutor children. In the recommendation AI context, it’s whose voices get amplified and what beliefs get built in the process. The specific systems are different. The underlying question — who decided this, for what reasons, and who holds them accountable — is the same.
What we do with this
The study doesn’t tell us what to do. This article doesn’t either.
But it gives us something more valuable than a prescription: it changes what we know. We now have rigorous, randomized evidence that recommendation algorithms can shift political beliefs in a measurable direction — and that those shifts can persist after the algorithm is removed, because the algorithm reshaped the social infrastructure users navigate. That changes the conversation from “could this be happening?” to “it has happened, here is how it works, what do we do about that?”
The vocabulary for demanding accountability exists. We know what to ask about. We can ask what recommendation systems are optimized for, and whether engagement is the right target. We can ask what the externalities of that optimization are, and who bears them. We can ask whether the people who build these systems are measuring the right things — and whether anyone outside those companies is in a position to check.
The original observation in “Who Writes the AI?” was that the values embedded in AI systems are design choices made by specific people, and that those choices have consequences that propagate far beyond the people making them. What recommendation AI adds to that picture is a timeline. These systems have been making consequential societal choices for decades. We now have the tools to measure the consequences of those choices with the precision of a randomized experiment. The question isn’t whether we can know what the algorithm rewrote. The X study makes clear that we can.
The question is whether we decide that matters.

