The Illusion of Savings

A familiar pattern. Social media draws us into its algorithmic vortex, feeding us the content that preys most aggressively on our attention.

In her article How Social Media Algorithms Decide Who You Are, Prianka Srinivasan describes setting up a fake account on a burner phone and shows that the system knows an astonishing amount about us, and doesn't hesitate to use that knowledge to push content that keeps us on the screen longer than common sense would suggest. In the era of AI, though, this is no longer just an ordinary algorithm. It is a fully fledged, hungry recommendation machine, one that works faster than we can register that it has taken control of our time.

This is where a concept introduced by Glyph comes in: the Futzing Fraction. It is an analytical lens for judging whether AI actually saves us time or merely creates the illusion of doing so. The FF measures the ratio of all the effort spent correcting and supervising the system to the actual value we gain. If the cost of corrections outweighs the benefits, the AI becomes a burden. In the case of social media, the analogy is striking. Instead of helping us find what we need, the system tests, probes, adjusts, and reprocesses our signals, and we have to verify all of it. Every click, reaction, or ignored post is a tiny iteration of our own labor, an endless cycle of variations in our own private loop.
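
To make the ratio concrete, here is a minimal sketch of the FF as this article describes it: correction and supervision effort divided by the value gained. The function name, units, and numbers are illustrative assumptions of mine, not Glyph's own formulation.

```python
def futzing_fraction(correction_minutes: float, value_minutes: float) -> float:
    """Effort spent correcting and supervising a system, divided by
    the value we get back (here: minutes of work actually saved).
    A result above 1 means the tool costs more than it gives."""
    if value_minutes <= 0:
        return float("inf")  # no value gained: pure overhead
    return correction_minutes / value_minutes

# Illustrative numbers: 25 minutes of re-prompting and double-checking
# to save roughly 15 minutes of manual work.
print(futzing_fraction(25, 15))  # ~1.67 -> a net burden
```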

We see this clearly in Srinivasan's experiment. At first, harmless content: cats, food, and local news. Then the system decides to push her toward motherhood because she installed a pregnancy app. Suddenly her feed becomes a mix of parenting guides and erotic content, because the algorithm concluded that both types are effective at holding users' gaze. It's the cold logic of machines that don't understand human subtlety. This is not analysis but mechanical optimization of engagement metrics. And we have to make adjustments, weaken signals, erase traces of what the system assumed was our identity… assuming we have the energy to do so.

Things get even more interesting when Srinivasan poses as a teenager and visits TikTok and Snapchat. The algorithm immediately serves her a package of popular narratives that have gained traction within that age group, and so she encounters content from conservative influencers and conspiracy theories. It's the well-known stickiness effect: AI recommends content that performed well with other people of a similar profile. It doesn't matter who you are. What matters is which data cluster the system assigned you to. This is exactly the behavior Glyph warns about: when a generative system starts making decisions based on statistical similarity rather than real context, the human being is reduced to a vector in a database.
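
The mechanism being described is essentially nearest-centroid assignment. Below is a toy sketch of that reduction to a vector: one behavioral signal nudges the vector, and the whole feed follows the nearest cluster. The cluster names, vectors, and numbers are invented for illustration; no platform's real pipeline is this simple.

```python
import numpy as np

# Hypothetical cluster centroids over three made-up behavioral features.
centroids = {
    "teen_gaming": np.array([0.9, 0.1, 0.0]),
    "new_parent":  np.array([0.1, 0.9, 0.2]),
    "local_news":  np.array([0.2, 0.1, 0.9]),
}

def assign_cluster(user_vector: np.ndarray) -> str:
    """Return the cluster whose centroid is most similar to the user."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(centroids, key=lambda name: cosine(user_vector, centroids[name]))

# One pregnancy-app install shifts the middle feature, and the feed
# follows the cluster, regardless of who the person actually is:
print(assign_cluster(np.array([0.15, 0.8, 0.1])))  # -> "new_parent"
```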

Here we see how the two worlds intertwine. Social media and generative AI rely on the same mechanism: a human-in-the-loop who constantly has to check whether the system is wrong. Every scroll of the feed is essentially a micro-check of a recommendation result. Every click is the equivalent of a refined prompt. Every rejected piece of content is a model correction. On a global scale, it's an ocean of labor done for free by users, and the system grows only because we keep feeding it more seconds of our attention.
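
Read literally, this means our implicit actions become free labeled training data. A toy sketch of that conversion follows; the event names and label weights are purely hypothetical, chosen only to show the shape of the loop.

```python
# Each unpaid micro-interaction is mapped to a training label
# for the recommender. Weights below are invented for illustration.
EVENT_LABELS = {
    "watched_full":  1.0,  # strong positive signal
    "clicked":       0.8,
    "scrolled_past": 0.2,  # weak negative signal
    "hidden":        0.0,  # explicit correction by the user
}

def to_training_example(item_id: str, event: str) -> tuple[str, float]:
    """Turn one user action into an (item, label) pair for the model."""
    return item_id, EVENT_LABELS[event]

# A short feed session becomes a small labeled dataset, for free:
feed_session = [("vid_123", "scrolled_past"), ("vid_456", "watched_full")]
dataset = [to_training_example(item, event) for item, event in feed_session]
print(dataset)  # [('vid_123', 0.2), ('vid_456', 1.0)]
```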

The AI hype builds a narrative about saving time, but the effect is often the opposite. Glyph makes this clear: without a precise understanding of how much work it takes to maintain result quality, the technology can consume more energy than it gives back. In social media, this cost is hidden. It's the cost of distraction, another notification, one more glance at the phone. Meanwhile, some machine has to process the data, drawing electrical power and then water to cool itself under heavy load. According to The Guardian report "Revealed: Big tech's new datacentres will take water from the world's driest areas", that thirst deprives countless people of resources near the massive data centers that have sprawled for years to support social-media giants and now share space with equally enormous AI leviathans.

This is why, in the final reckoning, large language models and recommendation systems operate alike. They pretend to work for us while in reality they need our involvement more than ever. And until we calculate our own Futzing Fraction, we will keep feeding this machine, naively believing that we're getting something valuable in return.