
Here is a glitch in the matrix: Reality is naturally jagged. It is full of contradictions, awkward silences, and data points that don’t fit the curve.

But if you look at your feed, your AI-generated search answers, or the polished “Day in the Life” reels of your favorite influencer, the jagged edges are gone. Everything flows. Everything makes sense. The narrative arc is perfect.

This isn’t just good editing. It is a fundamental shift in how we process truth. We have entered The Coherence Trap.

The Glitch vs. The Smoothness

In traditional human storytelling, if a fact contradicts the story, it’s a problem we have to wrestle with. We might have to rewrite the conclusion or admit that life is complicated.

But in the age of algorithmic storytelling, a fact that contradicts the narrative isn’t treated as a nuance. It is treated as “statistical noise”—an error to be smoothed out for the sake of coherence.

I’ve been researching this for my upcoming book, Reality in Beta, and the findings are unsettling. Large Language Models (LLMs) and recommendation algorithms operate on a logic that completely inverts this relationship:

“Instead of the story being shaped to accommodate reality, reality is reshaped to accommodate the story.”

The Optimization Dilemma

Why does this happen? Because algorithms are not designed for truth; they are designed for plausibility.

When you ask an AI to explain a complex historical event, or when an algorithm curates a “story” about a current crisis, it is optimizing for the path of least resistance. It seeks the narrative that is most statistically probable to satisfy you, not the one that is most empirically accurate.
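To make that objective concrete, here is a deliberately toy sketch in Python. The phrases and probabilities are invented for illustration and don’t come from any real model or product; the point is simply that the only quantity being maximized is probability, and accuracy never appears in the objective.

```python
# Toy sketch: a "decoder" that always picks the most *probable* continuation.
# Invented data for illustration only -- not any real system's code or weights.

# Hypothetical distribution over continuations for one prompt.
CONTINUATIONS = {
    "The crisis was caused by": [
        ("a single obvious villain", 0.62),                   # neat, satisfying, plausible
        ("many overlapping factors, still disputed", 0.31),   # messy, closer to reality
        ("an event that contradicts the premise", 0.07),      # the "glitch"
    ],
}

def most_plausible(prompt: str) -> str:
    """Return the continuation with the highest probability.

    The only thing optimized here is plausibility (probability mass).
    There is no term for empirical accuracy, so the jagged option loses
    even when it is the truer one.
    """
    options = CONTINUATIONS[prompt]
    best_phrase, _ = max(options, key=lambda pair: pair[1])
    return f"{prompt} {best_phrase}."

if __name__ == "__main__":
    print(most_plausible("The crisis was caused by"))
    # -> "The crisis was caused by a single obvious villain."
```

The contradictory witness and the unresolved dispute are still in the data; they just never win, because the score being maximized rewards smoothness, not truth.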

In the book, I call this The Optimization Dilemma. The system smooths out the “noise” (the contradictory witness, the awkward counter-fact, the nuance that ruins the vibe) to create a seamless user experience.

The result? We get stories that feel right, even when they are factually wrong.

The Confidence-Accuracy Gap

The danger isn’t just that the machines are doing this. The danger is that we are starting to prefer it.

Psychologists call this the Confidence-Accuracy Gap. We instinctively trust information that is delivered with high confidence and narrative flow. A jagged, hesitant truth often loses out to a smooth, confident fabrication.

We are retraining our brains to reject “messy” reality in favor of “smooth” simulation. When we encounter raw, unpolished truth—with all its boring details and lack of resolution—it starts to feel “wrong.” It feels like bad content.

The Antidote: Intentional Roughness

So, how do we escape the trap? We have to stop trying to be so smooth.

In Reality in Beta, I propose a practice called “Intentional Roughness”. It is the act of deliberately preserving the glitches, the contradictions, and the unpolished edges of our lives and work.

  • For Creators: Leave the “umms” in the podcast. Post the photo where the lighting is bad but the moment is real. Share the data point that doesn’t support your argument.
  • For Leaders: Resist the urge to “narrativize” every quarterly result into a hero’s journey. Sometimes the numbers are just down, and there is no moral to the story yet.
  • For Everyone: Be skeptical of anything that fits together too perfectly. If a story has no loose ends, it’s likely a simulation, not a reality.

The next time you see something online that feels perfectly coherent, ask yourself: What had to be deleted to make this story flow so well?

The truth is usually found in the things that were smoothed out.


Coming Soon

This article is adapted from Chapters 3 and 5 of Reality in Beta: Algorithmic Storytelling, Simulation, and the Collapse of Narrative Agency, coming 2026.
