On the Cultural Fatigue around AI

Compression culture, parental panic, and the remaining trust gap

In the past few weeks, the vibe shift on AI seemed to hit an inflection point. Several high-profile think pieces questioned the limits of the Transformer model. The Financial Times argues that GPT-5’s underwhelming performance on benchmarks suggests the current approach of scaling LLMs is starting to reach the limits of available resources, while Cal Newport, in his piece for The New Yorker, asks: “what if AI doesn’t get much better than this?”

Meanwhile, corporate America is pouring billions into AI, but it has yet to pay off. According to recent research from McKinsey & Company, nearly eight in 10 companies have reported using generative AI, but just as many have reported “no significant bottom-line impact.”

Beyond the technical limitations of LLMs, this shift in the AI narrative reflects, and is arguably a result of, a growing cultural fatigue around AI. After years of breathless hype, the broader public is growing skeptical of lofty promises, wary of automation in daily life, and increasingly critical of tech companies pushing AI without delivering tangible quality-of-life improvements.

AI Is Supercharging Compression Culture

We’ve been living in a compression culture for a while now. Short-form videos took over our social feeds, bag charm-sized mini-products are the new fad, and every brand wants to remove friction from their customer experience. It’s fast, it’s convenient, and it’s making us “stupid and uninteresting,” as Maalvika’s widely read Substack treatise puts it. And AI is exacerbating the whole situation.

By now, we all know that LLM-based AI models are very good at summarizing and regurgitating talking points. Google’s AI summaries are forcing publishers and brands to confront a new “Google Zero” reality. “AI slop” flooding the web offers a strange spectacle with no artistic integrity or intellectual value. But every compression comes with a fidelity loss: when the feed becomes summaries of summaries, the culture you ingest starts to feel like Soylent for the brain. Knowing about things is different from knowing things; the latter requires time and effort that the compression culture we live in is eager to diminish and discourage.

Yet compression culture aligns with the economic imperatives of the fight for attention, the most valuable resource of them all. The more compressed cultural consumption becomes, the more we can stuff into our brains. And we feel the downstream effects of compression culture and a widespread “tl;dr” mindset that does away with context and nuance. It’s a culture that prioritizes frivolous listicles over investigative journalism, shallow takes and rage bait over well-articulated arguments: junk food for idle minds over real food for thought.

How could one not get bored of that?

AI Companions: A New Target for Parental Panic

Beyond boredom, fear also runs strong in many people’s feelings about AI. A recent Reuters/Ipsos poll found that 71% of US adults are concerned AI will put “too many people out of work permanently,” 61% worry about its electricity consumption, and 77% worry that the technology could be used to stir up political chaos.

Recently, that prevailing worry has been most evident in a wave of parental panic over minors’ use of AI chatbots.

A high-profile wrongful-death complaint against OpenAI in California alleges that a 16-year-old boy, Adam Raine, was drawn into months of harmful conversations with ChatGPT and later died by suicide. Regulators are responding too. Forty-four state attorneys general issued a strikingly direct open letter to major AI companies: “If you knowingly harm kids, you will answer for it.”

In response to mounting scrutiny, OpenAI has said it will add new protections, including parental controls and detection of “acute distress,” and strengthen safeguards for minors. But these measures would only treat the symptoms, not the root cause.

Amid a widespread loneliness epidemic, more people are turning to general-purpose chatbots for emotional support, and the tech-savvy younger generations are leading the charge. More than 70% of teens have used AI companions, chatbots designed to serve as “digital friends” like Character.AI or Replika, and half use them regularly, according to a recent study from Common Sense Media.

Meanwhile, the underlying market reality is that there’s a lot of money to be made in making AI your new best friend. TechCrunch reports that the AI companion market on mobile alone has generated $82 million during the first half of the year and is on track to pull in over $120 million by year-end, citing data analysis by app intelligence firm Appfigures. Even though half of teens said they distrust AI’s advice, 33% had discussed serious or important issues with AI instead of with real people. That cognitive dissonance is the AI problem in a nutshell: high engagement, low trust. Still, alarmingly, 31% of teens said their conversations with AI companions were “as satisfying or more satisfying” than talking with real friends.

According to a Reuters report, Meta allowed, and in some cases internally created, flirty chatbots mimicking celebrities, including Taylor Swift and Scarlett Johansson, without permission, stirring up a hornet’s nest of safety, consent, and right-of-publicity issues. It’s yet more evidence that the incentives here skew toward engagement first, safety second.

For those of us working in media and marketing, it’d be callous to dismiss this as some edge-case drama in the AI race. The combination of parental panic and regulatory momentum is now shaping product roadmaps and platform rules. They are all part of the vibe shift around AI. The more AI shows up in these panic-inducing headlines, the more people get tired of hearing about and worrying about AI.

The Trust Gap Leads to AI Fatigue

Outside a handful of workflows, too many AI experiences still resolve to “chat.” That’s fine for brainstorming or copy polish; it’s insufficient for the daily frictions people actually want solved. Consumers don’t wake up craving conversational agents; they want real quality-of-life improvements through AI automation. Until AI agents replace chatbots and reliably act on their behalf to navigate the messy world across vendors, policies, and edge cases, trust won’t accrue, but fatigue surely will.

Trust is the ultimate distribution strategy. When the stakes are low (summarize a memo), we usually tolerate error. When the stakes are personal (my kids, my bank account, my health), the tolerance evaporates, especially amid an ambient swirl of fear-inducing headlines.

Put it all together and you get the trust gap: agents can’t be trusted because they haven’t earned it, and they haven’t earned it because their real-world utility, along with the scaffolding around user safety and accountability, isn’t there yet. Fatigue inevitably sets in when users keep being asked to “believe” rather than simply experience outcomes that work.

Fatigue, then, isn’t a verdict on AI’s potential; it’s a verdict on an ecosystem that keeps shipping vibes instead of outcomes. Solve for trust and utility, and the interest will bounce back. People will make room for AI tools that demonstrably lighten the load and respect their boundaries, without compressing their thoughts and enjoyment.

Originally published in IPG Media Lab on Medium.
