Fast food research: The era of ultra-processed insights
Jack Bowen
CoLoop Co-Founder & CEO
For those who couldn't join me at the Qual360 EU conference in Berlin, I want to share the core of the message I delivered on the main stage. It’s a message that started with a provocative, perhaps even uncomfortable, prediction:
I believe that this year, we will produce more research than ever before, and trust it less than at any other point in the history of the insights industry.
This isn’t a critique of our professional capabilities as researchers. It is a warning about the unintended consequences of ubiquity. When something powerful becomes easy, we tend to underestimate what we lose along the way. Today, we are standing at a crossroads where AI has made the production of "insights" table stakes. But the real question for the next decade isn't "how much research can we produce?" It’s "what happens to our strategic decision-making when everyone can produce it?"
The Rise of "Ultra-Processed" Insights
Over the last two to three years, the research industry has been flooded with new AI tools, CoLoop included. On paper, this looks like an era of unprecedented progress. Speed is up, costs are down, and volume is through the roof. Teams across product, design, brand, and marketing can now generate polished reports and interactive dashboards in minutes rather than weeks.
But underneath these attractive metrics, something is quietly degrading. We have filled our organizations with what I call Ultra-Processed Insights.
Just like ultra-processed food, these insights look like the real thing, they scale fast, and they fill the space in our decks. They are "tasty" and plentiful in the short term. But just as a diet of processed food erodes physical health, "ultra-processed insights" are eroding the health of our strategic decision-making. They look good on the surface, but they quietly damage the foundation of how businesses move forward. We are trading the "nutrition" of deep human synthesis for the "calories" of high-volume output.
A Warning from the Content Industry: The Jasper and ChatGPT Experiment
We don’t have to guess how this trajectory ends; we’ve seen this experiment run at full scale in the content industry.
In late 2021, the release of tools like Jasper AI made it possible for anyone with $50 a month to generate infinite content. Then OpenAI released ChatGPT in late 2022 and accelerated the pace even further. Overnight, the internet was flooded with articles and "thought leadership" that were technically coherent but fundamentally empty.
For a brief moment, it worked. AI-produced articles ranked well in search because they closely matched what the algorithms rewarded. But then the backlash arrived. Audiences became immune to the "slop" and the "clankers", the tell-tale AI phrasing that lacks a human soul. Content marketers chased scale, only to realize later that they had destroyed trust.
The most telling sign of this failure?
OpenAI itself.
In a move that signals the "trust ceiling" of automated output, OpenAI recently began hiring human Content Strategists with salaries reaching nearly $400k. The very architects of generative AI realized that to move away from "AI slop" and reclaim strategic trust, they needed real human expertise to guide the narrative. They realized that while AI can generate volume, it cannot generate the authority or nuance required for long-term trust.
The research industry is currently on the same trajectory. Search volume for "AI qualitative research" has spiked exponentially. We are seeing a boom in end-to-end tools promising "good enough" answers to strategic questions. But just like the SEO content boom, the danger is that what we are producing isn't actually insight; it's information slop optimized for scale rather than truth.
The Three Hidden Costs of Research Acceleration
When we optimize relentlessly for efficiency, we face three specific risks that can crowd out true signal:
1. High Volume Creates Epistemic Clutter
When production costs are low, reports are shared en masse. This creates "epistemic clutter", a state where decision-makers must reconcile dozens of contrasting viewpoints generated in minutes with minimal effort. In this environment, clarity is lost. When everything is backed by "data," eventually any decision can be justified, and research loses its power as a neutral arbiter of truth.
2. Surface-Level Findings
AI-generated research often feels "safe." It is coherent and clear but predictable: "not wrong," yet rarely groundbreaking. Like industrial, pre-sliced white bread, it fills you up but lacks the nutritional value of a deeper dive. It identifies the obvious themes while missing the subtle "aha!" moments that only come from deep immersion.
3. The Loss of Real Context
Good research requires a strong grasp of business context, the nuances and history that often aren't written down but exist in people's heads. AI-supported research, when poorly executed, lacks the depth to capture this "subconscious" signal. Without human immersion, the ability to see what’s truly moving the needle, or to spot what is missing from the data, is severely constrained.
As Jess Holbrook, Head of UX at Microsoft AI, recently noted, AI slop is high-volume, low-effort output that people don't take pride in. It floods communication channels, crowds out quality, and reduces the signal-to-noise ratio in any information ecosystem it inhabits.
Democratization: Learning from Figma and Canva
I am not here to bash AI or democratization; I believe both are ultimately positive. AI stands to accelerate the democratization of insights in the same way that Figma and Canva did for design.
Prior to Figma, good design was a luxury available only to those who could afford expensive Adobe packages and powerful workstations. Figma leveraged browser-based technology to break those barriers, making high-quality aesthetics a possibility for every team. Their CEO, Dylan Field, famously championed the idea that "design is everyone’s business."
The scale of this shift is staggering. In 2019, only 10% of Figma users were non-designers. By 2025, that number had skyrocketed to 67%. More than two-thirds of the people using the world's leading design tool today are not professional designers. This didn't ruin high-quality design; it scaled it.
However, we must recognize a fundamental difference between scaling design and scaling research. Design is self-correcting. If a non-designer creates a poor layout in Figma or Canva, the failure is visual: it is immediately obvious to the eye that it looks "off."
Research is invisible. When a non-specialist produces a surface-level insight using AI, it isn't immediately visible. The report might look professional, use the right terminology, and follow the correct structure, but if it lacks context or rigor, it is simply "ultra-processed slop." This "invisible failure" is what can quietly lead an entire organization in the wrong direction.
A Framework for Scale and Quality
The data from the 2025 State of User Research Report shows that researchers are leaning into the technology: 73% find AI makes their work more efficient, and 66% say it improves the quality of their reports. Yet a massive 91% remain concerned about accuracy and trust as they adopt these tools.
To reach a future that balances scale with quality, I propose a four-part framework for modern teams:
- Own Your Research: You should be happy to stake your name and reputation on your findings. This means ensuring you are the author and the AI is merely the thought partner or adversary. Edit findings to reflect your strategic view and always audit where they came from.
- Impact Over Volume: In academia, roughly 20% of research papers account for 80% of citations. These are the foundational works that move fields forward. Stop aiming for a high volume of ignored reports. Aim to be in the top 20% of work that actually changes the trajectory of the business.
- Immerse Yourself: There is no substitute for first-hand experience. Do not outsource your thinking. Conduct a few interviews yourself. Sit in on the sessions. Discuss the findings with colleagues. You need to get a "feel" for the material so you can spot when an AI-generated claim sounds off or lacks the weight of evidence.
- Default Reproducibility: Transparency is the cure for "slop." Research reports should be honest and transparent about the tools, prompts, and processes used to produce the answers. Ship the "how" alongside the "what" so that your findings can be reliably defended and audited.
Trust is the Product
At CoLoop, we anchor our development on a simple idea: AI should feel like a bicycle for the mind. It amplifies your effort, but you are still steering. You decide what matters, when to slow down, and when something doesn't feel right. We built this platform because we were frustrated that most AI tools optimize for output (more answers, faster reports) but not for ownership. Ownership is what makes research trustworthy.
The winning research teams of the next decade won’t be the fastest or the cheapest. They will be the most trusted at scale. In a world where research is everywhere, just like content, quality becomes the only competitive advantage.
Trust isn't just a byproduct of research; in the era of AI, trust becomes the product.
Is your team ready to scale trust and quality? We're trusted by a third of the Fortune 1000 with their most important decisions. We're building the AI-first analysis and repository for modern research teams who refuse to settle for "ultra-processed" findings.
Book a Demo to see how CoLoop helps you scale insights without sacrificing rigor.
If you want to see the live presentation from Qual360, you can watch it here.
What is the difference between democratizing design and democratizing research?
Design failures are immediately visible, while research failures are invisible. When someone creates a poor layout in Figma or Canva, anyone can see it looks wrong. But when someone produces shallow AI-generated research, it can appear professional and use correct terminology while still lacking rigor. This invisible failure can guide organizations in the wrong direction without anyone noticing until strategic decisions fail. Design is self-correcting through visual feedback, but research requires expertise to evaluate quality.
What are the three hidden costs of AI research acceleration?
The three hidden costs are epistemic clutter, surface-level findings, and loss of context. Epistemic clutter happens when teams generate too many reports, forcing decision-makers to reconcile dozens of contrasting viewpoints with no clear direction. Surface-level findings occur because AI identifies obvious themes but misses deeper insights that require human immersion. Loss of context means AI cannot capture the unwritten business knowledge and historical nuances that exist in people's heads but are critical for strategy.
How can research teams maintain quality while using AI tools?
Research teams should follow four core principles to maintain quality with AI. First, own your research by staking your reputation on findings and positioning AI as a thought partner, not the author. Second, prioritize impact over volume by aiming to create work that actually changes business decisions rather than ignored reports. Third, immerse yourself in the data by conducting interviews and sitting in on sessions to develop intuition for what feels right. Fourth, default to reproducibility by being transparent about the tools, prompts, and processes you used.
What is "epistemic clutter" in research?
Epistemic clutter is the confusion created when organizations produce too many research reports too quickly. When AI tools make research production cheap and easy, teams share reports constantly. Decision-makers then face dozens of contrasting viewpoints generated with minimal effort, making clarity impossible. In this environment, any decision can be justified with some piece of data, so research loses its power as a neutral guide for strategy. The signal gets drowned out by noise.
Why did OpenAI hire human content strategists if AI is so powerful?
OpenAI hired human content strategists at salaries approaching $400,000 because they recognized AI-generated content was damaging trust. While AI can produce high volumes of content quickly, it cannot generate the authority, nuance, or strategic thinking that builds long-term credibility. This hiring decision signals the "trust ceiling" of automated output. Even the architects of generative AI needed real human expertise to move away from AI slop and maintain strategic trust with their audience.

