Why Human Data Still Matters in the Age of AI
Razorfish joined industry leaders at Oxford's OxGen AI Summit to explore the future of insights in an AI-driven world.

Razorfish was proud to return to the OxGen AI Summit in Oxford, England, last week, where over 250 global leaders gathered to unpack the accelerating impact and societal implications of generative AI. Our chief social and innovation officer, Cristina Lawrence, joined Graham Gannon, director of engineering for applied AI at Google, and Jason Mander, chief insight officer at GWI, for a critical discussion moderated by Beatrice Nolan, technology editor at Fortune.
The central question: As large language models consume existing data at an unprecedented rate, should the industry turn to synthetic data as the solution?
The State of Play: AI Adoption Accelerates
According to GWI research, 84% of advertising and marketing professionals already use AI tools at work. Insights generated by AI now rank as the third-most used input across business workflows—behind only intuition/experience and internal data.
The most revealing finding? When asked about blockers to AI adoption, "none" was the most common answer, particularly among younger professionals. Cultural and generational resistance, the panel agreed, will evaporate quickly.
The Complexity Problem: Why Synthetic Data Isn't Ready
But widespread adoption doesn't mean these tools are infallible. As Mander put it, "Humans are complex and messy. We say we care about things but buy things that contradict stated beliefs."
That gap between intention and behavior is where synthetic data struggles most. Political polling offers a clear example: people's stated preferences often diverge from how they actually vote, and traditional research methods already struggle to capture that divergence. Those contradictions are where the best insights live, and they are precisely what purely synthetic data risks erasing. "Synthetic data might cause us to lose that nuance," Mander warned. While synthetic data may eventually reach "good enough" status, GWI's current testing shows results are "nowhere near passable yet."
The Razorfish Approach: AI Enhanced by Human Truth
At Razorfish, we see AI as a force multiplier, not a replacement, for human insight. Our approach combines GDPR-compliant AI tools with real human data from partners like GWI to create personas rooted in actual insights.
"It's an amazing tool to get you started,” Lawrence said. “But to get to that unique insight, you need humans."
This is particularly critical when working with first-party data in enterprise environments, where security and compliance are non-negotiable. One of our most successful applications has been analyzing unstructured data from client research to create personas modeled on real behavior, not synthetic assumptions.
The Expertise Gap: Why AI Literacy Is the New Table Stakes
An active Q&A revealed another challenge: the growing expertise gap. Many organizations are discovering they need far deeper AI training than expected. "A person with less experience using an AI tool might take outputs at face value and be unable to catch mistakes or recommendations that don't make sense in the real world," Lawrence warned. When errors go unchecked, strategy suffers. Gannon added a word of caution about the "hall of mirrors effect" of relying too heavily on synthetic data. At Google, it’s used selectively to fill gaps under tight controls. "It can help us improve accuracy, but it's nowhere near as good as getting out in the field and getting the real data," he said.
Navigating the Paradoxes
The conversation surfaced some fascinating tensions shaping AI’s evolution:
On hallucinations: Hallucination isn't purely a bug; it can be a feature. When AI generates something that has never been written before or combines ideas in novel ways, that creativity can be valuable. But verification remains essential.
On synthetic data in frontier models: While there's an element of synthesis in some advanced models, the underlying research must be real. People sometimes answer research questions inauthentically, and synthetic data can amplify that bias.
On developing AI common sense: Just as search engines evolved from requiring keyword expertise to understanding natural language queries, AI tools will meet users where they are. But this evolution requires humans in the loop—reviewing outputs, identifying problems, and feeding corrections back into the models.
The Path Forward
The panel reached consensus on several principles for organizations navigating AI adoption:
- Synthetic data has a role—but as a strategic gap-filler, not a foundation.
- Human expertise is non-negotiable—both in using the tools and interpreting outputs.
- Ground insights in real human data—it’s where contradiction and truth collide.
- Prioritize compliance and security—especially when working with first-party data.
- Invest in literacy—AI tools are only as smart as the people using them.

As Gannon summarized, "Generative AI will require human thought and will continue to do so—humans need to be in the loop." The question isn't whether to use AI or synthetic data, but how to use them strategically while building on the solid foundation of real human insights.
The future of insights isn't choosing between human or synthetic. It's ensuring we don't mistake efficiency for accuracy, or probability for truth.

