The Dangerous Lie Inside Your Customer Data
Why What Customers Say Isn’t What They Do—and What to Do About It
Reading Time: 9 min
What happens when your customer data is a lie?
Not a malicious lie, but a subconscious one. One your customers may not even be aware they're telling.
It’s the "Say vs. Do" gap.
Customers say they want variety. But Spotify's data shows they listen to the same songs on repeat. Consumers say they'll pay more for eco-friendly products. But their behavior shows they only buy them when the price is the same.
If you build your strategy only on what customers say, you risk building it on a foundation of sand.
A few months back, we looked at the first challenge in innovation research: the customer's "blind spot," their struggle to envision a new solution.
This "Say vs. Do" gap is even more dangerous.
It's not that they can't see the future; it's that they misrepresent the present. The truth—the kind of insight that leads to sustainable growth—is only found in their behavior.
This is why true exploration can't only be a survey. It has to be a system.
The Foundation of Sand: A Persistent Problem
The gap between what people say and what they do isn't limited to business.
It's one of the most persistent problems in human behavior, and it shows up in several forms in the research:
The Attitude-Behavior Gap: What someone says (for example, "I’m worried about the environment") often doesn’t match what they do.
Stated vs. Revealed Preferences: Stated Preferences are what people say in a survey. Revealed Preferences are what they do when they have to make a real-world trade-off.
The Intention-Behavior Gap: The failure of individuals to act on their own stated intentions.
This last point trips up marketers. Your surveys measure "purchase intent." So, you build strategies on the assumption that intention equals a sale.
The research shows why that assumption fails: intention is an unreliable predictor. It breaks the instant it hits real-world friction like price, convenience, or habit.
It's not that the data are bad—they capture intent correctly.
It's that they represent something different from what they get used for.
The "Subconscious Lie": Why the Gap Exists
Your customers aren't trying to deceive you.
They are, as research confirms, participating in a subconscious lie.
This gap is created by three cognitive biases:
1. The "Looking Good" Problem (Social Desirability Bias)
People tend to give socially acceptable answers. They often say they engage in "good" behaviors, like ethical consumption. And then downplay "bad" ones.
This isn't always a conscious choice to deceive. It’s often what's called Self-Deceptive Enhancement. Your customer thinks they are rational and ethical in the survey. But their purchase history tells a different story.
2. The "Knowing Why" Problem (Introspection Illusion)
This is the most profound driver.
It's a bias where people confidently believe they have direct access to the origins of their own decisions, but they don't. Research suggests 95% of our decisions are subconscious.
So when your focus group moderator asks, "Why did you buy that?", the customer has no access to the true subconscious driver. Their conscious mind invents a plausible, rational story to explain it: "I bought it for the quality."
Your surveys are often not "truth-gathering" tools. They are "rationalization-gathering" tools.
3. The "Good Enough" Problem (Bounded Rationality)
In a survey, a customer can easily state, "Yes, I would absolutely buy the most sustainable, high-end option."
But the real world is messy.
Human decision-making is limited by time and cognitive capacity. In a real store, we don't optimize. We "satisfice"—we choose the first option that is "good enough."
The "good enough" rule often sways decisions more than "optimal" choices from the survey. Factors like familiar labels and low prices play a bigger role in decisions than what's objectively ideal.
The System: How to Find the Behavioral Truth
World-class brands don't get lucky. They build a scalable, repeatable process for uncovering the truth in what customers do, not just what they say.
It's not magic. It's a three-part discipline.
Part 1: The Synthesis — How Data Finds the "What" and Observation Finds the "Why"
The popular story of how Intuit created QuickBooks is a dangerous oversimplification.
The myth is that founder Scott Cook, trapped by his data, had a moment of insight that invalidated the data and led to a pivot. This myth is dangerous because it pits data and observation against each other.
The truth is far more useful. The opportunity for QuickBooks was first found using quantitative data:
1. Find the "What" (The Data Anomaly): Years after Quicken, Intuit's personal finance software, led the market, surveys revealed a surprising fact: nearly 50% of Quicken users were small businesses.
Scott Cook’s first reaction was denial. He didn't think the data made sense. So, he ignored it.
2. "Savor the Surprise" (The Mindset Shift): When the anomaly continued, Cook's team saw that their bias was the problem. They learned to, as Cook put it, "savor the surprise" and interrogate the data.
3. Find the "Why" (The Observation): Cook implemented the "Follow Me Home" program. He and his team would wait at a local Staples, looking for customers who had just bought their software. Then they would ask to follow them home so they could see how they installed and used the program. No interviewing. Just observation.
The goal wasn't to refute the data, but to understand the 'why' behind it. They observed these small business owners and saw confusion.
The team discovered the unarticulated need. According to Cook, these users "hated accounting" and were "morbidly intimidated by accounting terminology."
They were paralyzed by the "psychological minefield" of "debits" and "credits" and were force-fitting Quicken just to avoid the "professional jargon."
4. The Real Pivot (The "Black Box"): The solution was a new market position. Cook called it "the first accounting software with no accounting in it."
Instead of forcing users to learn accounting, Intuit built a system that hid the complexity. They used a non-intimidating interface that looked and felt like a paper checkbook.
When a user entered a payment to "Staples," the software performed the complex accounting in the background, then translated it into simple language. The user was shielded from the jargon, which helped them overcome their fear without losing technical accuracy.
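To make the "black box" idea concrete, here is a minimal sketch of that kind of translation layer. It's written in Python with illustrative names; it is not Intuit's actual design. The user speaks checkbook; the double-entry detail stays behind the interface.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "no accounting in it" translation layer --
# not Intuit's actual implementation. The user speaks checkbook;
# the ledger speaks double-entry.

@dataclass
class JournalLine:
    account: str
    debit: float = 0.0
    credit: float = 0.0

def write_check(payee: str, amount: float, category: str) -> list[JournalLine]:
    """Translate a checkbook-style action into double-entry journal lines.

    The user never sees "debit" or "credit"; they see "wrote a check."
    """
    return [
        JournalLine(account=f"Expenses:{category}", debit=amount),  # debit the expense
        JournalLine(account="Assets:Checking", credit=amount),      # credit the bank account
    ]

def describe(payee: str, amount: float) -> str:
    """What the user is shown instead of the journal entry."""
    return f"You paid {payee} ${amount:,.2f} from your checking account."

lines = write_check("Staples", 84.12, "Office Supplies")
print(describe("Staples", 84.12))

# The double-entry detail exists, but stays behind the interface:
assert sum(l.debit for l in lines) == sum(l.credit for l in lines)
```

The design choice is the point: technical accuracy is preserved in the data model, while the interface only ever speaks the user's language.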
This system uses data to uncover the "what" and observations to explore the "why." It helped build an $8 billion-a-year business segment.
Part 2: The Observation — Defending Behavioral Truth from "Say" Data
The Swiffer story shows what to do when your observational data is in direct conflict with what your customers say.
It's a case study in strategic courage.
1. The "Confirmation" Trap (The "Better Soap" Fallacy): In the mid-1990s, P&G's home cleaning division was stagnant. P&G tried to innovate by asking customers about "more suds" or "a fresh scent." These were just small tweaks to a system that was fundamentally broken.
2. The Pivot to "Do" (The Ethnographic Insight): Frustrated, P&G's Corporate New Ventures group hired the design firm Continuum.
Continuum decided to stop asking questions. Instead, they would watch people clean their floors.
But they never got that far. When they went into homes, they found floors that were already clean. The existing process was so messy and burdensome that people were ashamed to be seen doing it.
Continuum labeled this the "Clean Floors Paradox."
People didn't have issues with the soap. They had issues with the multi-step process of cleaning.
3. The "Say vs. Do" Collision (The Failed Concept Test): The team developed a prototype, code-named "FastClean," with a disposable pad.
But the entire $5 billion innovation nearly died in research.
P&G ran a "say" test. They showed consumers a written concept of the product and asked for their opinions. Consumers were not enthusiastic. They said they were reluctant to buy another disposable product. And they were concerned about the environmental impact.
This is the "Say vs. Do" gap in its purest form. When asked, consumers respond as their rational, "looking good" self. Based on this "say" data, the project should have been killed.
4. The "Do" Wins (The Experiential Prototype): The team, armed with the conviction from their "do" research, ignored the "say" data. They built a crude prototype and got it into people's hands..
Everything changed. When potential customers were able to use the product, they fell in love with it. They had the "Oh my god" moment, as P&G's R&D Director recalled, turning over the pad to see the "visual proof" of the trapped dirt.
The visceral, behavioral "do" was the real signal. The rational, attitudinal "say" was just noise.
The behavioral insight is the catalyst. But it's the system that creates the billion-dollar brand.
Part 3: The Intervention — The "Unscalable" Experiment
You can't risk a multi-million dollar strategy on an insight alone. The next step is to run a fast, low-cost, unscalable intervention to prove the hypothesis in the real world.
The Airbnb photo experiment provides a clear blueprint for this process.
1. The Crisis & The Symptom (The "Forcing Function"): The experiment was born from profound desperation. In 2009, the company was in Y Combinator and close to going bust. Their growth, as co-founder Joe Gebbia described it, was a "horizontal drumstick graph": revenue flatlined at a meager $200 per week.
This crisis was a strategic gift. It proved that their scalable, engineering-first solutions (like a photo upload wizard) were not working.
While reviewing New York listings with Y Combinator's Paul Graham, Gebbia, a RISD-trained designer, spotted the issue. As he said, he found a pattern of "photos that sucked."
This led to Graham's counterintuitive advice: "Do things that don't scale."
2. The "Trojan Horse" Intervention (The Unscalable "Excuse"): The founders diagnosed the problem as a skill gap, not a tool gap. Hosts couldn't take good photos, and no tool would teach lighting.
So they flew to New York, borrowed a $5,000 camera, and offered hosts free professional photos. The photos were the excuse; the real goal was to get into hosts' homes and understand more about them.
3. The Real Diagnosis (Uncovering the "Why"): This intervention allowed them to move from diagnosing a symptom to understanding the problem. While in the hosts' homes, they weren't just photographers; they were ethnographers.
This is where they uncovered the true "Say vs. Do" gap:
The Guest Gap ("Say"): Guests said their reasons for not booking were rationalizations like "price" or "location."
The Guest Gap ("Do"): Their behavior was "bouncing from ugly listings" that looked like Craigslist-quality photos.
The Insight: The "say" was a rationalization. The "do" revealed a trust issue. The ugly photos were a proxy for risk, amplifying the danger of staying in strangers' homes.
4. The Breakthroughs (the "Test Nugget" and the "Binder"): The intervention had two immediate, massive payoffs.
First, the quantitative "test nugget": The following week, revenue doubled from $200 to $400. This proved the hypothesis that better photos build trust and drive revenue.
Second, the qualitative breakthrough: Hosts gave them "binders full of suggestions."
The revenue-doubling proved their one hypothesis. The binder gave them ideas for their future product roadmap.
How to Build Your Exploration System: A 3-Step Process
You don't need a massive R&D budget to do this. You just need to follow the 3-step system our case studies revealed.
Stop asking your customers to predict their own future. Start building a process to observe their present.
Step 1: Find the Anomaly (The Intuit "Surprise"): Start with your data. Look for the quantitative "surprise" that contradicts your assumptions. Where are users doing something unexpected? That data point you've been dismissing as a fluke is your starting point. It tells you who to go observe. (A rough sketch of what this scan can look like follows the three steps.)
Step 2: Find the Why (The P&G Observation): Now that you know who to observe, find their why. Schedule time to watch these users. Your goal is to find the behavioral truth they can't articulate.
Step 3: Test with an Unscalable Experiment (The Airbnb Intervention): Once you have a new hypothesis, don't build a new product. Run a fast, unscalable test to see if your hypothesis is correct.
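If your usage data lives in something like a CSV, the Step 1 scan doesn't need to be sophisticated. Here is a minimal sketch in Python; the file name, the segment column, and the 15% threshold are all illustrative assumptions, not anything from these case studies:

```python
import pandas as pd

# Hypothetical anomaly scan: compare who you ASSUME uses the product
# against who actually shows up in the data. All names here are
# illustrative -- swap in your own file, columns, and threshold.

usage = pd.read_csv("user_survey.csv")  # one row per user, with a "segment" column

# What you assume: e.g., "this is a consumer product."
assumed_share = {"consumer": 0.90, "small_business": 0.10}

# What the data actually says.
actual_share = usage["segment"].value_counts(normalize=True)

# Flag any segment whose observed share diverges sharply from the
# assumption -- the Intuit-style "surprise" worth going out to observe.
for segment, assumed in assumed_share.items():
    actual = actual_share.get(segment, 0.0)
    if abs(actual - assumed) > 0.15:  # arbitrary threshold; tune it
        print(f"Anomaly: assumed {segment} at {assumed:.0%}, observed {actual:.0%}")
```

The output isn't the insight; it's the invitation. Each flagged segment is a candidate for the Step 2 observation.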
The answers you're looking for aren't in your next survey. They are in a system that combines data and observation.
Find the anomaly. Observe the behavior. Test with a manual intervention.
Stop asking. Start watching.
Onward,
Aaron Shields
P.S. When was the last time your customer data truly surprised you? If it's just confirming what you already believe, you might be stuck in the "Say vs. Do" gap. If you want to start uncovering deeper insights, reply to this email and I'll set up a 20-minute call to help you start looking in the right direction.