Why Do User Interviews Often Give Wrong App Design Answers?

A logistics company spent months interviewing drivers about their ideal route planning app. Every single driver said they wanted detailed maps, weather updates, and traffic predictions. The company built exactly what users asked for—a feature-packed app with every bell and whistle drivers mentioned. Six months after launch, usage was terrible. Drivers were actually using a basic competitor app that simply showed the quickest route without any extras. What went wrong?

This scenario plays out constantly in app development, and it highlights one of the biggest misconceptions in our industry: that users know what they want and can articulate it clearly. After years of building apps and watching some succeed while others fail spectacularly, I've learned that user interviews—whilst valuable—often lead us down the wrong path if we take them at face value.

The truth is, people are terrible at predicting their own behaviour. They'll tell you they want comprehensive features during an interview, but when they're actually using your app on a busy Tuesday morning, they just want to get their task done quickly. They say they care about privacy, then happily hand over their data for a small convenience. They claim they'd pay for premium features, but baulk when faced with an actual paywall.

What people say, what people do, and what they say they do are entirely different things

This doesn't mean user interviews are useless—far from it. But treating interview responses as gospel truth is one of the fastest ways to build an app that nobody actually wants to use. Understanding why users give misleading feedback, and how to work around these limitations, can mean the difference between an app that thrives and one that gets deleted after the first use. Let's explore why our brains trick us into giving wrong answers, and more importantly, what we can do about it.

The Psychology Behind What People Say vs What They Actually Do

There's this weird thing that happens in user interviews—people genuinely believe they're telling you the truth, but their actual behaviour tells a completely different story. It's not that they're lying to you; it's just that human brains are wired in ways that make self-reporting incredibly unreliable.

I've lost count of how many times clients have told me their users want more features, only to discover through analytics that people barely use the features that already exist. One project sticks in my mind where users insisted they needed a complex dashboard with loads of data visualisations. But when we tracked their actual usage? They spent 90% of their time on just three basic functions.

Why Our Brains Betray Us

The problem is that we're asking people to do something their brains aren't designed for—accurately predict future behaviour or recall past actions. Memory is selective and often wrong. We remember dramatic events but forget mundane daily habits. We also suffer from something called the "intention-action gap"—the difference between what we plan to do and what we actually do.

Think about it this way: how many people say they'll exercise more, eat better, or use that productivity app they downloaded? Loads. How many actually follow through? Not many.

Social Desirability Bias in Action

Then there's the social pressure aspect. Users often give answers they think make them look good or smart. Nobody wants to admit they spend hours scrolling through social media when they could be doing something "productive." So they'll tell you they want educational content or professional networking features, when what they really crave is entertainment and escapism.

  • People overestimate how much they'll use "good for them" features
  • They underestimate time spent on "guilty pleasure" activities
  • They confuse aspirational behaviour with actual behaviour
  • They give answers that align with their ideal self, not their real self

The key is understanding that what people say reflects their intentions and aspirations—but what they do reflects their actual needs and motivations. Both pieces of information are valuable, but you need to know which one you're getting.

Common Research Bias That Skews User Interview Results

After conducting hundreds of user interviews over the years, I can tell you that bias isn't just something that affects the people you're interviewing—it affects you too. And honestly, some of these biases are so sneaky that even experienced researchers fall into their traps without realising it.

Confirmation bias is probably the biggest culprit I see. You know what I mean? It creeps in when you've already got a hunch about what users want and you unconsciously ask questions that confirm your suspicions. I've caught myself doing this more times than I'd like to admit. You'll find yourself nodding along when someone says something that supports your idea, but glossing over comments that don't fit your vision.

Then there's the recency bias—users remember their most recent experience way more vividly than their typical behaviour. If someone had a frustrating experience with your app yesterday, that's going to colour everything they tell you, even if they normally love using it. It's a bit mad really, but our brains just work that way.

Record your interviews and listen back later. You'll be shocked at how much you missed or misinterpreted in the moment, especially the subtle cues that contradict what people are actually saying.

The Most Common Biases to Watch For

  • Social desirability bias: Users tell you what sounds good rather than the truth
  • Anchoring bias: The first thing mentioned heavily influences everything that follows
  • Survivorship bias: You only hear from users who stuck around, not the ones who left
  • Availability heuristic: People overestimate things they can easily remember
  • False consensus effect: Users assume everyone thinks like they do

The key is acknowledging these biases exist and building your interview process around them. Don't try to eliminate bias completely—that's impossible. Instead, design your questions and analysis methods to account for these natural human tendencies.

Why Users Can't Predict Their Own Future Behaviour

Here's something I've learned from years of building apps—people are terrible at predicting their own behaviour. Absolutely terrible. I mean, we all think we know ourselves pretty well, but when it comes to how we'll actually use a mobile app? We're basically guessing.

I've sat in countless user interviews where someone will confidently tell me they'd never use push notifications because they find them annoying. Then six months later, our analytics show that same user type engaging most with the app specifically because of our notification strategy. It's a bit mad really, but it happens all the time.

The problem is that our brains aren't wired to accurately predict future scenarios, especially when those scenarios involve new technology or changed circumstances. When you ask someone "would you use this feature?", they're basing their answer on who they are right now, not who they'll be when they're actually using your app in their daily routine.

The Context Problem

Think about it this way—when you're sitting in a research session or filling out a survey, you're in a completely different mindset than when you're rushing to catch a train and need to quickly check something on your phone. The context is everything, and people simply can't simulate that future context accurately in their heads.

I've seen users swear they'd never want an app to remember their preferences because "privacy is important" to them. But once they start using the app regularly? They get frustrated every time they have to re-enter the same information. Their priorities shift when convenience becomes real rather than theoretical.

This is why we always tell our clients that what people say in interviews should inform your design process, not dictate it. The real insights come from watching actual behaviour once your app is in people's hands.

The Problem with Leading Questions and Assumptions

I've sat through hundreds of user interviews over the years, and honestly? The biggest mistakes I see aren't technical—they're in how we ask questions. Leading questions are like putting words in someone's mouth before they've even had a chance to think properly. When you ask "How much would you love a feature that automatically organises your photos?", you've already told them they should love it.

The thing is, most people want to be helpful. They'll go along with your suggestions because they think that's what you need to hear. I've watched developers get excited about features that users seemed enthusiastic about in interviews, only to find those same features completely ignored when the app launches. It's heartbreaking, really—and expensive.

Your Assumptions Are Showing

We bring our own baggage to every conversation. As app developers, we think we understand the problems users face because we've been living and breathing the project for months. But here's the thing—users don't see problems the same way we do. They don't even think about solutions the way we do.

The moment you start explaining why a feature would be useful, you've stopped learning what users actually need and started selling them on your preconceived ideas

Instead of asking "Would you use a feature that does X?", try "Tell me about the last time you struggled with Y." Let them describe their world in their own words. You'll be surprised how often their real problems are completely different from what you assumed. And those real problems? They're where the best app ideas come from—not from our clever feature lists, but from genuine user pain points we never saw coming.

When Users Tell You What They Think You Want to Hear

Plenty of participants have given me textbook-perfect answers that sounded brilliant on paper but were complete rubbish in practice. There's something about being interviewed that makes people want to be helpful—maybe too helpful. They'll tell you exactly what they think you want to hear, even if it's not remotely close to what they actually do or think.

This happens because users naturally pick up on cues from your questions, your body language, and even the app prototype you're showing them. If they sense you're excited about a particular feature, they'll suddenly become excited about it too. It's not that they're lying; they genuinely want to be good participants and give you useful feedback.

I once had a client who was convinced their fitness app needed social sharing features because every single user interview confirmed it. "Oh yes, I'd definitely share my workouts with friends," they'd say. But when we looked at the actual usage data from similar apps? Social features were barely touched. People thought sharing sounded like something they should want to do, but in reality, most fitness journeys are quite personal.

The People-Pleasing Problem

Users will also downplay problems or frustrations if they think it might hurt your feelings. They'll struggle with a navigation flow for minutes, then say "oh, it's fine once you get used to it." That's your cue that it's definitely not fine—they're just being polite.

The trick is learning to read between the lines. Watch what they do, not just what they say. Pay attention to their hesitations, the features they ignore, and the workarounds they create. That's where the real insights live.

Observational Methods That Reveal True User Patterns

Right, so if asking people what they want isn't reliable—and trust me, it usually isn't—what actually works? This is where things get interesting. Instead of asking users what they think they'll do, we watch what they actually do. It's a bit like being a detective, really.

The most powerful method I've used over the years is session recordings. You know when someone downloads your app and you can actually watch their finger movements, see where they tap, where they hesitate, where they get confused? That's gold dust right there. People might tell you in an interview that your navigation is "fine" but then you watch them spend thirty seconds hunting for the settings menu.

Heat mapping is another game changer—it shows you exactly where users are tapping on each screen. I had one client who was convinced users wanted a big search bar at the top of their homepage. The heat maps showed people were actually tapping on product categories at the bottom. Completely different behaviour from what they said in interviews.
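Under the hood, a heat map is nothing mysterious: it's aggregated tap coordinates, bucketed into a grid per screen. Here's a minimal TypeScript sketch of the idea (the screen name, grid size, and sample taps are all made up for illustration; in practice you'd use an off-the-shelf heat-mapping tool rather than rolling your own):

```typescript
// Minimal heat-map aggregation: bucket raw tap coordinates into a coarse
// grid per screen, then report the hottest cells.
type Tap = { screen: string; x: number; y: number }; // x, y normalised to 0..1

const GRID = 10; // 10x10 grid; coarser grids smooth out noise

function hotspots(taps: Tap[], screen: string, top = 3): string[] {
  const counts = new Map<string, number>();
  for (const t of taps.filter((t) => t.screen === screen)) {
    // Clamp so a tap at exactly x = 1.0 doesn't fall off the grid
    const col = Math.min(GRID - 1, Math.floor(t.x * GRID));
    const row = Math.min(GRID - 1, Math.floor(t.y * GRID));
    const key = `${row},${col}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, top)
    .map(([cell, n]) => `cell ${cell}: ${n} taps`);
}

// Hypothetical data: interviews said "search bar at the top",
// but the taps cluster on the categories at the bottom.
const taps: Tap[] = [
  { screen: "home", x: 0.48, y: 0.91 },
  { screen: "home", x: 0.55, y: 0.88 },
  { screen: "home", x: 0.5, y: 0.06 },
];
console.log(hotspots(taps, "home"));
```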

Real-World Observation Techniques

Analytics tell a story that interviews simply can't. Look at your drop-off rates, session lengths, and user flows. If people are saying they love feature X but your data shows 90% of users never use it, believe the data.

Set up basic event tracking before conducting any interviews. This gives you hard data to compare against what people tell you, and often the contrast is quite revealing. There's a minimal sketch of what this can look like after the list below.

  • Session recordings show actual user behaviour patterns
  • Heat maps reveal where users really focus their attention
  • A/B testing lets users vote with their actions, not their words
  • User flow analytics highlight friction points people don't mention
  • Time-on-screen metrics expose engagement vs stated preferences
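The tracking behind that tip doesn't need to be elaborate. Here's a minimal sketch, assuming a hypothetical in-memory event log and made-up feature names; once events are flowing, the stated-versus-actual comparison is a one-liner per feature:

```typescript
// Minimal event tracking: record which users touch which features,
// then compare adoption against what people claimed in interviews.
type AppEvent = { userId: string; feature: string; at: Date };

const events: AppEvent[] = []; // in production this would go to your analytics backend

function logEvent(userId: string, feature: string): void {
  events.push({ userId, feature, at: new Date() });
}

// Share of your user base that has ever touched a feature
function adoptionRate(feature: string, totalUsers: number): number {
  const users = new Set(
    events.filter((e) => e.feature === feature).map((e) => e.userId)
  );
  return users.size / totalUsers;
}

// Hypothetical logs: every interviewee "definitely" wanted social sharing
logEvent("u1", "route-view");
logEvent("u2", "route-view");
logEvent("u1", "social-share");

const rate = adoptionRate("social-share", 100);
console.log(`social-share adoption: ${(rate * 100).toFixed(1)}%`);
// Interviews said everyone would use it; the logs say 1%. Believe the logs.
```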

How to Structure User Interviews for Better Insights

Right, so we've established that users can be proper unreliable when it comes to telling us what they actually want. But that doesn't mean we should bin user interviews altogether—we just need to run them better. Over the years I've tweaked my interview process to get around these biases, and honestly? The insights are so much more useful now.

The trick is to focus on behaviour rather than opinions. Instead of asking "Would you use a feature like this?" I ask "Tell me about the last time you tried to solve this problem." See the difference? One gets you hypothetical nonsense; the other gets you real stories about actual behaviour.

Key Questions That Actually Work

Here's my go-to structure that cuts through the fluff and gets to the good stuff:

  • Walk me through how you currently handle [specific task]
  • What's the most frustrating part of that process?
  • Show me the last three apps you used on your phone
  • What made you stop using [competitor app]?
  • Describe a time when [problem] really annoyed you

Notice how these questions focus on past behaviour and concrete examples? That's where the gold is. When someone tells you about something that actually happened, you're getting real data instead of what they think sounds good.

The 'Show Don't Tell' Approach

One technique that works brilliantly is asking users to actually demonstrate things during the interview. Get them to show you their current process on their phone or computer. You'll spot problems they don't even mention because they've grown so used to working around them.

The best interviews feel like conversations, not interrogations. Keep it casual, follow interesting threads, and remember—you're a detective gathering evidence, not a lawyer making a case.

Conclusion

Look, I get it—user interviews feel like the obvious solution when you're trying to figure out what users want from your app. You sit them down, ask thoughtful questions, and they give you answers. Simple, right? But as we've explored throughout this guide, it's anything but simple.

The reality is that people are terrible at predicting their own behaviour. They'll tell you they want features they'll never use; they'll claim they hate designs they'd actually engage with daily. And honestly? This isn't their fault—it's just how our brains work. We're influenced by what we think sounds good in theory, not what actually works in practice.

But here's the thing—I'm not saying you should ditch user interviews entirely. That would be mad. What I am saying is that you need to treat them as just one piece of a much bigger puzzle. Combine them with observational research, A/B testing, and real usage data. Watch what people do, not just what they say they'll do.

The most successful apps I've worked on weren't built solely on user interview feedback—they were built on understanding the gap between what people say and what they actually need. When you start designing with this understanding, everything changes. Your conversion rates improve, your retention goes up, and users actually stick around because the app works for them in ways they couldn't have articulated in an interview.

So next time you're planning user research for your app, remember this: listen to what users tell you, but pay even closer attention to what they show you through their actions. That's where the real insights live.