What Makes Some User Research Sessions Completely Useless?
You spend weeks planning the perfect user research session. You recruit the right participants, book the testing lab, prepare your questions, and gather the stakeholders. Everyone's excited—this is going to give you the insights you need to make your app truly brilliant. But then something goes wrong. Maybe the participants just tell you what they think you want to hear. Maybe the questions don't actually reveal anything useful. Or maybe you get loads of data but somehow feel more confused than when you started.
I've seen this happen countless times over the years, and it's honestly heartbreaking. Teams put so much effort into user research, but they end up with results that are either misleading or completely useless. The worst part? They often don't realise it until they've built features based on flawed insights and watched their app metrics plummet. It's like following a map that's pointing you in completely the wrong direction—you'll walk for miles before you realise you're lost.
Bad user research isn't just unhelpful; it's actively dangerous because it gives you false confidence in decisions that could sink your app
The thing is, user research isn't automatically valuable just because you're doing it. There are so many ways it can go wrong, from recruiting the wrong people to asking questions that bias the answers. But here's what I've learned: understanding these common mistakes is actually more valuable than learning the "right" way to do research. When you know what makes research sessions completely useless, you can spot the warning signs before they derail your entire project. And trust me, your app's success depends on getting this right.
The Wrong People Problem
You know what? I see this mistake all the time—teams spending weeks organising user research sessions, only to end up talking to entirely the wrong people. It's honestly one of the fastest ways to make your research completely useless, and I've watched plenty of projects go sideways because of it.
Here's the thing that drives me mad: people often recruit participants based on convenience rather than actual user profiles. They'll grab colleagues, friends, or whoever responds first to their recruitment post. But if you're building a fintech app for busy professionals and you're testing with university students who've never had a mortgage... well, you can see the problem.
Who Actually Uses Your App?
Getting the right people starts with really understanding who your users are—and I mean beyond basic demographics. Sure, knowing someone's age and income matters, but what about their behaviour patterns? A 35-year-old lawyer who uses their phone constantly has very different needs from a 35-year-old teacher who barely touches technology outside work hours.
I always tell clients to think about the context too. Where will people use your app? When? What's their mindset? If you're building something people use during stressful situations, testing with relaxed participants in a quiet room won't give you realistic feedback.
The Screening Process
This is where many teams get lazy, but proper screening questions are your lifeline. Don't just ask "Do you use mobile apps?"—that's basically everyone these days. Ask specific questions about their current solutions, pain points, and actual behaviours. And here's a tip: always include a few disqualifying questions to filter out people who just want the incentive payment but aren't really your target audience.
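To make this concrete, here's a rough sketch of what a screener could look like as code, the kind of thing you might use to pre-filter an online recruitment panel. Every question, field name and rule here is hypothetical; the point is simply that your disqualifying criteria should be explicit and testable, not vibes.

```python
# A minimal screener sketch. The questions and rules are hypothetical;
# the point is that disqualifying criteria are explicit and testable.

SCREENER = [
    {
        "id": "money_management",
        "question": "Which of these do you currently use to manage money?",
        "disqualify_if": lambda answer: "mobile banking app" not in answer,
    },
    {
        "id": "check_frequency",
        "question": "How often did you check your finances on your phone last week?",
        "disqualify_if": lambda answer: answer == "never",
    },
]

def passes_screener(answers: dict) -> bool:
    """Return True only if the participant clears every disqualifying rule."""
    for item in SCREENER:
        if item["disqualify_if"](answers.get(item["id"], "")):
            return False
    return True

# A student who has never used mobile banking gets screened out.
print(passes_screener({
    "money_management": "cash and a branch visit",
    "check_frequency": "never",
}))  # False
```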
Getting the wrong people doesn't just waste time; it can actually mislead your entire product direction. Better to delay research than to base decisions on feedback from people who'll never actually use what you're building.
Asking Leading Questions
Right, this one makes me genuinely frustrated because it's so bloody common and completely ruins your research quality. Leading questions basically push users towards the answer you want to hear rather than their honest opinion. I see it all the time—teams spend weeks setting up user sessions then accidentally sabotage their own research by asking things like "Don't you think this new checkout process is much clearer?" Well of course they're going to say yes!
The problem is that most people naturally want to be helpful and agreeable during research sessions. When you ask "How much do you love this feature?" instead of "What's your opinion of this feature?", you're essentially putting words in their mouth. And here's the thing—once you've planted that seed, all their subsequent feedback gets tainted. They start telling you what they think you want to hear rather than what they actually think.
I've watched perfectly good research sessions go completely sideways because someone couldn't resist asking "This is pretty intuitive, right?" after showing a new interface. Instead of getting genuine feedback about usability issues, they got polite nods and missed critical problems that later showed up in their analytics as high bounce rates.
Start your questions with "What", "How", "When" or "Tell me about" instead of "Don't you think" or "Wouldn't you agree". Let users form their own opinions without your influence.
Examples of Leading vs. Neutral Questions
- Leading: "How easy was that checkout process?" → Neutral: "What was your experience with the checkout process?"
- Leading: "You found that confusing, didn't you?" → Neutral: "What went through your mind when you saw that?"
- Leading: "Which of these designs do you prefer?" → Neutral: "What are your thoughts on these design options?"
- Leading: "This seems much faster now, right?" → Neutral: "How did that feel compared to before?"
Chasing Numbers Over Insights
Here's something I see all the time—teams get completely obsessed with metrics that sound important but tell them nothing useful about their users. They'll spend weeks tracking how many people tapped a button, but won't bother asking why half of them immediately backed out afterwards. It's like measuring how many people walk into your shop but ignoring the fact that most of them leave empty-handed and looking confused.
I've worked with clients who had beautiful dashboards full of data points that made them feel very scientific about their research. Click-through rates, time on screen, conversion funnels—all neatly colour-coded and updating in real time. But when I asked them what their users were actually trying to achieve, they went quiet. Numbers without context are just... well, numbers.
The problem is that vanity metrics are easy to collect and they make us feel productive. But they're also incredibly misleading. A high engagement rate might sound good until you realise users are tapping frantically because they can't figure out how to complete their task. A low bounce rate could mean people love your app, or it could mean your navigation is so broken they can't find the exit.
What Actually Matters
Good user research digs deeper than surface-level behaviour. Instead of just counting actions, we need to understand the story behind them:
- Why did users choose that particular path through your app?
- What were they expecting to happen when they tapped that button?
- How did they feel when your app responded differently?
- What would they change if they could redesign this flow?
These insights can't be captured by analytics alone—they require actual conversations with real people. The numbers tell you what happened; the insights tell you why it happened and what to do about it.
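One practical way to bridge the two: use your analytics export to find the sessions worth a conversation. The event names and schema below are assumptions about a raw tap-event export, but the idea is to flag everyone who tapped a key button and then backed straight out, and then go and ask those people why.

```python
# Sketch: find sessions where someone tapped a button and then backed out
# within a few seconds. Event names and schema are assumptions about a
# raw analytics export; adapt them to whatever your tool actually produces.

events = [
    {"session": "a1", "name": "tap_checkout", "t": 10.0},
    {"session": "a1", "name": "back", "t": 12.5},
    {"session": "b2", "name": "tap_checkout", "t": 8.0},
    {"session": "b2", "name": "purchase_complete", "t": 45.0},
]

def backed_out_sessions(events, button="tap_checkout", within_seconds=5.0):
    """Sessions where a 'back' event followed the button tap almost immediately."""
    tap_times = {}
    suspects = set()
    for event in sorted(events, key=lambda e: e["t"]):
        if event["name"] == button:
            tap_times[event["session"]] = event["t"]
        elif event["name"] == "back" and event["session"] in tap_times:
            if event["t"] - tap_times[event["session"]] <= within_seconds:
                suspects.add(event["session"])
    return suspects

# These sessions are interview candidates, not just a line on a chart.
print(backed_out_sessions(events))  # {'a1'}
```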
Testing Too Early or Too Late
I've seen teams rush into user testing with barely a wireframe sketched out—and honestly, it's painful to watch. Sure, you want to validate ideas early, but testing a concept that's not even properly formed yet? You're just going to confuse your users and waste everyone's time. On the flip side, I've worked with companies who spend months perfecting their app only to discover users hate the core navigation structure. That's equally frustrating and way more expensive to fix.
The sweet spot for testing depends entirely on what you're trying to learn. Early concept testing works brilliantly when you've got clear user flows mapped out and can explain what the app actually does; testing individual features makes sense when you've got working prototypes that behave realistically. But here's what doesn't work—testing half-baked ideas where users spend more time asking "what's supposed to happen here?" than actually using your app.
The best time to test is when you have something concrete enough to be meaningful, but flexible enough to be changed without breaking the bank
I've learned that timing your research sessions around your development cycle is absolutely critical. Testing too late in the process means you've already committed resources to features that might not work; testing too early means you're gathering feedback on something that barely exists. The key is matching your research method to your development stage: paper prototypes for early concept validation, clickable prototypes for usability testing, and beta versions for performance and real-world usage patterns. Get this timing wrong and your research sessions become expensive exercises in frustration rather than valuable sources of insight.
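That stage-to-method pairing is simple enough to write down as a lookup, and some teams bake something like it into their planning templates. A tiny, admittedly simplified sketch:

```python
# The pairing from the paragraph above as a simple lookup. Simplified on
# purpose; real projects rarely fall into exactly three stages.

RESEARCH_METHOD_BY_STAGE = {
    "early concept": "paper prototypes for concept validation",
    "working prototype": "clickable prototypes for usability testing",
    "pre-launch": "beta versions for performance and real-world usage",
}

def recommend_method(stage: str) -> str:
    return RESEARCH_METHOD_BY_STAGE.get(
        stage, "unclear stage: work out what you need to learn first")

print(recommend_method("early concept"))
```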
The Observer Effect
You know when you're driving normally but suddenly spot a police car and instantly become the most cautious driver on the road? That's basically what happens in user research sessions—and it's killing your data.
The observer effect is dead simple: people behave differently when they know they're being watched. In user research, this means your participants aren't using your app the way they normally would. They're performing for you, trying to be helpful, or second-guessing every action because they think you're judging them.
I've seen this countless times. Users will spend ages reading every single word on a screen they'd normally scan in seconds. They'll explain their thought process out loud when they'd usually make split-second decisions. They'll even avoid clicking things they're unsure about because they don't want to "get it wrong"—when in real life, they'd just tap and see what happens.
The Fake Politeness Problem
Here's the thing that really gets me: participants want to be nice. They'll tell you your app is "quite good actually" when they're secretly frustrated. They'll struggle with a confusing interface but apologise for not understanding it properly. It's maddening because you end up with feedback that's basically useless.
The solution? Make your sessions feel less like tests and more like conversations. Use screen recording tools so people can use your app on their own devices in familiar environments. Ask them to show you how they'd normally do things rather than following specific tasks. And for crying out loud, stop hovering over their shoulder taking notes—it makes everyone nervous!
Poor Research Planning
I've seen research sessions fall apart before they even start—and it's usually because nobody bothered to think through what they actually wanted to learn. You know what happens when you wing it? You end up with a room full of confused participants, wasted budget, and data that tells you absolutely nothing useful.
The thing is, good research planning isn't rocket science, but it does require you to slow down and think before you act. I mean, you wouldn't build an app without wireframes, so why would you run research without a proper plan? Yet I see teams do this all the time, especially when they're under pressure to "just get some user feedback quickly."
Here's what proper planning actually looks like. First, you need clear research questions—not "let's see what users think" but specific questions like "can users complete checkout in under 3 minutes?" or "do they understand what this button does?" Without these, you're just having expensive conversations with strangers.
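A question that specific also gives you something you can actually score afterwards. Here's a sketch using the checkout example; the three-minute threshold comes straight from the question, and the session timings are made up:

```python
# Scoring the example research question: "can users complete checkout
# in under 3 minutes?" The timings below are made-up session data.

checkout_times_seconds = [95, 140, 260, 170, 400, 150, 210, 185]
THRESHOLD_SECONDS = 180  # 3 minutes, straight from the research question

under = sum(1 for t in checkout_times_seconds if t < THRESHOLD_SECONDS)
rate = under / len(checkout_times_seconds)

print(f"{under}/{len(checkout_times_seconds)} participants "
      f"({rate:.0%}) completed checkout in under 3 minutes")
# Output: 4/8 participants (50%) completed checkout in under 3 minutes
```

A pass/fail answer like that is far easier to act on than "users seemed to find checkout okay".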
The Planning Checklist That Actually Works
- Define 3-5 specific research questions before you start
- Choose the right method for what you're trying to learn
- Plan your participant screening criteria in detail
- Create a discussion guide with timing for each section
- Test your prototype or materials beforehand
- Brief your observers on what to look for
And here's something people forget—you need to plan what you'll do with the data afterwards. I've worked with teams who collected brilliant insights but had no process for turning them into actionable changes. That's not research; that's just expensive theatre.
Write down exactly what decision you're trying to make with this research. If you can't answer that, you're not ready to start recruiting participants yet.
Misinterpreting What Users Actually Mean
Here's where things get properly tricky—and honestly, where I see most teams completely mess up their research. Users rarely say exactly what they mean, and they definitely don't mean exactly what they say. It's not their fault; they're just not trained to articulate their needs in ways that translate directly to app features.
When a user says "I want more customisation options," they might actually mean "I feel like this app doesn't understand my workflow." When they complain that your app is "too slow," the real issue could be that the loading screens don't give them enough feedback about what's happening. I've seen teams spend months building customisation features when what users actually needed was better default settings that worked out of the box.
What Users Say vs What They Need
The gap between user words and user needs is massive. Someone might tell you they want "more advanced features" but what they're really struggling with is finding the basic features they need. They'll ask for "better navigation" when the real problem is that your information architecture doesn't match their mental model of how the app should work.
- Listen for emotions, not just feature requests
- Ask "what were you trying to accomplish?" instead of "what do you want?"
- Watch their behaviour during the session—it tells a different story than their words
- Probe deeper when something doesn't add up
- Look for patterns across multiple users rather than individual complaints
The skill here is learning to translate user language into actionable insights. It takes practice, but once you get good at it, your research sessions become ten times more valuable. You'll start solving real problems instead of building features nobody actually uses.
Not Acting On What You Learn
Right, so you've done brilliant research. Your participants gave you gold-dust insights, you spotted patterns in their behaviour, and you've got a lovely report sitting on your desk. Now what? Here's where most research sessions actually become useless—when teams don't act on what they've learned. I see this all the bloody time, and it drives me mad.
The problem isn't that teams ignore research findings on purpose. Usually it's more subtle than that. The research gets filed away "for later", or teams cherry-pick only the findings that support what they already wanted to build. Sometimes the insights get watered down as they move up the chain, losing their impact along the way. Before you know it, you're back to building features based on assumptions rather than evidence.
We spent three weeks conducting user interviews and discovered our onboarding flow was confusing users, but we launched it unchanged because we'd already committed to the deadline
Another common trap? Acting on feedback but missing the deeper insight. A user says "I want a dark mode" so you build dark mode—but you didn't dig into why they asked for it. Maybe they were struggling with readability, and better contrast would solve the real problem without adding another feature to maintain.
Making Research Actionable
The best research sessions include a clear action plan before you even start. What will you do if users struggle with feature X? How will you prioritise conflicting feedback? Who has the authority to make changes based on what you learn? Sort this out upfront, and your research actually has a chance of improving your app rather than gathering dust in a folder somewhere.
Conclusion
Right then—after building apps for countless clients and watching user research sessions go spectacularly wrong (and brilliantly right), I reckon there's one thing that separates useful research from sessions that are a complete waste of time. It's not fancy equipment or expensive tools; it's whether you genuinely care about understanding your users or you're just ticking boxes.
The truth is, bad user research isn't just useless—it's actively harmful. When you base decisions on flawed insights, you end up building features nobody wants, fixing problems that don't exist, and spending months going in completely the wrong direction. I've seen teams waste six months of development time because they trusted research sessions where they asked leading questions to the wrong people at the wrong time.
But here's the thing—good user research is absolutely game-changing for mobile apps. When you get real insights from actual users, everything clicks into place. You know which features to prioritise, you understand the pain points that actually matter, and you can make design decisions with confidence rather than guesswork.
The key mistakes we've covered aren't rocket science to avoid. Test with your real audience, ask open questions, plan your sessions properly, and for crying out loud—actually use what you learn. Most importantly, remember that user research isn't a one-time thing you do before launch; it's an ongoing conversation with the people who'll make or break your app's success. Get that right, and you'll build something people actually want to use.