Expert Guide Series

How Do I Find Out What Features Users Really Want?

Have you ever spent months building features nobody actually uses? I have, and it's bloody frustrating when you realise you could have known this before writing a single line of code. The thing is—most app teams build what they think users want, not what users actually need, and there's a massive gap between those two things. I've worked with healthcare startups that built complex data dashboards because they seemed professional, only to find out users just wanted a simple way to book appointments. And I've seen e-commerce clients obsess over fancy product filters while their users were abandoning carts because checkout took too many steps.

The problem isn't that teams don't care about their users; it's that they don't know how to properly figure out what features will actually get used. You know what? Most app ideas start with good intentions but somewhere along the way they get derailed by assumptions. Someone in a meeting says "users definitely want this" and everyone nods along, and suddenly you're building features based on gut feelings rather than real evidence. I mean, gut feelings have their place, but they shouldn't be your primary research method when you're investing tens of thousands of pounds into development.

Feature research isn't about asking users what they want—it's about understanding what problems they're actually trying to solve and watching how they behave when nobody's looking

What makes this tricky is that users often can't tell you what they need. They'll say they want more features when what they really need is the existing ones to work better. That's where proper feature research comes in, and honestly, it doesn't have to be complicated or expensive. Over the years I've refined a handful of methods that work reliably across different industries—from fintech apps handling sensitive transactions to education platforms where engagement is everything. The approaches I'll share in this guide come from actually doing this work, making mistakes, learning what works, and doing it again with better results.

Why Most Apps Get Feature Planning Wrong

I've watched dozens of apps launch with features nobody wanted, and it still happens more often than it should. The pattern is always the same—someone in a meeting room (usually the person paying for the app) decides what users need without actually asking them. I mean, it sounds obvious when you say it out loud, but you'd be surprised how many projects I've taken on where the feature list was written before anyone spoke to a single real user.

The biggest mistake? Building features because they seem cool or because a competitor has them. I worked on an e-commerce app where the client insisted on adding augmented reality try-on functionality because they'd seen it in another app. Sounds great, right? Except their users were primarily over 55 and just wanted faster checkout. We built it anyway (client insisted), and usage data showed less than 2% of users even attempted to use the AR feature. That's a lot of development budget for something that collected dust.

Another common problem is asking users what they want directly—which sounds like the right approach but actually isn't. People are terrible at predicting what they'll use. They'll tell you they want twenty features in a survey, then complain the app feels cluttered when you build them all. What users say they want and what they actually use are two completely different things, and that gap has cost clients tens of thousands in wasted development. This is where understanding what questions actually reveal user needs becomes crucial for successful app development.

The Real Reasons Feature Planning Fails

  • Building features based on internal assumptions rather than user behaviour
  • Copying competitor features without understanding why they exist
  • Taking user requests at face value instead of understanding the underlying need
  • Prioritising stakeholder preferences over actual user data
  • Adding features to justify budget rather than solve real problems
  • Skipping validation steps because "we already know what users want"

The truth is, successful feature planning requires looking at what people do, not just what they say. It's about finding patterns in behaviour, spotting friction points in existing solutions, and testing assumptions before you commit development resources. But here's the thing—most companies skip these steps because they take time and feel less productive than just building stuff.

The Five Methods That Actually Work for Feature Research

I'm going to be straight with you—most feature research methods sound great in theory but fall apart when you actually try them. Over nearly a decade of building apps, I've tested pretty much every approach out there and only five have consistently given me reliable insights about what users actually want (not what they say they want, which is often completely different).

First up is user interviews, but not the kind where you ask "what features would you like?" That question is useless. Instead, I ask people to walk me through their last experience with a similar problem. When I was working on a healthcare booking app, we asked users to describe the last time they tried to book an appointment—the frustrations, the workarounds, everything. The feature requests they gave us directly? Mostly rubbish. The insights from watching their actual behaviour? Gold. Knowing which research technique to use first can save you from months of building the wrong features.

Second method is analytics mining from existing products. If you've got any kind of current solution (even a website or manual process), track what people do, not what they say. On an e-commerce project, we discovered users were visiting the size guide page multiple times before purchasing; it turned out they needed better help judging size and fit, not the chatbot support they kept requesting in surveys.

Third is support ticket analysis. I know it sounds boring but honestly, your support inbox is basically a list of features your product needs. When we analysed three months of tickets for a fintech app, 60% of queries were about one missing feature that never made it onto our roadmap because nobody thought to look.

Run a quick audit of your last 100 support tickets and categorise them by theme—you'll probably find that 3-4 issues account for the majority of problems, and those are your real feature priorities.
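That audit can be as simple as a short script. A minimal sketch, assuming your tickets export as plain text and you've drafted a keyword list for each theme (the themes, keywords, and tickets below are illustrative, not from any real product):

```python
from collections import Counter

# Illustrative theme -> keyword mapping; replace with themes from your own support inbox
THEMES = {
    "login problems": ["password", "login", "sign in", "locked out"],
    "missing export": ["export", "download", "csv", "spreadsheet"],
    "slow performance": ["slow", "lag", "loading", "freeze"],
}

def categorise(ticket: str) -> str:
    """Assign a ticket to the first theme whose keywords appear in its text."""
    text = ticket.lower()
    for theme, keywords in THEMES.items():
        if any(word in text for word in keywords):
            return theme
    return "other"

def audit(tickets: list[str]) -> list[tuple[str, int]]:
    """Count tickets per theme, most common first."""
    return Counter(categorise(t) for t in tickets).most_common()

# Hypothetical sample tickets
tickets = [
    "I can't log in, my password reset email never arrives",
    "The app is so slow when loading my dashboard",
    "How do I export my data to CSV?",
    "Still locked out of my account",
]

for theme, count in audit(tickets):
    print(f"{theme}: {count}")
```

Even a crude keyword pass like this is enough for a first-cut audit of 100 tickets; the themes that bubble to the top are the ones worth reading in detail by hand.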

The Methods That Need Real Users

Fourth is prototype testing with actual target users (not your mates or colleagues). Build something rough—it doesn't need to work properly—and watch people try to use it. The confusion points? Those tell you what's missing. I've done this with everything from paper sketches to clickable prototypes; both work fine as long as you're watching real behaviour rather than asking for opinions.

Watching What Competitors Miss

Fifth method is competitor review mining. Not just looking at what features competitors have, but reading their one-star and two-star reviews to find out what they're doing wrong. When we were building a meal planning app, competitors had thousands of reviews complaining about the same issue (grocery list organisation). We made that our core feature and it became our biggest differentiator. The thing is, most companies don't actually read the negative reviews of their competition; they just look at feature lists and try to copy them. Learning what to look for when studying app store reviews can reveal gaps your competitors haven't addressed.

These five methods work because they focus on revealed preferences (what people actually do) rather than stated preferences (what people say they'll do). And that gap? It's massive. I've seen it trip up experienced product teams time and time again.

Reading Between the Lines of User Feedback

User feedback is brilliant when it's accurate, but here's what nobody tells you—people are absolutely terrible at articulating what they actually want. I've lost count of how many times clients have shown me their feedback forms filled with requests for features that, when we dig deeper, aren't really what users need at all. It's like when someone says they want a faster horse, but what they really need is a car, you know?

The trick I've learned over the years is to focus on the problem being described rather than the solution being requested. When users say "I want a dark mode," they might actually be telling you the app is uncomfortable to use at night or the colours are too harsh. When they ask for "more filter options," they're often really saying they can't find what they're looking for quickly enough. See the difference? The stated request is just the tip of the iceberg—the real insight is buried underneath.

Looking for Patterns in Complaints

I worked on a healthcare app where users kept requesting a "notes section" in their appointment bookings. Simple enough, right? But when we actually spoke to these users (not just read their feedback), we discovered they wanted to remember which symptoms to mention to their doctor. The solution wasn't just a notes field—it was a pre-appointment checklist that prompted them with relevant health questions. That feature ended up with an 80% usage rate because we understood the underlying need.

Pay attention to the language people use too. Words like "confusing," "lost," or "can't find" suggest navigation problems. Terms like "slow," "laggy," or "waiting" point to performance issues. And when users say something "doesn't work," that's your cue to investigate what they were actually trying to accomplish. The emotional tone matters as much as the content—frustrated users often highlight your most pressing problems. Understanding how to turn user feedback into better app features requires reading beyond the surface complaints.

The Context Behind the Complaint

Another thing I always do is look at when and where feedback is being left. Reviews written at 2am often reveal different pain points than those written during business hours. A fintech app I worked on received loads of complaints about "complicated navigation" but only during market hours—turned out users needed faster access to specific features when making time-sensitive decisions. We redesigned the quick-access menu and those complaints dropped by 60%.

Don't ignore the silent users either. Sometimes the most telling feedback is the lack of it—if a feature has low engagement and no one's asking about it, that's valuable data too. I mean, why would they request improvements for something they've already decided to ignore? Track what people do after leaving feedback as well; if they uninstall shortly after complaining, that tells you the problem was serious enough to drive them away completely. This connects directly to why people delete apps they've paid for—often it's because their feedback went unheard.

Using Data to Spot What People Really Need

Here's what actually matters when you're digging through data—it's not just about what users are doing, it's about why they're doing it. I've seen plenty of teams get excited about high engagement on a particular screen only to discover (usually too late) that people were stuck there because the navigation was confusing. The data looked good but the experience was terrible.

Start with your analytics platform, whether that's Firebase, Mixpanel, or Amplitude. Look at where people drop off in your flows. On a fintech app we built, we noticed 40% of users abandoned the account setup at step three of five; turned out we were asking for their national insurance number too early and it felt invasive. Moving that field to after they'd experienced the app's value increased completion by 28%. Sometimes the data tells you exactly where the problem is if you know what to look for.
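Spotting a drop-off like that is just arithmetic on step counts once you've exported them from your analytics tool. A minimal sketch, with a hypothetical account-setup funnel (the step names and numbers are made up for illustration):

```python
def funnel_dropoff(steps: list[tuple[str, int]]) -> list[tuple[str, float]]:
    """For each step after the first, the percentage of users lost
    since the previous step."""
    dropoffs = []
    for (prev_name, prev_count), (name, count) in zip(steps, steps[1:]):
        lost = (prev_count - count) / prev_count
        dropoffs.append((name, round(lost * 100, 1)))
    return dropoffs

# Hypothetical funnel: (step name, users who reached it)
setup_funnel = [
    ("start", 1000),
    ("personal details", 920),
    ("verify identity", 550),   # the big drop: the invasive question lived here
    ("link bank account", 500),
    ("finished", 470),
]

for step, pct in funnel_dropoff(setup_funnel):
    print(f"{step}: {pct}% dropped off before this step")
```

The step with the outsized percentage is where to point your session recordings and interviews next; the numbers tell you where, not why.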

The best data reveals not just what users are doing, but what they're struggling to do

Session recordings are brilliant for this (we use Hotjar or FullStory depending on the project). Watch 20-30 sessions and you'll spot patterns you'd never see in aggregate data—the double taps that indicate confusion, the form fields people fill in then delete, the features they try to access that don't exist yet. On an e-commerce app, we saw users repeatedly tapping product images trying to zoom in... a feature we hadn't built. That became our top priority pretty quickly.

But here's the thing; quantitative data tells you what's happening but qualitative data tells you why. Combine your analytics with support tickets, app store reviews, and user interviews. When the data says "users aren't using feature X" but reviews say "I wish this app could do X", you've probably got a discoverability problem, not a demand problem. That distinction matters when you're deciding what to build next. Sometimes research sessions fail completely though, and knowing what makes research sessions useless helps you avoid wasting time on bad data.

Testing Ideas Before You Build Them

Look, I've lost count of how many times clients have come to me with a feature they're absolutely certain users will love—only to discover later that nobody actually wants it. Building features is expensive; testing ideas beforehand is cheap. That's why we never skip validation, even when someone is convinced they've got the next big thing.

The simplest test is a landing page with nothing behind it. I mean, literally just a page that describes the feature and has a signup button. We did this for a fintech client who wanted to add cryptocurrency trading to their app. Rather than spend three months building it, we created a page explaining the feature and tracked how many users clicked "Get Early Access". Turns out only 2% showed interest—nowhere near enough to justify the development cost. Saved them around £80,000, that did.

Prototypes are your next level up. Tools like Figma let you create clickable mockups that feel real enough to test with actual users. We built a prototype for a healthcare app's symptom checker in about two days; watching five users try to navigate it revealed three major flaws we'd never have spotted otherwise. And fixing those issues in a prototype? Takes hours. Fixing them in production code? Weeks.

Beta testing with a small group of real users is where you catch the stuff that doesn't show up in prototypes. When we added a meal planning feature to a nutrition app, the prototype tested brilliantly, but the beta revealed that people found it too time-consuming to set up. We simplified the onboarding from 12 steps to 4, and retention jumped by 40%. The thing is, you need to make it easy for beta users to give feedback—we usually include a feedback button right in the interface and actually respond to what people tell us. Timing matters here too, and understanding when to launch your app's beta test can make the difference between useful feedback and wasted effort.

Sometimes the best test is just talking to people. Before building a social feature for an e-commerce app, we ran hour-long interviews with ten users. Half of them said they'd never use it because they didn't want their purchases to be public. That's a feature killer right there, discovered for the cost of a few gift cards rather than months of development. When dealing with sensitive topics though, you need to be careful about how you research without making users lie or give you the answers they think you want to hear.

Making Sense of Competitor Apps

I'll be honest, when I started doing competitor research properly, it felt a bit like being a detective. You're not just downloading rival apps and clicking through them—you're trying to understand why they made specific choices and whether those choices actually worked. The mistake I see people make is treating competitor research like a shopping list: they just copy features they see without understanding the context behind them. That's not research, that's just replication.

Here's what actually works. When I analyse competitor apps for clients, I focus on three specific things: what features they prioritised in their first release, how they've evolved those features over time (you can check old App Store screenshots and version histories for this), and most tellingly, what features they removed. If a competitor built something and then quietly took it away in a later update, that tells you loads about what users didn't find valuable. I've seen fintech apps remove entire budgeting features after investing months building them, because usage data showed nobody touched them.

What to Actually Look For

Don't just note what features exist. Pay attention to how they're implemented. Does the competitor bury a feature deep in settings, or is it front and centre? That placement tells you how important they think it is. Look at their onboarding flow too—what features do they explain immediately versus what they assume users will discover? When I was working on a healthcare app, we noticed competitors spent their entire onboarding explaining appointment booking but barely mentioned prescription refills. Turns out prescription refills had much higher usage but worse discoverability, which was a massive opportunity for us.

Reading User Reviews Properly

Competitor app reviews are honestly the closest thing you'll get to reviews of an app that doesn't exist yet. Sort by recent reviews and look for patterns in complaints. Are people asking for features that don't exist? Are they frustrated with how existing features work? I always create a simple spreadsheet tracking common requests across competitor reviews. For an e-commerce project, we found that three different competitor apps had users complaining about the same thing: difficulty tracking orders from multiple purchases. None of the competitors had addressed it properly, which became our differentiator.
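The spreadsheet logic boils down to this: tag each complaint with a theme as you read, then surface the themes that multiple competitors share. A small sketch, assuming you've been noting themes per competitor by hand (the app names and themes here are invented):

```python
from collections import defaultdict

# Hypothetical complaint themes tagged while reading each competitor's recent reviews
review_notes = {
    "CompetitorA": ["order tracking", "slow checkout", "order tracking", "returns"],
    "CompetitorB": ["order tracking", "search", "order tracking"],
    "CompetitorC": ["order tracking", "slow checkout"],
}

def shared_gaps(notes: dict[str, list[str]], min_competitors: int = 2):
    """Themes complained about across at least `min_competitors` apps,
    ranked by how widespread they are, then by total mentions.
    These are your differentiator candidates."""
    theme_apps = defaultdict(set)
    theme_counts = defaultdict(int)
    for app, themes in notes.items():
        for theme in themes:
            theme_apps[theme].add(app)
            theme_counts[theme] += 1
    gaps = [(theme, len(theme_apps[theme]), theme_counts[theme])
            for theme in theme_counts if len(theme_apps[theme]) >= min_competitors]
    return sorted(gaps, key=lambda g: (g[1], g[2]), reverse=True)

for theme, apps, mentions in shared_gaps(review_notes):
    print(f"{theme}: complained about in {apps} apps ({mentions} mentions)")
```

A theme that every competitor's users complain about, and that none of them has fixed, is exactly the "order tracking" style opportunity described above.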

Version history is your secret weapon. App Store listings show when updates were released and what changed. If a competitor suddenly added a feature after months of nothing, they probably saw data suggesting users needed it. You can piggyback on their research without spending the money they did.

One thing to watch out for—just because a competitor has a feature doesn't mean it's successful or that you need it. I've seen apps include features purely because investors or stakeholders demanded them, not because users wanted them. The best way to tell the difference? Check if the feature gets mentioned in their marketing materials. If they built something but don't promote it, it's probably not performing well. Features that work get shouted about; features that don't work get quietly maintained but never highlighted.

You should also pay attention to competitor pricing and monetisation. How do they charge for premium features? What's included in their free tier versus paid? This isn't just about copying their business model, it's about understanding what users are willing to pay for and what they expect for free. When I worked on an education app, we found competitors were charging for basic progress tracking, which review data showed users resented. By making that free and charging for more advanced analytics instead, we positioned ourselves better from day one. This kind of competitive analysis is part of broader market research methods that work for new apps entering established markets.

What to Analyse | Why It Matters | Where to Find It
Feature evolution over time | Shows what users actually need versus initial assumptions | App Store version history, old screenshots
Removed features | Reveals what didn't work despite investment | Version notes, Reddit discussions, tech blogs
Review patterns | Exposes gaps and frustrations you can address | Recent reviews sorted by helpfulness
Onboarding priorities | Shows which features they consider most valuable | Fresh app installs, competitor user flows
Marketing focus | Indicates which features actually drive conversions | App Store descriptions, website, ads

How to Decide Which Features to Build First

I've sat through hundreds of feature prioritisation meetings over the years and honestly, most teams get this bit completely wrong. They either build everything at once (which takes forever and costs a fortune) or they pick features based on who shouts loudest in the meeting room. Neither approach works, trust me.

The framework I use with clients is pretty straightforward—I call it the Impact-Effort matrix but don't let that fancy name put you off. You list every feature idea you've gathered from your research, then rate each one on two scales: how much value it brings to users (impact) and how difficult it is to build (effort). The sweet spot? High impact, low effort features. These are your quick wins, the ones you build first because they give users real value without eating up months of development time.
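A minimal sketch of that scoring, with made-up features rated 1 to 5 on each scale. The impact-to-effort ratio used here is just one simple way to make the quick wins float to the top; it's an illustration of the matrix, not a formal method:

```python
# Hypothetical feature ideas: (name, user impact 1-5, build effort 1-5)
features = [
    ("secure messaging",      5, 3),
    ("AI symptom checker",    4, 5),
    ("appointment reminders", 4, 1),
    ("dark mode",             2, 2),
]

def prioritise(items: list[tuple[str, int, int]]) -> list[tuple[str, int, int]]:
    """Rank by impact-to-effort ratio, highest first:
    high-impact, low-effort quick wins come out on top."""
    return sorted(items, key=lambda f: f[1] / f[2], reverse=True)

for name, impact, effort in prioritise(features):
    print(f"{name}: impact {impact}, effort {effort}, ratio {impact / effort:.1f}")
```

One caveat the ranking can't capture: table-stakes features (like the secure messaging example below) may need building first regardless of their score, so treat the output as a conversation starter rather than a final roadmap.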

Here's where it gets tricky though. Some features are what I call "table stakes"—things users expect as standard. When we built a healthcare app for a private clinic, secure messaging wasn't the most exciting feature on our list, but it was absolutely necessary before we could even think about the clever AI symptom checker they wanted. You've got to identify these foundational features and build them first, even if they seem boring.

I also look at dependencies between features. Sometimes Feature B can't exist without Feature A, which changes the entire priority order. For an e-commerce client, we wanted to build personalised product recommendations (the sexy bit) but we needed basic user accounts and purchase history tracking first. It's just common sense really, but you'd be surprised how often teams miss these connections.

The other thing I always consider is what I call the "wow moment"—that first experience that makes a user think "okay, this app gets it." For a fitness app we built, we discovered through testing that the workout logging feature wasn't the wow moment at all; it was the progress photos comparison tool that made people go "bloody hell, I can actually see the difference." We moved that feature way up the priority list and it made a massive difference to early retention rates.

One mistake I see constantly is teams trying to match competitors feature-for-feature from day one. Sure, competitive analysis matters (we covered that earlier) but if your competitor has 50 features after three years of development, you can't launch with all 50. Pick the core experience that solves your users main problem, then build outward from there. When we launched an app for a fintech startup, they were desperate to include budgeting tools, investment tracking, and spending analytics all at once. We convinced them to launch with just the spending insights feature—the one that solved the biggest pain point—and it worked brilliantly. Users loved it, we got real feedback, and we added the other features based on what people actually asked for rather than what we assumed they'd need.

I also factor in technical risk when prioritising features. If a feature requires integrating with a third-party API that might be unreliable, or involves new technology the team hasn't worked with before, that increases risk. Sometimes it's worth building a simpler version of a feature first just to prove the concept works before investing in the full implementation. For a food delivery app, we built a basic restaurant listing feature with manual updates before we tackled the complex real-time inventory integration the client wanted.

The final piece of the puzzle? Leave room for learning. I never plan to build 100% of features in the first version because I know—I absolutely know—that once real users start using the app, they'll tell us things we never expected. Maybe through their behaviour, maybe through direct feedback, but they will change our priorities. The apps that succeed are the ones that launch with a solid core feature set, then evolve based on real usage data rather than educated guesses made in a conference room six months earlier.

Avoiding the Feature Bloat Trap

Feature bloat is the silent killer of mobile apps, and I've watched it happen more times than I care to admit. The pattern is always the same—an app starts with a clear purpose, then stakeholders get excited and start adding "just one more feature" until the thing becomes a bloated mess that confuses users and crashes under its own weight. I worked on a fitness app once where the client wanted to add meal planning, social features, a marketplace, video content, and personal training scheduling all in version one. The app took twice as long to build, cost three times the budget, and when we launched, users complained it was "too complicated" and went back to simpler alternatives. Painful lesson learned.

The thing about feature bloat is it doesn't just make your app harder to use—it makes it harder to build, more expensive to maintain, and nearly impossible to market effectively. When someone asks what your app does, you should be able to explain it in one sentence. If you can't, you've probably got too many features fighting for attention. I use what I call the 80/20 rule for feature prioritisation: identify the 20% of features that will deliver 80% of the user value, then build those first. Everything else goes on a roadmap that you revisit after launch based on actual user behaviour, not guesses. Poor design choices around feature complexity often lead to design mistakes that make users delete apps quickly, which is exactly what you're trying to avoid.

Every feature you add is a promise to maintain, update, and support it forever, or at least until you remove it and deal with angry users who relied on it.

Start with your core value proposition and protect it fiercely. If a feature doesn't directly support that core purpose, it probably shouldn't be in version one. I've seen healthcare apps try to be social networks and e-commerce apps try to become content platforms, and it rarely works out well. Focus on doing one thing brilliantly rather than ten things poorly—you can always expand later once you've proven your core concept works and users are actually asking for more.

Conclusion

The difference between apps that succeed and those that fail usually comes down to one thing—building features people actually want to use. I've seen brilliant technical execution wasted on features nobody asked for, and I've watched simple apps with the right features dominate their markets. It's not about having the biggest budget or the fanciest tech stack; it's about listening properly and making smart decisions based on what you learn.

Throughout this guide we've covered methods I use on every single project—user interviews that dig past surface requests, analytics that reveal actual behaviour rather than stated preferences, prototype testing that saves thousands in wasted development. These aren't theoretical approaches; they're the same techniques I used when building a healthcare app where we discovered users didn't want the medication tracking feature we'd planned at all—they wanted a simple way to share their health data with family members. That insight saved the project.

But here's the thing—knowing these methods is only half the battle. The hard part is actually doing the research before you start building, especially when you're excited about your idea and just want to see it come to life. I get it. Starting with code feels productive while interviewing users feels slow. But spending two weeks on proper research can save you three months of building the wrong thing... and I've learned that lesson the expensive way more than once.

The mobile app world moves fast and user expectations keep rising. What worked last year might not work now. But if you make feature decisions based on genuine insight rather than assumptions, you'll be ahead of 90% of apps out there. Trust me on that one.

Frequently Asked Questions

How long should I spend on feature research before starting development?

In my experience, spending 2-3 weeks on proper research can save you months of building wrong features. I've seen projects where we discovered fundamental user needs had been misunderstood just by doing 10-15 user interviews and analysing existing data, which completely changed the roadmap and saved tens of thousands in development costs.

What's the biggest mistake teams make when asking users about features?

Asking users directly what features they want is almost useless—people are terrible at predicting what they'll actually use. Instead, ask them to walk through their last experience with a similar problem or watch how they currently solve it. The gap between what users say they want and what they actually use has cost my clients countless wasted features.

How can I tell if a feature request from users is worth building?

Look for the problem behind the request rather than the solution being suggested. When users asked for "more filter options" on an e-commerce app, they were really saying they couldn't find products quickly enough—the real solution was better search, not more filters. Focus on the underlying need and you'll often find simpler, more effective solutions.

Should I build features that my competitors have?

Only if your research shows users actually need them. I've seen apps waste months building AR try-on features because competitors had them, only to discover their users (over 55s) just wanted faster checkout. Check competitor app reviews to see what users complain about—that's often more valuable than copying their feature list.

How do I prioritise which features to build first?

Use an impact-effort matrix—rate each feature on user value versus development difficulty, then start with high-impact, low-effort wins. But also identify "table stakes" features that users expect as standard before you can add anything clever. I always build the core problem-solving feature first, then expand based on actual user behaviour rather than assumptions.

What's the best way to test feature ideas before building them?

Start with a simple landing page describing the feature and track signup interest—this saved one fintech client £80k when only 2% of users showed interest in cryptocurrency trading. Then create clickable prototypes in Figma to test navigation and flow. Beta testing with 10-20 real users catches issues that don't show up in prototypes and can dramatically improve retention.

How many features should I launch with in version one?

Focus on the 20% of features that deliver 80% of user value—usually 3-5 core features maximum. I've seen apps fail because they tried to launch with everything at once, making them too complex and expensive to build. Pick the one feature that creates your "wow moment" and build outward from there based on real user feedback.

How do I avoid feature bloat as my app grows?

Every feature you add is a promise to maintain and support it forever, so be ruthless about protecting your core value proposition. If you can't explain what your app does in one sentence, you've probably got too many features fighting for attention. I use the rule that new features must directly support the main purpose—everything else goes on a roadmap for later consideration.
