Expert Guide Series

Should I Listen to All User Feature Requests or Ignore Some?

A major automotive manufacturer launched their parking app with high hopes a few years back—one that promised to revolutionise how drivers found and paid for parking spaces. Within weeks of launch, they were drowning in feature requests. Users wanted integration with every conceivable payment method, real-time traffic updates, electric vehicle charging station locations, service reminders, and even a feature to track their vehicle's mileage. The product team started building everything users asked for. Six months later, the app had become a bloated mess that took 12 seconds to load and crashed more often than it worked; the core parking functionality got buried under layers of half-baked features that nobody actually used once they were built.

I've watched this exact scenario play out dozens of times across different industries—from healthcare apps that tried to become everything to everyone, to fintech platforms that lost sight of their main value proposition. The question isn't really whether you should listen to user feedback (you absolutely should); it's about knowing which feedback deserves action and which needs a polite decline. And honestly? Most founders and product managers get this spectacularly wrong because they confuse listening with agreeing.

Every feature request is a data point, but not every data point should become a feature

What makes this tricky is that users are rarely wrong about their problems—they're just often wrong about the solutions. Someone requesting a dark mode might actually be struggling with eye strain from poor contrast ratios. A request for more customisation options might signal that your onboarding flow isn't properly segmenting users. The skill lies in hearing what users are really saying beneath their specific feature requests, and that takes experience to develop properly.

The Uncomfortable Truth About User Feedback

Users will tell you they want one thing, then do something completely different. I've seen this happen dozens of times—a healthcare app client of mine spent three months building a detailed meal planning feature because users kept requesting it in surveys. When we launched it? Less than 8% of users even opened it once. The thing is, those same users were spending most of their time on the simple medication reminder feature we almost didn't build because nobody asked for it.

Here's what I've learned after building apps for e-commerce brands, fintech companies, and education platforms: user feedback isn't wrong, but it's often incomplete. People are terrible at predicting what they'll actually use. They'll request complex features that sound impressive but don't fit into their daily habits. A retail client had users demanding a visual search feature—seemed like a no-brainer, right? We built it. Turns out people preferred typing because it was faster and more precise for what they actually needed.

The real issue is that user feedback tells you about problems, not solutions. When someone says "I want feature X," what they're really saying is "I have problem Y and I think X will fix it." Your job is to dig into problem Y, not just build feature X. I learned this the hard way on a travel booking app where users kept asking for more filter options. We added filters. They still complained. What they actually needed was better default sorting—they didn't want more choices, they wanted fewer, better ones.

User feedback is valuable data, but it can't be your product roadmap. You need to look at what users do, not just what they say; analyse usage patterns, drop-off points, and where people spend their time. That tells you more than any survey ever will.

Why Your Most Vocal Users Might Be Wrong

The loudest voices in your user base are rarely representative of your entire audience—and I learned this the hard way on a fintech project where our most active forum users kept demanding a feature to manually categorise every single transaction. We spent six weeks building it. Want to know how many people actually used it? Less than 2% of our active users. The silent majority just wanted automatic categorisation that worked well enough, not perfect control over every detail.

Here's the thing, though: your vocal users aren't wrong because they're being difficult or unreasonable. They're wrong because they represent a specific subset of your user base—typically power users who have very different needs from casual users. I've seen this pattern repeat itself across healthcare apps, e-commerce platforms, and educational tools. The people who take time to submit detailed feature requests are often the ones who use your app in ways you never intended, pushing it to its limits. That's valuable feedback, sure, but it's not your typical user experience.

The real problem starts when you build your entire roadmap around these vocal requests. I worked with an e-commerce client who kept adding advanced filtering options because their power users demanded them. Meanwhile, their conversion rates dropped because new users found the interface overwhelming. We eventually had to strip out half the features we'd added and rebuild the onboarding flow from scratch. Bloody expensive mistake that could've been avoided with proper validation.

Who's Actually Talking to You

Understanding who submits feedback helps you weigh it properly. In most apps I've built, the feedback breakdown looks something like this:

  • Power users (5-10% of base) submit 60-70% of feature requests
  • Casual users (70-80% of base) submit maybe 15% of requests
  • New users (first 30 days) submit 10-15% but often highlight onboarding issues
  • Churned users rarely tell you why they left unless you specifically ask them

The silent majority—your casual users who open the app occasionally and just want it to work—they're not sending you emails. They're not filling out surveys. But they're the ones paying your bills through subscriptions or ad revenue. When I'm working on a healthcare app, I always remind myself that the doctor who wants 47 customisation options for their dashboard isn't representative of the GP who just needs to check patient notes between appointments. The latter group is much larger, but you'd never know it from your inbox.

Track who's giving you feedback against their usage patterns. If someone's been using your app daily for two years, their needs are fundamentally different from someone in their first week—and you need to weight that feedback accordingly in your decision-making process.
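One way to make that weighting concrete is to compare each segment's share of requests against its share of your user base. This is a hypothetical sketch using the rough percentages from the list above purely for illustration; the segment names and figures are assumptions, not real analytics output:

```python
# Illustrative only: figures are the rough segment shares quoted above.
# A ratio above 1.0 means a segment is over-represented in your inbox
# relative to its actual size, so its requests need discounting.

segments = {
    #           share of user base, share of feature requests
    "power":  {"users": 0.08, "requests": 0.65},
    "casual": {"users": 0.75, "requests": 0.15},
    "new":    {"users": 0.17, "requests": 0.12},
}

for name, s in segments.items():
    loudness = s["requests"] / s["users"]
    print(f"{name}: {loudness:.1f}x share of requests vs share of users")
```

Under these assumed numbers, power users show up roughly eight times louder than their actual footprint, while casual users are almost silent—which is exactly why raw request counts mislead.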

The Expertise Trap

Vocal users often become pseudo-experts in your app, and that expertise creates a blind spot. They've learned workarounds for your app's limitations and now they're requesting features that cement those workarounds into the product. I've seen this happen with a logistics app where experienced drivers wanted a complex multi-stop route editing feature because they'd learned to game the existing system. New drivers didn't need it; they needed clearer navigation instructions. We built the complex feature first because the vocal users were so insistent. Six months later, our onboarding completion rate was terrible and new driver retention was dropping. We ended up building a simplified "beginner mode" that became the default for everyone, which annoyed our power users but actually improved overall engagement by 30%. Sometimes the best decision is the one that makes your loudest users temporarily unhappy but serves the silent majority better.

Building a Framework That Actually Works

After years of dealing with feature requests across dozens of apps, I've developed a scoring system that saves me from making expensive mistakes. It's not fancy but it works. Every request gets scored on three things: how many people actually need it (not just want it), how much it aligns with our core app purpose, and the technical effort required. If something scores low on all three? That's a polite no. High on one but low on the others? Needs more investigation.
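The scoring idea above can be sketched in a few lines of code. This is a minimal illustration, not the exact system I use—the 1-to-5 scales, thresholds, and function names are all assumptions chosen to show the shape of the triage logic:

```python
# Minimal sketch of the three-criterion triage described above.
# Scales and thresholds are illustrative assumptions, not fixed rules.

from dataclasses import dataclass

@dataclass
class FeatureRequest:
    name: str
    reach: int      # 1-5: how many users genuinely need it (not just want it)
    alignment: int  # 1-5: fit with the app's core purpose
    effort: int     # 1-5: technical feasibility (5 = cheap to build)

def triage(req: FeatureRequest) -> str:
    total = req.reach + req.alignment + req.effort
    if total <= 6:
        return "decline"       # low on all three: polite no
    if max(req.reach, req.alignment, req.effort) >= 4 and total < 10:
        return "investigate"   # strong on one axis, weak elsewhere
    return "consider"

print(triage(FeatureRequest("dark mode", reach=4, alignment=2, effort=3)))
```

The point isn't the exact numbers; it's that writing the criteria down as explicit rules forces the low-scoring requests into a "no" before anyone gets emotionally attached to them.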

I learned this the hard way with a healthcare app where the client wanted to add a complex symptom checker because three users requested it. We nearly spent six weeks building it before I looked at the data properly—only 2% of users ever mentioned anything related to symptom checking, and it would've pulled focus from the appointment booking feature that 89% of users relied on daily. The scoring system would've flagged this immediately. These days I run every request through it before even discussing timelines.

The framework also includes a "why now" test. Sure, dark mode might be a good idea, but if your app crashes on Android 13 for 30% of users? Dark mode can bloody well wait. I keep a backlog spreadsheet with all requests categorised as critical (affects core functionality), important (enhances main use cases), or nice-to-have (everything else). During sprint planning we only pull from the first two categories unless we've got spare capacity, which honestly... we never do.

What makes this work is being transparent about it. I share the scoring criteria with clients so they understand why some features get prioritised over others. It takes the emotion out of decision-making and grounds everything in user data and business impact. No more heated debates about whether we should add that social sharing feature that literally nobody will use—the framework gives us the answer before the argument even starts.

When to Say No to Feature Requests

Saying no is bloody hard, isn't it? I mean, you've got users who love your app enough to actually take the time to suggest improvements—that's rare these days—and your instinct is to make them happy. But here's what I've learned after building dozens of apps: saying yes to everything is the fastest way to build a bloated mess that nobody wants to use.

I worked on a healthcare app a few years back where we had about thirty feature requests sitting in our backlog. The users wanted everything from dark mode to medication reminders to integration with five different fitness trackers. Sounds reasonable? Sure, until you realise that adding all of those would have pushed our release date back six months and tripled our development costs. We needed a system to figure out what actually mattered.

The Three-Question Filter

I use three questions before agreeing to build anything: Does this align with our core purpose? Will more than 20% of our users actually use it? And can we build it without compromising the experience for everyone else? That last one trips people up. Adding features isn't free—each one makes your app more complex, harder to maintain, and potentially slower.

Every feature you add is a promise to maintain it forever, and that promise has a cost most founders don't anticipate

The healthcare app? We ended up saying no to eighteen of those thirty requests. We focused on the medication reminders and one fitness tracker integration because our data showed those genuinely solved problems for our core users. The rest were nice-to-haves that would've diluted what made the app special in the first place. Six months after launch, our retention rate was 40% higher than the industry average... and nobody even remembered asking for those other features.

The Difference Between What Users Say and What They Need

Here's something I've learned the hard way—users are terrible at diagnosing their own problems. They're brilliant at experiencing pain points, but rubbish at prescribing solutions. I once worked on a fitness tracking app where users kept requesting a feature to manually log every single meal with precise calorie counts. They were adamant this was what they needed. But when we dug deeper through actual usage data and proper interviews, what they really wanted was to feel in control of their nutrition without spending 20 minutes a day entering data. We built a simple photo-based meal logger with AI estimation instead, and engagement went up by 60%.

The thing is, users communicate in features because that's the language they think developers speak. Someone says "I need a dark mode" but what they're actually telling you is "your app hurts my eyes at night." The solution might be dark mode... or it might be adjusting your colour contrast, reducing bright whites, or adding an auto-brightness feature. You won't know unless you look past the request to the underlying need.

What Users Actually Tell You

When analysing feedback, I always break down what users are really communicating:

  • Requested features are symptoms, not diagnoses—they point to problems worth solving
  • Emotional language reveals priority levels; "frustrating" matters more than "nice to have"
  • Workarounds users create show you what's genuinely important to their workflow
  • Feature requests that come up across different user segments signal core functionality gaps
  • Timing of feedback matters—requests during onboarding vs after months of use tell different stories

I've seen apps add dozens of requested features only to watch engagement drop because they solved the wrong problems. The skill isn't in collecting feedback, it's in translating what users say into what they actually need from your app.

How to Validate Features Before Building Them

Here's what I do before writing a single line of code for any feature—I test it without building it. Sounds mad, doesn't it? But I've saved clients hundreds of thousands of pounds by figuring out what won't work before we've committed to months of development. The validation stage is where you separate genuinely valuable features from expensive mistakes.

I worked on a fintech app where users kept asking for cryptocurrency trading. The requests were loud and frequent. But when we created a simple landing page describing the feature and tracked sign-ups, only 2% of our active user base showed interest. That's not enough to justify the regulatory nightmare and development cost. We validated the demand before building something nobody actually wanted enough to use.

Quick Validation Methods That Actually Work

You don't need fancy tools to test features. I use these techniques regularly, and they've never failed to provide useful data:

  • Fake door testing—add a button for the feature that leads to a "coming soon" message and track clicks; if less than 10% of users click it, the demand isn't there
  • Prototype testing with tools like Figma or Marvel—show users a clickable mock-up and watch how they interact with it; you'll spot confusion within minutes
  • Beta programmes with a small group of actual users—I usually pick 20-30 people who represent your target audience and give them early access
  • A/B test the feature request itself—send different messaging to user segments and measure response rates
  • Manual concierge approach—manually perform what the feature would do before automating it; we did this for a healthcare app's prescription reminder system
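The fake-door approach in particular reduces to counting impressions against clicks and checking the result against a threshold. Here's a minimal sketch; the event shape, field names, and the 10% cutoff from above are all assumptions for illustration, not a real analytics API:

```python
# Hypothetical fake-door measurement: who saw the "coming soon" button
# versus who clicked it. Event format is an assumption for illustration.

def fake_door_interest(events: list[dict], feature: str) -> float:
    """Fraction of users who saw the fake-door button and clicked it."""
    seen = {e["user"] for e in events
            if e["feature"] == feature and e["type"] == "impression"}
    clicked = {e["user"] for e in events
               if e["feature"] == feature and e["type"] == "click"}
    return len(clicked & seen) / len(seen) if seen else 0.0

events = [
    {"user": "a", "feature": "crypto", "type": "impression"},
    {"user": "b", "feature": "crypto", "type": "impression"},
    {"user": "c", "feature": "crypto", "type": "impression"},
    {"user": "a", "feature": "crypto", "type": "click"},
]

rate = fake_door_interest(events, "crypto")
if rate < 0.10:  # the rough threshold mentioned above
    print("Demand isn't there — don't build it yet")
```

Counting unique users rather than raw clicks matters: one enthusiastic power user hammering the button shouldn't look like broad demand.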

The Numbers Don't Lie

I set clear success metrics before validation begins. If you're testing a new checkout flow, you need to know what improvement you're looking for. Is it a 15% reduction in cart abandonment? A 20% increase in conversion? Without specific targets, you can't tell if the feature is worth building. And here's the thing—validation often shows that tweaking existing features delivers better results than building new ones. That's been true in about 40% of the projects where I've run proper validation.

Build a minimum testable version in one week maximum; if you're spending longer than that on validation, you're overthinking it and probably building too much already.

Managing Expectations Without Losing Trust

This is where most teams mess up—they either promise everything and deliver nothing, or they're so vague about roadmap decisions that users feel completely ignored. I've found that being honest about your decision-making process actually builds more trust than trying to please everyone. When we tell users "that's a great idea, but here's why we're not building it right now", they respect the transparency. It's not about saying no and walking away; it's about explaining the why behind your choices.

One approach that's worked really well for us is creating visibility into what we call our "consideration pipeline". Not a full roadmap (because those change and that causes problems), but a simple system that shows users we've logged their request, we're evaluating it, and here's roughly where it sits in our thinking. For a healthcare app we built, we used a basic Trello board that users could view—requests moved through columns like "under review", "researching feasibility", "planned for future", or "not aligned with our direction". Sounds simple, but the number of support tickets dropped by about 40% because people could see their voice was actually heard.

The language you use matters here too. Instead of "we can't do that", try "we're focusing our resources on X because it impacts more users" or "that doesn't fit our current direction, but here's how feature Y might solve your underlying need". I mean, you're still saying no, but you're giving context. And you know what? Most users are reasonable—they just want to know their feedback went somewhere other than a black hole. When you decline a request, offer an alternative solution or workaround if one exists... even if it's not perfect, it shows you care about solving their actual problem, not just ticking boxes. This transparency becomes especially important when managing negative feedback during critical launch periods.

Balancing Vision With Reality

Here's what I've learned after building dozens of apps—your product vision isn't some sacred document that should never be questioned, but it shouldn't be abandoned every time a user asks for something either. I mean, it's a bit mad really how many founders swing between these two extremes. They either treat their vision like gospel or they become so user-driven that their app turns into a Frankenstein's monster of disconnected features.

I worked on a healthcare booking app where we had this clear vision: make appointment scheduling stupidly simple, three taps maximum. Users kept asking for features like detailed doctor profiles, reviews, before-and-after photos, integration with their calendar app, the ability to book multiple appointments at once... you get the picture. Each request made sense in isolation. But if we'd built them all? We would've ended up with another bloated healthcare platform that nobody actually uses because it's too overwhelming.

The trick—and I say trick but it's more like a discipline—is filtering requests through your core vision whilst staying honest about whether that vision still makes sense. Sometimes user feedback reveals that your original vision was based on assumptions that don't hold up in reality. That's not failure; that's learning. I've had to pivot apps mid-development because user testing showed us we'd fundamentally misunderstood the problem we were solving. This is where breaking down your app concept into manageable phases becomes crucial for maintaining focus.

Your vision should be strong enough to guide decisions but flexible enough to bend when reality proves you wrong

The best approach? Set clear principles for what your app will and won't do, then test every feature request against those principles. If a request aligns with your vision and solves a real problem for a significant portion of your users (not just the loudest ones), build it. If it doesn't, have the confidence to say no—even when that feels uncomfortable. Understanding which features to prioritise first can help maintain this balance between user demands and product vision.

Conclusion

After building apps for nearly a decade—from healthcare platforms processing thousands of patient records daily to fintech apps handling sensitive transactions—I can tell you that managing feature requests never gets easier, but it does get clearer. You'll still have users who swear they'll delete your app if you don't add their suggested feature (they usually don't), and you'll have stakeholders pushing for functionality that makes absolutely no sense for your core user base. But here's what I've learned: the apps that succeed aren't the ones that build everything users ask for; they're the ones that build what users actually need, even when those two things seem completely different.

Look, I've made mistakes on both sides of this. I've ignored feedback that would've saved us months of poor retention, and I've built features that maybe three people ever used because they seemed like good ideas at the time. The difference between those early projects and the successful ones we ship now? We have a system. We validate before we build, we look at behaviour over words, and we're not afraid to tell users no when their request would compromise the experience for everyone else.

The truth is, your users don't expect you to build every feature they suggest. What they do expect is to be heard, to understand why decisions are made, and to see that you're genuinely trying to solve their problems—even if the solution looks different to what they originally requested. That's the balance you're aiming for. Not saying yes to everything, not ignoring everyone, but building thoughtfully with a clear vision that can flex when the data supports it. And honestly? When you get that balance right, your app becomes something people genuinely want to use, not just something that ticks boxes on a feature list.

Frequently Asked Questions

How do I know if a feature request is worth building or just noise from vocal users?

I use a three-question filter: does it align with your core purpose, will more than 20% of users actually use it, and can you build it without compromising everyone else's experience? From my experience, vocal users typically represent only 5-10% of your base but submit 60-70% of requests—they're valuable but not representative of your silent majority who actually pay the bills.

What's the fastest way to validate a feature request without spending months building it?

Fake door testing works brilliantly—add a button for the feature that leads to a "coming soon" message and track clicks. If less than 10% of users click it, the demand isn't there. I've saved clients hundreds of thousands by using this method; one fintech app we worked on had users screaming for crypto trading, but only 2% showed actual interest when we tested it this way.

How do I say no to feature requests without losing users or damaging relationships?

Be transparent about your decision-making process rather than just saying "we can't do that." I tell users "that's a great idea, but here's why we're not building it right now" and explain the reasoning—whether it's resource allocation or alignment with core vision. Most users are reasonable; they just want to know their feedback didn't disappear into a black hole.

Should I prioritise requests from long-term power users or focus on new user needs?

This depends on your business model, but generally new user needs should take priority unless power users represent significant revenue. I've seen apps add complex features for experienced users that made onboarding terrible for newcomers—one logistics app saw 30% better engagement when we built a simplified "beginner mode" that became the default, despite annoying our vocal power users initially.

How do I tell the difference between what users say they want and what they actually need?

Look at their behaviour, not just their words—users are brilliant at experiencing problems but terrible at prescribing solutions. When someone requests dark mode, they might actually be struggling with poor contrast ratios. I always ask "what problem are you trying to solve?" rather than just building the requested feature, because the real solution is often completely different.

What's the biggest mistake teams make when handling feature requests?

Confusing listening with agreeing—most founders think they need to build everything users ask for to show they care. I've watched apps become bloated messes because teams said yes to everything; one automotive parking app took 12 seconds to load after six months of building every request, and the core functionality got buried under features nobody actually used.

How many feature requests should I expect, and how do I manage the volume?

In my experience, you'll get flooded within weeks of launch if your app gains traction—it's actually a good sign. I keep a backlog spreadsheet categorised as critical (affects core functionality), important (enhances main use cases), or nice-to-have, and only pull from the first two categories during sprint planning. The key is having a system before you need it, not scrambling to create one when you're drowning in requests.

When should I ignore my product vision and pivot based on user feedback?

Your vision should guide decisions but bend when reality proves you wrong—I've pivoted apps mid-development when user testing revealed we'd fundamentally misunderstood the problem. The trick is filtering requests through your core principles whilst staying honest about whether those principles still make sense. If multiple user segments consistently struggle with something your vision doesn't address, it's time to reconsider your assumptions.
