A dating app company spent six months building a video chat feature after users kept requesting it in feedback surveys. They launched it to great fanfare, complete with a PR campaign and featured placement on their home screen. Three months later, less than two percent of their active users had tried it even once, and engagement with their core matching features had dropped by fourteen percent because the interface had become cluttered and confusing. The feature cost them £140k to build and was quietly removed in a subsequent update.
The features that sound most exciting in planning meetings often become the most expensive mistakes in production.
I've been building mobile apps for a decade now, and this pattern repeats itself across every industry we work in. A team gets excited about adding something new, something that feels fresh and different, something their competitors might not have yet, and they convince themselves it's the missing piece that will transform their business. The problem is that great features don't automatically create great apps; adding more functionality can actually make your product worse. Every feature you build carries costs that extend far beyond the initial development budget, and the wrong additions can quietly undermine the parts of your app that were already working well. The challenge is learning to say no to ideas that sound good but don't serve your actual business goals, and that's much harder than it sounds when you're sitting in a room full of stakeholders who all have opinions about what the app should do.
The Real Cost of Building Features Nobody Uses
When you ask someone what it costs to build a feature, they'll usually quote you the development time. A simple feature might be two weeks of work, a complex one could be eight weeks or more. That's anywhere from £8,000 to £40,000 depending on the rates you're paying. But that's just the beginning of what you'll actually spend.
Every new feature needs design work before development starts, which means wireframes, user flows, visual design, and often several rounds of revisions. It needs testing across different devices and operating systems, which takes time every single release cycle from that point forward. It needs documentation for your team, and if it's user-facing it might need help content, tutorial screens, or support materials.
Then there's maintenance. Features break when operating systems update. They need refinement when you discover edge cases you didn't anticipate. They require server capacity if they involve backend processing, and that's an ongoing cost that compounds over time.
We worked with a fitness app that added social sharing features, allowing users to post their workouts to various platforms. Development took six weeks. But maintaining those integrations as each social platform updated their APIs cost them roughly fifteen hours per month of developer time for the next two years. That's an extra £45k they hadn't budgeted for when they greenlit the feature.
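The arithmetic behind that figure is worth making explicit. Here's a minimal sketch; the hourly rate is an assumption on our part (roughly £125/hour reproduces the numbers above), not a figure from the project itself:

```python
# Ongoing maintenance cost of the social-sharing feature (illustrative figures).
HOURS_PER_MONTH = 15   # developer time spent keeping up with API changes
MONTHS = 24            # two years of maintenance
HOURLY_RATE = 125      # assumed blended developer rate in GBP

maintenance_cost = HOURS_PER_MONTH * MONTHS * HOURLY_RATE
print(f"Maintenance over two years: £{maintenance_cost:,}")  # £45,000
```

Small recurring numbers compound quickly: fifteen hours a month sounds trivial in a planning meeting, but over two years it exceeds the original six weeks of build time.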
The worst part is what happens when usage is low. That fitness app discovered only nine percent of users ever tried the sharing feature, and just three percent used it more than once. They'd spent roughly £70k building and maintaining something that the vast majority of their users completely ignored, and removing it would require another round of development work to ensure nothing broke in the process.
How User Data Should Actually Drive Your Roadmap
Most teams collect feedback from users through surveys, support tickets, and app store reviews. They tally up the requests, see what gets mentioned most frequently, and build those features. This sounds logical but it's often the wrong approach.
The users who contact you are not representative of your entire user base. They're the most vocal minority, often the most engaged or most frustrated, but they don't speak for the quiet majority who simply use your app without commenting. A healthcare app we worked with received dozens of requests for a symptom checker feature. But when we analysed their actual usage data, we found that eighty-one percent of their active users came to the app for one specific function, appointment booking, and spent less than three minutes per session completing that task.
Building a complex symptom checker would have cost them £120k and added several new screens to their navigation. It would have made their core appointment booking flow less accessible. We recommended they focus on making that booking process faster instead, reducing it from eight taps to three. That cost them £18k to implement and increased their booking completion rate by twenty-three percent.
Look at what users do, not just what they say. Track which features get used daily versus weekly versus never. Measure how long users spend in different sections. Watch where they abandon flows without completing actions. This behavioural data tells you where to invest your resources.
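As a sketch of what that behavioural analysis can look like in practice, here's a minimal example that measures, per feature, what share of users come back to it on more than one day. The event-log shape and field names are assumptions for illustration, not a specific analytics API:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log: (user_id, feature, date of use)
events = [
    ("u1", "booking", date(2024, 1, 1)),
    ("u1", "booking", date(2024, 1, 2)),
    ("u1", "symptom_checker", date(2024, 1, 5)),
    ("u2", "booking", date(2024, 1, 1)),
]

# Collect the distinct days each user touched each feature
active_days = defaultdict(set)
for user, feature, day in events:
    active_days[(user, feature)].add(day)

# For each feature: who tried it at all, and who came back on a second day
users_by_feature = defaultdict(set)
repeat_users = defaultdict(set)
for (user, feature), days in active_days.items():
    users_by_feature[feature].add(user)
    if len(days) >= 2:
        repeat_users[feature].add(user)

for feature, users in users_by_feature.items():
    repeat_rate = len(repeat_users[feature]) / len(users)
    print(f"{feature}: repeat usage {repeat_rate:.0%}")
```

The repeat-usage rate is often more revealing than raw adoption: a feature everyone tries once and abandons looks healthy in adoption numbers but is dead weight in practice.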
The other problem with user requests is that people often ask for specific solutions rather than describing their actual problems. Someone might request a dark mode because they use the app at night and the bright screen bothers them. But maybe the real solution is adjusting your colour contrast or reducing animation intensity, which would help far more users without the complexity of maintaining an entire secondary theme.
The Technical Debt Problem That Compounds With Every New Feature
Technical debt is what happens when you build something quickly or take shortcuts during development, creating code that works now but will cause problems later. Every app accumulates some technical debt, but every new feature accelerates how quickly it builds up.
Each feature integrates with your existing codebase in multiple places. It touches your data models, your user interface components, your navigation logic, your analytics tracking. If that feature was built under time pressure or requirements changed during development, you end up with code that's tangled and fragile. Six months later when you want to update something else, you discover you can't make changes without risking breaking that feature you added.
I've seen this pattern countless times. An e-commerce app we inherited had nineteen different promotional feature types, each one added for a specific campaign or season over the previous three years. The codebase had become so intertwined that making a simple change to the checkout flow required testing against all nineteen promotion types to ensure nothing broke. Every new feature release took longer than the last because the testing burden kept growing.
The Refactoring Cycle
At some point, technical debt reaches a threshold where your team spends more time working around problems than building new things. You're then forced into a refactoring project, essentially rebuilding portions of your app to clean up the accumulated mess. This costs tens of thousands of pounds and delivers no visible improvements to users, making it a difficult conversation with stakeholders who want to see new features instead.
The e-commerce app needed four months of refactoring work before they could implement their planned updates for the next year. That's £80k that could have been avoided if they'd been more selective about which promotional features to build in the first place, or had allocated time to clean up the code as they went along.
When Features Compete For Attention in Your Interface
Your app's interface has limited real estate, and every feature you add competes for space within that constraint. Navigation bars typically accommodate four to five items comfortably. Home screens can highlight maybe three or four primary actions before becoming cluttered. Adding a sixth feature means something else gets pushed into a menu, buried under more taps, or removed entirely.
A delivery app we worked with kept adding features over two years. They started with restaurant delivery, then added grocery delivery, then alcohol delivery, then pharmacy delivery, then flowers and gifts. Each vertical had its own category on the home screen. By the time they contacted us, their interface was overwhelming. New users didn't know where to start. Testing showed people spent an average of eight seconds looking at the home screen before choosing something, compared to three seconds in their earlier version. That hesitation might seem small but it reduced overall order frequency by eleven percent.
Every feature you add makes everything else in your app slightly harder to find.
We redesigned their navigation to prioritise their two highest-revenue categories and consolidated everything else into a single "More" section. Orders increased within three weeks. Some users complained about the change, particularly those who used the less common categories, but the overall business metrics improved substantially because the majority of users could now complete their most common tasks more quickly.
The Paradox of Choice
More options don't create better experiences. Research in behavioural psychology shows that too many choices lead to decision paralysis and decreased satisfaction. This applies directly to app design. If users see fifteen things they could do, they're less likely to do any of them compared to seeing three clear options. The delivery app's original interface offered value to users who wanted all those options, but it created friction for the larger group who just wanted to order dinner quickly.
Measuring What Matters Beyond Download Numbers
Downloads tell you almost nothing about whether your app is succeeding. I've worked with apps that had hundreds of thousands of downloads but terrible retention rates, meaning people tried them once and never returned. I've worked with apps that had modest download numbers but exceptional engagement metrics, meaning users opened them daily and spent meaningful time inside.
The metrics that actually matter are retention (what percentage of users return after day one, day seven, day thirty), session frequency (how often users open your app), session duration (how long they stay), and completion rates (whether they finish the tasks they started). For apps with business models, you need to track conversion rates, average order values, and customer lifetime value. Understanding meaningful metrics beyond vanity measurements is crucial for making informed product decisions.
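Day-N retention can be computed directly from install dates and open events. A minimal sketch, with hypothetical data shapes (note that teams define day-N differently, so pick one definition and apply it consistently):

```python
from datetime import date

# Hypothetical data: install date per user, and the dates they opened the app
installs = {"u1": date(2024, 1, 1), "u2": date(2024, 1, 1), "u3": date(2024, 1, 2)}
opens = {
    "u1": [date(2024, 1, 2), date(2024, 1, 8)],
    "u2": [date(2024, 1, 2)],
    "u3": [],
}

def day_n_retention(installs, opens, n):
    """Share of users who opened the app exactly n days after installing.
    (Some teams count 'n days or later' instead; either works if used consistently.)"""
    retained = sum(
        1 for user, installed in installs.items()
        if any((opened - installed).days == n for opened in opens.get(user, []))
    )
    return retained / len(installs)

print(f"Day-1 retention: {day_n_retention(installs, opens, 1):.0%}")
print(f"Day-7 retention: {day_n_retention(installs, opens, 7):.0%}")
```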
A fintech app we worked with was focused on growing their user base. They spent heavily on acquisition, paying £8 per install on average. They hit their goal of reaching fifty thousand users. But their day-seven retention rate was just eighteen percent, meaning eighty-two percent of people who downloaded the app hadn't opened it again within a week. They were essentially throwing away £6.56 of every £8 they spent on acquisition.
Retention Over Acquisition
We shifted their focus to improving retention before scaling acquisition further. We simplified their onboarding flow, added personalisation to their home screen based on user behaviour, and improved the speed of their most common transactions. Day-seven retention increased to thirty-nine percent over the next four months. That meant their effective cost per retained user dropped from £44 to £20. They could now afford to spend more on acquisition because each user delivered more long-term value.
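The cost-per-retained-user maths is straightforward: divide the cost per install by the retention rate, since each retained user effectively absorbs the spend on those who churned. Reproducing the figures above (minor rounding differences aside):

```python
cost_per_install = 8.00   # GBP per install, as above
retention_before = 0.18   # day-seven retention before the changes
retention_after = 0.39    # day-seven retention after

def cost_per_retained_user(cpi, retention):
    # Each retained user carries the acquisition cost of the churned ones too
    return cpi / retention

wasted_per_install = cost_per_install * (1 - retention_before)  # spend on churned users

print(f"Wasted per install: £{wasted_per_install:.2f}")                           # about £6.56
print(f"Before: £{cost_per_retained_user(cost_per_install, retention_before):.2f}")  # about £44
print(f"After:  £{cost_per_retained_user(cost_per_install, retention_after):.2f}")   # about £20
```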
The temptation is to measure success by vanity metrics like total users or total downloads. But what matters is whether those users find enough value to keep coming back, and whether their behaviour supports your business model. An app with ten thousand active users who engage daily is more valuable than an app with one hundred thousand downloads where ninety thousand people tried it once and deleted it. Understanding why users abandon apps can help prevent churn and improve retention strategies.
Building What Users Need vs What They Ask For
There's a famous quote about Henry Ford saying that if he'd asked people what they wanted, they would have said faster horses. Whether he actually said it or not, the principle holds true. Users can tell you their problems but they're rarely qualified to design solutions.
An education app we worked with received consistent feedback requesting more course content. Users said they wanted more topics, more lessons, more material to work through. The team started planning an expansion of their content library, which would have cost them roughly £200k to produce and added months to their development timeline.
But we looked at their completion data and found something different. Only twenty-seven percent of users who started a course actually finished it. The problem wasn't lack of content, it was that users weren't engaging with what already existed. We recommended they focus on improving completion rates first, adding progress tracking, reminder notifications, and breaking longer lessons into shorter segments.
When users request features, ask them what problem they're trying to solve rather than immediately building what they describe. Often there's a simpler solution that addresses the underlying need without the complexity of their suggested implementation.
Those changes cost them £35k and increased course completion rates to forty-four percent. More users were now getting value from the existing content, which led to better retention and more word-of-mouth referrals. If they'd built more content first, they would have made the completion problem worse by giving users even more material they wouldn't finish. This mirrors patterns we see in other industries where conversion-focused features outperform feature bloat.
Sometimes the challenge is building solutions that work across different user segments simultaneously. For example, when developing business tools, you might need to design features that scale across operations of different sizes rather than building separate feature sets for each group.
Early warning signs that your feature strategy isn't working often appear in your data before users start complaining. Knowing what metrics indicate strategy problems can help you pivot before investing too heavily in the wrong direction.
When you do launch new features, your initial development sprint should focus on core functionality rather than trying to build everything at once. Planning what belongs in your first sprint helps avoid feature creep from day one.
Conclusion
The hardest part of building a successful app isn't coming up with feature ideas. That's easy. Everyone has opinions about what would be nice to add. The hard part is having the discipline to say no to ideas that don't serve your core user needs or business objectives, even when those ideas come from senior stakeholders or vocal users who feel strongly about them.
Every feature carries costs beyond its development time, including maintenance, technical debt, interface complexity, and opportunity cost from not building something else instead. The apps that succeed aren't the ones with the most features, they're the ones that do a smaller number of things exceptionally well. They know what problems they're solving, who they're solving them for, and they make decisions based on data about actual user behaviour rather than assumptions about what users might want.
Your roadmap should be a reflection of your strategy, not a collection of every idea someone thought sounded good. Before building any feature, you need clear answers to why it matters for your users, how it supports your business model, and what you're willing to sacrifice to make room for it. Without those answers, you're just adding complexity and hoping it works out. Sometimes it will. More often, it won't.
If you're struggling with feature decisions or want help analysing your app data to understand where to focus your development resources, get in touch with us and we can walk through your specific situation.
Frequently Asked Questions
How do I know whether a feature request is worth building?
Look at your usage data to see what percentage of your user base actually encounters the problem the feature would solve. If less than 20% of users would benefit, or if the request comes from users who don't represent your core audience, it's likely not worth the investment. Focus on problems that affect your most engaged users who drive your key business metrics.
How much should I budget for feature maintenance?
Plan for ongoing maintenance costs of roughly 15-25% of the original development cost annually. This covers bug fixes, OS compatibility updates, API changes, and minor improvements. A feature that costs £40k to build will likely require £6-10k per year to maintain properly.
How do I measure whether a new feature is successful?
Track adoption rate (what percentage of users try it), retention (how many use it repeatedly), and impact on core metrics (does it improve or hurt your main business goals). A successful feature should have at least 40% adoption within three months and shouldn't negatively impact your primary user flows.
Should I remove features that hardly anyone uses?
Yes, if the feature is used by less than 10% of your user base and maintaining it creates technical debt or interface clutter that hurts the majority experience. The cost of keeping rarely-used features often outweighs the benefit to the small group who uses them.
How do I convince stakeholders to improve existing features instead of building new ones?
Present data showing completion rates and engagement metrics for current features, then calculate the potential revenue impact of improving those versus building something new. It's easier to increase usage from 30% to 50% than to build a feature that achieves 50% adoption from zero.
How can I tell technical debt apart from normal maintenance costs?
Technical debt is the extra work required because of shortcuts or poor decisions in previous development. Normal costs are predictable maintenance, but technical debt means simple changes take longer because you have to work around existing problems. When adding a feature takes twice as long as it should, that's technical debt.
How many features should my app have?
Focus on 3-5 core functions that solve your users' primary problems. Your main navigation should highlight no more than 4-5 options, and your home screen should emphasise 2-3 key actions. More than that creates decision paralysis and reduces overall engagement.
When should user feature requests override the roadmap?
Only when the requests come from your most valuable user segment and address problems that affect your core business metrics. If power users who generate 80% of your revenue consistently request something, it deserves serious consideration. Requests from casual users who don't drive business value should rarely override strategic planning.


