How Do I Find Out What Features Matter Most in My Industry?
In-app purchases generate nearly all mobile app revenue—95% in fact—but here's what most people miss: the features that drive those purchases are completely different depending on which industry you're in. What works brilliantly for a fitness app will absolutely tank in fintech, and I've seen too many clients waste months building features that seemed important but turned out to be totally irrelevant to their specific market.
The best way to find out which features actually matter? Start by downloading every single competitor app in your category and using them properly for at least a week. I'm not talking about a quick browse through the interface—I mean sign up, go through their onboarding, try to complete the core actions, and pay attention to where you get frustrated or delighted. When I worked on a healthcare booking app, we spent two weeks using 17 different competitor apps and discovered that nobody had nailed the post-appointment follow-up experience; that insight became our differentiating feature and drove most of our early growth.
The features users talk about wanting and the features they actually use are often completely different things, which is why studying behaviour matters more than reading reviews.
Look at app store reviews but don't take them at face value—people often complain about missing features they'd never actually use. Instead, focus on patterns across multiple apps. If five different e-commerce apps in your space all have a certain checkout feature, it's probably not optional anymore; it's become an industry standard that users expect. But here's the thing—you also need to check industry reports and case studies because some features are table stakes even if users don't explicitly mention them. Payment options in retail apps, for example, or biometric login in banking apps. Missing these basics will kill your app before users even see your unique value proposition.
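If you want to make that pattern-spotting a bit more systematic, here's a minimal sketch that counts how many competitor apps get mentions of each candidate feature. It assumes you've exported reviews to a CSV with "app" and "review" columns; the file name and keyword lists are made up for illustration.

```python
# Count how often candidate features come up across competitor reviews.
# Assumes a CSV export with "app" and "review" columns; the file name and
# keyword lists below are hypothetical.
import csv
from collections import defaultdict

FEATURE_KEYWORDS = {
    "guest checkout": ["guest checkout", "without an account"],
    "saved cards": ["saved card", "save my card"],
    "order tracking": ["track my order", "order status"],
}

apps_mentioning = defaultdict(set)

with open("competitor_reviews.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row["review"].lower()
        for feature, keywords in FEATURE_KEYWORDS.items():
            if any(k in text for k in keywords):
                apps_mentioning[feature].add(row["app"])

# A feature mentioned across most of the apps you studied is probably table stakes.
for feature, apps in sorted(apps_mentioning.items(), key=lambda kv: -len(kv[1])):
    print(f"{feature}: mentioned for {len(apps)} of the apps studied")
```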
Understanding What Your Users Actually Need (Not Just What They Say They Want)
Here's something I've learned after building apps across dozens of industries—people are rubbish at telling you what they actually need. I mean it. A client once insisted their healthcare app needed a complex calorie tracking system because "everyone asks for it". We did the research anyway, and you know what? Their users wanted something much simpler: a way to take photos of their meals and share them with their dietitian. That's it. The fancy calorie counter they thought was critical? Nobody was asking for that in real scenarios; they were just repeating what they'd seen in other apps.
The gap between stated preferences and actual behaviour is massive, and it's one of the biggest reasons apps fail. When you ask users what they want in an interview or survey, they'll tell you what sounds good or what they think they should want. But when you watch them use an app—or better yet, when you analyse how they actually behave—you see the truth. I worked on a fintech app where users kept saying they wanted detailed investment analytics and charts. Made sense, right? But our usage data showed that 80% of them never scrolled past the first screen; what they really needed was a simple dashboard that answered one question: "Am I on track?"
The Jobs-to-be-Done Framework
This is where the jobs-to-be-done approach comes in handy. Instead of asking "what features do you want?" ask "what are you trying to accomplish?" I use this constantly with clients. For an e-commerce app we built, users said they wanted better product filtering. But when we dug deeper, what they were really trying to do was find gifts quickly without looking clueless. The solution wasn't more filters—it was curated gift guides and a "buying for someone like this" feature.
Observation Beats Conversation
Watch people use similar apps or complete similar tasks in real life. I spent two days in a warehouse watching staff use a competitor's inventory app before designing ours, and the insights were gold—they held their phones differently than I expected, they were wearing gloves half the time (touch screens don't work well with gloves!), and they needed one-handed operation because their other hand was always holding something. None of that would've come up in an interview. This observation directly influenced our approach to designing for one-handed use, which became crucial for worker productivity.
Look at these methods for uncovering real user needs:
- Contextual inquiry sessions where you watch users in their actual environment doing their actual work
- Usage analytics from existing systems to see what people do versus what they say
- Support ticket analysis to find patterns in what people struggle with or complain about
- Session recordings that show where users get confused or abandon tasks
- Time-based diaries where users log what they're doing throughout their day
The support ticket thing is underrated honestly. For a property management app, we analysed six months of support emails and found that 40% of issues related to one specific workflow around tenant communications. Users hadn't mentioned it as a priority feature, but it was clearly causing them daily frustration. We rebuilt that entire section and support tickets dropped by half.
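If you want to run that kind of analysis yourself, a rough sketch like this is usually enough to surface which workflow dominates your support volume. The workflow keywords and example tickets below are entirely made up.

```python
# Rough sketch: bucket support tickets by workflow keywords and report the share
# each bucket accounts for. Keywords and the example tickets are illustrative.
from collections import Counter

WORKFLOW_KEYWORDS = {
    "tenant_communications": ["tenant message", "notice", "reply to tenant"],
    "payments": ["rent payment", "invoice", "refund"],
    "maintenance": ["repair", "maintenance request"],
}

def classify(ticket_text: str) -> str:
    text = ticket_text.lower()
    for workflow, keywords in WORKFLOW_KEYWORDS.items():
        if any(k in text for k in keywords):
            return workflow
    return "other"

def workflow_breakdown(tickets: list[str]) -> dict[str, float]:
    counts = Counter(classify(t) for t in tickets)
    total = sum(counts.values()) or 1
    return {w: round(100 * c / total, 1) for w, c in counts.most_common()}

# Example: one workflow accounting for 40% of tickets is a strong signal to prioritise it.
print(workflow_breakdown([
    "Can't reply to tenant message from the app",
    "Rent payment failed twice",
    "How do I send a notice to all tenants?",
]))
```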
Studying Your Competitors Without Copying Them
I've done hundreds of competitive analyses over the years and here's what I've learned—most people do it completely wrong. They download their competitors' apps, take screenshots of every feature, then build a shopping list of things to copy. That's not competitive analysis; that's just feature replication and it rarely works out well.
When I start a competitive analysis, I'm looking at three specific things: what problems are competitors solving, how well are they solving them, and more importantly, what are they missing? I worked on a healthcare app where the client wanted to copy everything their main competitor had built. But when we actually studied user reviews and app store ratings, we found that competitor's appointment booking feature was causing massive frustration—it took seven taps to complete what should've been a three-tap process. We built ours differently, made it faster, and that became our main selling point.
What to Actually Look For
Download your top five competitors and use them properly. Don't just click through screens, actually try to complete real tasks. Book an appointment. Make a purchase. Update your profile. Notice where you get frustrated, where things feel slow, where the app confuses you. Those friction points? They're your opportunities. Understanding how to stay competitive long-term requires this deep analysis of what's not working in your industry.
Look at their app store reviews but read between the lines. When users say "the app is terrible" that tells you nothing. When they say "I can't find my order history without going through three menus" that's gold—you know exactly what not to do. I built an e-commerce app feature set almost entirely from studying what users complained about in competitor reviews; it worked brilliantly.
Understanding Their Technical Choices
Pay attention to how competitors handle core functionality. Do they use native features or web views? How do they manage offline states? What's their approach to notifications? These technical decisions tell you a lot about their priorities and constraints. I analysed a fintech competitor once and noticed they'd built their entire dashboard as a web view—probably to maintain consistency across platforms but it made the app feel sluggish. We went native and that performance difference became a key differentiator.
Create a simple spreadsheet tracking which features competitors have, how well they work (rate them honestly), and what users say about them in reviews. This gives you a clear picture of where the market is and where the gaps are.
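If a spreadsheet feels too manual, the same matrix works as a small script, which makes it easy to spot features that are widely covered but badly executed. Competitor names, features and scores here are placeholders.

```python
# Tiny competitor feature matrix: which features each competitor has and how well
# they execute them (0 = missing, 1-5 = quality). All values here are placeholders.
feature_matrix = {
    "CompetitorA": {"biometric login": 4, "order history": 2, "guest checkout": 0},
    "CompetitorB": {"biometric login": 5, "order history": 3, "guest checkout": 4},
    "CompetitorC": {"biometric login": 3, "order history": 1, "guest checkout": 0},
}

all_features = {f for scores in feature_matrix.values() for f in scores}

for feature in sorted(all_features):
    per_app = [scores.get(feature, 0) for scores in feature_matrix.values()]
    coverage = sum(1 for s in per_app if s > 0)
    avg_quality = sum(per_app) / len(per_app)
    # Widely covered but poorly executed is exactly where the opportunity sits.
    print(f"{feature}: present in {coverage}/{len(feature_matrix)} apps, "
          f"avg quality {avg_quality:.1f}")
```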
But here's the thing—competitive analysis isn't about matching features one-to-one. It's about understanding the competitive landscape so you can position yourself differently. Some of my best projects succeeded because we deliberately left out features that competitors had, focusing instead on doing fewer things exceptionally well. When everyone in your industry has bloated apps trying to do everything, sometimes the winning move is simplicity.
| Analysis Focus | What to Look For | Why It Matters |
|---|---|---|
| User Onboarding | Number of steps, information requested, time to value | Shows how they balance data collection with user experience |
| Core Task Completion | Number of taps, loading times, error handling | Reveals where you can streamline and improve |
| App Store Reviews | Specific complaints, feature requests, praise patterns | Direct user feedback on what works and what doesn't |
| Technical Performance | Load times, offline capability, battery usage | Indicates their technical priorities and trade-offs |
Don't forget to look at what competitors aren't building. Sometimes the most valuable insight is spotting a feature or approach that nobody in your industry has tried yet. I worked on an education app where every competitor focused on video lessons, but we noticed nobody had proper progress tracking that parents could easily access. We built that feature and it became our main acquisition driver—parents loved it.
The Features That Every Industry Gets Wrong
After building apps across healthcare, fintech, retail, and about a dozen other sectors, I've noticed something odd—every industry has its own blind spots when it comes to features. And I mean proper blind spots, the kind where everyone just copies what the market leader did five years ago without questioning if it actually works anymore. It's fascinating, really, because these aren't small mistakes; they're massive assumptions that cost companies millions in development budgets and lost users.
In healthcare apps, everyone obsesses over building complex symptom checkers and diagnosis tools when what users actually need is dead simple appointment booking and prescription refills. I've worked on three different healthcare projects where the client insisted on this elaborate AI-powered symptom analysis feature—which took months to build—only to find that 80% of user sessions were people just trying to book an appointment or check their test results. The fancy stuff barely got used. And here's the thing: those symptom checkers create liability issues that most startups haven't even considered from a legal standpoint.
Fintech apps make the opposite mistake. They focus so heavily on security theatre—you know, all those extra authentication steps and complicated verification processes—that they forget people need to actually use the app quickly. I mean, yes, security matters (obviously), but I've seen banking apps require three separate authentication steps just to check your balance. Meanwhile, users are abandoning the app because it takes 45 seconds to log in. The data shows that excessive friction in fintech apps leads to a 60% drop-off rate during onboarding, but companies keep adding more steps because it "looks" more secure to their compliance teams. When building financial tools like loan calculators, the balance between security and usability becomes even more critical.
Common Feature Mistakes by Industry
E-commerce apps get social features wrong almost universally. Every retail client wants to add social sharing, user reviews, and community features because that's what Amazon does—but they forget that Amazon spent years building trust and scale before those features became useful. For a new e-commerce app with 5,000 users, a reviews section with three reviews per product just highlights how small you are. Better to focus on lightning-fast checkout and brilliant product imagery first.
Education apps? They always build elaborate progress tracking and gamification systems before they've nailed the actual learning content. I've watched ed-tech startups spend £80,000 on badges, leaderboards, and achievement systems while their core lesson delivery was clunky and slow. Users don't stick around long enough to earn those badges if the fundamental experience is frustrating. The retention numbers don't lie—apps with solid core functionality but basic progress tracking consistently outperform feature-heavy apps with poor fundamentals. Understanding how progress indicators affect user behaviour during setup phases is often more valuable than complex gamification.
What Actually Drives User Engagement
Restaurant and food delivery apps waste development time on complex menu customisation features when what really matters is accurate delivery tracking and estimated arrival times. Sure, being able to customise every ingredient sounds great, but that feature gets used by maybe 12% of users, while 98% are refreshing the app every thirty seconds wondering where their food is. I built a food delivery app where we stripped out half the menu customisation options and invested that development time in a really accurate, map-based tracking system—user satisfaction scores went up by 34%. When designing apps for restaurants, the key is understanding what diners actually care about versus what restaurant owners think they care about.
The travel industry keeps building itinerary planning tools that nobody uses because people already have their itineraries sorted before they download your app. What travellers actually need in the moment is offline access to bookings, quick customer support chat, and the ability to make last-minute changes without navigating through six screens. I worked on a hotel booking app where we removed the entire "plan your trip" section and focused exclusively on managing existing bookings—engagement doubled because we were solving the actual problem people had.
Here's what I've learned works better than guessing: look at your support tickets and see what people are actually asking for help with; track which features people use in their first week versus their tenth week; run simple surveys asking users what they'd pay extra for. The features people will pay for are usually the ones that matter, not the ones that sound impressive in a pitch deck. And honestly? Sometimes the best feature is just making the existing ones work faster and more reliably than adding something new.
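To make the week-one-versus-later comparison concrete, here's a small sketch that buckets feature usage by how old the account was when the event happened. The event tuples and field layout are assumptions, not tied to any particular analytics tool.

```python
# Compare which features people use in their first week versus later on.
# Event records and signup dates below are made-up examples.
from collections import defaultdict
from datetime import datetime, timedelta

def usage_by_period(events, signup_dates, cutoff_days=7):
    """events: iterable of (user_id, feature, timestamp); signup_dates: {user_id: datetime}."""
    early, late = defaultdict(set), defaultdict(set)
    for user_id, feature, ts in events:
        age = ts - signup_dates[user_id]
        bucket = early if age <= timedelta(days=cutoff_days) else late
        bucket[feature].add(user_id)
    return early, late

signups = {"u1": datetime(2024, 1, 1), "u2": datetime(2024, 1, 3)}
events = [
    ("u1", "reviews", datetime(2024, 1, 2)),
    ("u1", "wishlist", datetime(2024, 3, 1)),
    ("u2", "wishlist", datetime(2024, 1, 4)),
]

early, late = usage_by_period(events, signups)
for feature in set(early) | set(late):
    print(f"{feature}: {len(early.get(feature, set()))} users in week 1, "
          f"{len(late.get(feature, set()))} after")
```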
How to Test Feature Ideas Before You Build Them
Testing feature ideas before you spend thousands on development is probably the smartest thing you can do—and it's something I wish more clients did before coming to us with a fully detailed spec. I've seen companies waste six months building features that users completely ignore, and it's painful every time because most of it could've been avoided with a week of proper testing.
The simplest way to test a feature? Build a fake version of it. I mean really fake. When we were working on a healthcare app that needed appointment rescheduling, we didn't build the entire booking system first—we created a prototype with clickable screens that looked real but didn't actually do anything. We showed it to 15 patients and watched what they did. Half of them couldn't figure out how to reschedule because the button placement was wrong; we fixed that before writing a single line of real code.
The best feature tests happen when users don't know they're being tested—they just think they're using your app.
Another method that works brilliantly is the landing page test. Before building a premium feature for a fintech app, we created a simple page describing what the feature would do and tracked how many existing users clicked "notify me when this launches." Less than 2% showed interest, which saved the client about £40,000 they would've spent building something nobody wanted. You can also use your existing app to test ideas—add a menu item or button that leads to a "coming soon" message and measure the click rate. If people aren't clicking a button that costs you nothing, they definitely won't use the actual feature. Sometimes the data tells you things you don't want to hear, but better to know now than after you've spent the budget. This approach works particularly well when you're building an email list before launch to gauge genuine interest.
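The maths behind those tests is deliberately simple. A sketch like this is all you need to turn a "notify me" click count into a rough go/no-go signal; the numbers and the 5% threshold are illustrative, not a rule.

```python
# Back-of-the-envelope check for a "notify me" / "coming soon" test.
# The threshold is illustrative; pick one that matches your own economics.
def interest_rate(clicks: int, users_shown: int) -> float:
    return clicks / users_shown if users_shown else 0.0

def worth_building(clicks: int, users_shown: int, threshold: float = 0.05) -> bool:
    """Treat anything under the threshold as a signal to pause, not a verdict."""
    return interest_rate(clicks, users_shown) >= threshold

shown, clicked = 4_000, 70                          # hypothetical numbers
print(f"{interest_rate(clicked, shown):.1%} of users clicked")   # ~1.8%, like the <2% example above
print("Build it?", worth_building(clicked, shown))
```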
Understanding Industry-Specific Performance Benchmarks and KPIs
Look, I'm going to be honest with you—every client I work with wants to know if their app is "good enough", but most of them are measuring the wrong things. They'll obsess over download numbers whilst ignoring the fact that 80% of users never open the app a second time. It's a bit mad really, but the metrics that matter vary wildly depending on what industry you're in; what works for a fitness app is completely useless for a banking app.
For e-commerce apps, I've found that session duration is actually less important than conversion rate and cart abandonment—nobody wants to spend ages browsing on a tiny screen, they want to get in and buy. We built an app for a fashion retailer where the average session was only 2.3 minutes but the conversion rate was 4.7%, which absolutely crushed their desktop performance. Compare that to a healthcare app I worked on where session duration averaged 8-12 minutes because users were inputting detailed symptom information... that's not a bad thing, it's just a different use case entirely. For apps with specific functionality like receipt scanning, the key metric might be accuracy rate rather than session length.
The mistake I see people make is comparing their fintech app's retention rates to some generic "industry average" they found on a blog somewhere. Banking apps typically see 65-75% monthly retention because people need to check their balance regularly—if your meditation app has those numbers you're basically building the next Headspace. But here's the thing: you need to track the metrics that actually predict revenue for your specific business model. For subscription apps, I always focus on Day 7 retention (if they make it a week they'll probably convert), time to first value (how quickly do users get something useful), and feature adoption rates. For ad-supported apps, it's all about session frequency and screen views per session. The numbers themselves don't mean much without context, and that context comes from understanding what drives value in your specific industry.
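If it helps, these core numbers are simple to compute once you have signup dates and activity logs. The sketch below uses made-up inputs and the common "active on day N" definition of retention.

```python
# Minimal Day 7 retention and conversion-rate calculations. The inputs are
# simplified stand-ins for whatever your analytics pipeline actually exports.
from datetime import date, timedelta

def day_n_retention(signup: dict[str, date], active_days: dict[str, set[date]], n: int = 7) -> float:
    """Share of users active exactly n days after signing up."""
    eligible = list(signup)   # in practice, only include cohorts old enough to measure
    retained = sum(
        1 for u in eligible
        if signup[u] + timedelta(days=n) in active_days.get(u, set())
    )
    return retained / len(eligible) if eligible else 0.0

def conversion_rate(sessions: int, purchases: int) -> float:
    return purchases / sessions if sessions else 0.0

signups = {"u1": date(2024, 5, 1), "u2": date(2024, 5, 1)}
activity = {"u1": {date(2024, 5, 8)}, "u2": {date(2024, 5, 2)}}
print(f"Day 7 retention: {day_n_retention(signups, activity):.0%}")   # 50%
print(f"Conversion rate: {conversion_rate(10_000, 470):.1%}")         # 4.7%, as in the retail example
```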
Building Your Feature Roadmap Based on Real Data
Right, so you've done your research and you've got a spreadsheet full of features that could work. Maybe you've even tested a few ideas with real users. But here's where most projects go wrong—they try to build everything at once. I've seen companies burn through their entire budget trying to launch with 30 features when they really only needed 5 good ones to start. The trick isn't knowing what features to build; it's knowing what order to build them in and what can wait.
Your roadmap needs to be built around data, not opinions. I usually start by creating a simple scoring system that weighs three things: user impact (how many people will actually use this?), business value (does this help us make money or retain users?), and technical complexity (how long will it really take?). For a fintech app we built, push notifications scored high on impact and value but low on complexity, so that went in phase one. Meanwhile, the client wanted blockchain integration—which scored terribly on all three metrics—so we parked that for later. They weren't thrilled initially, but the app launched on time and within budget. When you're tracking this process, it helps to understand how to measure development progress effectively.
The biggest mistake I see is treating your roadmap like it's set in stone. It's not. Once you launch your MVP and start collecting real usage data, you'll learn things that completely change your priorities. I worked on an e-commerce app where we thought product reviews would be the most-used feature, so we spent ages perfecting it. Turns out, nobody used them... but they were obsessed with the wishlist function we'd almost cut from the first release. That's why you build in phases and measure everything.
Creating Your Priority Matrix
Here's how I actually do this with clients. We list every feature, then score each one from 1-5 on impact, value, and technical effort. The features with high scores in the first two categories and low scores on effort go first—these are your quick wins. Then we tackle high-impact features that need more work. Everything else waits for version 2 or later, and honestly? Most of those "nice to have" features never get built because you'll discover better ones through real user data.
| Feature | User Impact (1-5) | Business Value (1-5) | Complexity (1-5) | Priority |
|---|---|---|---|---|
| User authentication | 5 | 5 | 2 | Phase 1 |
| Push notifications | 4 | 5 | 2 | Phase 1 |
| Social sharing | 3 | 4 | 2 | Phase 2 |
| Advanced analytics dashboard | 2 | 3 | 5 | Phase 3+ |
| AI recommendations | 4 | 5 | 5 | Phase 2 |
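To show how that phasing logic hangs together, here's a minimal sketch that reproduces the table above. The thresholds are my own illustration of the quick-wins-first rule, not a formula you have to adopt.

```python
# One way to encode the phasing rule of thumb from the matrix above.
# The thresholds are illustrative, not prescriptive.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    impact: int       # 1-5
    value: int        # 1-5
    complexity: int   # 1-5

def phase(f: Feature) -> str:
    if f.impact + f.value >= 9 and f.complexity <= 2:
        return "Phase 1"          # quick wins: high impact and value, low effort
    if f.impact + f.value >= 7:
        return "Phase 2"          # still important, but needs more work
    return "Phase 3+"             # park it until real usage data argues otherwise

backlog = [
    Feature("User authentication", 5, 5, 2),
    Feature("Push notifications", 4, 5, 2),
    Feature("Social sharing", 3, 4, 2),
    Feature("Advanced analytics dashboard", 2, 3, 5),
    Feature("AI recommendations", 4, 5, 5),
]

for f in sorted(backlog, key=lambda f: f.complexity - f.impact - f.value):
    print(f"{f.name}: {phase(f)}")
```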
Build in two-week review cycles where you look at actual usage data and adjust your roadmap. I've never worked on a project where the original roadmap survived contact with real users unchanged—and that's a good thing, not a failure.
When Data Tells You to Change Course
The hardest part isn't building the roadmap; it's being willing to throw it away when the data says you're wrong. I built a healthcare app where we'd planned this elaborate appointment booking system for phase two. But after launch, we noticed users were spending most of their time in the medication reminder feature—something we'd almost made a separate app. We completely reworked the roadmap to enhance that feature first, and it became the app's main selling point. If we'd stubbornly stuck to the original plan, we'd have wasted months building something nobody wanted whilst ignoring what they actually used every day.
Your roadmap should have clear decision points built in—moments where you stop, review the data, and decide whether to continue as planned or pivot. For every feature you launch, define what success looks like with specific metrics. If a feature doesn't hit those targets, don't be afraid to kill it or reimagine it completely. I've archived entire features that took weeks to build because the data showed they weren't working, and that's fine... better to learn quickly and move on than to keep investing in something that users don't care about.
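One way to keep those decision points honest is to write the targets down as data and check them mechanically at each review. The metric names and thresholds below are hypothetical.

```python
# Sketch of a launch check-in: did a feature hit the success targets you defined
# for it? Metric names and targets here are hypothetical.
def review_feature(actual: dict[str, float], targets: dict[str, float]) -> str:
    misses = [m for m, target in targets.items() if actual.get(m, 0.0) < target]
    if not misses:
        return "keep investing"
    if len(misses) == len(targets):
        return "kill or reimagine"
    return f"investigate: missed {', '.join(misses)}"

targets = {"weekly_active_share": 0.25, "task_completion_rate": 0.70}
actual = {"weekly_active_share": 0.08, "task_completion_rate": 0.40}
print(review_feature(actual, targets))   # -> kill or reimagine
```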
When to Follow Industry Standards and When to Break Them
Here's what nobody tells you about industry standards—they exist because they work, but they also exist because everyone's too scared to try something different. I've built apps across healthcare, fintech and retail and the pressure to stick with what everyone else is doing is massive. Your stakeholders want familiar patterns, your users expect certain things to work a certain way, and deviating from the norm feels risky. But sometimes breaking the rules is exactly what makes an app successful.
The trick is knowing which standards are there for good reasons and which ones are just... well, habit. Take navigation patterns. Bottom tab bars on iOS and hamburger menus on Android became standards because they genuinely help users find their way around an app. Breaking these without a bloody good reason usually backfires. I worked on an e-commerce app where the client wanted a "unique" navigation system that hid everything behind gestures—users couldn't figure out how to browse products and the abandonment rate was awful. We ended up reverting to standard navigation and engagement jumped 40%. This is why avoiding outdated design patterns matters—but knowing when to stick with proven approaches matters more.
Standards Worth Following
Some patterns are standards because they solve real usability problems. Platform-specific design guidelines (Human Interface Guidelines for iOS, Material Design for Android) exist because users have learned these interactions through years of using their phones. Payment flows, security features, and accessibility requirements aren't just standards—they're often legal requirements or best practices that protect both you and your users. In fintech especially, deviating from expected security patterns can make users nervous about trusting you with their money.
When Breaking Rules Makes Sense
But here's where it gets interesting. Sometimes your industry's "standard features" are actually holding everyone back. I've seen this play out in healthcare apps where every competitor had the same clunky appointment booking system because that's what people expected. When we simplified it down to three taps instead of seven screens, users loved it—even though it broke from the industry norm. The key is breaking rules that frustrate users, not ones that help them. For specialised industries like construction, understanding how to design for specific worker needs often means departing from generic app patterns entirely.
The best approach? Test both. Build a version that follows standards and one that tries something new, then see which performs better with real users. Data beats opinions every time, and you'd be surprised how often the "risky" approach wins because it's genuinely better. Just make sure you're breaking rules for the right reasons—to improve user experience, not just to be different.
- Follow standards for core navigation, security patterns and accessibility features
- Break standards when industry norms create friction or poor user experiences
- Always test unconventional approaches with real users before full implementation
- Consider platform expectations—iOS and Android users have different learned behaviours
- Document why you're breaking standards so future designers understand the reasoning
Making Sense of It All: Turning Research Into Action
Here's what happens after you've done all the research—you'll have a massive pile of data, user feedback, competitor features, and industry benchmarks staring back at you. It's overwhelming. I've seen clients freeze at this stage because they're trying to process everything at once, and that's exactly when projects start to go sideways. The trick is turning all that information into a decision-making framework that actually works for your specific situation.
Start by creating three columns: Must Have, Should Have, and Nice to Have. Sounds simple? It is, but here's where experience comes in—your Must Have list should only include features that directly support your core value proposition. When I worked on a healthcare appointment booking app, the client wanted to include medication reminders, health tracking, telemedicine, and about fifteen other features. We stripped it back to just booking, notifications, and medical history access for launch. Everything else went into later phases. The app hit 100,000 downloads in its first year because it did one thing brilliantly rather than ten things poorly. And crucially, we made sure the app would be worth talking about by focusing on that one exceptional feature rather than spreading ourselves thin.
Now take your competitive analysis and look for gaps, not similarities. If every competitor has a feature, you probably need it too (that's table stakes) but it won't differentiate you. What I do is map features against two axes: user impact and implementation effort. High impact, low effort? Build those first. High impact, high effort? Phase two. Low impact, low effort? Maybe never, honestly. For a fintech app we built, integrating with open banking APIs was high effort but massive impact—it became the entire selling point. Meanwhile, custom app themes? Low impact despite being easy to build, so we never bothered.
The final step is validation. Pick your top three features and create simple prototypes or mockups. Show them to real users, not your team or investors. I mean actual potential users who don't know you and won't sugar-coat their feedback. If they don't immediately understand the value within about thirty seconds, rethink how you're presenting it or whether it's actually solving their problem. This saved us six months of development time on an e-commerce project when users couldn't figure out what our "smart recommendation engine" actually did for them—we redesigned the entire feature based on that feedback before writing any code.
Frequently Asked Questions
How long should I spend researching competitor apps?
I recommend dedicating at least two weeks to proper competitive research—download your top 5-7 competitors and actually use them daily for real tasks, not just quick browsing. When I worked on a healthcare booking app, we spent two weeks using 17 competitor apps and discovered nobody had solved post-appointment follow-up, which became our main differentiator and drove most of our early growth.
Why shouldn't I just ask users what features they want?
There's a massive gap between stated preferences and actual behaviour—people tell you what sounds good or what they think they should want, not what they'll actually use. I had a healthcare client insist on complex calorie tracking because "everyone asks for it," but research showed users just wanted to photograph meals and share them with their dietitian—much simpler but far more valuable.
If every competitor has a feature, do I need it too?
If five different apps in your space all have the same feature, it's probably become table stakes that users expect, so yes, you'll likely need it too. However, these won't differentiate you—focus on finding gaps where competitors are failing users, like the seven-tap booking process I found in a healthcare competitor that we simplified to three taps.
How should I prioritise which features to build first?
Use a simple scoring system weighing user impact, business value, and technical complexity on a 1-5 scale—high impact and value with low complexity go first. I've never worked on a project where the original roadmap survived contact with real users unchanged, so build in two-week review cycles to adjust based on actual usage data.
When should I follow industry standards and when should I break them?
Follow standards for core navigation, security patterns, and accessibility—users have learned these behaviours through years of phone use. Break standards when industry norms create genuine friction, but always test both approaches first—I've seen "risky" unconventional designs win because they actually solve user problems better than following the crowd.
How can I test a feature idea before building it?
Build fake prototypes with clickable screens that look real but don't function, or create landing pages describing features to gauge interest from existing users. When testing appointment rescheduling for a healthcare app, our fake prototype showed 50% of users couldn't find the reschedule button—we fixed that before writing any real code.
What's the biggest mistake companies make when choosing features?
Trying to build everything at once instead of focusing on 5 excellent core features that directly support your value proposition. I've seen companies waste six-figure budgets building 30 features when they needed to nail the basics first—better to do one thing brilliantly than ten things poorly.
What retention rate should my app be aiming for?
Industry benchmarks vary wildly—banking apps typically see 65-75% monthly retention because users check balances regularly, while that same number for a meditation app would mean you're building the next Headspace. Focus on metrics that predict revenue for your specific business model rather than comparing to generic "industry averages" from blog posts.