
Which Features Should I Test Before Adding Them to My App?

How many features do you actually need to test before adding them to your app, and which ones deserve the most attention? Most app projects I've worked on over the past ten years have wasted somewhere between thirty and fifty grand building features that users either ignored completely or found confusing, and the reason always comes back to skipping proper testing phases. When someone in a Thursday afternoon planning session suggests a feature that sounds brilliant on paper, that doesn't mean users will understand how to use it or even want it in the first place. Testing before development isn't about being cautious or slowing things down... it's about making sure you're building something people will actually use, which saves you months of rework and keeps your budget intact.

The most expensive features are the ones nobody asked for and nobody uses

Why Testing Saves You Money and Headaches

Building a feature for a mobile app costs anywhere from £3,000 for something simple like a ratings system to £40,000 or more for complex functionality like real-time chat or payment processing. Every feature you build without testing first is a gamble, and the odds aren't in your favour. I worked with a fintech client who spent eighteen grand building an investment portfolio tracker because their CEO thought it would be useful, only to discover during beta testing that their target users preferred checking their investments through their bank's app instead. We scrapped the whole thing. It hurt to delete that code.

The development time alone should make you think twice about skipping tests. A typical feature takes between two and six weeks to build properly, then another week or two for testing and bug fixes. That's at least three weeks you could save by discovering through a simple prototype test that users don't want the feature at all. The money adds up quickly when you consider developer salaries, project management time, and the opportunity cost of not building features that users actually need. Before committing significant resources to development, it's also worth putting business protection measures in place early.

Core Features vs Nice-to-Have Features

Your app needs a backbone... the features that make it useful enough that someone would download it and keep it on their phone. These core features solve the main problem your app addresses, nothing more. For an e-commerce app, that's browsing products, adding items to a basket, and checking out. Everything else falls into the nice-to-have category, like wish lists, product recommendations, or sharing items with friends. The mistake most teams make is treating nice-to-have features as if they're just as important as core ones, which spreads your testing resources too thin.

I split features into three buckets during planning sessions. The first bucket holds features that must work perfectly or the app is useless, like login systems or payment processing. The second bucket contains features that improve the experience but aren't deal-breakers, like push notifications or dark mode. The third bucket is where experimental features live, things you're curious about but aren't sure users want yet. Test everything in bucket one extensively before launch, test bucket two features with a smaller group, and treat bucket three as ongoing experiments you can test after launch. For bucket one features especially, you'll need to test your app's API integration thoroughly, since core functionality depends on it.
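
If it helps to make the triage concrete, here's a minimal sketch of how the buckets can be recorded during a planning session. The feature names and labels are illustrative, not a real tool's API:

```typescript
// Three-bucket feature triage. Everything here is illustrative:
// swap in your own features and wording.
type Bucket = 'must-work' | 'improves-experience' | 'experiment';

interface Feature {
  name: string;
  bucket: Bucket;
}

// How much testing each bucket earns, per the approach above.
const testPlan: Record<Bucket, string> = {
  'must-work': 'test extensively before launch',
  'improves-experience': 'test with a smaller group before launch',
  'experiment': 'treat as an ongoing experiment after launch',
};

const features: Feature[] = [
  { name: 'login', bucket: 'must-work' },
  { name: 'payment processing', bucket: 'must-work' },
  { name: 'push notifications', bucket: 'improves-experience' },
  { name: 'dark mode', bucket: 'improves-experience' },
  { name: 'social sharing', bucket: 'experiment' },
];

for (const f of features) {
  console.log(`${f.name}: ${testPlan[f.bucket]}`);
}
```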

Write down your features on separate cards or sticky notes, then physically sort them into these three buckets with your team. If people disagree about which bucket a feature belongs in, that's a sign you need to do user research before making assumptions.

Testing Your Value Proposition First

Before you test individual features, you need to know if anyone actually wants your app to exist in the first place. Your value proposition is the answer to why someone would bother downloading your app when they already have twenty others on their phone competing for attention. Testing this comes before anything else, before you write a single line of code or design a single screen. I use a simple landing page test where we describe what the app does and ask people to sign up for early access. If we can't get at least a hundred sign-ups within two weeks of pushing the page out to our target audience, something's wrong with the core idea. This approach works particularly well when you build an email list before your app launches to validate demand.
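
The sign-up side of a landing page test needs very little code. Here's a rough sketch assuming Express and an in-memory store; a real version would persist to a database and de-duplicate emails, but this is the scale of effort involved:

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// In-memory store: fine for a two-week test, not for anything longer.
const signups: { email: string; at: Date }[] = [];

app.post('/early-access', (req, res) => {
  const email = String(req.body?.email ?? '').trim();
  if (!email.includes('@')) {
    res.status(400).json({ error: 'valid email required' });
    return;
  }
  signups.push({ email, at: new Date() });
  res.status(201).json({ ok: true });
});

// The pass/fail check from the text: at least 100 sign-ups in 14 days.
function landingPageTestPassed(): boolean {
  const cutoff = Date.now() - 14 * 24 * 60 * 60 * 1000;
  return signups.filter(s => s.at.getTime() >= cutoff).length >= 100;
}

app.listen(3000, () => console.log('landing page API on :3000'));
```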

The healthcare app market taught me this lesson the hard way. We nearly built an app for tracking medication schedules before discovering that our target users, people over sixty with multiple prescriptions, were already using physical pill organisers and didn't trust phone apps for something as important as medication. The landing page test got exactly twelve sign-ups in three weeks, which told us everything we needed to know. We changed direction completely and built a simpler reminder system that family members could manage remotely instead, which solved a real problem for adult children worried about their parents missing doses.

User Research Methods That Actually Work

The best research happens when you watch people use your competitors' apps or try to solve the problem your app addresses using whatever tools they currently have. I sit with users for sixty- to ninety-minute sessions and ask them to show me how they do things right now, not how they would do things with a hypothetical app that doesn't exist yet. People are terrible at predicting their own behaviour. They'll tell you they want features they'll never use. Research tools that streamline this process can help you collect and analyse the data more effectively.

One method that consistently produces useful insights is the five-second test, where you show someone a screen design for just five seconds then ask them what they remember and what they think they could do on that screen. If they can't tell you the main purpose of a screen in those few seconds, your design is too complicated or unclear. This works for testing navigation structures too... show someone your app's main menu for five seconds and see if they can recall where to find the key features afterwards. It's also worth validating your user personas against real usage data, so your findings reflect actual behaviour patterns rather than assumptions.

Users will tell you what they want, but watching them shows you what they need

Interview Questions That Get Real Answers

Ask people to describe the last time they experienced the problem your app solves, then dig into exactly what they did about it. Don't ask if they would use a feature, ask them about the last time they needed something similar and what happened. The difference between these questions is enormous. The first one gets you wishful thinking, the second gets you actual behaviour patterns you can design around. Research techniques that reveal hidden user motivations can help you understand the deeper reasons behind user behaviour beyond what they explicitly tell you.

Prototyping and Validation Techniques

You don't need a working app to test features. I use Figma to create clickable prototypes that look and feel like real apps but take a fraction of the time to build, usually three to five days instead of three to five weeks. These prototypes let users tap through flows and complete tasks, which gives you feedback on whether features make sense before you've committed to the expense of actual development. The key is making prototypes realistic enough that people forget they're not using a real app, which means including realistic content and data rather than placeholder text and dummy images.

For testing complex interactions like gesture controls or animation sequences, I build what developers call throwaway code... quick and messy implementations that work just well enough to test with users but aren't built to proper standards. A client in the education sector wanted to test a card-swiping interface for vocabulary learning, and we built a rough version in React Native that took two days instead of the two weeks a production-ready version would have needed. Users hated it. They kept accidentally swiping when they meant to scroll. We saved ourselves ten days of wasted development time. When considering whether to prototype or build fully, it's worth understanding whether no-code solutions can handle your specific requirements for rapid testing.
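
For a sense of what "throwaway" means in practice, here's a sketch along the lines of that card-swipe test, using React Native's core PanResponder. The thresholds are guesses you'd tune during testing, and this isn't the client's actual code... it's the two-day-version mindset:

```typescript
import React, { useRef } from 'react';
import { PanResponder, Text, View } from 'react-native';

type Props = {
  word: string;
  onSwipe: (direction: 'left' | 'right') => void;
};

export function SwipeCard({ word, onSwipe }: Props) {
  const pan = useRef(
    PanResponder.create({
      // Only claim the touch when the drag is clearly horizontal.
      // Grabbing every touch is exactly what makes test users swipe
      // when they meant to scroll.
      onMoveShouldSetPanResponder: (_evt, g) =>
        Math.abs(g.dx) > 20 && Math.abs(g.dx) > Math.abs(g.dy) * 2,
      onPanResponderRelease: (_evt, g) => {
        if (g.dx > 80) onSwipe('right');
        else if (g.dx < -80) onSwipe('left');
        // Small drags fall through: no decision made.
      },
    }),
  ).current;

  return (
    <View {...pan.panHandlers} style={{ padding: 24, backgroundColor: '#fff' }}>
      <Text>{word}</Text>
    </View>
  );
}
```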

Wizard of Oz Testing

Sometimes the smartest way to test a feature is to fake it completely. Wizard of Oz testing means showing users what looks like a working feature but having a human manually process things behind the scenes. I used this for testing a restaurant booking feature where the app appeared to check availability and confirm reservations automatically, but actually sent requests to our team who called restaurants to make bookings manually. We learned that users wanted confirmation within five minutes or they'd just call the restaurant themselves, which told us we needed proper API integrations with booking systems rather than a simpler email-based system we'd originally planned.
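
Server-side, that kind of fake can be remarkably small. This is a hypothetical sketch assuming Express: the endpoint answers like an automated booking system while requests simply queue up for a human to work through by phone:

```typescript
import express from 'express';

type BookingRequest = {
  restaurant: string;
  time: string;
  partySize: number;
  receivedAt: Date;
};

const app = express();
app.use(express.json());

// The "wizard" works through this queue manually; the app never knows.
const manualQueue: BookingRequest[] = [];

app.post('/bookings', (req, res) => {
  manualQueue.push({ ...req.body, receivedAt: new Date() });
  // Looks automated to the user. The five-minute promise is the thing
  // under test: miss it and users call the restaurant themselves.
  res.status(202).json({ status: 'pending', expectedWithinMinutes: 5 });
});

// Internal view for whoever is playing the wizard.
app.get('/admin/queue', (_req, res) => {
  res.json(manualQueue);
});

app.listen(3000);
```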

Understanding A/B Testing Results

A/B testing splits your users into groups and shows each group a different version of a feature, then measures which version performs better. The tricky part is knowing when you have enough data to make a decision and not jumping to conclusions too early. I run A/B tests for at least two weeks or until each variation has been seen by at least five hundred users, whichever comes later. Testing with smaller sample sizes gives you unreliable results that lead to wrong decisions.
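
If you want a sanity check beyond the two-week/500-user rule, a two-proportion z-test is the standard back-of-envelope calculation. A small sketch comparing completion rates between two variations:

```typescript
// Two-proportion z-test on completion rates for variations A and B.
function twoProportionZ(
  completionsA: number, usersA: number,
  completionsB: number, usersB: number,
): number {
  const pA = completionsA / usersA;
  const pB = completionsB / usersB;
  const pooled = (completionsA + completionsB) / (usersA + usersB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  return (pB - pA) / se;
}

// |z| above roughly 1.96 is significant at the 5% level; below that,
// keep the test running rather than calling a winner early.
const z = twoProportionZ(180, 500, 215, 500);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? 'significant' : 'keep testing');
```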

What you measure matters more than how long you test. Downloads and installs are vanity metrics that don't tell you if a feature works. Look at completion rates, time on task, and whether users come back to use the feature again. An e-commerce client tested two different checkout flows, and version A had a higher completion rate but version B had lower cart abandonment over time because users found it clearer and made fewer mistakes that required support calls. We went with version B even though it didn't win the obvious metric. Whichever variation wins, make sure both remain accessible to users with disabilities throughout the test.

Set your success criteria before you start the test, not after you see the results. Write down what metric needs to improve by how much for you to consider the new feature successful, then stick to that decision framework.
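
One way to make that commitment stick is to write the decision rule down as data before the test starts. The numbers below are placeholders for whatever your team agrees:

```typescript
// Agreed before the test starts, not after the results come in.
const successCriteria = {
  metric: 'checkout completion rate',
  minimumLift: 0.05,        // variant must beat control by 5 points
  minUsersPerVariant: 500,
  minDurationDays: 14,
};

function shipTheVariant(
  controlRate: number,
  variantRate: number,
  usersPerVariant: number,
  daysRun: number,
): boolean {
  return usersPerVariant >= successCriteria.minUsersPerVariant
    && daysRun >= successCriteria.minDurationDays
    && variantRate - controlRate >= successCriteria.minimumLift;
}
```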

What each metric actually tells you:

  • Completion rate: whether users can figure out how to finish a task
  • Time on task: whether something is too complicated or confusing
  • Error rate: where users are getting stuck or making mistakes
  • Return rate: whether the feature is genuinely useful or just novel

When to Kill a Feature

Knowing when to scrap a feature is harder than knowing when to build one, because you've already invested time and money and nobody wants to admit they made the wrong call. I use a simple rule... if fewer than fifteen percent of active users touch a feature within their first week of using the app, and fewer than five percent use it regularly after a month, the feature isn't pulling its weight. Features cost money to maintain, create bugs, slow down your app, and make the interface more complicated. Dead features are worse than no features. They also carry hidden overheads, like the API rate limiting and security measures you still have to maintain for functionality nobody uses.
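
That 15/5 rule translates directly into a check you can run against an analytics export. A minimal sketch, assuming you can count the relevant users:

```typescript
type FeatureUsage = {
  activeUsers: number;        // active users in the period
  triedInFirstWeek: number;   // users who touched the feature in week one
  regularAfterMonth: number;  // users still using it after a month
};

// The 15/5 rule: a feature fails only when it misses BOTH thresholds,
// so it earns its keep by clearing either one.
function earnsItsKeep(u: FeatureUsage): boolean {
  const trialRate = u.triedInFirstWeek / u.activeUsers;
  const regularRate = u.regularAfterMonth / u.activeUsers;
  return trialRate >= 0.15 || regularRate >= 0.05;
}

console.log(earnsItsKeep({
  activeUsers: 10000,
  triedInFirstWeek: 300,   // 3% trial rate
  regularAfterMonth: 300,  // 3% regular use
})); // false: candidate for removal
```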

The sunk cost problem kills good judgement. I worked with a retail app that had built a barcode scanning feature for comparing prices, which cost about twenty-two grand to develop and integrate. After six months, usage data showed that only three percent of users had ever tried it, and those who did only used it once. The founder resisted removing it because of how much they'd spent, but keeping it meant maintaining the code, supporting users who had questions about it, and cluttering the interface. We removed it. Nobody complained.

Signs it's time to remove a feature:

  • Fewer than 15% of active users try the feature in their first week, and fewer than 5% use it regularly after a month
  • Support tickets about the feature cost more to handle than the value the feature delivers
  • The feature breaks regularly or causes performance problems
  • New users say during testing that the feature makes the app feel complicated
  • The business goal the feature was meant to achieve can be met more simply another way

Conclusion

Testing features before building them isn't about perfection or eliminating all risk... it's about making smarter bets with your budget and development time. The apps that succeed are the ones that solve real problems in ways people actually understand and want to use, and you only discover what that means by testing with real users before you commit to expensive development work. Start with your value proposition, test your core features thoroughly, treat nice-to-have features as experiments, and be willing to kill features that don't earn their keep. Every feature you don't build because testing showed it was wrong saves you money and keeps your app simpler and better for the features that matter.

The truth is that testing feels like it slows you down until you've wasted six months building the wrong thing, then suddenly it seems like the smartest investment you could have made. I've built enough apps to know that the ones we tested properly shipped faster and performed better than the ones where we trusted our instincts and skipped validation. Your instincts about what features users want are probably wrong, mine usually are too, and that's exactly why we test.

If you're planning an app project and want help figuring out which features to test and how to validate your ideas before spending your budget on development, get in touch with us and we can talk through your specific situation.

Frequently Asked Questions

How much should I budget for testing before building app features?

Plan to spend around 10-15% of your development budget on testing, which typically means £300-450 for testing a simple feature that costs £3,000 to build. This upfront investment can save you thousands by identifying features users don't want before you waste weeks of development time. The cost of a two-week prototype test is always less than scrapping a finished feature nobody uses.

What's the minimum number of users I need for reliable A/B testing?

Run tests for at least two weeks or until each variation has been seen by at least 500 users, whichever takes longer. Testing with smaller groups gives you unreliable data that leads to wrong decisions. If you don't have 500 active users yet, focus on qualitative testing methods like user interviews and prototype testing instead.

How do I know if a feature is worth keeping after launch?

Use the 15/5 rule: if less than 15% of users try a feature in their first week and less than 5% use it regularly after a month, consider removing it. Dead features cost money to maintain, create bugs, and make your app more complicated. Don't let sunk costs cloud your judgement... unused features hurt more than they help.

Can I test features without building a working app?

Yes, clickable prototypes in tools like Figma work perfectly for testing user flows and feature concepts. These take 3-5 days to create instead of 3-5 weeks for actual development, letting you validate ideas cheaply. For complex interactions, build throwaway code that works just enough for testing but isn't production-ready.

What's the difference between testing core features versus nice-to-have features?

Core features must work perfectly or your app is useless, so test these extensively with larger user groups before launch. Nice-to-have features can be tested with smaller groups or treated as post-launch experiments since they won't break the app if they fail. Focus your testing budget on features that solve your app's main problem first.

How long should user research interviews last to get useful insights?

Conduct 60-90 minute sessions where you watch people solve the problem your app addresses using their current tools. Ask about the last time they experienced the specific problem rather than hypothetical questions about what they might want. Observing actual behaviour patterns gives you better insights than asking people to predict what they'd do.

When should I kill a feature that isn't working?

Remove features immediately if they generate more support tickets than they're worth, break regularly, or consistently confuse new users during testing. Don't wait for usage to improve... features that aren't intuitive from the start rarely become popular later. The money you spent building it is already gone; don't waste more maintaining something nobody wants.

What's the most important thing to test before building any features?

Test your value proposition first using a simple landing page that describes your app and asks for early access sign-ups. If you can't get at least 100 sign-ups within two weeks from your target audience, something's wrong with the core idea. No amount of feature testing will save an app that solves a problem people don't actually have.
