Expert Guide Series

How Do I Find Out What Price Users Will Pay for My App?

Getting pricing wrong is one of the fastest ways to kill an app before it even gets started. I've seen it happen more times than I care to remember—brilliant apps with genuinely useful features that either charge too much and scare everyone away, or charge too little and can't sustain development. The thing is, most developers and founders just guess at pricing based on what feels right, or they look at a competitor and shave off a quid or two thinking that'll do the trick. It won't.

Finding out what users will actually pay requires proper research, and I mean real research, not just asking your mates what they think. Over the years I've worked on apps across healthcare, fintech, e-commerce and education—each sector has wildly different willingness to pay. A productivity app user might balk at £4.99 whilst a business professional will happily drop £29.99 on something that saves them an hour each week. Context matters more than you'd think.

The gap between what users say they'll pay and what they actually pay when it comes time to hand over their card details is often massive, and that's where most pricing research falls apart.

This guide pulls from nearly a decade of testing pricing strategies, running experiments with real users, and yes, making some bloody expensive mistakes along the way. We'll look at practical methods you can use before you've even built your app, how to interpret competitor pricing without falling into common traps, and ways to test different price points once you're live. The goal isn't just to pick a number—it's to understand the relationship between what your app offers and what people genuinely value enough to pay for it.

Understanding What People Are Actually Willing to Pay

The hardest part of pricing an app isn't picking a number—it's figuring out what value you're actually providing and whether users see that value the same way you do. I've worked on fitness apps where users happily paid £15 a month because it replaced their gym membership, and I've seen productivity apps struggle to charge £2.99 one-time because users couldn't immediately see what made them different from the free alternatives. The gap between what you think your app is worth and what users will actually pay can be massive, and bridging that gap starts with proper research before you write a single line of code.

Your pricing doesn't exist in a vacuum—it's always relative to what else is available and what problem you're solving. When I built a recipe app for a client, we initially thought £4.99 was reasonable given all the features we'd packed in. But here's the thing: users weren't comparing us to other recipe apps that cost £4.99. They were comparing us to BBC Good Food (free) and their grandmother's recipe book (also free). We had to completely rethink our value proposition before pricing made any sense.

Willingness to pay changes dramatically across app categories, and this is something you need to factor in early. Healthcare and finance apps can command premium pricing because they solve high-stakes problems—users will pay £10+ monthly for a meditation app that genuinely helps with anxiety or a budgeting tool that saves them hundreds. Gaming apps work on entirely different psychology, where 95% of users expect free gameplay but 5% might spend £50+ on in-app purchases. Education apps sit somewhere in between; parents will pay for their children's learning but expect to see measurable progress quickly. Different research approaches work better for different app types, so understanding your category's unique characteristics is crucial.

What Actually Influences Pricing Perception

Through building apps across different sectors, I've noticed several factors that consistently affect what users will pay:

  • Whether your app replaces an existing paid service (gym membership, magazine subscription, professional service)
  • How often users need to open your app—daily use apps can justify subscriptions, occasional-use apps usually can't
  • The perceived complexity of building it (users assume AI features or real-time data cost more to provide)
  • Platform expectations—productivity apps on iOS can charge more than the same app on Android in most markets
  • Whether users can achieve their goal in a single session or need ongoing access

I worked on a property search app where we tested £3.99 monthly versus £29.99 annually. The monthly option felt "safe" to users, but the annual pricing actually converted better once we framed it as "less than a coffee per month." The same annual price, presented differently, changed conversion by 40%. It's not always about finding the lowest price users will accept—sometimes it's about presenting your pricing in a way that makes the value obvious. This connects directly to creating personalised app experiences that resonate with specific user needs.

The Different Ways to Test Your App Pricing

Right, so you need to figure out what price point works for your app—there are actually more ways to test this than most people realise, and I've used pretty much all of them over the years. Some work better than others depending on what stage you're at and what kind of budget you have to play with.

The most straightforward method is A/B testing different price points once your app is live. I worked on a fitness tracking app where we tested £2.99 versus £4.99 for the premium version; we sent half the users to one price and half to the other. It took about three weeks to get meaningful data, but we discovered that the higher price actually converted better because it signalled quality. Mad really, but it happens more often than you'd think. The tricky bit here is making sure you have enough traffic to get statistically significant results—if you're only getting 50 downloads a day, it'll take forever. If you're struggling with user numbers, check out strategies for finding your first 1000 users without breaking the bank.
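If you want a quick way to check whether a result like that is real rather than noise, a two-proportion z-test does the job. Here's a minimal sketch using Python's statsmodels; the conversion counts are illustrative, not the actual numbers from that app.

```python
# Minimal significance check for a two-price A/B test.
# Counts are illustrative - swap in your own data.
from statsmodels.stats.proportion import proportions_ztest

conversions = [150, 185]    # paid conversions at each price point
exposures = [5000, 5000]    # users shown each price

z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("The difference looks real; safe to act on it")
else:
    print("Not enough evidence yet; keep the test running")
```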

Landing page tests are brilliant for pre-launch pricing research. You set up a simple page that describes your app, show different prices to different visitors, and see who clicks the "notify me" button or attempts to purchase. I've done this for several e-commerce clients and it's way cheaper than building the full app first. You can get decent results with just a few hundred pounds spent on Facebook ads.

Don't test more than two price points at once—it splits your traffic too thin and you won't get reliable data. Start with your expected price and one variant (either 30% higher or lower) then iterate from there.
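To get a sense of how much traffic "enough" actually means for your app, run a quick power calculation before committing to a test. A sketch, assuming a 3% baseline conversion and wanting to detect a shift to 4% (roughly the sort of lift a 30% price change might produce):

```python
# Rough sample-size estimate for a two-variant price test.
# Assumptions: 3% baseline conversion, detect a shift to 4%,
# 5% significance level, 80% power.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.03, 0.04)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{n_per_variant:,.0f} users needed per price point")
# At 50 downloads a day split across two variants, that's months
# of waiting - which is exactly why low-traffic apps should lean
# on landing page tests and surveys first.
```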

Methods Worth Considering

  • A/B testing live in the app stores (requires decent download volume)
  • Landing page price tests with paid traffic (works pre-launch)
  • Van Westendorp price sensitivity surveys (asks users multiple pricing questions)
  • Conjoint analysis if you have budget (shows how users trade off features vs price)
  • Fake door tests where you show pricing before the feature exists

Fake door testing is something I use quite a bit for new features. You show users a "premium" option at a specific price point, track how many tap it, then explain the feature isn't available yet but you're gauging interest. Some people think it's a bit dodgy, but as long as you're transparent about it being a test, most users understand. I did this for a healthcare app's telemedicine feature—we found that 18% of users were willing to pay £9.99 per consultation, which gave us the confidence to actually build it out.
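Before acting on a fake door result, it's worth putting error bars on it. A quick sketch; the sample size here is an assumed figure for illustration, not the actual data from that project.

```python
# Confidence interval on a fake-door tap rate.
# 450 users shown the option is an assumed, illustrative figure.
from statsmodels.stats.proportion import proportion_confint

taps, shown = 81, 450    # 18% tapped the £9.99 premium option
low, high = proportion_confint(taps, shown, alpha=0.05, method="wilson")
print(f"Interest: {taps / shown:.1%} (95% CI: {low:.1%} to {high:.1%})")
# If even the low end of that interval covers your costs,
# you can build the feature with some confidence.
```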

The method you choose really depends on where you are in the development process and how much data you need. If you've already launched, A/B testing is your best bet. If you're still in planning stages, landing page tests or surveys make more sense. And honestly? You should probably use multiple methods because they each tell you something slightly different about user behaviour.

Free vs Paid vs Freemium—Which Model Fits Your App

I've launched apps using all three models and here's what I've learned—there's no universal right answer, but there are definitely wrong choices for specific app types. The model you pick needs to match both your app's value proposition and your users' expectations. Getting this wrong means leaving money on the table or worse, having nobody download your app at all.

Free apps work best when you're building an audience first and monetising later through ads or data (with proper consent, obviously). I built a weather app that went this route and it made sense because users expect weather information to be free. They won't pay upfront for it. But here's the thing—you need serious volume to make ad revenue worthwhile; we're talking hundreds of thousands of active users before the numbers start looking decent. The flip side? User acquisition is easier when there's no payment friction.

Paid upfront apps are becoming rare, and for good reason. Users hate paying for something they haven't tried yet. I've seen this work exactly twice—once for a professional photography app where the target users were already spending thousands on equipment, and once for a specialised medical reference tool for doctors. Both had very specific audiences who understood the value before downloading. If you're targeting consumers with a new concept? Forget it. They won't pay.

Freemium is where most apps should be looking, honestly. It lets users try before they buy, which massively reduces that initial barrier. I worked on a fitness app where we offered basic workouts free but locked advanced programs and nutrition tracking behind a subscription. The conversion rate sat around 4%, which doesn't sound impressive until you realise that's 4% of people who actually used the app and understood its value. With a paid app, we'd have missed 96% of those users entirely because they'd never have downloaded it in the first place. Understanding the psychology behind free trials is crucial for making this model work effectively.

Key Factors That Should Drive Your Decision

Your monetisation model isn't just about preference; it's about matching your app to market realities. A few things that should influence your choice:

  • How quickly users understand your app's value—if it takes time, you need a free trial period
  • Your target market's payment habits—B2B users expect to pay, teenagers expect everything free
  • Your competition's pricing—if everyone else is free, a paid app needs to be significantly better
  • Your app's ongoing costs—if you're running expensive servers or AI processing, you need recurring revenue
  • How much you need to invest in user acquisition—paid apps cost more to market because the conversion funnel is tighter

What Actually Converts in Real Usage

The data from apps I've worked on shows some clear patterns. Freemium apps with a well-designed paywall convert between 2% and 5% of active users to paid. That's active users, not downloads—if you count everyone who downloaded and never opened the app twice, the percentage looks terrible. Free apps monetising through ads typically earn between £0.50 and £2 per user per year, depending on the niche. Finance and insurance apps earn more, games and entertainment earn less. Paid apps have the highest revenue per user obviously, but the smallest user base—sometimes by a factor of 100 or more.
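A back-of-envelope model is a useful way to sanity-check which approach fits your situation. Every input below is an assumption drawn from the ranges above; swap in your own funnel numbers.

```python
# Back-of-envelope annual revenue per 1,000 store-page visitors.
# All rates are assumptions - replace them with your own data.
def annual_revenue_per_1000_visitors(model: str) -> float:
    visitors = 1000
    install_rate, active_rate = 0.40, 0.30        # assumed funnel
    if model == "freemium":
        conversion, annual_sub = 0.04, 4.99 * 12  # 4% of actives at £4.99/mo
        return visitors * install_rate * active_rate * conversion * annual_sub
    if model == "ads":
        revenue_per_active = 1.00                 # midpoint of £0.50-£2
        return visitors * install_rate * active_rate * revenue_per_active
    if model == "paid":
        purchase_rate, price = 0.02, 4.99         # much tighter funnel
        return visitors * purchase_rate * price
    raise ValueError(f"unknown model: {model}")

for m in ("freemium", "ads", "paid"):
    print(f"{m:>8}: £{annual_revenue_per_1000_visitors(m):,.2f}")
```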

One thing that surprised me early on was how much the onboarding experience affects monetisation success. An e-commerce app I worked on tested two approaches—one that asked for payment details immediately after signup, another that let users browse freely first. The second approach converted 3x better. People need to experience value before they'll pay, even if the price is reasonable. This is why most successful subscription apps now offer a proper trial period, not just a feature-limited free version. Poor onboarding is also one of the main reasons users delete apps quickly.

Running Price Tests Without Building the Full App

You don't need to spend six months and £50,000 building a complete app just to find out if people will pay £4.99 for it. That would be mad. I've seen too many founders burn through their entire budget only to discover their pricing was completely off, and it's something that could have been avoided with some smart testing upfront.

The best approach I've used with clients is creating a landing page that looks like a real product—screenshots, features list, the works—and then running Facebook or Google ads to it with a clear call to action. But here's where it gets interesting: instead of "Download Now" you use "Pre-order for £X" or "Join the Waitlist". This tells you who's willing to commit. We did this for a fitness tracking app where the founder was torn between £2.99 and £9.99. Ran two identical landing pages with different price points for a week each, spent about £200 total. The £2.99 version got 47 email signups, the £9.99 got 12. Simple maths showed the two prices generated similar potential revenue (£140 versus £120) despite the big gap in signups—and those higher-paying users tend to be more committed long-term, which is what tipped the decision towards the higher price.

The people willing to pay more are often the ones who'll actually use your app, not just download it and forget about it
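That per-variant arithmetic is worth writing down explicitly, because you'll repeat it for every test you run:

```python
# Potential revenue per variant from the landing page test above.
signups = {"£2.99": 47, "£9.99": 12}
for label, count in signups.items():
    price = float(label.lstrip("£"))
    print(f"{label}: {count} signups -> £{count * price:.2f} potential")
# £2.99 -> £140.53, £9.99 -> £119.88: close enough that commitment
# and likely churn, not raw signup counts, should pick the winner.
```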

Another method that works well is building a simple Typeform or Google Form survey but making it look like a checkout process. Show your value proposition, list the features, then ask "Would you pay £X for this?" with radio buttons for Yes/No/Maybe. Follow up with "What would you pay?" as an open text field. It's not perfect—people lie on surveys, they really do—but combined with landing page data it gives you a clearer picture before writing a single line of code. For potential users who aren't using similar apps yet, you'll need different research approaches to understand their needs and pricing expectations.

How to Use Surveys and Interviews to Learn About Pricing

Surveys and interviews are brilliant for pricing research, but they come with a massive caveat—people lie. Not intentionally, mind you, but what someone says they'll pay and what they actually pay are often two completely different things. I've had countless clients tell me their survey respondents said they'd happily pay £4.99 monthly, only to launch at that price and see conversion rates tank. People want to seem reasonable in surveys, they want to help you out, but when it comes to actually pulling out their credit card? That's when reality hits. This is actually part of why user interviews can give misleading results for product decisions.

That said, surveys are still useful if you know how to use them properly. The trick is asking the right questions—don't just ask "would you pay £5 for this app" because everyone will say yes or no based on how they're feeling that day. Instead, I always recommend using the Van Westendorp Price Sensitivity Meter; it asks four questions: at what price would this app be too expensive, at what price would it be getting expensive but you'd consider it, at what price would it be a bargain, and at what price would it be so cheap you'd question its quality. This gives you a range to work within rather than a single misleading number.
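Once you've collected the four Van Westendorp answers, the analysis is just a pair of cumulative curves and a crossing point. A minimal sketch with made-up survey responses; full analyses plot all four curves, but the "too cheap"/"too expensive" crossing gives you the headline number.

```python
# Minimal Van Westendorp analysis: where do the "too cheap" and
# "too expensive" curves cross? Survey responses are made up.
import numpy as np

too_cheap     = np.array([1.99, 2.99, 1.99, 3.99, 2.99, 1.99])
too_expensive = np.array([4.99, 3.99, 6.99, 4.99, 5.99, 3.99])

grid = np.arange(0.99, 10.00, 0.50)
pct_cheap = [(too_cheap >= p).mean() for p in grid]          # falls as p rises
pct_expensive = [(too_expensive <= p).mean() for p in grid]  # rises with p

crossing = next(p for p, c, e in zip(grid, pct_cheap, pct_expensive) if e >= c)
print(f"Optimal price point is around £{crossing:.2f}")
```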

Interviews are even better because you can dig deeper into the why behind people's answers. When I'm working with clients on pricing strategy, I'll often sit in on their user interviews (or run them myself if they don't have the resources) and what I'm listening for isn't just the price they mention—it's their hesitation, their comparisons to other apps they pay for, their justification for why they would or wouldn't pay. A fintech app I worked on a few years back thought their target market would pay premium prices because "finance apps are professional tools"...but through interviews we discovered their users were already paying for three other finance subscriptions and were genuinely at their limit. That insight saved them from launching at an unsustainable price point.

Getting Honest Answers from Surveys

The biggest mistake I see is surveying people who aren't your actual target users. Your mum's friends on Facebook aren't going to give you accurate pricing data for a B2B productivity app, you know? You need to survey people who actually have the problem your app solves and are actively looking for solutions. I usually recommend offering a small incentive—£5 Amazon vouchers work well—but make sure you're screening participants properly before they even start the survey. Having the right interview questions will help you get more honest and useful responses.

When to Trust Interview Data (and When Not To)

User interviews give you qualitative depth that surveys can't match, but they're also subject to something called hypothetical bias—people genuinely believe they'll behave one way, but when push comes to shove they behave differently. I've found that interviews work best when you're trying to understand perceived value rather than exact price points. Ask people to compare your app to other things they already pay for, ask them what features would make them pay more, ask them what would make them immediately reject a price. These context clues are more valuable than any specific number they give you. And here's something I've learned the hard way—always interview people who have already paid for similar apps, not people who only use free alternatives, because their willingness to pay is fundamentally different.

What Your Competitors' Pricing Can (and Can't) Tell You

I spend a lot of time looking at competitor apps when we're working on pricing strategies, and I've learned it's both helpful and misleading in equal measure. Sure, you can download every fitness app in your category and see that most charge £9.99 monthly or £59.99 yearly—but that doesn't actually tell you if those prices work or if users are happy paying them. I mean, your competitors might be struggling with terrible retention and you'd never know just from looking at their App Store listing.

The thing is, competitor pricing gives you a baseline for market expectations, which is genuinely useful. When we built a meditation app a few years back, we found that users expected monthly subscriptions between £8 and £15 because that's what Headspace and Calm had established. Pricing outside that range—either much higher or lower—required extra justification. Going lower made people question quality; going higher meant we needed demonstrably better features. But here's what competitor research can't tell you: conversion rates, churn rates, or whether they're actually profitable at those prices. You might see an app charging £4.99 monthly and assume it's working brilliantly when in reality they're barely covering their server costs. For a deeper dive into competitor strategies, understanding how rival monetisation models work provides more comprehensive insights.

What You Should Actually Look For

When analysing competitor pricing, I focus on these specific things that do provide real insight:

  • The range of pricing tiers and what features justify each level
  • Whether they offer free trials and how long (7 days vs 14 vs 30)
  • Their pricing structure—monthly, yearly, lifetime, or credits-based
  • How they position their mid-tier option (this is usually their target sale)
  • What they include in free versions vs paid

Don't just screenshot competitor prices—actually subscribe to their apps for a month. You'll learn more from their onboarding flow, paywall placement, and upgrade prompts than from any external analysis.

The Dangerous Assumptions

I've seen clients make costly mistakes by copying competitor pricing without understanding context. A fintech app we worked with wanted to match a competitor's £14.99 monthly price, but that competitor had raised £2M in funding and was clearly buying users at a loss to build market share. Our client didn't have that runway, so matching that price would've meant months of negative unit economics. You don't know if your competitors are venture-backed and willing to lose money, targeting different user segments than you, or actually failing despite appearances.

The best approach? Use competitor pricing as one data point among many. It tells you what the market has been conditioned to expect, but your actual price should come from testing with your specific users and understanding your own costs and business model.

Testing Different Price Points After Your App Launches

Once your app's live, the real learning begins—because you can finally see what people actually do versus what they say they'll do. I've watched countless apps launch at what seemed like the perfect price point based on research, only to find users behaving completely differently in the wild. The beauty of post-launch testing is you're working with real data from people who've already shown interest in your app; they've downloaded it, opened it, maybe even used it a few times.

Here's something most developers get wrong though—they change the price for everyone at once and then wonder why their metrics went haywire. You can't tell if a revenue drop is because of the price change or because you coincidentally changed it during a slow season or right when a competitor launched their big update. What you want to do instead is segment your users and test different price points with different groups. iOS and Android both support this through their developer consoles, though Android's price experiments through Google Play are honestly more straightforward to set up than Apple's approach.

Setting Up A/B Price Tests

For subscription apps (which most successful apps are these days), you can show different pricing to different user cohorts. I worked on a meditation app where we tested £4.99 monthly versus £39.99 annually versus a middle option of £19.99 for six months. The six-month option? Nobody expected it to win, but it actually had the highest conversion rate because it felt like less commitment than annual but better value than monthly. That insight only came from live testing with real users making real purchase decisions. The key is also making sure your purchase flow is optimised—streamlined checkout processes can significantly improve conversion regardless of pricing.
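Whatever tooling you use, the underlying mechanic is the same: each user gets assigned to one price cohort and stays there, otherwise your data is worthless. A common approach is hashing the user ID; here's a sketch of the idea, separate from whatever the store platforms do internally.

```python
# Stable cohort assignment: hash the user ID so the same user
# always sees the same price, across sessions and devices.
import hashlib

PRICE_VARIANTS = ["£4.99/month", "£19.99/6 months", "£39.99/year"]

def price_cohort(user_id: str) -> str:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return PRICE_VARIANTS[int(digest, 16) % len(PRICE_VARIANTS)]

print(price_cohort("user-1842"))   # deterministic: same answer every call
```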

What Metrics Actually Matter

Don't just look at conversion rates in isolation—that's a rookie mistake. You need to track:

  • Conversion rate at each price point (obviously)
  • Lifetime value of users who converted at different prices
  • Churn rate by pricing tier—sometimes cheaper plans have worse retention
  • Time to conversion (how long users waited before subscribing)
  • Revenue per user across the entire cohort, not just paying users
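Pulling those numbers together per cohort is straightforward once your subscription events sit in a table. A sketch with pandas; the column names and tiny dataset are hypothetical.

```python
# Per-cohort pricing metrics. Columns and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "price_tier":      ["£2.99", "£2.99", "£4.99", "£4.99"],
    "converted":       [True, False, True, True],
    "months_retained": [4, 0, 8, 9],
    "monthly_price":   [2.99, 2.99, 4.99, 4.99],
})
df["revenue"] = df["months_retained"] * df["monthly_price"]

summary = df.groupby("price_tier").agg(
    conversion_rate=("converted", "mean"),
    revenue_per_user=("revenue", "mean"),   # whole cohort, not just payers
)
summary["ltv_of_payers"] = df[df["converted"]].groupby("price_tier")["revenue"].mean()
print(summary)
```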

I've seen apps where a higher price point had a lower conversion rate but actually generated more revenue overall because those users stuck around longer and were more engaged. The fintech app I mentioned earlier? When we increased the price from £2.99 to £4.99, conversions dropped by about 15% but revenue increased by 22% because the users who did convert stayed subscribed for an average of 8 months instead of 4. That's the kind of insight you only get from live testing with proper analytics in place.

One thing to watch out for—make sure your test runs long enough to account for weekly patterns. I usually recommend at least 3-4 weeks of data before making decisions, because user behaviour changes dramatically between weekdays and weekends, and between the start and end of the month when people are thinking about their budgets differently. It's tempting to call a test after a week when you see promising numbers, but you'll often regret it when the pattern doesn't hold.

Conclusion

Look, figuring out what people will pay for your app isn't a one-time thing—it's an ongoing process that needs to evolve as your app grows. I've seen too many developers spend months perfecting their product, pick a price that "feels right", and then wonder why downloads are disappointing or why nobody's converting to paid. The truth is you need to treat pricing research like any other part of your development process; test it, measure it, and be ready to adjust based on what the data actually tells you.

The apps I've worked on that got pricing right didn't do it by accident. They combined multiple research methods—surveys to understand user priorities, competitor analysis to know the market context, landing page tests to gauge real interest, and post-launch experiments to refine the numbers. A healthcare app we built started with a £4.99 price point based on competitor analysis, but after running conversion tests we discovered that £2.99 with a higher-priced annual option actually brought in 40% more revenue. Would we have found that without testing? Absolutely not.

Start small but start somewhere. Even if you're pre-launch, you can run a simple survey or create a landing page to test interest at different price points. The worst thing you can do is avoid the question altogether and hope it sorts itself out—it won't. Your monetisation strategy deserves just as much attention as your UI design or your backend architecture, because without sustainable revenue none of the rest matters. Users will tell you what they're willing to pay if you ask them in the right way... so go ask them.

Frequently Asked Questions

How long should I run pricing tests before making a decision?

I recommend running tests for at least 3-4 weeks to account for weekly patterns and monthly budget cycles—user behaviour changes dramatically between weekdays and weekends. It's tempting to call a test after a week when you see promising numbers, but I've seen patterns completely reverse when the full month's data comes in.

Should I always price lower than my competitors to win users?

Absolutely not—I've worked on apps where higher pricing actually converted better because it signalled quality and attracted more committed users. A fitness app I worked on tested £2.99 versus £4.99 and the higher price converted better because users associated it with quality; a fintech app saw the same pattern, with higher-paying users staying subscribed for 8 months instead of 4.

Can I test pricing before building my full app?

Yes, and you absolutely should—I've saved clients thousands by running landing page tests with different price points using Facebook ads for just £200. Create a realistic product page, run "pre-order" campaigns at different prices, and see which generates more committed interest before writing a single line of code.

What's the biggest mistake people make with freemium pricing?

They give away too much value in the free version and leave nothing compelling for the paid tier. I worked on a recipe app where we initially offered almost everything free—conversion was terrible because users had no reason to upgrade, even though they loved the app.

How accurate are surveys for pricing research?

Surveys are useful for understanding perceived value and price ranges, but people consistently lie about what they'll actually pay—often unintentionally. I use the Van Westendorp method with four pricing questions to get ranges rather than specific numbers, and always combine survey data with real conversion tests.

Is it worth offering both monthly and annual subscription options?

Definitely—annual options often convert better than expected, especially when framed correctly. A property app I worked on saw 40% better conversion with £29.99 annual versus £3.99 monthly when we presented it as "less than a coffee per month," even though the upfront cost was higher.

What metrics should I track when testing different price points?

Don't just look at conversion rates—track lifetime value, churn rate by pricing tier, and revenue per user across entire cohorts. I've seen apps where higher prices had lower conversion but generated 22% more revenue because those users stayed subscribed longer and were more engaged.

How do I know if my app category can support premium pricing?

Look at whether you're replacing an existing paid service and solving high-stakes problems—healthcare and finance apps can command £10+ monthly because users see clear value. Gaming and entertainment apps typically need volume-based models, while productivity apps fall somewhere between depending on their target market.
