Expert Guide Series

How Do I Know Which User Problems Are Worth Solving?

Have you ever looked at your product roadmap and wondered if you're actually solving the right problems? I've built dozens of apps over the years and honestly, one of the biggest mistakes I see teams make is jumping straight into solving problems without stopping to ask whether those problems are actually worth solving in the first place. It's a bit mad really—we spend months building features that nobody ends up using because we never properly validated whether the problem we were solving mattered to our users.

Here's the thing: your users will tell you about hundreds of problems if you ask them. Some are genuine pain points that cost them time or money. Others are just minor annoyances they mention in passing. And some? Well, some aren't really problems at all—they're just users describing solutions they've imagined without understanding what they actually need. I mean, if we built every feature users requested we'd end up with a bloated mess that nobody wants to use.

Not all problems are created equal, and solving the wrong ones can be more damaging than solving none at all.

The challenge isn't finding problems to solve; it's working out which problems deserve your limited time and budget. Should you fix that checkout flow issue that affects 2% of users, or focus on the onboarding experience that 50% of new users struggle with? What about that feature request that keeps coming up in support tickets versus the problem you've identified through your analytics data?

This guide will walk you through a practical framework for problem validation and user research prioritisation. We'll look at how to separate surface-level complaints from deep user needs, how to measure whether a problem is worth your resources, and how to build a system that helps you make these decisions consistently. Because at the end of the day, good design research isn't just about understanding problems—it's about knowing which ones matter enough to solve.

Understanding the Difference Between Surface Problems and Real Problems

Here's the thing—most people who come to me saying they want an app built are solving the wrong problem. They've identified something that bothers them or their users, but they haven't dug deep enough to find what's actually causing that frustration. It's like treating a headache when you really need glasses, you know?

A surface problem is what people complain about. A real problem is why they're complaining in the first place. Let me explain this properly because it matters more than almost anything else when building an app. When users say "your app is too slow," that's a surface problem. The real problem might be that they're trying to complete a task during their commute and need results in under 30 seconds or they'll give up entirely. See the difference?

I've spent years learning to spot this distinction, and honestly it still catches me out sometimes. Users are brilliant at telling you what annoys them but terrible at explaining why. That's not their fault—they're just describing their experience, not analysing it. That's our job. Understanding these nuances is particularly important when you're developing personas for different cultural contexts, where surface complaints might mask completely different underlying needs.

Common Examples of Surface vs Real Problems

  • Surface: "The checkout process has too many steps" — Real: Users don't trust us with their payment details yet
  • Surface: "Users aren't enabling push notifications" — Real: We haven't given them a compelling reason to stay connected
  • Surface: "People aren't using our social features" — Real: Our core functionality doesnt create shareable moments
  • Surface: "The onboarding takes too long" — Real: Users can't see value until screen 5, but most quit at screen 3

The way I figure out which is which? I ask "why" at least three times. Why do users want fewer steps? Because it feels like too much effort. Why does it feel like too much effort? Because they haven't decided if they want to buy yet. Why haven't they decided? Because they don't trust the product quality. Now we're getting somewhere useful.

Solving surface problems makes users slightly happier for a moment. Solving real problems changes behaviour and builds loyalty—and that's what successful apps actually need to do.

Validating Problems Through User Research

Right, so you've identified what you think are real problems—but here's the thing, your assumptions about what users need are probably wrong. I mean, not completely wrong, but wrong enough that you'll waste time and money if you don't validate them first. And I've seen this happen more times than I'd like to admit; a client comes to me absolutely convinced their users need feature X, we do some basic research, and it turns out users actually couldn't care less about feature X but are desperate for feature Y instead.

User research doesn't need to be complicated or expensive. Sure, you can hire a fancy research agency if you've got the budget, but honestly? Some of the best insights I've gathered came from simple 20-minute conversations with actual users. The key is talking to the right people and asking questions that get beyond surface-level responses. You want to understand not just what users say they do, but what they actually do—because those two things are often very different. This is especially crucial if you're working on something complex like explaining AI personalisation features, where users often can't articulate what they want until they see it in action.

Methods That Actually Work

Here are the research methods I use most often when validating problems:

  • One-on-one interviews (these give you the richest insights but take time)
  • Surveys for quantitative validation (great for confirming patterns you've spotted)
  • Observation sessions where you watch people use existing solutions
  • Social media listening to see what people complain about naturally
  • Support ticket analysis if you're working with an existing product

The mistake people make is treating research as a one-time thing. It's not. You should be talking to users constantly throughout development, not just at the start. What seems like a massive problem in early interviews might turn out to be less important once you dig deeper—or vice versa. I always recommend starting with at least 10-15 user conversations before making any major decisions about what problems to prioritise.

Record your user interviews (with permission obviously) so you can review them later. You'll catch things you missed the first time, and it's useful to have actual quotes when presenting findings to stakeholders who weren't in the room.

Measuring Problem Severity and Frequency

Right, so you've found some problems your users are facing—but how do you know which ones actually matter? I mean, not every problem is created equal, is it? Some problems will make users delete your app immediately, whilst others are just minor annoyances they'll put up with. The trick is figuring out which is which before you spend months building the wrong thing.

Here's what I do: I plot problems on two axes. Severity on one side (how painful is this problem?) and frequency on the other (how often does it happen?). A problem that's super painful but only affects three users once a year? That's probably not worth solving. But a moderately annoying issue that hits 80% of your users every single day? That's your priority right there. This approach is similar to what you'd do in measuring app feasibility study effectiveness, where you need clear metrics to guide decision-making.

The Simple Scoring System

You don't need fancy tools for this—honestly, a spreadsheet does the job perfectly. For severity, I ask users to rate the problem from 1-5 where 1 is "barely noticed it" and 5 is "almost deleted the app because of this". For frequency, track how often it actually happens in your analytics or just ask users directly. Once a month? Daily? Every time they use a specific feature?

Then multiply severity by frequency. A problem with severity 4 that happens daily (let's say frequency score 5) gets a score of 20. A problem with severity 5 that happens once a month (frequency score 1) only gets 5. See the difference? You want to tackle the high-scoring problems first because they're affecting the most people in the most painful ways.
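If you'd rather keep this outside a spreadsheet, here's a minimal sketch of the same severity-times-frequency scoring in TypeScript. The problem names and ratings are purely illustrative, not real data.

```typescript
// Minimal sketch of the severity x frequency scoring described above.
// Problem names and ratings here are illustrative placeholders.
interface Problem {
  name: string;
  severity: number;  // 1 = barely noticed it, 5 = almost deleted the app
  frequency: number; // 1 = roughly once a month, 5 = every day
}

const problems: Problem[] = [
  { name: "Login fails on first attempt", severity: 5, frequency: 4 },
  { name: "Search returns slow results", severity: 3, frequency: 5 },
  { name: "Settings page layout broken", severity: 2, frequency: 1 },
];

// Rank by severity x frequency: the highest scores are hitting the most
// people in the most painful ways, so they come first.
const ranked = problems
  .map(p => ({ ...p, score: p.severity * p.frequency }))
  .sort((a, b) => b.score - a.score);

ranked.forEach(p => console.log(`${p.name}: ${p.score}`));
```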

What the Numbers Actually Tell You

But here's the thing—don't just trust the numbers blindly. Sometimes a low-frequency, high-severity problem is actually a deal-breaker for a specific user segment you really care about. Maybe it only affects premium subscribers or your most active users. Context matters more than the raw score. For luxury brands, for instance, even small problems can damage premium customer relationships in ways that basic scoring doesn't capture.

I've seen teams obsess over problems that scored high on paper but turned out to be edge cases that didn't really impact the core experience. And I've seen them ignore problems with lower scores that were actually blocking entire user journeys. Use the scores as a starting point, not the final answer. Look at the data, yes, but also talk to your users and understand what's really stopping them from loving your app.

Problem Type                  | Severity (1-5) | Frequency (1-5) | Priority Score | Action
Login fails on first attempt  | 5              | 4               | 20             | Fix immediately
Search returns slow results   | 3              | 5               | 15             | Prioritise soon
Settings page layout broken   | 2              | 1               | 2              | Low priority
Checkout process confusing    | 5              | 3               | 15             | High priority

Track these scores over time too. A problem that starts small can grow as your user base changes or as you add new features. What wasn't worth solving six months ago might be your biggest issue today. Keep measuring, keep listening, and adjust your priorities accordingly.

Understanding the Cost of Solving Each Problem

Right, so you've identified a bunch of user problems and validated that they're actually worth paying attention to. Good start. But here's where things get tricky—just because a problem is real and affects loads of users doesn't mean you should rush off and fix it straight away.

Every problem has a cost attached to solving it, and I'm not just talking about money here (though that's obviously a big part of it). You need to think about development time, testing requirements, maintenance overhead, and the knock-on effects each solution might have on your existing app. I've seen teams spend three months building a feature that solved a genuine user problem, only to realise they've introduced two new problems in the process. It's a bit mad really. The complexity varies wildly depending on your app type—broadcasting apps or location-based features can add significant technical overhead that affects every solution you consider.

Start by breaking down the technical complexity of each solution—does it require new infrastructure? Will you need to integrate third-party services? Are there security considerations that'll add weeks to your timeline? A simple-looking feature can turn into a six-month project once you dig into the technical requirements. And then there's design complexity; some problems need a complete rethink of your user interface whilst others might just need a button moved or some clearer copy.

The cheapest solution isn't always the best one, but the most expensive solution is rarely the only option

Maintenance is another thing people forget about. That shiny new feature you build today? Someone's going to have to support it, update it, and fix it when it breaks. I always tell clients to think about the lifetime cost of a feature, not just the upfront development expense—it makes the conversation about priorities much more honest and way less painful down the line. There are also legal considerations that can impact feasibility, especially for apps handling sensitive data or operating in regulated industries.

Looking at What Your Competition Gets Wrong

Your competitors are making mistakes right now—and that's brilliant news for you. I mean, every time an app leaves a user frustrated or confused, that's an opportunity for your app to step in and do things better. The trick is knowing where to look and what you're actually looking for.

Start by downloading your competitors' apps and using them properly. Not just opening them once, but actually trying to accomplish real tasks. Create accounts, go through their onboarding, use their main features for at least a week. You'll be amazed at what you discover when you stop treating this like research and start behaving like an actual user. I do this with every project and honestly, it's where some of the best insights come from. If you want to be more systematic about it, you can even set up automated tracking of competitor updates to stay on top of their changes over time.

Common Areas Where Apps Get It Wrong

Most apps fail in predictable ways. They make their onboarding too long, they ask for permissions before explaining why, they hide their best features behind confusing navigation. But here's the thing—these aren't just random mistakes. They're usually signs that the team didn't properly understand what problems users were trying to solve in the first place.

Pay close attention to app reviews, especially the ones with 2 or 3 stars. Those are gold. The 1-star reviews are often just people venting, but the middle ratings? Those are from users who wanted to love the app but couldn't because of specific issues. They'll tell you exactly what's not working.

What to Track When Studying Competitors

  • How many steps does it take to complete basic tasks
  • What features do users mention wanting in reviews
  • Where do users get stuck or confused in the flow
  • What complaints appear repeatedly across different reviews
  • How quickly can you accomplish your main goal
  • What information do they ask for that seems unnecessary
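To keep those observations comparable across apps, it helps to log them in one consistent shape. Here's a rough sketch; the field names and the example entry are entirely made up for illustration.

```typescript
// Rough sketch of one consistent shape for competitor observations.
// Field names and the sample entry are illustrative, not real findings.
interface CompetitorAudit {
  appName: string;
  stepsToCompleteCoreTask: number; // taps/screens needed for the main job
  secondsToCoreGoal: number;       // how quickly you reached your main goal
  requestedFeatures: string[];     // features users ask for in reviews
  stuckPoints: string[];           // where users get confused in the flow
  repeatedComplaints: string[];    // complaints that recur across reviews
  unnecessaryDataAsked: string[];  // information requests that feel excessive
}

const audits: CompetitorAudit[] = [
  {
    appName: "Competitor A",
    stepsToCompleteCoreTask: 9,
    secondsToCoreGoal: 140,
    requestedFeatures: ["offline mode"],
    stuckPoints: ["permissions requested before any explanation"],
    repeatedComplaints: ["onboarding too long"],
    unnecessaryDataAsked: ["date of birth at sign-up"],
  },
];

console.log(`${audits.length} competitor audit(s) recorded`);
```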

The apps that succeed aren't necessarily the ones with more features—they're the ones that solve the right problems in the simplest way possible. When you study what your competition gets wrong, you're not looking to copy what they do right; you're looking for the gaps they've left wide open for you to fill.

Testing Problems Before Building Solutions

Right then—this is where most people get it wrong, and honestly it's one of the biggest mistakes I see in our industry. You've done your research, you've validated the problem, you've measured how severe it is...and now you want to jump straight into building the solution. But here's the thing—you don't actually know if your solution is going to work yet, do you?

Testing problems before you build anything is basically about creating quick experiments that prove whether solving this particular problem will actually matter to your users. I mean, you can have the most severe, frequently-occurring problem in the world, but if your solution doesn't resonate with people or they won't change their behaviour to use it, you've wasted a lot of time and money. This is particularly important for emerging technologies—testing AR features requires even more validation because users often don't know how they'll respond to new interaction patterns.

The beauty of problem testing is that it's cheap and fast. You're not building features yet; you're testing assumptions about user needs and whether your proposed solution direction makes sense. I've seen projects save hundreds of thousands by spending just a few weeks on proper problem testing first.

Quick Ways to Test Problems

There are loads of ways to test problems without writing code, and some of them take literally hours rather than weeks. Prototype testing is probably the most common—you create a simple mockup (it can even be on paper!) and watch people try to use it. Landing page tests are brilliant too; you create a page describing your solution and see if people actually sign up for updates or express genuine interest. This is also a great way to build your email list before launch while validating demand.

Fake door testing is a personal favourite of mine, though it's a bit cheeky. You add a button or menu item for a feature that doesn't exist yet and track how many people click it. If nobody clicks? That problem probably isn't worth solving right now.
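A fake door needs almost no code: just a visible entry point and an analytics event when someone taps it. The sketch below assumes a hypothetical trackEvent helper and feature name; swap in whatever analytics and UI components your app already uses.

```typescript
// Fake door sketch: the button exists, the feature behind it doesn't.
// trackEvent, showMessage and the feature name are hypothetical stand-ins
// for whatever analytics and UI helpers your app already has.
const FAKE_DOOR_FEATURE = "export-to-pdf";

function trackEvent(name: string, properties: Record<string, string>): void {
  // Forward to your real analytics provider here.
  console.log("analytics:", name, properties);
}

function showMessage(text: string): void {
  // Swap for your app's own toast or dialog component.
  console.log(text);
}

function onFakeDoorTapped(): void {
  trackEvent("fake_door_tapped", { feature: FAKE_DOOR_FEATURE });
  // Be honest rather than leaving users at a dead end.
  showMessage("This feature is coming soon. Thanks for your interest!");
}

onFakeDoorTapped();
```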

Always test with real users from your target audience, not your team or friends—they'll tell you what you want to hear rather than what's actually true about the problem you're trying to solve.

What Good Problem Tests Look Like

A good problem test should answer specific questions about user needs and behaviour. Will people actually change what they're doing now to use this solution? Is the problem painful enough that they'll invest time learning something new? Does your approach make sense to them, or are you solving it in a way that doesn't match their mental model?

The tests that work best are the ones that mimic real usage as closely as possible. Sure, watching someone interact with a prototype isn't the same as using a live app, but it's close enough to spot major issues with your problem-solving approach before you've committed serious development resources.

You know what? Some of the best insights come from what users don't do during testing. If they're confused about where to start, or they try to accomplish the task in a completely different way than you expected, that's incredibly valuable information about whether you've really understood the problem properly.

Keep track of your test results in a simple format—what you tested, what you learned, and whether it validated or invalidated your assumptions about the problem. Here's how I structure my problem testing results:

Test Method       | What It Reveals                                     | Time Required
Prototype Testing | Whether users understand your solution approach     | 3-5 days
Landing Page Test | If there's genuine interest in solving this problem | 1-2 days
Fake Door Test    | How often users would actually use the feature      | 2-4 weeks
Wizard of Oz Test | Whether a manual solution works before automating   | 1-3 weeks
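If you want that log to live somewhere more structured than a notes document, a tiny record type does the job. This is just one possible shape, and the example entry is invented for illustration.

```typescript
// One possible shape for a problem-test log. The sample entry is invented.
interface ProblemTestResult {
  method: "prototype" | "landing-page" | "fake-door" | "wizard-of-oz";
  whatWeTested: string;   // the assumption or question behind the test
  whatWeLearned: string;  // the headline finding, in plain language
  assumptionValidated: boolean;
  completedOn: string;    // ISO date, e.g. "2024-03-01"
}

const testLog: ProblemTestResult[] = [
  {
    method: "landing-page",
    whatWeTested: "Will commuters leave an email address for a faster checkout?",
    whatWeLearned: "Sign-ups were far lower than hoped; interest looks weak",
    assumptionValidated: false,
    completedOn: "2024-03-01",
  },
];

const invalidated = testLog.filter(t => !t.assumptionValidated);
console.log(`${invalidated.length} assumption(s) invalidated so far`);
```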

The biggest mistake people make with problem testing is treating it like a box-ticking exercise. They run through the motions but don't actually listen to what the tests are telling them about user needs and problem validation. If your tests are showing that people aren't that bothered about the problem, believe them—don't try to convince yourself that they just don't understand the vision yet.

Creating a Simple System for Problem Prioritisation

Right, so you've validated problems, measured severity, worked out costs—now what? You need a way to actually decide which problems to tackle first. And honestly, this is where a lot of teams get stuck because they overthink it.

I use a really straightforward scoring system that's served me well across dozens of app projects. It's not fancy, but it works. You score each problem on three factors: severity (how much pain does it cause users?), frequency (how many users experience it?), and feasibility (how realistic is it to solve?). Give each factor a score from 1-10, multiply them together, and boom—you've got a priority score. This systematic approach is something you might want to document properly in your feasibility study report to justify decisions to stakeholders.

Here's the thing though—don't get too caught up in making the scores "perfect". I mean, whether something gets an 8 or a 9 for severity doesn't really matter that much; what matters is that you're being consistent across all the problems you're evaluating. The point isn't mathematical precision, it's making sure you're comparing apples to apples.

Quick Priority Framework

Score Range | Priority Level | Action
700-1000    | Critical       | Start immediately
400-699     | High           | Plan for next sprint
200-399     | Medium         | Add to backlog
Below 200   | Low            | Document and revisit later
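Here's a minimal sketch of that scoring and the bands from the table above, written out in TypeScript. The thresholds are the ones in this guide; the example inputs are made up.

```typescript
// Sketch of the three-factor score (each factor 1-10) and the priority
// bands from the table above. Thresholds follow this guide; example
// inputs are made up.
function priorityScore(severity: number, frequency: number, feasibility: number): number {
  // Product ranges from 1 (1 x 1 x 1) to 1000 (10 x 10 x 10).
  return severity * frequency * feasibility;
}

function priorityLevel(score: number): string {
  if (score >= 700) return "Critical: start immediately";
  if (score >= 400) return "High: plan for next sprint";
  if (score >= 200) return "Medium: add to backlog";
  return "Low: document and revisit later";
}

const score = priorityScore(8, 9, 10); // 720
console.log(score, priorityLevel(score)); // 720 "Critical: start immediately"
```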

The beauty of this system? You can adjust it based on your specific situation. Maybe you want to weight feasibility more heavily if you're working with limited resources. Or perhaps frequency matters more for a consumer app trying to grow quickly. The framework should serve you, not the other way around.

One last bit—review your priorities regularly. User needs change, market conditions shift, and what seemed critical three months ago might not matter anymore. I typically revisit the priority list every quarter or after any major release. Sometimes you'll discover that your app idea needs more time to mature in the market, which completely changes your problem prioritisation.

Conclusion

Look—I'm not going to pretend that figuring out which problems to solve is easy, because honestly it's not. But here's what I've learned after building apps for years and years: the companies that take time to properly validate user needs always end up with better products than those who just build whatever's on their mind that day. It's really that simple.

The thing about user research prioritisation is that it doesn't stop once you've launched your app. You'll keep discovering new problems, users will report issues you never expected, and the market will shift in ways you can't predict right now. That's just how mobile works. What matters is having a system in place—something you can return to whenever you need to make decisions about feature planning or where to invest your development time next.

I mean, you don't need fancy tools or expensive research agencies to validate problems properly. Start small. Talk to your actual users (not just the ones who love everything you do). Watch how people interact with your app. Look at the data but also listen to the frustration in someone's voice when they describe a problem they're having. Both matter.

The best apps I've worked on are the ones where we said no to good ideas so we could focus on solving the right problems first. And yeah, saying no is hard—especially when you've got stakeholders pushing for their favourite features or competitors launching something that looks impressive. But design research shows us time and again that depth beats breadth. Solve one problem brilliantly rather than ten problems badly.

So take what you've learned here, adapt it to your situation, and remember that understanding user needs isn't a one-time thing—it's an ongoing conversation with the people who actually use what you build.

Frequently Asked Questions

How do I know if a user problem is worth solving or just a minor complaint?

Look at both the severity (how much pain it causes) and frequency (how many users experience it) of the problem. A good test is asking "why" at least three times to get beyond surface complaints to the real underlying issue. If the problem affects many users regularly or blocks critical user journeys, it's likely worth prioritising.

What's the quickest way to validate whether users actually care about a problem I've identified?

Start with simple 20-minute conversations with 10-15 actual users from your target audience. You can also create basic prototypes or landing pages to test interest, or use "fake door" testing by adding buttons for non-existent features to see if people click them. These methods take days rather than months and give you real validation data.

How should I prioritise problems when I have limited development resources?

Score each problem on severity (1-10), frequency (1-10), and feasibility (1-10), then multiply these scores together. Problems scoring 700+ should be tackled immediately, whilst those under 200 can wait. Remember to weight feasibility more heavily if you're resource-constrained, and always consider the long-term maintenance costs of solutions.

Should I build every feature that users request in feedback or reviews?

Absolutely not. Users are brilliant at describing what annoys them but often suggest solutions rather than explaining their underlying needs. Focus on the problems mentioned repeatedly across multiple users, and always dig deeper to understand why they're requesting specific features before building anything.

How often should I reassess my problem priorities?

Review your priorities at least quarterly or after any major product release. User needs change, market conditions shift, and problems that seemed critical months ago might not matter anymore. Keep measuring problem severity and frequency over time, as issues can grow or shrink as your user base evolves.

What's the best way to study competitor problems without just copying their solutions?

Use competitor apps properly for at least a week, focusing on completing real tasks rather than just browsing features. Pay special attention to 2-3 star reviews (not just 1-star rants) to understand where users get frustrated. Look for gaps in their solutions and repeated complaints across different apps in your space.

How can I test if my solution will work before spending months building it?

Create quick experiments like paper prototypes, landing page tests, or "Wizard of Oz" testing where you manually provide the service before automating it. These methods help you validate whether users will actually change their behaviour to use your solution, which is often the biggest risk in product development.

What's the difference between a surface problem and a real problem?

Surface problems are what users complain about directly, whilst real problems are the underlying causes of those complaints. For example, "your checkout has too many steps" might be the surface issue, but the real problem could be that users don't trust you with their payment details yet. Always ask "why" multiple times to uncover the deeper need.
