Expert Guide Series

How Do I Research Competitors Without Copying Their Mistakes?

Most app downloads happen because someone searched for a solution to their problem, found three or four options, and picked the one that looked best in about ninety seconds of scrolling through screenshots and reviews. That's your window. And if you've spent months building an app based on copying what your competitors did, you've already lost—because you're now competing on their terms, with their mistakes baked into your product from day one.

I've watched this happen more times than I'd like to admit. A client comes to us with detailed notes about competitor features, convinced they need everything their rivals have plus "just one or two extra things" to stand out. But here's what they don't see: those competitor apps are probably struggling too. Maybe their retention is terrible. Maybe users hate that feature everyone assumes is working great. Maybe it only exists because a stakeholder insisted on it three years ago and nobody's had the courage to remove it since.

The apps that win aren't the ones that copy the competition better—they're the ones that solve user problems the competition missed entirely.

When I'm doing competitive analysis for a fintech project or a healthcare app, I'm not taking screenshots of features and ticking boxes on a spreadsheet. I'm trying to understand where users are frustrated, where they're working around limitations, where they're abandoning the experience. That's where the real opportunities live. The trick is learning how to research competitors without inheriting their baggage, and that's exactly what this guide is about—showing you how to extract genuine insight from competitive analysis without falling into the trap of building "me too" products that nobody actually needs.

Why Most Competitor Research Actually Makes Your App Worse

I've seen this happen more times than I can count—someone comes to me with their competitor research spreadsheet filled with features from the top 10 apps in their category, and they want us to build something that includes everything those apps do, plus a few extras. Sounds logical, right? But here's the thing—that approach almost always leads to bloated, confused apps that try to do too much and end up doing nothing particularly well. It's a feature wish-list masquerading as research, and honestly? It's one of the quickest ways to waste your development budget.

The problem is that copying what you see in competitor apps means you're copying their solutions without understanding their problems. When I worked on a fitness tracking app a while back, the client was adamant they needed a social feed because all their competitors had one. After actually speaking to users, we discovered they didn't want social features at all—they just wanted better integration with their existing fitness equipment. The competitors had built social features because they'd launched years earlier when that was trendy, not because users needed it. Some of them, we later discovered, were even planning to remove those features based on their own usage data.

The Three Ways Competitor Analysis Goes Wrong

Most people make these mistakes without even realising it. They're easy traps to fall into:

  • Copying features without understanding user context—what works for a competitor with 5 million users might be completely wrong for your launch audience
  • Assuming popular features are successful features—I've analysed apps where the most prominent features had less than 10% engagement but they kept them because they looked good in screenshots
  • Missing the business model differences—a free app can justify different features than a premium one, and competitors charging £10/month have different priorities than those making money from ads

You see, when you look at competitor apps, you're seeing the finished product but not the journey. You don't know which features failed in testing, which ones they're keeping just because they're too expensive to remove, or which ones are there to satisfy an investor requirement rather than user needs. I mean, think about it... you're essentially trying to reverse-engineer decisions without access to any of the data that informed those decisions in the first place.

What Actually Happens Behind The Scenes

When I analyse a competitor's app for a client, I'm not just looking at what they built—I'm trying to figure out why they made specific choices and whether those choices are actually working. For a healthcare app we developed, everyone was copying this one market leader's onboarding flow that took about 8 minutes to complete. It seemed like it must be effective if the market leader was using it. But when we dug into app store reviews and did our own testing, we found users were abandoning it at massive rates; the market leader was succeeding despite their onboarding, not because of it. They just had enough brand recognition and marketing budget to overcome that friction.

The apps you're studying have their own technical debt, legacy decisions, and compromises that aren't visible from the outside. That beautiful design might be hiding a complete mess of code that crashes on older devices. That smooth feature might only work because they have a team of 20 engineers maintaining it... something you definitely can't replicate with your budget. Understanding whether your app team can build your vision is crucial before you start copying complex features from well-funded competitors.

Setting Up Your Research Framework The Right Way

When I started doing competitor research properly (after making plenty of mistakes early on), I created a simple system that's helped me analyse hundreds of apps across different industries. The thing is, most people start in the wrong place—they download competitor apps and start clicking around aimlessly, taking screenshots of features they like. That's not research; that's just app tourism.

Here's what I do instead. Before I even open a competitor's app, I write down three things: who the app is for, what problem it's supposed to solve, and what success looks like for that user. This might sound basic but you'd be surprised how many people skip this step and end up confused about why a competitor made certain design choices. I worked on a healthcare app where the client wanted to copy a competitor's complex symptom checker, but when we actually mapped out the user journey, we realised their target audience (elderly patients) needed something much simpler. The competitor was targeting medical professionals, not patients.

Your research framework needs structure, otherwise you'll drown in information. I use a spreadsheet (nothing fancy, just Google Sheets) with these columns: App Name, Primary User Type, Core Problem Solved, Onboarding Flow, Key Features, Monetisation Method, Pain Points Observed, and Opportunities Spotted. That last column is where the magic happens—it's where you identify what competitors are doing wrong or missing entirely.
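If you prefer keeping the tracker as data rather than a spreadsheet, the same columns translate directly into a record type. Here's a minimal Python sketch; the field names mirror the columns above, and the example entry is entirely hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CompetitorRecord:
    """One row of the research tracker described above."""
    app_name: str
    primary_user_type: str
    core_problem_solved: str
    onboarding_flow: str
    key_features: list = field(default_factory=list)
    monetisation_method: str = ""
    pain_points_observed: list = field(default_factory=list)
    # The column where the magic happens:
    opportunities_spotted: list = field(default_factory=list)

# Hypothetical example entry
record = CompetitorRecord(
    app_name="ExampleFit",
    primary_user_type="Casual gym-goers",
    core_problem_solved="Logging workouts quickly",
    onboarding_flow="6 screens, email required up front",
    key_features=["workout log", "social feed"],
    monetisation_method="Freemium, £4.99/month",
    pain_points_observed=["social feed unused", "slow equipment sync"],
    opportunities_spotted=["one-tap equipment pairing"],
)
```

The point of structured records is the like-for-like comparison mentioned later: every competitor gets the same fields, so gaps stand out instead of hiding in freeform notes.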

What To Document During Research

  • User types the app serves and which segments it ignores
  • Time to complete core actions (checkout, booking, search results)
  • Number of steps in critical user journeys
  • Error messages and how they guide users
  • Permission requests and when they appear
  • Loading states and empty states design
  • Support options available and their visibility

One thing I've learned is that you need to separate what competitors are doing from why they're doing it. A fintech app might have a complex verification process—not because they wanted to add friction, but because regulatory requirements forced their hand. If you don't understand the constraints competitors face, you'll misinterpret their decisions and possibly copy the wrong things. For financial apps, there are specific regulatory requirements that drive many design decisions, regardless of user experience preferences.

Set a timer for 20 minutes per competitor app. This forces you to focus on what actually matters rather than getting lost in minor details. I usually research 5-7 direct competitors thoroughly rather than 20 superficially.

Creating Your Analysis Categories

I break my research into four categories: User Experience (how does it feel to use?), Business Model (how do they make money?), Technical Performance (speed, crashes, bugs), and Market Position (who are they targeting?). For an e-commerce app I worked on, we discovered that none of the competitors had optimised their checkout for returning customers—everyone had to re-enter payment details every time. That single insight from our framework led to a feature that increased conversion by 23%.

The framework shouldn't be rigid though. Adjust it based on your industry and what matters to your users. A meditation app needs different research categories than a banking app. But whatever structure you choose, stick to it consistently across all competitors so you can compare like-for-like. Otherwise you're just collecting random observations that won't help you make better decisions.

Finding What Users Actually Need vs What Competitors Build

Here's something I've seen countless times—someone shows me their competitor analysis spreadsheet with every feature neatly listed out, and they're ready to build something remarkably similar. The problem? They've documented what competitors built, not what users actually need. These are two completely different things, and confusing them is one of the fastest ways to waste your development budget.

I worked with a fintech client who wanted to add a complex portfolio tracking feature because three of their competitors had it. But when we actually spoke to their users, we found out most people just wanted to see their account balance faster; they weren't managing multiple portfolios at all. The competitors had built that feature to look sophisticated to investors, not because users were asking for it. We built a simple balance widget instead and user engagement went up by 40%. This is why finding genuine gaps in the market often comes from understanding user needs rather than copying competitor features.

Start With User Problems Not Feature Lists

The way I approach this now is to look at competitor app store reviews first, before I even open their apps. Read the 2-star and 3-star reviews especially—the 1-stars are often just people having a bad day, and the 5-stars don't tell you much. But those middle reviews? That's where users explain what they actually wanted versus what they got. You'll spot patterns quickly. This approach also helps when you need to understand how to position your app differently from competitors who might be struggling with review quality.
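Once you have review data in hand (however you export it), the filtering itself is trivial. A rough sketch, assuming you already hold reviews as (rating, text) pairs; the stopword list and sample reviews are illustrative, not a real dataset:

```python
from collections import Counter
import re

def middle_reviews(reviews):
    """Keep only 2- and 3-star reviews, where users explain
    what they wanted versus what they got."""
    return [text for rating, text in reviews if rating in (2, 3)]

def common_complaints(texts, top_n=5):
    """Crude pattern-spotting: count frequent words, skipping filler."""
    stopwords = {"the", "a", "and", "to", "i", "it", "is", "of", "but", "app"}
    words = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in stopwords and len(word) > 3:
                words[word] += 1
    return words.most_common(top_n)

# Hypothetical sample data
reviews = [
    (1, "Worst app ever"),  # skipped: often just someone having a bad day
    (2, "Entering multiple prescriptions is so slow"),
    (3, "Good idea but entering each prescription takes ages"),
    (5, "Love it!"),        # skipped: tells you little
]
print(common_complaints(middle_reviews(reviews)))
```

Word counts are a blunt instrument, but on a few hundred middle-star reviews they surface recurring friction points fast enough to tell you where to read closely.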

For a healthcare app we developed, every competitor had medication reminders (obviously) but the reviews kept mentioning how hard it was to enter multiple prescriptions quickly. That's the gap we focused on—a faster input method using voice recognition—and it became our strongest differentiator. The competitors had the right feature but hadn't solved the actual friction point users experienced daily.

Watch What Users Do Not What Competitors Say

I always tell clients to look at support forums, social media mentions, and Reddit threads about competitor apps. Users complain there freely, and more importantly, they explain their workarounds. If people are creating elaborate workarounds for something, that's a massive signal about an unmet need. One e-commerce client discovered users were screenshotting products and sharing them in WhatsApp groups because the competitor apps didn't have proper wish list sharing. We built that feature properly and it drove significant viral growth... something we'd never have found just by analysing competitor feature sets. Learning what social media reveals about competitor strategies can uncover these hidden user behaviours that point to opportunities.

Testing Competitor Apps Like a User Not a Developer

When I first started doing competitive analysis, I'd download rival apps and immediately start pulling them apart—checking their tech stack, looking at how they'd structured their navigation, getting all technical about it. Complete waste of time, honestly. Because here's what I learned after building apps for healthcare companies and fintech startups: your users don't care about your elegant navigation system or whether you're using React Native or Swift. They just want to book an appointment or send money to their mate without thinking about it.

The trick is to actually use competitor apps the way a real person would. I mean properly use them, not just click through screens whilst taking notes. Download a competing fitness app? Actually try to log a workout. Testing a banking app? Attempt to set up a standing order like you would for your rent. You'll spot issues you'd never find by just looking at the interface—loading times that feel too long even if they're technically fine, confirmation messages that make you second-guess whether something worked, onboarding steps that seem simple but somehow confuse you. While the underlying tech stack choices matter for development, they're invisible to users who just want things to work smoothly.

The best insights come from the moments when you're genuinely frustrated or delighted, not from systematic feature comparisons

I always test apps in real-world conditions too: on the bus with a dodgy connection, in bright sunlight when you can barely see the screen, or when you're genuinely tired and just want something to work. That's when you discover that a competitor's app loses its connection gracefully whilst another just crashes. It's these real usage patterns that reveal opportunities—not feature lists or design audits. Sure, document what you find, but focus on how things feel rather than how they look or what framework they're built with.

Spotting the Gaps Your Competitors Missed

The best opportunities in the app market aren't where your competitors are looking—they're in the spaces between what users need and what currently exists. I've found some of the most successful apps I've worked on came from spotting these gaps, but honestly, it takes a different way of looking at the market than most people use.

When I was working on a healthcare booking app, every competitor was focusing on adding more features to their calendar systems. More colours, more views, more customisation. But here's what they all missed: patients weren't struggling with how appointments looked, they were struggling with cancellations and last-minute changes. We built a simple waiting list feature that let people grab cancelled slots, and it became the app's most-used feature by far. The gap wasn't technical—it was about understanding what actually frustrated users in their daily experience.

Looking at what people complain about

App store reviews are gold for this, but you need to read between the lines. Don't just look at what users say they want; look at the problems they describe. I spent hours going through reviews for fintech apps once, and noticed people kept mentioning they "forgot" to check their spending. Nobody was asking for notifications, but that's what they needed. The gap was in helping users remember, not in giving them prettier graphs to look at. Understanding why users ignore most notifications can help you design reminder systems that actually work instead of annoying people.

Finding the underserved use cases

Most apps are built for the average user, which means edge cases get ignored. But sometimes those edge cases represent huge opportunities. When analysing e-commerce apps, I noticed every competitor optimised for single purchases—but nobody thought about people who buy the same items repeatedly. Adding a "buy again" feature seems obvious now, but it was a massive gap that saved users tons of time. These underserved scenarios often represent your most loyal potential users because they're the ones current solutions frustrate the most.

Separating Good Features From Popular Ones

This is where things get tricky, because what's popular isn't always what's good—and I've seen too many apps fail because they couldn't tell the difference. A feature might have millions of users and still be a terrible idea for your app. Take dark mode, for example. Every client asks for it these days because "all the big apps have it." Sure, it's popular. But if you're building a photo editing app where colour accuracy matters, dark mode could actually make your app worse for users who need to see true colours. I've had to have this exact conversation with clients who wanted dark mode just because Instagram has it, without thinking about whether it served their users' needs.

The real skill in competitive analysis is figuring out why a feature works for one app but might not work for yours. When I built a fitness tracking app a few years back, the client wanted social sharing because all the big fitness apps had it. We tested it with actual users first—turns out most people didn't want their workout data shared publicly. They found it embarrassing. The popular feature would have hurt retention, not helped it. We kept it but made it private by default, and engagement was much better. Understanding what makes apps genuinely engaging versus just popular helps you make better feature decisions.

How to Test Feature Value

Here's my process for separating genuinely useful features from ones that just look good in screenshots:

  • Check how often users actually access the feature—not just its presence
  • Look at user reviews mentioning the feature (positive and negative)
  • Time yourself using the feature; if it takes more than three taps to access, question its real value
  • Ask whether the feature solves a problem or creates a new one
  • Test if removing it would genuinely upset users or just look bad in marketing
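The checks above boil down to a handful of yes/no signals you can score. A sketch of that idea; the weights, thresholds, and the `compare_products` findings are made up for illustration, not a standard rubric:

```python
def feature_worth_building(observed):
    """Score a competitor feature against the checks above.
    `observed` is a dict of findings from your own testing."""
    score = 0
    if observed.get("engagement_rate", 0) >= 0.10:
        score += 2  # actually used, not just present
    if observed.get("mentioned_positively_in_reviews"):
        score += 1
    if observed.get("taps_to_access", 99) <= 3:
        score += 1  # buried features rarely earn their place
    if observed.get("solves_problem_without_creating_one"):
        score += 2
    if observed.get("removal_would_upset_users"):
        score += 2
    return score  # e.g. treat 5+ as "worth a closer look"

# Hypothetical findings for a "compare products" feature
compare_products = {
    "engagement_rate": 0.01,  # nobody used it; they opened tabs instead
    "mentioned_positively_in_reviews": False,
    "taps_to_access": 4,
    "solves_problem_without_creating_one": False,
    "removal_would_upset_users": False,
}
print(feature_worth_building(compare_products))
```

A scoring function like this won't make the decision for you, but it forces every feature through the same questions instead of letting screenshot appeal carry the argument.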

I worked on an e-commerce app that had a "compare products" feature because every competitor had one. Our user testing showed that literally nobody used it—they'd just open multiple tabs instead. We removed it and the app got faster, simpler, and users didn't even notice it was gone. That's the thing about popular features... sometimes they're just cargo cult design, copied from app to app without anyone asking if they're actually needed.

Download your top three competitors' apps and use them for real tasks, not just browsing. Time how long it takes to complete actions and count the steps involved. Features that take more than 5-6 taps are probably adding complexity without much value, no matter how "popular" they seem in the market.

The Retention Test

The best way to judge a feature? Look at retention data if you can access it, or use app analytics tools to estimate engagement. A feature might drive downloads but if it doesn't keep people coming back, it's not actually good—it's just good marketing. I've seen apps with dozens of features that users try once and never touch again. Meanwhile, the best apps I've built often have fewer features but much higher engagement because every feature earns its place by solving a real problem users have repeatedly. This is also why free trials work better than feature comparisons for conversion—users can experience real value rather than just seeing a list of capabilities.

Turning Research Into Design Decisions That Work

Here's where most people mess up—they've done all this research, they've got notebooks full of competitor screenshots and feature lists, and then they basically recreate what everyone else is doing with a slightly different colour scheme. I've seen it happen so many times it's almost predictable at this point. The whole point of your research was to avoid copying mistakes, not to build a Frankenstein app made of bits from your competitors.

What I do with my team is create what we call a decision framework. Sounds fancy, but it's really just a structured way to turn observations into actual design choices. For each feature or pattern you found during research, you need to ask three questions: Does this solve a real user need we've identified? Can we do it better than the existing solutions? And honestly, do we have the resources to build and maintain it properly? That last one is critical because I've worked on fintech projects where clients wanted to copy every feature from Monzo and Revolut, but they had about 5% of the budget. You've got to be realistic. Remember that even after launch, apps need ongoing investment to maintain and improve features over time.

From Notes to Wireframes

The actual process I use looks something like this—I take my research findings and group them into themes rather than keeping them as a list of features. So instead of "competitors have biometric login, push notifications, dark mode" I'm thinking about the broader theme of "users need quick, secure access with minimal friction." That's what lets you make design decisions that solve the same problem but in your own way. Maybe your solution isn't biometric login at all; maybe it's a smart PIN system that learns usage patterns.

Priority Matrix That Actually Makes Sense

I always build a simple priority matrix with my clients that maps user value against development effort. It's not rocket science but you'd be surprised how many teams skip this step and just start building whatever seems cool. The sweet spot? High user value, medium effort. Those are your wins. I worked on a healthcare app where we found that competitors all had complex symptom checkers that nobody really used because they were too time-consuming. Our research showed users actually just wanted quick access to their appointment history and test results—way simpler to build, way more valuable to users.
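The matrix is easy to keep as data too. A sketch that flags the high-value, low-to-medium-effort sweet spot; the 1-5 scales, thresholds, and example entries are mine, loosely based on the healthcare example above:

```python
# Candidate features scored 1-5 for user value and development effort
candidates = [
    {"name": "complex symptom checker", "value": 2, "effort": 5},
    {"name": "appointment history",     "value": 5, "effort": 2},
    {"name": "test results view",       "value": 5, "effort": 3},
    {"name": "dark mode",               "value": 3, "effort": 3},
]

def sweet_spot(features, min_value=4, max_effort=3):
    """High user value, low-to-medium effort: those are your wins."""
    picks = [f for f in features
             if f["value"] >= min_value and f["effort"] <= max_effort]
    # Highest value first; break ties by lower effort
    return sorted(picks, key=lambda f: (-f["value"], f["effort"]))

for f in sweet_spot(candidates):
    print(f["name"])
```

The scores themselves should come out of your research notes and user conversations, not gut feel; the matrix just makes the trade-offs visible when a stakeholder pushes for the expensive, low-value item.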

One thing that's helped me over the years is documenting why you're making each decision, not just what you're building. When a stakeholder asks "why don't we have feature X like competitor Y does?" you can point to your research and explain exactly why you chose a different approach. This saved my neck on an e-commerce project where the client wanted to copy Amazon's navigation structure—our research showed their specific user base would be completely confused by it, so we built something simpler that actually tested better. For professional apps especially, balancing functionality with ease of use often means choosing different solutions than consumer-focused competitors.

Research Finding → Design Decision → Why It Works

  • Competitors use complex multi-step onboarding → Single screen with progressive disclosure → Users abandon after step 2; we get the same info with less friction
  • All apps have extensive feature menus → Context-aware interface showing relevant features → Reduces cognitive load; users find what they need faster
  • Standard notification patterns everyone uses → AI-driven notification timing based on user behaviour → Better engagement without being annoying

The biggest mistake I see is treating your research as a checklist. "Competitor A has this, competitor B has that, so we need both." That's not design thinking, that's just copying with extra steps. Instead, use your research to understand what problems exist in the market, then solve those problems in ways that fit your specific users and your business model. Sometimes that means doing less than competitors, which feels counterintuitive but often leads to better apps.

Conclusion

The truth about competitor research is that its real value isn't in copying what works—it's in understanding why certain things work for specific users and then deciding whether those same principles apply to your situation. I've seen too many apps fail because they treated competitive analysis like a shopping list, ticking off features without understanding the thinking behind them. When we built a healthcare appointment booking app, we looked at what competitors were doing with reminders and notifications, but instead of copying their exact approach we spoke to actual patients about what they needed. Turns out they wanted less communication, not more, which was completely opposite to what every competitor was doing.

What matters most is building your own framework for evaluation. You need to test competitor apps as a real user would, document the gaps between what users want and what currently exists, and then separate genuinely good features from ones that are just popular because everyone else has them. I mean, that's the real skill here—knowing when to follow patterns and when to break them.

The research phase should inform your design decisions, not dictate them. When you've done this properly you'll have a clear picture of where your app can genuinely add value rather than just existing as another option in a crowded market. And honestly? That clarity is worth more than any feature list you could steal from competitors. Your job isn't to build what already exists; it's to build something that solves problems better than what's currently available, even if that means doing less or doing things differently to everyone else in your space.

Frequently Asked Questions

How long should I spend researching each competitor app?

I recommend setting a 20-minute timer per competitor app to force yourself to focus on what actually matters rather than getting lost in minor details. It's better to research 5-7 direct competitors thoroughly than 20 apps superficially, as you'll get more actionable insights from deeper analysis.

Should I copy features that all my competitors have?

Not automatically—just because every competitor has a feature doesn't mean it's actually working well for users. I've analysed apps where the most prominent features had less than 10% engagement, and I've seen market leaders succeed despite terrible onboarding flows, not because of them.

What's the best way to find gaps in the market through competitor research?

Focus on app store reviews, especially 2-star and 3-star ones, and look at support forums where users complain freely about their problems. I often find the biggest opportunities in what users are working around or complaining about, rather than in missing features—like when we discovered users wanted faster prescription input, not just medication reminders.

How do I avoid building a "me too" product when researching competitors?

Start with user problems, not feature lists, and always ask why competitors made certain choices before copying them. Document what problems competitors are trying to solve rather than just what features they built, because you might find better solutions to the same underlying issues.

What should I document during my competitive analysis?

I track user types served, time to complete core actions, critical user journey steps, and most importantly—pain points observed and opportunities spotted. The real magic happens in that last column where you identify what competitors are doing wrong or missing entirely.

How can I tell if a popular feature is actually worth building?

Test how often users actually access the feature in competitor apps, not just its presence, and check user reviews mentioning it specifically. I use a simple test: if it takes more than 5-6 taps to access or if removing it wouldn't genuinely upset users, it's probably just cargo cult design copied without real purpose.

Where should I look beyond just downloading competitor apps?

Reddit threads, support forums, and social media mentions reveal how users really feel about competitor apps and what workarounds they're creating. I've found massive opportunities by watching what users screenshot and share, or what elaborate workarounds they create—these signal genuine unmet needs that apps aren't addressing properly.

How do I turn my research findings into actual design decisions?

Create a priority matrix mapping user value against development effort, and group findings into themes rather than feature lists. For each pattern you found, ask: Does this solve a real user need? Can we do it better? Do we have resources to build and maintain it? Then document why you're making each decision, not just what you're building.
