Expert Guide Series

How Do I Test My App With Real Users Who Have Disabilities?

Most developers spend months building an app, making sure every feature works perfectly, every button does what it's supposed to do. Then they launch it and discover that 15-20% of their potential users can't actually use it properly. Not because the app crashes or has bugs, but because it wasn't designed with disabilities in mind. I've seen this happen more times than I'd like to admit, and it's always an expensive mistake to fix after launch.

The thing about accessibility is that you can't just guess if your app works for people with disabilities. You need to actually test it with real users who rely on screen readers, who navigate entirely by voice, who can't distinguish between certain colours. I learned this the hard way on a healthcare project where we thought we'd built everything to WCAG standards—turned out our carefully crafted colour scheme was completely useless for users with colour blindness, and our "accessible" navigation was confusing for screen reader users.

Testing with real users who have disabilities isn't just about ticking compliance boxes; it's about discovering the actual barriers that exist between your app and a huge portion of your audience.

The challenge is that most development teams don't know where to start. How do you find these users? What do you even ask them to do? How much should you pay them for their time? And honestly, there's a bit of nervousness around it too—you're asking people to test something that might not work well for them, which feels uncomfortable if you've never done it before. But here's what I've learned after running dozens of these testing sessions: users with disabilities are incredibly patient and want to help you build better apps. They're used to apps not working for them, so when a developer actually asks for their input? They appreciate it. This guide will walk you through exactly how to do it right.

Understanding Why Accessibility Testing Matters

Look, I'll be honest—accessibility testing wasn't something I prioritised in my early years building apps. It seemed like an optional extra, something you'd add if you had leftover budget. But here's the thing: I learned this lesson the hard way when we launched a banking app that seemed perfect in our internal tests. Within days of launch, we got complaints from users with visual impairments who couldn't complete basic transactions because our custom buttons weren't recognised by screen readers. The app worked beautifully for us, but it was completely unusable for about 15% of potential users. That's millions of people who couldn't access their own money.

The business case alone should be enough to convince anyone. There are over 14 million people in the UK with some form of disability, and they control about £249 billion in spending power. When you exclude them from your app, you're literally turning away paying customers. I've seen clients lose entire corporate contracts because their apps didn't meet accessibility standards—one healthcare app we were brought in to fix had been rejected by an NHS trust specifically because it failed basic accessibility requirements. The original development cost was £80,000 but fixing it afterwards cost nearly half that again.

What Actually Happens When Apps Aren't Accessible

Beyond the numbers, it's about real people trying to do everyday things. A user with motor impairments might struggle with tiny tap targets or gesture controls. Someone with colour blindness can't distinguish your red error messages from green success notifications. Deaf users miss audio-only alerts. These aren't edge cases; these are common scenarios that affect how millions of people interact with your app daily. And you know what? Many accessibility improvements actually make apps better for everyone—larger tap targets help people using apps whilst walking, good colour contrast helps in bright sunlight, and clear navigation benefits users in a hurry.

The Legal Side You Can't Ignore

The Equality Act 2010 requires that digital services be accessible to people with disabilities, and enforcement is getting stricter. I've watched companies face legal challenges that could have been avoided with proper testing. More importantly, both Apple and Google have guidelines that can affect your app store presence if you're flagged for accessibility issues. Testing with actual users who have disabilities isn't just good practice—it's becoming a basic requirement for responsible app development.

Finding and Recruiting Users With Disabilities

This is where most teams get stuck, and I'll be honest—it took me years to figure out the right approach. You can't just post a job ad on a general testing platform and expect qualified testers with disabilities to magically appear. The recruitment process requires genuine effort and building trust with communities you might not have connections to yet.

I've worked with specialist recruitment agencies like Fable and AbilityNet who maintain pools of testers with various disabilities. Yes, they charge more than standard user testing services (usually £50-80 per tester per session compared to £30-40 for general testing), but the quality of feedback is worth every penny. These testers know exactly what to look for and can articulate issues that might take you weeks to discover on your own.

Where to Find Testers

Local disability organisations and charities are often willing to help connect you with their members—I've had success working with RNIB for vision-related testing and Action on Hearing Loss for audio accessibility. The key is approaching them respectfully, explaining your project clearly, and offering fair compensation. Don't expect people to test for free just because they're part of a charity; their time and expertise deserve payment.

Social media groups and forums dedicated to assistive technology users can be goldmines, but you need to engage authentically. I usually spend time in these communities first, contributing and learning before ever asking for testers. Facebook groups for VoiceOver users or Reddit communities for people with motor disabilities have connected me with brilliant testers who've fundamentally changed how I think about app design.

Building Your Testing Panel

Here's what I aim for when recruiting testers:

  • At least 2-3 people who use screen readers daily (VoiceOver for iOS, TalkBack for Android)
  • Users with motor disabilities who rely on switch controls or voice commands
  • People with colour blindness or low vision who don't use screen readers
  • Users who are deaf or hard of hearing
  • People with cognitive disabilities if your app involves complex workflows

Start building relationships with disability advocates and organisations before you need testers. When I launched testing for a healthcare app, having established connections meant I could recruit qualified participants within days rather than weeks. The community needs to trust you first.

One thing that's changed my approach is offering ongoing relationships rather than one-off tests. I now work with a core group of testers across multiple projects, which means they understand our development process and we've built mutual trust. They're more likely to give honest, detailed feedback because they know we actually implement their suggestions. For a fintech app we built, one of our regular testers spotted a critical issue with how our authentication flow worked with voice commands—something that would've been a nightmare if we'd discovered it post-launch.

Preparing Your App and Team for Accessibility Testing

Before you bring in users with disabilities to test your app, you need to get your house in order—and I mean really get it sorted. I've seen too many teams rush into testing sessions only to waste everyone's time because they hadn't done the groundwork. It's not just about fixing obvious bugs; it's about making sure your team understands what they're looking for and that your app is actually ready to be tested in the first place.

Start by running your app through automated accessibility checkers. I use tools like Accessibility Scanner for Android and Xcode's Accessibility Inspector for iOS on every project. Sure, these tools won't catch everything (they typically find about 30-40% of issues in my experience) but they'll flag the low-hanging fruit like missing labels, poor colour contrast, and touch targets that are too small. Fix these before you invite real users to test. Why? Because you don't want to waste a testing session discovering that your app crashes when VoiceOver is enabled—something you could have caught yourself in ten minutes.
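
If you're on iOS, most of what these tools flag is quick to fix in code. Here's a minimal UIKit sketch of the "missing label" case, with a hypothetical icon-only button (the screen and names aren't from any real project):

```swift
import UIKit

final class StatementViewController: UIViewController {
    // An icon-only button like this is exactly what Accessibility
    // Scanner or Accessibility Inspector flags as unlabelled.
    private let exportButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        exportButton.setImage(UIImage(systemName: "square.and.arrow.up"), for: .normal)

        // Without these, VoiceOver falls back to reading nothing useful,
        // often just "button" or the image asset's name.
        exportButton.accessibilityLabel = "Export statement"
        exportButton.accessibilityHint = "Saves a PDF copy of your statement"
        view.addSubview(exportButton)
    }
}
```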

Getting Your Team Ready

Your developers, designers, and product managers all need a proper briefing. I usually spend an hour with the team going through disability types we'll be testing for and what assistive technologies users might employ. Most designers I work with have never actually watched someone navigate an app using a screen reader, and that lack of experience shows in their work. Have them use VoiceOver or TalkBack themselves for 30 minutes before the testing sessions. It's uncomfortable and slow, but that's exactly the point. Before you even start, make sure you have the right technical capabilities on your team to implement the changes you'll discover.
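
A small sketch that can help during those practice runs: Apple's UIAccessibility API lets a debug build confirm whether VoiceOver is actually on, which settles any doubt about whether someone really tested with it enabled. (The helper function here is mine, not a standard API.)

```swift
import UIKit

// Drop into a debug build to confirm VoiceOver is running during
// the team's practice sessions.
func logScreenReaderState() {
    print("VoiceOver running: \(UIAccessibility.isVoiceOverRunning)")

    // Fires if VoiceOver is toggled while the app is in the foreground.
    NotificationCenter.default.addObserver(
        forName: UIAccessibility.voiceOverStatusDidChangeNotification,
        object: nil,
        queue: .main
    ) { _ in
        print("VoiceOver state changed: \(UIAccessibility.isVoiceOverRunning)")
    }
}
```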

What Your App Actually Needs

Here's a checklist I run through before any accessibility testing session (a couple of the items are sketched in code after the list):

  • All interactive elements have proper labels that make sense when read aloud
  • Your app works in both portrait and landscape orientations
  • Text size can be adjusted without breaking your layouts (test up to 200% scaling)
  • Colour isn't the only way information is conveyed—add icons or text labels
  • Touch targets are at least 44x44 points (Apple) or 48x48 dp (Android)
  • Form fields have clear labels and error messages that screen readers can announce
  • Your app doesn't auto-play videos or sounds that might confuse screen reader users
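
To make a couple of those items concrete, here's a minimal UIKit sketch covering text scaling and touch targets (the view controller and its views are hypothetical):

```swift
import UIKit

final class CheckoutViewController: UIViewController {
    private let totalLabel = UILabel()
    private let payButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()

        // Text follows the user's system size setting, so the layout
        // needs to survive roughly 200% scaling.
        totalLabel.font = UIFont.preferredFont(forTextStyle: .body)
        totalLabel.adjustsFontForContentSizeCategory = true
        totalLabel.numberOfLines = 0

        // Keep the tappable area at or above Apple's 44x44pt minimum,
        // whatever the visual design does.
        payButton.setTitle("Pay now", for: .normal)
        payButton.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(totalLabel)
        view.addSubview(payButton)
        NSLayoutConstraint.activate([
            payButton.heightAnchor.constraint(greaterThanOrEqualToConstant: 44),
            payButton.widthAnchor.constraint(greaterThanOrEqualToConstant: 44)
        ])
    }
}
```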

I worked on a banking app where we thought we were ready for testing, but during our internal check we discovered that our "quick transfer" feature was completely inaccessible because the developers had built it as a custom gesture that screen readers couldn't interpret. We had to rebuild that entire flow before testing, which delayed us by two weeks... but better to find it ourselves than embarrass everyone in front of users who'd taken time out of their day to help us.
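
For what it's worth, iOS does provide a way to expose a custom gesture to screen readers: publish the same behaviour as a UIAccessibilityCustomAction, which VoiceOver then lists on the element's actions rotor. A sketch of that approach (illustrative names, not the actual banking app code):

```swift
import UIKit

final class TransferCardView: UIView {
    func configureAccessibility() {
        isAccessibilityElement = true
        accessibilityLabel = "Quick transfer"

        // VoiceOver users can't perform a bespoke swipe, but they can
        // pick this action from the rotor instead.
        accessibilityCustomActions = [
            UIAccessibilityCustomAction(name: "Send to saved payee") { _ in
                // Call the same code path the swipe gesture triggers.
                return true
            }
        ]
    }
}
```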

Setting Up Testing Sessions That Work for Everyone

The logistics of accessibility testing sessions need more thought than standard user testing—and I've learned this the hard way. Early on, I set up a session for a healthcare app where we'd invited users with motor impairments, but our testing room was up two flights of stairs with no lift. Bloody hell, right? We had to scramble and move everything to a ground floor conference room at the last minute. Now I always check the physical space first, even before booking dates.

Location matters more than you'd think. If you're testing in person, you need step-free access, accessible toilets, clear signage, and enough space for mobility aids or service animals. For a fintech app we tested a few years back, one of our participants used a power wheelchair and the testing room doorway was too narrow—we ended up conducting that session in the building lobby because it was the only practical option. Remote testing solves some of these issues but creates others; not everyone has reliable internet or a quiet space at home, so offering both options works best.

I always budget extra time between sessions because accessibility testing takes longer than you expect, and rushing participants defeats the entire purpose.

Timing and Scheduling

Give people choices about session times. Some disabilities come with fatigue that's worse at certain times of day. I usually offer morning, afternoon and evening slots across different days of the week. Sessions should be 60-90 minutes maximum—shorter than that and you won't see real usage patterns, longer and people get tired. And here's something most people miss: build in proper breaks. If someone needs to take medication or rest, that's not wasted time, it's just part of working with real humans.

Technical Setup

Test that your recording software works with screen readers before the actual sessions. I mean it. We once lost an entire session's audio because our recording tool and JAWS conflicted with each other. Now we always do a technical dry run with someone from the team using the assistive technology we expect participants to bring. Have backup devices ready too—if someone's screen reader acts up, you need a plan B that doesn't waste their time.

Running the Testing Sessions

The first few minutes of your testing session matter more than you think—I've seen perfectly planned sessions fall apart because we rushed through the welcome and jumped straight into tasks. Take time to make your participant comfortable; explain what you're testing (the app, not them), and remind them they can stop whenever they need to. I always tell participants "there are no wrong answers here, and if something doesn't work it's our problem to fix, not yours". This seems obvious but you'd be surprised how many people worry they're doing something wrong when really the app's just badly designed.

Let participants use their own assistive technology if possible. When we tested a healthcare appointment booking app with a blind user, she had her iPhone configured exactly how she liked it—VoiceOver speed cranked up way faster than I could follow, specific gestures she'd customised over years of use. If we'd handed her a test device with default settings it would've been like asking someone to type on a keyboard with all the keys rearranged. Sure, sometimes you need to provide devices (especially for Android testing when participants use iOS), but whenever possible let people use what they know.

Don't interrupt unless you absolutely have to. I know it's hard—you'll see someone struggling with something that seems obvious to you and you'll want to jump in and help. Don't. Watch what they do, listen to what they say out loud, and take notes like crazy. We once watched a screen reader user spend three minutes trying to find a "continue" button that was there but had been mislabelled as "next step"... the developer sitting next to me wanted to tell her where it was, but that struggle showed us exactly where our labelling had failed. Ask questions after they've completed (or abandoned) a task, not during it.

Record everything if participants consent to it—screen recordings, audio, the lot. You think you'll remember the important bits but you won't, and having video means your whole team can watch later and see exactly what happened.

What to Look For During Accessibility Testing

Right, so you've got your users in the session and you're watching them use your app. What exactly should you be paying attention to? I've run dozens of these sessions and I'll be honest, the first few times I completely missed important issues because I was focusing on the wrong things. It's easy to get distracted by small visual glitches when there are much bigger problems happening right in front of you.

The most telling thing to watch is where users pause or hesitate. When someone using VoiceOver suddenly stops navigating and starts swiping back and forth through the same elements, that's a massive red flag—they're probably lost or confused about where they are in your app. I worked on a healthcare app where we thought our medication reminder flow was simple, but watching a blind user get stuck between the "add medication" and "view schedule" buttons made us realise our focus order was completely illogical. They couldn't build a mental model of the screen because our elements jumped all over the place.
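
The usual iOS fix for that kind of jumpy focus order is to declare the reading order explicitly. A small sketch with hypothetical view names:

```swift
import UIKit

final class MedicationViewController: UIViewController {
    private let headerLabel = UILabel()
    private let addMedicationButton = UIButton(type: .system)
    private let viewScheduleButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()

        // VoiceOver walks this array in order, regardless of the order
        // the views were added to the hierarchy.
        view.accessibilityElements = [headerLabel, addMedicationButton, viewScheduleButton]
    }
}
```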

Here's what I actively monitor during every session:

  • Task completion times: if something takes 3x longer than expected, something's wrong
  • Error rates and where they happen most frequently
  • Moments where users verbally express frustration or confusion
  • How many attempts it takes to complete a specific action
  • Whether users find workarounds or give up entirely
  • Physical strain—watch for users adjusting their grip or taking breaks

Don't just watch the screen, watch the user. Their facial expressions, body language, and verbal reactions tell you more than any analytics dashboard ever will. When someone sighs heavily or mutters under their breath, that's your cue to dig deeper.

Technical Issues That Are Easy to Miss

Pay close attention to colour contrast issues in real lighting conditions. I've seen apps that passed automated contrast checkers but were completely unreadable for users with low vision when tested on an actual device in normal indoor lighting. Same goes for touch target sizes—something might technically meet the 44x44 point guideline but still be impossible for someone with motor difficulties to hit consistently if it's positioned near screen edges or other interactive elements.
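
If you want to spot-check a suspect colour pair against the WCAG 2.x formula yourself, it's a short function. A Swift sketch (WCAG AA expects at least 4.5:1 for normal-size body text):

```swift
import UIKit

// Relative luminance as defined by WCAG 2.x for sRGB colours.
func relativeLuminance(_ color: UIColor) -> CGFloat {
    var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    color.getRed(&r, green: &g, blue: &b, alpha: &a)
    func linearise(_ c: CGFloat) -> CGFloat {
        c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
    }
    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)
}

// Contrast ratio is (lighter + 0.05) / (darker + 0.05).
func contrastRatio(_ first: UIColor, _ second: UIColor) -> CGFloat {
    let l1 = relativeLuminance(first)
    let l2 = relativeLuminance(second)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)
}

let ratio = contrastRatio(
    UIColor(red: 0.45, green: 0.45, blue: 0.45, alpha: 1), // mid-grey text
    UIColor(red: 1, green: 1, blue: 1, alpha: 1)           // white background
)
print(String(format: "%.2f:1", ratio)) // needs to be 4.5:1 or better for AA
```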

Context Switching and Cognitive Load

Watch how users handle interruptions. Do notifications break their flow entirely? Can they recover after switching to another app and coming back? I worked on a fintech app where users with cognitive disabilities kept losing their place during multi-step forms because we didn't save progress properly. They'd get a text message, switch apps to read it, come back and have to start completely over. Bloody frustrating for them and entirely our fault for not considering that scenario. This is also where understanding how user engagement patterns work becomes crucial for designing flows that don't overwhelm people.
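
One guard against that scenario is to persist a lightweight draft whenever the user leaves the form, then restore it when they come back. A sketch with illustrative names (ApplicationDraft and DraftStore aren't from the project above):

```swift
import Foundation

// Whatever state the multi-step form needs to resume exactly where
// the user left off.
struct ApplicationDraft: Codable {
    var currentStep: Int
    var answers: [String: String]
}

enum DraftStore {
    private static let key = "application.draft"

    static func save(_ draft: ApplicationDraft) {
        if let data = try? JSONEncoder().encode(draft) {
            UserDefaults.standard.set(data, forKey: key)
        }
    }

    static func load() -> ApplicationDraft? {
        guard let data = UserDefaults.standard.data(forKey: key) else { return nil }
        return try? JSONDecoder().decode(ApplicationDraft.self, from: data)
    }

    static func clear() {
        UserDefaults.standard.removeObject(forKey: key)
    }
}

// Call DraftStore.save(...) when the scene resigns active, so a text
// message or an app switch never costs the user their progress.
```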

Making Sense of Your Testing Results

After your testing sessions wrap up, you'll have pages of notes, recordings, and probably a bit of a headache trying to work out what to do next. I've sat through hundreds of these debriefs over the years and the hardest part isn't collecting the feedback—it's making sense of it all and deciding what actually needs fixing first. Not everything needs immediate attention, and some issues are genuinely more critical than others.

Start by grouping your findings into three categories: blockers, friction points, and nice-to-haves. Blockers are the bits that completely stop someone from using a core feature—like when I worked on a banking app where screen reader users couldn't actually confirm transactions because the button wasn't properly labelled. That's a blocker. Friction points are things that slow people down or cause frustration but don't completely prevent task completion. Nice-to-haves are improvements that would make the experience better but aren't stopping anyone from achieving their goals.

Prioritising What to Fix First

Look at how many people hit the same issue and how severely it affected them. If three out of five screen reader users couldn't navigate your checkout flow, that's your top priority. If one person mentioned the text could be slightly larger, that can probably wait. I always tell my team to focus on the patterns rather than individual preferences—you're looking for systemic problems, not personal opinions. This is where proper project management becomes essential to ensure fixes are implemented systematically.

Creating Your Action Plan

Once you've categorised everything, turn your findings into specific technical tasks. Don't just write "fix screen reader support"—that's too vague. Instead, list exactly what needs doing: "Add accessibility labels to all form inputs in the checkout flow" or "Increase touch target size for navigation buttons to a minimum of 44x44 points." This makes it much easier for your development team to actually implement the fixes. And honestly? Document everything properly because you'll need to test these changes again once they're done.
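
To show the level of detail I mean, here's roughly what one of those tasks might turn into: a hypothetical checkout field getting a proper label plus a spoken error message (illustrative code, not a drop-in fix):

```swift
import UIKit

final class CardNumberField: UITextField {
    func configure() {
        placeholder = "Card number"
        // Read by VoiceOver even once the placeholder has disappeared
        // behind the user's input.
        accessibilityLabel = "Card number"
        keyboardType = .numberPad
    }

    func showError(_ message: String) {
        // A red border alone is invisible to screen reader users;
        // announce the error as well.
        layer.borderColor = UIColor.systemRed.cgColor
        layer.borderWidth = 1
        UIAccessibility.post(notification: .announcement, argument: message)
    }
}
```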

Here's how I typically structure the findings document:

  • Issue description with specific examples from testing sessions
  • Which users were affected and how severely
  • The technical cause if known
  • Recommended solution with implementation notes
  • Priority level and estimated effort to fix
  • WCAG guidelines it relates to if applicable

Fixing Issues and Testing Again

Right, so you've done your testing sessions and found some problems—now what? This is where a lot of teams make a crucial mistake: they fix everything in one go and call it done. But here's the thing: accessibility fixes aren't like regular bug fixes where you can just deploy and move on. You need to test again with the same users who reported the issues, or at least users with similar disabilities. I learned this the hard way on a healthcare app where we "fixed" a screen reader issue that actually made things worse for users with low vision—we'd optimised for one group without considering another.

When you're prioritising fixes, tackle the blockers first. These are issues that completely prevent someone from using a core feature of your app. On an e-commerce project, we discovered users couldn't complete checkout because the payment form wasn't announcing errors properly... that's obviously got to be fixed before anything else. The polish items like slightly awkward focus orders can wait a bit, but don't leave them too long because they add up. Having a reliable release process helps you deploy fixes quickly and test them properly.

The best approach I've found is to fix issues in small batches and retest with 2-3 users after each batch, rather than fixing everything and hoping it all works.

Keep your testing participants in the loop throughout this process. Send them TestFlight or beta builds as you make changes, and get their quick feedback. It's much cheaper than doing another full testing round, and users really appreciate being part of the solution. I usually schedule short 15-20 minute follow-up calls to verify specific fixes rather than hour-long sessions. And honestly? Sometimes users will tell you that your fix works technically but still feels awkward—that's gold dust information that you wouldn't get from automated testing tools alone. When evaluating whether your improvements are working, you need to think about how to measure the real value of these accessibility enhancements.

Conclusion

Testing with real users who have disabilities isn't something you do once and tick off your list—it's an ongoing part of building better apps. I've worked on apps that launched thinking they were accessible, only to discover through user testing that screen reader navigation was confusing or colour contrast failed in bright sunlight. Each round of testing teaches you something new about how people actually use your app in the real world.

The biggest mistake I see? Teams treating accessibility testing as a final quality check rather than building it into their process from the start. When we worked on a healthcare booking app, we had users with motor disabilities test our prototype before we'd even finished the design. It saved us months of rework because we learned early that our gesture controls needed serious adjustment. You know what? Those changes made the app better for everyone, not just users with disabilities. This early testing approach also helps with building user trust before launch by demonstrating your commitment to inclusivity.

Start small if you need to. Even testing with three or four users will reveal issues you'd never spot on your own. Document everything you learn—the good, the bad, and the stuff that surprises you. Build relationships with your testers because they become invaluable partners in making your app genuinely usable. And remember, accessibility isn't about compliance badges or meeting minimum standards; it's about respecting that your users have different needs and abilities.

The apps I'm most proud of are the ones where users with disabilities tell us they can finally do something independently that was difficult before. That's the goal. Not perfection, but genuine improvement with each iteration. Keep testing, keep learning, and keep your users at the centre of every decision you make.

Frequently Asked Questions

How much should I budget for accessibility testing with users who have disabilities?

From my experience, expect to pay £50-80 per tester per session compared to £30-40 for general user testing, plus recruitment costs if you use specialist agencies like Fable or AbilityNet. For a proper round of testing with 5-6 users covering different disability types, budget around £2,000-3,000 total including recruitment and session costs.

How many users with disabilities do I need to test with to get meaningful results?

I typically aim for 5-6 users covering different disability types: 2-3 screen reader users, someone with motor disabilities, a user with colour blindness or low vision, and someone who's deaf or hard of hearing. Even testing with 3-4 users will reveal major issues you'd never spot on your own, so start small if budget is tight.

Should I do accessibility testing remotely or in person?

Remote testing is often more practical since it eliminates physical accessibility barriers and lets users test with their own configured assistive technology. However, I always offer both options because not everyone has reliable internet or a quiet space at home, and some users prefer the personal interaction of face-to-face sessions.

When in the development process should I start accessibility testing?

Start testing with disabled users during the prototype phase, not as a final quality check before launch. When we tested a healthcare app's early prototypes with users who had motor disabilities, we discovered our gesture controls needed complete rework—saving us months of development time compared to finding this after launch.

What's the biggest mistake teams make during accessibility testing sessions?

Jumping in to help when you see someone struggling with your app, which defeats the entire purpose of testing. I've learned to let users work through problems naturally because watching someone spend three minutes looking for a mislabelled button shows you exactly where your design has failed.

How do I find reliable testers with disabilities for my app?

Build relationships with disability organisations and charities in your area before you need testers, and consider specialist recruitment agencies who maintain pools of experienced testers. I've had great success with local branches of RNIB and Action on Hearing Loss, but you need to engage authentically and offer fair compensation—don't expect people to test for free.

Do I need to retest after fixing accessibility issues, or can I just deploy the changes?

Always retest accessibility fixes with actual users, preferably the same people who reported the original issues. I've seen "fixes" that solved one problem but created new barriers for other disability types—like optimising for screen readers while making navigation worse for users with motor impairments.

How long should accessibility testing sessions be, and what should I prepare beforehand?

Keep sessions to 60-90 minutes maximum and always run your app through automated accessibility checkers first to catch basic issues like missing labels or poor colour contrast. There's no point wasting a user's time discovering your app crashes when VoiceOver is enabled—something you could have caught yourself in ten minutes.
