How Do You Test Apps With Users Who Have Disabilities?
A property search app launched with all the bells and whistles—high-res photos, interactive maps, virtual tours. Looked brilliant on the surface. Downloads were solid at first. But then something weird happened: retention dropped off a cliff after week one, and the reviews told a different story than the marketing team expected. Users with screen readers couldn't navigate property listings properly. The colour scheme made it impossible for people with colour blindness to distinguish between available and sold properties. Voice control features? They didn't work with the custom UI elements the designers had spent months perfecting. The app had excluded roughly 15% of its potential user base without anyone on the team realising it during development.
I've seen this pattern play out more times than I'd like to admit. You build something that works perfectly for you and your team, run it through your standard testing process, and ship it thinking you've done good work. Then reality hits. The thing is, most of us don't use screen readers day-to-day. We don't navigate apps with switch controls or voice commands. We're not trying to read text with low vision or use an interface whilst managing tremors that affect touch precision. So we miss things—big things—that make our apps unusable for millions of people.
Testing with real users who have disabilities isn't about ticking a compliance box; it's about understanding how people actually experience your app in the real world.
The mobile accessibility landscape has matured quite a bit over the years, but there's still this gap between what developers think is accessible and what actually works for disabled users. Apple and Google have built incredible assistive technologies into iOS and Android, but they only help if your app is designed to work with them properly. This guide walks through the practical steps I've learned for testing apps with users who have disabilities—not the theoretical stuff you read in documentation, but the real process that actually uncovers problems before your users find them.
Understanding Different Types of Disabilities
When I started testing apps with users who have disabilities, I honestly thought it would be straightforward—test with a few people, fix the obvious stuff, job done. I was wrong. Really wrong, actually. The reality is that disability exists on a massive spectrum and what works perfectly for one person might create barriers for someone else. I've worked on a banking app where we optimised everything for screen reader users only to discover our motion animations were triggering vestibular issues for people with balance disorders. It's a learning process, and you'll make mistakes along the way.
The thing about disabilities is they don't fit into neat little boxes. Sure, we can group them into categories to help us think about testing, but every person's experience is different. Someone with low vision might use screen magnification at 200% zoom while another person prefers high contrast mode at standard size; both are valid, both need consideration. I've seen apps that work brilliantly for one type of motor impairment but completely fail for another because the developers assumed all motor disabilities were the same.
Main Categories You Need to Know
From my experience testing apps across healthcare, retail and education sectors, these are the disability types that affect mobile app usage most:
- Visual disabilities (blindness, low vision, colour blindness)—they'll interact with your app through screen readers, magnification or colour adjustments
- Motor disabilities (limited dexterity, tremors, paralysis)—these users might struggle with small touch targets, gestures or timed interactions
- Hearing disabilities (deafness, hard of hearing)—they need captions, transcripts and visual alternatives to audio content
- Cognitive disabilities (dyslexia, ADHD, memory issues)—complex navigation, dense text and inconsistent patterns cause problems here
- Vestibular disorders—motion and animations can make people physically ill, which is something most developers never consider (see the sketch just after this list)
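That last category is worth a concrete example, because it's one of the few you can partially guard against in code. Here's a minimal SwiftUI sketch (the view and property names are hypothetical) that checks the system Reduce Motion setting and skips the animation for people it affects:

```swift
import SwiftUI

// Hypothetical listing card that respects the user's Reduce Motion
// setting, so expand/collapse happens without the spring animation
// that can trigger vestibular symptoms.
struct ListingCard: View {
    @Environment(\.accessibilityReduceMotion) private var reduceMotion
    @State private var isExpanded = false

    var body: some View {
        VStack {
            Text("2-bed flat, Manchester")
            if isExpanded {
                Text("£1,200 pcm")
            }
        }
        .onTapGesture {
            if reduceMotion {
                isExpanded.toggle() // no animation at all
            } else {
                withAnimation(.spring()) { isExpanded.toggle() }
            }
        }
    }
}
```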
What surprised me most? Temporary and situational disabilities matter just as much. I've tested with users who had their arm in a cast, parents holding babies whilst trying to use an app, people in bright sunlight who couldn't see low-contrast text. These aren't "edge cases"—they're everyday situations that affect millions of users. When we built an e-commerce app, our testing showed that good accessibility features helped everyone, not just users with permanent disabilities. For apps that need to work in situations like these, understanding how to design for one-handed use becomes particularly important for creating truly inclusive experiences.
Setting Up Your Accessibility Testing Process
Right, so you've decided to test your app with users who have disabilities—brilliant start, but now you need a proper process or things get messy fast. I learned this the hard way on a healthcare app project where we invited participants before we'd sorted out our testing environment, and honestly? It was a bit of a disaster. We had screen reader users waiting around whilst we fumbled with software settings, which was embarrassing and disrespectful of their time.
The first thing you need is a dedicated testing space that's quiet, accessible, and has adjustable furniture; I mean, you can't expect someone using a wheelchair to test comfortably at a desk that's too high. Make sure you've got all your assistive tech installed and tested before anyone arrives—VoiceOver, TalkBack, Dragon NaturallySpeaking, whatever your participants will be using. Run through the tech yourself first because software updates can break things without warning. Speaking of which, having a solid plan for keeping your app working when updates break things is crucial for maintaining accessibility features over time.
You also need a clear testing protocol that's flexible enough to adapt to each person's needs. Some users might need longer sessions with breaks, others might prefer remote testing from home where they're comfortable with their own setup. I usually create a testing checklist that covers key user journeys, but I don't follow it religiously—if a participant hits an issue that needs exploring, we explore it. The goal isn't to tick boxes, it's to understand where your app fails real people.
Record your sessions (with permission!) but don't just rely on screen recordings; they often miss what's happening with assistive tech. Use session recording software that captures both the screen and the audio of the screen reader output, because watching these recordings back is where you'll spot patterns you missed live.
Budget-wise, plan for at least 5-8 participants per disability type you're testing for—any fewer and you won't see patterns, any more and you're probably seeing diminishing returns. And yeah, pay your participants properly; we typically offer £50-100 per hour depending on the complexity of what we're asking them to do.
Recruiting Participants With Disabilities
This is where most app testing programmes fall flat on their face, and I get it—finding participants with disabilities isn't as straightforward as posting on UserTesting or sending out a quick email to your regular beta group. But here's the thing: if you're serious about accessibility (and you should be, both legally and morally), you need to put in the work to find real users who actually rely on assistive technologies daily.
I've worked with healthcare apps that needed testing from users with visual impairments and fintech platforms that had to work for people with motor disabilities. The difference between testing with someone who occasionally uses VoiceOver and someone who depends on it every single day? Night and day. You'll uncover issues you never even considered, like navigation patterns that make perfect sense visually but are absolute chaos when using a screen reader. This is particularly important when testing for users with motor disabilities, as their daily experience with adaptive interfaces will reveal problems that casual testing simply can't catch.
Where to Actually Find Participants
Start with disability advocacy organisations and charities—they often have members who are keen to participate in user testing, especially if it means apps will work better for them in the future. In the UK, organisations like RNIB for visual impairments or Scope for broader disability representation can help connect you with their communities. Sure, there might be some admin involved, and you'll need to compensate participants fairly (I usually pay 20-30% more than standard user testing rates because these sessions take longer and require more effort from participants), but that's the price of getting feedback you can actually trust.
Social media groups focused on assistive technology are another goldmine. People in these communities are usually passionate about accessibility and want to help improve digital experiences. LinkedIn groups, Facebook communities, even Reddit threads dedicated to specific disabilities can be great places to post recruitment calls. If you're testing internationally, learning from approaches used in researching users who don't speak your language can help you adapt your recruitment strategies for different cultural contexts.
What You Need to Get Right
Don't make the rookie mistake of treating this like standard user recruitment. You need to ask specific questions upfront about what assistive technologies participants use, how long they've used them, and whether they're comfortable being recorded (some people aren't, and that's fine). Make sure your testing location is physically accessible if you're doing in-person sessions—I learned this the hard way on a project where we'd booked a gorgeous office space that turned out to have terrible wheelchair access.
Your recruitment screener should include questions like:
- What specific assistive technologies do you use regularly (screen readers, switch controls, voice input, etc.)?
- How long have you been using these technologies?
- What types of apps do you use most frequently?
- Have you participated in user testing before?
- Do you need any specific accommodations for the testing session itself?
And look, some participants will need extra time, breaks, or specific environmental conditions to test effectively. Budget for longer sessions than you would normally—what takes 45 minutes with a standard user might take 90 minutes with someone using assistive tech, not because they're slower but because the technology itself adds time and you want to properly understand their experience without rushing them.
Choosing the Right Assistive Technologies to Test
You can't test every single assistive technology out there—there are just too many of them and it would take forever. So you need to be smart about which ones you focus on. I usually start with the big three: VoiceOver for iOS, TalkBack for Android, and a desktop screen reader like NVDA or JAWS. These cover about 80% of what your users will actually be using in the real world.
But here's the thing—the assistive tech you choose really depends on your app's purpose and who it's for. When we built a banking app a few years back, we spent loads of time testing with screen magnification tools because our research showed that partially sighted users were a huge part of the target audience. They weren't using screen readers at all; they just needed everything bigger and with better contrast. For a healthcare app focused on elderly users, we prioritised voice control features because arthritis made touch interactions really difficult for them. Understanding the potential of voice technology for transforming customer experience becomes essential when designing for users who rely on these interactions.
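If magnification and text size are what your audience relies on, supporting Dynamic Type is the baseline. Here's a minimal SwiftUI sketch (assuming iOS 16+ for AnyLayout; the view and labels are hypothetical) that lets enlarged text reflow vertically instead of truncating:

```swift
import SwiftUI

// Built-in text styles (.headline, .body) scale automatically with the
// user's preferred text size, including the larger accessibility sizes.
struct BalanceView: View {
    @Environment(\.dynamicTypeSize) private var typeSize

    var body: some View {
        // Switch to a vertical stack once type reaches accessibility
        // sizes, so the enlarged text wraps rather than truncates.
        let layout = typeSize.isAccessibilitySize
            ? AnyLayout(VStackLayout(alignment: .leading))
            : AnyLayout(HStackLayout())

        layout {
            Text("Available balance").font(.headline)
            Text("£1,234.56").font(.body)
        }
    }
}
```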
The most common mistake I see is testing only with screen readers and calling it a day, when your actual users might rely on completely different tools.
Switch controls are another big one that gets overlooked. Users with motor disabilities often can't use touchscreens the way we expect, so they navigate using external switches or adaptive controllers. Testing with these shows you problems you'd never spot otherwise—like buttons that are too close together or gestures that require too much precision. Voice control through tools like Voice Access on Android or Voice Control on iOS matters too, especially as more people are using these features even without disabilities. Start with the most common assistive technologies for your specific user base, then expand from there as you learn more about how people actually use your app. Don't try to boil the ocean right away.
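One concrete way to cover switch and voice users in SwiftUI is to expose any precision gesture's outcome as a named accessibility action, which Switch Control, Voice Control and VoiceOver can all trigger. A small sketch with hypothetical names:

```swift
import SwiftUI

// A swipe-to-archive row. The drag gesture demands precision that
// switch and voice users may not have, so the same outcome is also
// exposed as a named accessibility action.
struct MessageRow: View {
    let subject: String
    let archive: () -> Void

    var body: some View {
        Text(subject)
            .gesture(
                DragGesture(minimumDistance: 60)
                    .onEnded { value in
                        if value.translation.width < -60 { archive() }
                    }
            )
            .accessibilityAction(named: "Archive") { archive() }
    }
}
```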
Running Effective Accessibility Testing Sessions
The first accessibility test I ran was a bit of a disaster, honestly. I thought I could just hand someone a prototype and ask them to "try it out"—turns out that's not how this works at all. You need structure, yes, but you also need flexibility because every participant will interact with your app differently based on their specific disability and assistive technology setup. The key thing I've learned is to prepare thoroughly but stay ready to adapt on the fly.
Start by making sure your testing environment is actually accessible. Sounds obvious, right? But I've seen teams book testing rooms that wheelchair users couldn't even get into. If you're testing remotely (which we do more often now), use platforms that work with screen readers and don't require complex mouse interactions. Zoom works reasonably well, but you'll want to test your setup beforehand. Give participants extra time—I usually schedule 90 minutes for what would normally be a 60-minute session because assistive technology just takes longer to navigate, and that's fine. Rushing people defeats the entire purpose.
During the session itself, your job is mostly to observe and ask clarifying questions. Don't jump in to help unless someone's completely stuck—you need to see where the real problems are. I've worked on healthcare apps where we thought our medication reminder feature was perfectly clear, but watching a blind user struggle with our poorly labelled buttons showed us we'd made assumptions about what was "obvious". Take detailed notes about exactly what actions cause confusion, not just that confusion occurred.
Key Things to Observe During Sessions
- How long it takes users to complete core tasks compared to your internal benchmarks
- Any moments where users seem confused about what element they've focused on
- Whether users can successfully navigate back to previous screens
- If users understand error messages and know how to fix problems
- Times when users give up on a task entirely and why that happened
Record the sessions if participants consent—you'll catch things you missed in the moment, and it's helpful for showing developers the actual impact of accessibility issues rather than just describing them. But here's the thing: don't make the recording process intrusive. Some screen reader users get anxious when they know they're being recorded because they worry about being judged for taking longer to complete tasks.
Common Accessibility Issues You'll Actually Find
After testing apps with hundreds of users who have various disabilities, I can tell you the same problems come up again and again. It's not the fancy edge cases that trip people up—it's the basic stuff that somehow still gets overlooked. The most common issue? Touch targets that are too small. I've watched users with motor impairments struggle to hit buttons that looked perfectly fine to the design team, but in practice were impossible to tap accurately. Apple recommends 44x44 points minimum and they're not just making that up—we tested a fintech app where increasing button sizes from 36 to 48 points reduced error rates by 67% for users with limited dexterity. This is why understanding how to design buttons that everyone can actually press is so crucial for accessibility.
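In SwiftUI, one way to honour that minimum without bloating the visual design is to keep the icon small but grow the hit area. A minimal sketch, with hypothetical names:

```swift
import SwiftUI

// The heart icon stays small visually, but the tappable area is
// guaranteed to be at least 44x44 points.
struct FavouriteButton: View {
    @Binding var isFavourite: Bool

    var body: some View {
        Button {
            isFavourite.toggle()
        } label: {
            Image(systemName: isFavourite ? "heart.fill" : "heart")
                .frame(minWidth: 44, minHeight: 44) // hit area, not icon size
                .contentShape(Rectangle())          // whole frame is tappable
        }
    }
}
```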
Colour contrast is another massive one. I mean, your designer might love that subtle grey text on a light background, but users with low vision literally cannot read it. We use contrast checkers during development now because I've been burned too many times by beautiful designs that fail WCAG AA standards. And here's something that surprised me when I started doing proper accessibility testing—screen reader users get completely lost when developers forget to label interactive elements. That image button with no alt text? It's announced as "button" with no context whatsoever, leaving blind users to guess what it does. For visual accessibility issues, learning how to design for users with colour blindness helps create interfaces that work regardless of how users perceive colour.
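The fix for the unlabelled image button is usually one line. A SwiftUI sketch (shareListing is a hypothetical callback):

```swift
import SwiftUI

struct ShareButton: View {
    let shareListing: () -> Void

    var body: some View {
        Button(action: shareListing) {
            Image(systemName: "square.and.arrow.up")
        }
        // Without this, VoiceOver announces the control as just "button".
        .accessibilityLabel("Share this property")
    }
}
```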
Run your app through VoiceOver or TalkBack yourself before user testing. You'll catch 80% of screen reader issues just by experiencing how confusing unlabelled elements actually are.
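On iOS you can back up that manual pass with Apple's built-in audit in a UI test. A minimal sketch, assuming Xcode 15 or later (where performAccessibilityAudit is available); it flags issues like missing labels and undersized targets, though it's no substitute for a session with a real screen reader user:

```swift
import XCTest

final class AccessibilityAuditTests: XCTestCase {
    func testHomeScreenPassesAudit() throws {
        let app = XCUIApplication()
        app.launch()
        // Runs Apple's accessibility audit over the current screen and
        // fails the test if it finds issues such as unlabelled elements.
        try app.performAccessibilityAudit()
    }
}
```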
Form inputs cause chaos too. Users with cognitive disabilities need clear error messages that explain what's wrong and how to fix it, not just "Invalid entry" in red text. We learned this the hard way on a healthcare app where users abandoned registration entirely because they couldn't work out why their password was being rejected—turns out the requirements weren't clearly stated upfront. Auto-playing videos and animations can trigger vestibular disorders; I've seen users become genuinely nauseous from parallax scrolling effects that seemed harmless to the development team. For some users, features like dark mode can help with accessibility by reducing eye strain and providing better visual comfort.
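Making the error visible isn't enough on its own; screen reader users also need it announced. A minimal UIKit sketch (the function and label names are hypothetical) that posts the message as a VoiceOver announcement:

```swift
import UIKit

// Shows a validation error and asks VoiceOver to read it aloud, rather
// than leaving the user to discover red text somewhere on screen.
// The message states what's wrong and how to fix it.
func showPasswordError(on errorLabel: UILabel) {
    let message = "Password must be at least 12 characters and include a number."
    errorLabel.text = message
    errorLabel.isHidden = false
    UIAccessibility.post(notification: .announcement, argument: message)
}
```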
Making Sense of Your Testing Results
So you've run your accessibility testing sessions and now you're sitting there with pages of notes, screen recordings, and feedback that ranges from "this button doesn't work with my screen reader" to "I got completely lost on the checkout page." I've been in this exact position more times than I can count, and honestly, the first few times it felt overwhelming. But here's what I've learned after years of doing this—the patterns start to emerge pretty quickly once you know what you're looking for.
The first thing I do is separate issues into three categories: blockers, frustrations, and nice-to-haves. Blockers are the ones that completely prevent someone from completing a task—like a screen reader user who can't submit a form because the button isn't properly labelled. These go to the top of your list, no questions asked. I worked on a banking app where users with motor disabilities literally couldn't transfer money because the touch targets were too small and too close together; that's a blocker, and it needed fixing immediately.
Frustrations are different. They don't stop people entirely, but they make the experience painful enough that users might give up or switch to a competitor. Things like confusing navigation order for keyboard users, or colour contrast that's just barely failing WCAG standards but still causing eye strain. These matter because your app isn't just about being technically accessible—it needs to be genuinely usable and not exhausting to interact with. When prioritising fixes, applying the same framework used for deciding which app features to build first can help you tackle the most impactful accessibility improvements.
Organising Your Findings
Here's how I typically structure my testing results so the development team actually knows what to do with them (a sketch of this as a structured record follows the list):
- Severity level (blocker, major, minor)
- Affected user groups (screen reader users, motor disability, cognitive)
- Specific location in the app where the issue occurs
- Steps to reproduce the problem consistently
- Video clip or screenshot showing the issue
- Suggested fix based on WCAG guidelines
- Estimated effort to fix (quick win vs. major refactor)
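To make that concrete, here's a hypothetical Swift record type mirroring those fields, so findings land in your tracker in a consistent, filterable shape (the type and field names are my own, not from any particular tool):

```swift
import Foundation

// Hypothetical record for one accessibility finding.
struct AccessibilityFinding: Codable {
    enum Severity: String, Codable { case blocker, major, minor }
    enum Effort: String, Codable { case quickWin, majorRefactor }

    let severity: Severity
    let affectedGroups: [String]   // e.g. ["screen reader", "motor"]
    let location: String           // screen or flow where it occurs
    let stepsToReproduce: [String]
    let evidenceURL: URL?          // link to the video clip or screenshot
    let suggestedFix: String       // grounded in WCAG guidance
    let effort: Effort
}
```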
Looking for Patterns
One thing that surprised me early on was how often the same underlying problem would show up in multiple ways. You might have five different issues reported, but when you dig in, they're all caused by the same root problem—like inconsistent heading structure throughout the app. If three different screen reader users got lost in the same section, that's telling you something important about your information architecture, not just your accessibility implementation.
Pay attention to the language your testers use too. When someone says "I couldn't figure out where I was" or "I had to guess what would happen," that's usually a sign of missing feedback or unclear labelling. I've found that users with disabilities are often incredibly articulate about their needs because they've had to learn exactly what works and what doesn't through years of navigating poorly designed interfaces.
Don't ignore the positive feedback either. If multiple users tell you a particular feature worked brilliantly, make note of it so you don't accidentally break it in future updates. I've seen teams "improve" things that were already working well for disabled users, and it's frustrating for everyone involved. Understanding which features will actually make you money helps balance accessibility improvements with business objectives, ensuring both inclusivity and sustainability.
Conclusion
Look, I'll be honest with you—accessibility testing isn't something you do once and tick off a list. It's an ongoing part of your development process, and the apps I've built that genuinely work for everyone are the ones where we baked this thinking in from day one. Not as an afterthought. Not as a compliance exercise. But as a core part of how we build.
The thing is, when you start testing with users who have disabilities, you'll discover problems that no automated tool would ever catch. I've had testing sessions where a screen reader user got completely stuck on a form because the error messages weren't properly announced—something that looked fine visually and passed every automated check we ran. You know what? That's the real value here. Real people using your app in ways you didn't anticipate, showing you exactly where your assumptions fell apart.
What strikes me most after years of doing this work is how much it improves the app for everyone. When we simplified navigation for users with cognitive disabilities in a healthcare app, our support tickets dropped by 30% across the board. Clearer labels, better focus states, logical tab orders... they help every single user. And that's the bit people miss when they treat accessibility as a separate thing.
Start small if you need to—even testing with three users who rely on assistive tech will teach you more than any checklist. Build relationships with your testers, pay them properly, and actually listen to what they tell you. The apps that win in the long run are the ones that work for the most people, and you can't get there without doing this properly.
Frequently Asked Questions
How many participants do you need for accessibility testing?
From my experience running accessibility tests across healthcare, retail, and fintech apps, you'll want 5-8 participants per disability type you're testing for. Any fewer and you won't spot meaningful patterns; any more and you'll hit diminishing returns where new participants aren't revealing fresh issues.

What's the most common mistake teams make when testing for accessibility?
The most common error I see is testing only with screen readers and assuming that covers accessibility—when your actual users might rely on voice control, switch navigation, or magnification tools instead. I've worked on apps where we optimised everything for VoiceOver users only to discover our target audience primarily used voice commands and high contrast modes.

How much should you pay participants with disabilities?
I typically pay participants £50-100 per hour (20-30% more than standard user testing rates) because these sessions require more effort and take longer. Budget for 90-minute sessions instead of the usual 60 minutes, as assistive technology naturally adds time to navigation and you don't want to rush participants through their genuine experience.

Can't automated tools catch these issues instead?
Automated tools catch maybe 30% of real accessibility issues—I've had testing sessions where screen reader users got completely stuck on forms that passed every automated check we ran. The tools miss context problems, confusing navigation flows, and the actual lived experience of using assistive technology daily.

Where do you find participants with disabilities?
I've had the best success working with disability advocacy organisations like RNIB or Scope, and posting in social media groups focused on assistive technology. These communities are usually keen to help improve digital experiences, but you'll need to ask specific questions upfront about what assistive technologies they use and accommodate longer session times.

What accessibility issues come up most often?
After testing hundreds of users, the same problems appear repeatedly: touch targets under 44 points that users with motor impairments can't hit accurately, unlabelled buttons that screen readers announce as just "button" with no context, and colour contrast that's too low for users with low vision. These basic issues trip people up far more than complex edge cases.

How should you prioritise the accessibility issues you find?
I categorise issues as blockers (completely prevent task completion), frustrations (make the experience painful), and nice-to-haves, then document the specific location, steps to reproduce, and estimated fix effort. Blockers like improperly labelled form submission buttons go straight to the top—I've seen banking apps where users literally couldn't transfer money due to inaccessible touch targets.

Is accessibility testing a one-time exercise?
Absolutely not—it's an ongoing part of your development process, not a one-time compliance exercise. I've seen teams "improve" features that were already working brilliantly for disabled users, breaking accessibility in updates, which is why you need to test regularly and build relationships with participants who can provide consistent feedback as your app evolves.