Expert Guide Series

How Do You Research Users Who Don't Speak Your Language?

Apps that properly localise for their target markets see retention rates that are 3-5 times higher than those that simply translate their interface into another language. That's a massive difference, and it's something I learned the hard way after launching an e-commerce app in Germany that performed brilliantly in the UK but fell completely flat with German users. The problem wasn't the translation—we'd hired a professional translator and everything looked right on the surface. The issue was that we hadn't actually spoken to German users about how they shop, what they expect from a checkout process, or even how they prefer to pay for things online.

Here's the thing: building apps for international markets isn't just about swapping out text strings and maybe adjusting some date formats. It's about understanding how people in different countries think about and use technology in fundamentally different ways. I've worked on apps where payment preferences varied wildly between markets—what works in the UK doesn't work in Japan, and what Japanese users expect would confuse American users. And that's before we even get into more complex issues like data privacy expectations, navigation patterns, or colour symbolism.

Understanding your users across different languages and cultures isn't optional anymore; it's the difference between an app that gets deleted after one use and one that becomes part of someone's daily routine.

Over the years I've built apps for healthcare providers serving diverse communities, fintech platforms expanding into Asian markets, and education apps that needed to work across five different languages simultaneously. Each project taught me something new about the challenges of cross-cultural research—and more importantly, how to actually do it well without needing a massive research budget or a team of anthropologists.

Why Language Barriers Matter in User Research

I remember working on a healthcare app that was performing brilliantly in the UK but tanking in Germany—same features, same design, completely different reception. We'd done our research, or so we thought. The problem? We'd translated the questions but hadn't actually understood what German users were telling us through their responses. Their politeness masked critical usability issues we completely missed.

Language barriers don't just affect what users say; they affect what they don't say. When people struggle to express themselves in a language that's not their native tongue, they simplify their feedback. They skip the nuanced criticism. They give you surface-level responses that sound fine but hide the real problems with your app. And here's the thing—you might think you're getting good data when actually you're getting the easiest-to-communicate data, which isn't the same thing at all.

What You Actually Lose When Language Gets in the Way

The impact goes deeper than just misunderstanding a few words. Cultural context shapes how people describe problems, what they consider important enough to mention, and even how they interact with your research process. I've seen users in Japan give completely different feedback in English versus Japanese—not because they were being dishonest, but because certain concepts don't translate well and their communication style changes dramatically between languages.

When you don't account for language barriers properly, you end up with:

  • Incomplete understanding of user pain points because they can't articulate them clearly
  • Missed cultural preferences that affect how users interact with your interface
  • Biased data towards users with stronger English skills, who might not represent your actual market
  • Lower quality feedback as users focus on being understood rather than being thorough
  • Misinterpretation of severity—what seems like mild concern might actually be serious criticism expressed politely

It's honestly one of the biggest mistakes I see teams make when expanding internationally. They treat translation as a checkbox exercise rather than recognising it fundamentally changes the quality and type of insights you'll gather. Your research is only as good as your users' ability to communicate with you, which means language isn't just a barrier—it's the entire foundation of whether your research will actually work.

Finding and Recruiting Users Across Different Markets

Right, so you've decided you need to do proper user research across different markets—now comes the tricky bit of actually finding people to talk to. I've worked on apps that needed to launch in Japan, Brazil, and Germany simultaneously, and let me tell you, recruiting the right users in each market is its own challenge. You can't just post on Facebook and hope for the best; what works in the UK rarely translates directly to other regions.

The first thing I always do is identify where your target users actually spend their time online. In China, you're not using Google or Facebook—you're looking at WeChat and Weibo. In Russia, VKontakte is massive. In South Korea, Naver dominates search. It's not just about finding any users; it's about finding users who represent your actual target audience in that specific market. When we built a healthcare app that needed testing in five countries, we partnered with local universities in each region because medical students gave us quick access to health-conscious, tech-savvy participants who understood the domain.

Where to Find International Research Participants

  • Local research agencies who already have participant databases in your target markets
  • Universities and educational institutions—students are often eager to participate for small incentives
  • Professional networks like LinkedIn, filtered by location and industry
  • Region-specific social platforms (LINE in Japan, Telegram in Eastern Europe)
  • Expat communities if you need English speakers familiar with local culture
  • Local co-working spaces and startup hubs for tech-savvy early adopters

One mistake I see constantly? Recruiting expats or English speakers when you actually need native users. Sure, it's easier to communicate with someone who speaks your language, but they won't give you authentic insights into how local users think and behave. When we tested an e-commerce app in France, talking to British expats living in Paris told us almost nothing about actual French shopping behaviours and expectations.

Budget at least 2-3 weeks longer for international recruitment than you would domestically. Response rates vary wildly by culture—some markets respond quickly, others take persistent follow-up. Japan, for example, often requires formal introductions through mutual contacts rather than cold outreach.

Incentives need localising too. What's considered fair compensation in one market might be insulting in another. I learned this the hard way when our standard £50 Amazon voucher fell completely flat in a market where Amazon barely operated. Local gift cards, mobile top-ups, or even direct bank transfers work better in many regions—just make sure you understand local tax implications and payment regulations before promising anything.

Working with Translators and Local Research Partners

Finding the right translator for user research isn't the same as hiring someone for marketing copy—trust me, I learned this the hard way on a healthcare app project for the Middle East market. We initially used a translation agency that gave us technically perfect Arabic translations, but when we sat in on the research sessions the translator kept paraphrasing what users said rather than giving us their exact words. It's a bit mad really, because we were losing all the nuance and emotion that makes qualitative research so valuable. Now I always look for translators who have experience with research specifically, not just general translation work.

The best setup I've found is to have your translator on a video call during live sessions, providing real-time interpretation. Yes, it makes sessions longer—what would normally take 30 minutes stretches to about 50—but you get to hear the user's tone of voice and see their reactions directly. For a fintech app we built for the Spanish market, our local research partner actually taught us that certain questions about money were considered rude when asked directly; we had to completely restructure our interview guide to get honest answers about spending habits and financial goals.

Choosing Between Translation Services and Local Partners

Local research partners cost more than translators (usually £500-800 per day versus £40-60 per hour for translation) but they bring cultural context you simply can't buy elsewhere. For straightforward usability testing where you're mainly watching someone navigate your app, a good translator is probably enough. But when you're doing discovery work or trying to understand behaviours and motivations? You need someone who understands why people in that market make the choices they do. I always brief translators to flag anything that seems culturally significant, even if it slows down the session—those moments often reveal the insights that actually shape your product decisions.

Adapting Your Research Methods for Different Cultures

Here's what I've learned from testing apps in places like Tokyo, Dubai, and São Paulo—what works brilliantly in London can fall flat in Mumbai. It's not just about translating words; the entire research approach needs to shift. When we built a healthcare app for a client expanding into Southeast Asia, our standard interview techniques produced polite but useless responses. People nodded, agreed with everything we said, and gave us nothing we could actually use. The problem? Direct questioning is seen as confrontational in many Asian cultures, and participants were too polite to criticise anything.

I had to completely rethink how we gathered feedback. Instead of asking "What don't you like about this feature?" we started using observational methods—just watching people interact with prototypes whilst they thought aloud naturally. In Japan, we found group discussions worked better than one-on-one interviews because the collective setting felt less pressured. But in Germany, individual sessions were far more effective because people were comfortable being direct and critical. You can't use the same playbook everywhere.

The biggest mistake I see agencies make is assuming their research methods are culturally neutral—they're absolutely not.

For a fintech app we tested across the Middle East, we had to adjust session timing around prayer schedules and ensure our female researchers could interview female participants (male researchers simply couldn't in some regions). We also learned that showing early rough prototypes—something we do constantly in Western markets—didn't work well in places where "face" matters more. Participants felt embarrassed critiquing something that looked unfinished. So we started presenting more polished mockups, even in early stages, just to make people comfortable enough to give honest feedback. Every market teaches you something new about what genuine user research actually looks like.

Testing Your App with International Users

Right, so you've recruited your international users and got your translators sorted—now comes the actual testing part. This is where theory meets reality and, let me tell you, it's often messier than you'd expect. I've run testing sessions with users in Japan, Brazil, Germany, and about fifteen other countries, and each one taught me something different about how people actually interact with apps in their own context.

The biggest mistake I see? Running international tests exactly like you'd run them at home. When we tested a healthcare app for a client expanding into Southeast Asia, we initially scheduled hour-long video sessions just like we do with UK users. Disaster. Turned out users in Thailand and Vietnam were incredibly uncomfortable being on camera for that long and giving critical feedback felt rude to them. We had to completely rethink our approach; shorter sessions, more indirect questioning, and honestly, accepting that we'd need multiple touchpoints to get the same depth of feedback.

Different Testing Approaches for Different Markets

You need to adapt your testing method to match cultural norms—not the other way round. In Nordic countries, users are typically very direct and will tell you exactly what's wrong with your app. Great. But in many Asian markets, users might say everything is "fine" even when they're struggling. For a fintech app we built for the Middle Eastern market, we learned to watch what users did rather than what they said; their behaviour told us far more than their words ever could. What persuades people to install an app in the first place also varies significantly by culture, so those decision-making factors are worth testing too.

Timezone coordination is genuinely painful but you can't skip it. I mean, sure, you could test at 3am your time, but you'll miss half the insights because you're knackered. We typically batch international testing sessions into focused weeks where the team adjusts their schedule. It's not ideal but it works better than spreading things too thin.

Technical Setup That Actually Works

Here's what you need to sort before your first international test session:

  • Screen recording software that works in every region (WeChat-based tools for China, for example)
  • Backup communication channels because Zoom doesn't work everywhere
  • Test builds that handle different character sets—I've seen apps crash when users entered Arabic or Mandarin
  • Local payment methods if you're testing checkout flows; nobody in Germany wants to see PayPal as the only option
  • VPN access to test how your app performs with local network conditions and any regional content restrictions

One thing that surprised me early on? Network speeds vary dramatically. An e-commerce app that loaded instantly on our office WiFi took nearly 30 seconds to load product images for users in rural India. We only discovered this because we tested on actual devices, in actual locations, with actual network conditions. Emulators and simulators can't replicate that reality, no matter how sophisticated they are.
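You can at least sanity-check the arithmetic before you ever travel. A back-of-envelope sketch—the bandwidth profiles here are assumed figures for illustration, not measurements:

```python
def estimated_load_seconds(payload_bytes: int, bandwidth_kbps: float) -> float:
    """Rough time to transfer a payload at a given downlink speed.
    Ignores latency, retries and CDN effects, so this is a floor,
    not a forecast."""
    bits = payload_bytes * 8
    return bits / (bandwidth_kbps * 1000)

# Illustrative profiles (assumed figures, not measurements)
profiles = {"office_wifi": 50_000, "rural_3g": 500}  # kbps
product_page = 2 * 1024 * 1024  # ~2 MB of product images

for name, kbps in profiles.items():
    print(f"{name}: {estimated_load_seconds(product_page, kbps):.1f}s")
```

Roughly 2 MB of images at 500 kbps is already over half a minute of waiting—which is exactly the gap between office WiFi and a rural connection that caught us out.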

Making Sense of Research Data from Multiple Languages

Right, so you've done your research across three different markets and now you're sat there with interview transcripts in Spanish, user feedback in Mandarin, and survey responses in German. Fun times. This is where things get a bit messy if you don't have a proper system in place—and I've learned this the hard way working on a healthcare app that needed to work across European markets.

The biggest mistake I see teams make is just running everything through Google Translate and calling it a day. Sure, it'll give you the gist, but you'll miss the nuances that actually matter. When we were building a fintech app for Southeast Asian markets, our translated feedback said users wanted "more security features" but what they actually meant (when we checked with our local partner) was they needed clearer explanations of existing security because they didn't trust what they couldn't understand. See the difference?

I always create a central spreadsheet where we log key findings in English but keep the original language quotes alongside them. This way, if something doesn't quite make sense later, we can go back and check the original wording with a native speaker. It's saved our bacon more times than I can count.
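If you'd rather keep that log in code than in a spreadsheet, the shape is simple. A sketch of one possible structure—field names and example entries are mine, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    market: str          # e.g. "DE", "JP"
    language: str        # language of the original quote
    summary_en: str      # English summary used day-to-day
    original_quote: str  # verbatim quote, kept for later verification
    tags: tuple          # e.g. ("checkout", "trust")

# Illustrative entries, not real research data
log = [
    Finding("DE", "de", "Wants explicit data-use consent at signup",
            "Ich möchte genau wissen, was mit meinen Daten passiert.",
            ("privacy", "onboarding")),
    Finding("JP", "ja", "Prefers a minimal home screen summary",
            "ホーム画面はシンプルな方がいいです。",
            ("home", "density")),
]

def by_market(findings, market):
    """Filter the log down to one market for regional analysis."""
    return [f for f in findings if f.market == market]
```

The key design choice is the same as in the spreadsheet: the English summary and the verbatim quote travel together, so you can always go back to the source.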

Organising Your Multi-Language Data

Here's what actually works when you're dealing with research data from different languages:

  • Work with professional translators who understand UX terminology—not just language experts
  • Create a shared glossary of key terms across all languages so everyone's talking about the same concepts
  • Tag your findings by market and language so you can spot regional patterns later
  • Keep video recordings of sessions even if you can't understand them—body language tells you loads
  • Schedule regular sync-ups with your local research partners to discuss confusing or contradictory findings

Always verify critical findings with at least two sources before making big design decisions. What looks like a pattern might just be a translation issue or cultural misunderstanding that could send you down the wrong path entirely.

Looking for Patterns Across Markets

Once you've got your data organised, the real work begins: finding what's universal versus what's market-specific. I use colour coding in our research docs—green for findings that appear across all markets, yellow for regional patterns, and red for market-specific issues that need localised solutions. It sounds simple but it works.
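That traffic-light rule is mechanical enough to sketch in a few lines. The thresholds here are my reading of the scheme above, and they're a judgement call, not a formula:

```python
def classify_finding(markets_observed: set, all_markets: set) -> str:
    """Green: appears in every market; yellow: a regional cluster of
    two or more; red: one market only. Thresholds are a judgement call."""
    if markets_observed == all_markets:
        return "green"   # universal - solve once, centrally
    if len(markets_observed) > 1:
        return "yellow"  # regional pattern - shared solution, local tweaks
    return "red"         # market-specific - needs a localised solution

markets = {"UK", "DE", "BR", "JP"}
print(classify_finding({"UK", "DE", "BR", "JP"}, markets))  # green
print(classify_finding({"UK", "DE"}, markets))              # yellow
print(classify_finding({"JP"}, markets))                    # red
```

It won't replace the discussion with your local partners, but it makes the first sort of a few hundred tagged findings much faster.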

When we built an e-commerce app for a retail client expanding into Latin America and Europe, we found that checkout anxiety was universal but the reasons behind it were completely different. European users worried about data privacy whilst Latin American users were concerned about delivery reliability. Same symptom, totally different solutions needed.

Building User Personas That Work Globally

Here's what most teams get wrong—they build personas for each country separately and end up with 15 different documents that contradict each other. I've seen this happen with a fintech client who had one persona for their UK users, another for Germany, and a third for Poland; the problem is these personas focused on surface-level differences (language, currency) but missed the universal behaviours that actually mattered for the app's core functionality.

The trick is building what I call "layered personas" where you start with universal needs and motivations, then add regional variations on top. When we built a health tracking app that launched in six markets simultaneously, we identified three core user types based on their relationship with health data—the Optimiser who tracks everything obsessively, the Casual Logger who checks in occasionally, and the Reluctant User who only engages when prompted by a doctor or family member. These personas existed across every market we researched.

What changed between countries wasn't the persona type but the specific triggers and pain points. Our German Optimisers cared deeply about data privacy and wanted explicit control over every permission; our Brazilian users in the same persona group were more concerned about whether the app would drain their phone battery or use too much mobile data. Both groups wanted detailed tracking—their barriers to adoption were completely different.

I always include a "universal needs" section at the top of each persona, followed by "regional considerations" that highlight what changes by market. This keeps your development team focused on building one solid core experience that can flex to accommodate local differences, rather than trying to build essentially different apps for each market. It's more maintainable and honestly, it just makes more sense from a technical standpoint.
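The layered structure translates naturally into data. One possible shape—the fields and the Optimiser example are illustrative, drawn from the health tracking story above:

```python
# A "layered" persona: universal core plus per-market overrides.
# Names and fields are illustrative, not a standard schema.
optimiser = {
    "type": "Optimiser",
    "universal_needs": ["detailed tracking", "data export", "trends over time"],
    "regional_considerations": {
        "DE": {"barrier": "data privacy", "wants": "explicit permission control"},
        "BR": {"barrier": "battery and data usage", "wants": "lightweight sync"},
    },
}

def requirements_for(persona: dict, market: str) -> dict:
    """Merge the universal layer with one market's overrides; a market
    with no overrides just gets the universal core."""
    return {
        "needs": persona["universal_needs"],
        **persona["regional_considerations"].get(market, {}),
    }
```

Because the universal layer is shared, adding a seventh market means adding one small override block, not a sixteenth persona document.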

Turning Research Insights into Design Decisions

Right, so you've got all this research data from different countries and now you need to actually do something with it. This is where I see a lot of teams struggle, because it's tempting to just design for your home market and then bolt on some translations later—but that's honestly a recipe for disaster. I've watched healthcare apps fail in Southeast Asian markets because we initially designed the privacy settings based on European attitudes towards data sharing, which turned out to be completely wrong for those regions. The research told us this, but we didn't listen hard enough at first.

What I do now is create a sort of design decision matrix where each major feature gets evaluated against the different markets you're targeting. Sounds fancy but it's really just a spreadsheet where you list out the insights from each region and identify where they conflict with each other. For a fintech app we built, users in Germany wanted loads of detailed transaction information visible on the home screen whilst Japanese users found that overwhelming and preferred a minimal summary view. We couldn't do both, so we made the home screen customisable—letting users choose their information density level.
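That matrix is really just features on one axis, markets on the other, and a conflict check across each row. A sketch—the German/Japanese entries come from the example above, while the structure and the second feature are my own illustration:

```python
# Feature-by-market decision matrix. The home-screen entries reflect
# the example in the text; "biometric_login" is a hypothetical row
# added purely to show the non-conflicting case.
matrix = {
    "home_screen_density": {
        "DE": "detailed transaction info visible",
        "JP": "minimal summary view",
    },
    "biometric_login": {
        "DE": "expected at launch",
        "JP": "expected at launch",
    },
}

def conflicts(feature_insights: dict) -> bool:
    """A feature row conflicts when markets want different things."""
    return len(set(feature_insights.values())) > 1

for feature, insights in matrix.items():
    status = "needs regional flexibility" if conflicts(insights) else "universal"
    print(f"{feature}: {status}")
```

Rows that conflict become candidates for customisation (like the density setting we shipped); rows that agree go into the universal core.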

The best global apps don't try to please everyone equally; they identify which features are universal and which need regional flexibility.

Sometimes you'll find that certain insights apply to specific user segments rather than entire countries. When testing an e-commerce app, younger users in Brazil and India had very similar browsing behaviours that were totally different from older users in those same markets. Age mattered more than geography in that case. The trick is spotting these patterns across your data and not assuming every difference is cultural—sometimes it's just about life stage or tech literacy levels.

Conclusion

Looking back at everything we've covered—I mean, it's a lot isn't it?—but here's what really matters when you're trying to understand users who speak different languages. It's about being genuinely curious about how people in other markets think and behave, not just translating your English surveys word-for-word and calling it done. I've seen too many clients waste money on research that asks the wrong questions in technically correct Spanish or Mandarin.

The biggest lesson I've learned from building apps for international markets is that language is just the surface level challenge; the real work is understanding cultural context, local behaviours, and what actually motivates people in each market. When we built that healthcare app for Southeast Asian markets, we discovered that privacy concerns manifested completely differently than they did in the UK—users weren't worried about data breaches, they were concerned about family members finding out about their medical conditions. No amount of translation would have revealed that without proper local research partners.

Start small if you need to. You don't need to research fifteen markets simultaneously. Pick one or two priority markets, invest in finding good local partners who understand both research methodology and their market, and build your process from there. The tools and techniques we've discussed—remote testing platforms, asynchronous research methods, working with skilled translators—these all become easier once you've done it a few times.

And look, it's messy sometimes. You'll get conflicting feedback from different markets. You'll realise your core assumptions about user behaviour were completely wrong. But that discomfort means you're learning something real about your users, and that's infinitely more valuable than designing based on guesswork. The apps that succeed globally are the ones built on actual understanding of their users, not assumptions about them.

Frequently Asked Questions

How much extra time should I budget for international user research compared to domestic research?

From my experience, budget at least 2-3 weeks longer for international recruitment than you would domestically, as response rates vary wildly by culture. The actual research sessions also take longer—what normally takes 30 minutes stretches to about 50 minutes when working with translators, but you get much richer insights.

Should I use Google Translate for analysing user feedback from different languages?

Absolutely not—I've seen this mistake cost teams critical insights. When we built a fintech app for Southeast Asian markets, Google Translate said users wanted "more security features" but they actually meant clearer explanations of existing security because they didn't trust what they couldn't understand. Always work with professional translators who understand UX terminology, not just language conversion.

Is it better to recruit English-speaking expats or native users in each market?

Always recruit native users, even though it's more challenging. When we tested an e-commerce app in France, talking to British expats living in Paris told us almost nothing about actual French shopping behaviours and expectations. Expats might be easier to communicate with, but they won't give you authentic insights into how local users think and behave.

How do I know if my research methods will work in different cultures?

Direct questioning works brilliantly in Nordic countries but falls flat in many Asian cultures where it's seen as confrontational. I've learned to adapt completely—in Japan, group discussions work better than one-on-one interviews, whilst in Germany, individual sessions are far more effective. Always test your research approach with a small group first and be prepared to pivot your methodology.

What's the difference between translation and localisation for user research?

Translation just swaps words, whilst localisation adapts the entire approach to cultural context. I learned this the hard way when our technically perfect Arabic translations missed the fact that certain questions about money were considered rude when asked directly—we had to completely restructure our interview guide. Proper localisation means understanding not just language, but cultural norms around communication, privacy, and feedback.

How much should I pay international research participants?

Incentives need localising just like everything else—our standard £50 Amazon voucher fell completely flat in a market where Amazon barely operated. Research what's considered fair compensation in each market, as what works in the UK might be insulting elsewhere. Local gift cards, mobile top-ups, or direct bank transfers often work better than international vouchers.

Can I use the same user personas across all international markets?

Build "layered personas" that start with universal needs and add regional variations on top. When we built a health tracking app across six markets, we found the same core user types everywhere—but German users cared about data privacy controls whilst Brazilian users worried about battery drain. Same persona type, completely different barriers to adoption.

How do I spot the difference between cultural patterns and individual preferences in my research data?

Create a colour-coding system—I use green for findings across all markets, yellow for regional patterns, and red for market-specific issues. Sometimes what looks cultural is actually about age or tech literacy; younger users in Brazil and India had more similar behaviours than older users in those same markets. Always verify critical findings with at least two sources before making design decisions.
