Expert Guide Series

What Should My App Do When There's a Data Breach?

Every app developer's worst nightmare is getting that message—the one that says your database has been compromised or someone's found a way into your user data. It's a bit mad really, because you can do everything right, follow all the security protocols, hire the best people, and still find yourself dealing with a data breach. I've been building apps for years now and I've seen this happen to companies of all sizes; from tiny startups with a few hundred users to massive platforms with millions of daily active users—nobody is completely immune to this risk.

The problem is that most app owners don't have a plan until it's too late. They think about user experience, they obsess over feature sets, they worry about App Store rankings and user acquisition costs, but the question of "what do we actually do if our data gets breached" gets pushed to the bottom of the priority list. And honestly? I get it. It's not fun to think about. But here's the thing—when a breach happens (and statistically speaking, it probably will at some point), the first few hours are absolutely critical for your app's survival.

The way you respond to a data breach can be the difference between a temporary setback and the complete destruction of your business

GDPR has changed the game entirely when it comes to breach notifications. You can't just quietly fix the problem and hope nobody notices anymore. There are strict timelines, specific people you need to inform, and serious financial penalties if you get it wrong. Mobile app crisis management isn't just about protecting your users' data—it's about protecting your entire business, your reputation, and your legal position. This guide will walk you through exactly what your app should do when faced with a security incident, because hoping it never happens isn't a strategy that works.

Understanding What Counts as a Data Breach

Right, so let's start with the basics—what actually counts as a data breach? I mean, it's not always as obvious as you might think. Sure, if someone hacks into your database and downloads thousands of user records, that's clearly a breach. But here's the thing; breaches come in all shapes and sizes, and some are much harder to spot than others.

A data breach is basically any incident where someone who shouldn't have access to your users' data gets hold of it. This could be an external hacker breaking through your security, yes, but it could also be an employee accidentally emailing a spreadsheet of user details to the wrong person. Both count. Both need to be dealt with properly. I've seen companies miss this distinction and it always comes back to bite them—they think because it was an accident or an inside mistake that the rules don't apply. They do.

When Personal Data Gets Exposed

The key thing to understand is that personal data means more than just names and email addresses. We're talking about anything that can identify someone; their location data, their payment information, their health records if you're a fitness or medical app, even their device IDs in some cases. If any of this information ends up somewhere it shouldn't be—whether that's stolen, lost, accidentally shared, or even just accessed by someone without proper authorisation—you've got a breach on your hands.

The Grey Areas That Trip People Up

Actually, one of the trickiest parts is knowing when something crosses the line from a security incident to a reportable breach. Not every security event needs to be reported to regulators, but the rules are strict about this. If there's a real risk of harm to your users—financial loss, identity theft, damage to their reputation, or even just significant inconvenience—then you need to treat it as a proper breach. And honestly? When in doubt, treat it seriously. It's far better to over-report than to miss something that could have serious consequences for the people who trust your app.

The First 24 Hours After Discovery

Right, so you've just discovered that your app has had a data breach. Your heart's probably racing and you're wondering what the hell to do first. I mean, I've been through this with clients and it's absolutely one of the most stressful situations you'll face as an app owner—but here's the thing, how you respond in these first 24 hours can make or break everything that comes after.

The very first thing you need to do is stop the breach if it's still happening. Shut down the affected systems, change all admin passwords, revoke API keys, whatever it takes to prevent more data from leaking out. Don't worry about keeping the app running perfectly right now; containment is your priority. You can deal with angry users about downtime later—they'll be far more angry if you let their data keep leaking while you tried to maintain uptime.
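To make that concrete, here's a rough sketch of what a containment routine might look like. The key store and function names are invented for illustration; in a real app these calls would go to your secrets manager or API gateway, not an in-memory dictionary.

```python
from datetime import datetime, timezone

# Hypothetical in-memory key store; in production these calls would hit
# your real secrets manager or API gateway.
api_keys = {"mobile-prod": "active", "analytics": "active", "legacy-v1": "active"}
containment_log = []

def revoke_key(key_name):
    """Revoke a single API key and record the action with a timestamp."""
    api_keys[key_name] = "revoked"
    containment_log.append({
        "action": f"revoked API key {key_name}",
        "at": datetime.now(timezone.utc).isoformat(),
    })

def contain_breach(compromised_keys):
    """Revoke every key suspected of exposure, then report what's still live."""
    for key in compromised_keys:
        revoke_key(key)
    return [k for k, status in api_keys.items() if status == "active"]

still_active = contain_breach(["mobile-prod", "legacy-v1"])
```

Notice the log entries being written as each action happens; that record feeds straight into the documentation you'll need later.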

Once you've contained it, you need to figure out exactly what happened and what data was compromised. This is where having good logging systems pays off because you need to know: what data was accessed, how many users are affected, when did it start, and how did they get in? Write everything down as you discover it. Seriously, document everything because you'll need this information for regulators and your legal team.
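The scope investigation itself is mostly filtering and grouping your access logs. A minimal sketch, assuming you can export log rows with a user ID, source IP, timestamp and table name (all the data here is made up for illustration):

```python
# Hypothetical access-log entries; real rows would come from your database
# or API gateway logs, but the shape of the analysis is the same.
access_log = [
    {"user_id": 101, "ip": "203.0.113.9", "time": "2024-05-01T02:14:00", "table": "users"},
    {"user_id": 102, "ip": "203.0.113.9", "time": "2024-05-01T02:15:30", "table": "payments"},
    {"user_id": 101, "ip": "198.51.100.4", "time": "2024-05-01T09:00:00", "table": "users"},
]

SUSPICIOUS_IP = "203.0.113.9"  # the attacker's address, identified during triage

# Pull out every row touched from the suspicious address.
breach_rows = [row for row in access_log if row["ip"] == SUSPICIOUS_IP]

# The three answers regulators will ask for: who, what, and when.
affected_users = sorted({row["user_id"] for row in breach_rows})
tables_touched = sorted({row["table"] for row in breach_rows})
window = (min(r["time"] for r in breach_rows), max(r["time"] for r in breach_rows))
```

Even a quick script like this gives you defensible numbers for the regulator notification instead of guesses.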

Your Immediate Action Checklist

Here's what needs to happen in those first 24 hours, in order of priority:

  1. Contain the breach and stop any ongoing data access
  2. Assemble your incident response team (technical leads, legal, management)
  3. Begin investigating the scope and impact of the breach
  4. Document everything you're discovering and every action you're taking
  5. Assess whether you need to report to regulators within 72 hours (GDPR requirement)
  6. Start drafting your communication plan for users and stakeholders

Set up a dedicated Slack channel or communication hub for your breach response team right away. You need one place where everyone can share updates, ask questions and coordinate your response without information getting lost in email threads.

And look, I know this sounds like a lot to do in 24 hours, but actually the worst thing you can do is move too slowly. Under GDPR rules you've only got 72 hours to notify regulators if personal data has been compromised, which means you need to assess the situation fast. That clock starts ticking from when you first became aware of the breach, not when you finished investigating it—so don't waste time.
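The deadline arithmetic is trivial, but it's worth wiring into your response tooling so nobody has to do mental maths under pressure. A small sketch:

```python
from datetime import datetime, timedelta, timezone

GDPR_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at):
    """The 72-hour clock runs from when you became aware of the breach,
    not from when it happened or when the investigation finished."""
    return discovered_at + GDPR_WINDOW

# Example: breach discovered at 09:30 UTC on 1 May.
discovered = datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc)
deadline = notification_deadline(discovered)

# How long is left at some later checkpoint, in hours?
now = datetime(2024, 5, 2, 9, 30, tzinfo=timezone.utc)
hours_left = (deadline - now) / timedelta(hours=1)
```

Use timezone-aware timestamps throughout; an off-by-one-hour mistake around a daylight-saving change is the last thing you need during a breach.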

Who You Need to Tell and When

Right, so you've discovered a breach and now comes the bit that most app owners dread—telling people about it. But here's the thing; the timeline matters more than you might think. In the UK, GDPR gives you 72 hours to notify the ICO once you become aware of a breach. Not 72 hours from when it happened, but from when you knew about it. That's a tight window and honestly, it's one of the reasons why having a response plan ready is so important.

The ICO needs to be your first call if the breach poses a risk to people's rights and freedoms—basically, if personal data has been exposed that could harm your users. You'll need to explain what happened, what data was affected, how many users were involved, and what you're doing about it. They're actually pretty reasonable if you're transparent and acting quickly; what they don't like is being kept in the dark or finding out from the media.

When to Tell Your Users

Now, not every breach requires user notification, but if there's a high risk to individuals—like passwords, financial data, or sensitive personal information—you need to tell them directly without undue delay. I've seen companies try to downplay this or wait too long, and it always makes things worse. Your users deserve to know so they can protect themselves, change passwords, watch for suspicious activity, that sort of thing.

Other People Who Need to Know

Don't forget about your payment processors if financial data is involved, your insurance company (if you have cyber insurance), and any third-party services that might be affected. If you're working with enterprise clients, check your contracts—many have specific notification requirements. And yeah, your own team needs to know too, but make sure you're controlling the message so everyone's saying the same thing. Mixed messages during a breach just create more panic and confusion than necessary.

Writing Your User Notification

Right, this is probably the hardest email you'll ever have to write—and I mean that. When you're drafting a data breach notification, every word matters because your users are going to be scared, angry, and looking for reasons to trust you again. Or not trust you at all, depending on how you handle it.

The big mistake I see companies make? They try to hide behind legal jargon and corporate speak. Look, I get it—your legal team wants to limit liability. But your users aren't lawyers, they're real people who just want to know what happened and what they should do next. Keep it simple and direct.

Start with what actually happened. Don't bury the lead or try to minimise it. Say clearly what data was accessed, when you discovered the breach, and what you've done about it so far. Users can spot a corporate cover-up from a mile away, and nothing destroys trust faster than feeling like you're being lied to.

The clearest notification is the one that treats your users like intelligent adults who deserve straight answers, not legal waffle that leaves them more confused than before

Tell them exactly what they need to do—change passwords, monitor bank statements, watch for phishing emails. Be specific. "Be vigilant" isn't helpful; "Check your account for unauthorised transactions from the past 30 days" is. And here's something most people forget: include a direct contact method. Not a generic support email that goes into a black hole, but a dedicated helpline or email address where they can actually reach someone who knows about the breach. It's basic respect really, and it shows you're taking responsibility for the mess you've made.

Technical Steps to Stop the Breach

Right, so you've discovered there's a breach—now what? The technical response needs to happen fast, but it also needs to be methodical because one wrong move can make things worse. I mean, I've seen teams panic and shut down entire systems when isolating the affected component would've been enough. It's about balance really.

First thing: isolate the compromised systems or services immediately. If the breach came through a specific API endpoint or database, take it offline if you can do so without destroying evidence. Document everything you do—screenshots, logs, timestamps—because you'll need this for regulators and potentially for legal proceedings later. And here's the thing, don't delete anything yet; you need to preserve the crime scene basically.

Securing Your Infrastructure

Change all administrative passwords and revoke API keys that might've been exposed. Actually, change them even if you think they're safe; better to be paranoid here. Force a password reset for any affected user accounts, but don't do this before you've notified users (more on that in other chapters). Review your access logs to understand the scope—what data was accessed? How long were they in the system? This detective work is tedious but absolutely necessary.

Patching and Recovery

Identify how they got in and patch that vulnerability straight away. Whether it's a code flaw, outdated dependency, or misconfigured server, fix it before bringing systems back online. Run security scans across your entire infrastructure, not just where you found the breach; attackers often create multiple entry points. Once you've secured everything, you can start bringing services back up—but monitor closely for any suspicious activity. The forensic analysis will continue for weeks probably, but at least you've stopped the bleeding.

Working With Regulators and Legal Teams

Right, this is where things get properly serious—and honestly, a bit nerve-wracking. Once you've got your legal team involved (and you should've done that in the first 24 hours, remember?) they'll need to work alongside you to handle regulator communication. In the UK that means the ICO, and if you've got users in other countries you might be dealing with multiple data protection authorities at once. Fun times.

Your legal team will want to control most of the communication going to regulators, and you should let them. But here's the thing—they'll need you to provide accurate technical details about what happened, how many users were affected, what data was exposed, and what you've done to fix it. Don't sugarcoat anything or try to minimise the situation; regulators find out the truth eventually and lying makes everything worse. I mean it.

The ICO requires breach notification within 72 hours of you becoming aware of it (not when it happened, but when you discovered it). Your legal team will help prepare this notification but they need your technical input to get it right. Be available. Answer their questions quickly. They're not the enemy here—they're protecting you from making mistakes that could lead to massive fines.

Keep detailed records of every conversation with regulators and legal teams, including dates, times, and who said what. This documentation becomes absolutely critical if things escalate or if there's an investigation down the line.
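One simple way to keep that record honest is an append-only log, written the moment something happens rather than reconstructed from memory later. A minimal sketch; the helper name and example entries here are invented for illustration:

```python
from datetime import datetime, timezone

incident_log = []

def record(event, who, details=""):
    """Append-only record of who said or did what, and when. Never edit or
    delete entries; regulators and lawyers want the raw timeline."""
    incident_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "who": who,
        "details": details,
    })

record("ICO phone call", "Head of Legal", "confirmed receipt of 72-hour notification")
record("regulator email", "DPO", "requested affected-user counts by data category")
```

Whether it's a script like this, a shared spreadsheet, or a dedicated channel, the point is the same: timestamped entries, written in the moment, never edited after the fact.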

One thing people don't realise is that regulators actually prefer when you're cooperative and transparent. They deal with companies trying to hide breaches all the time, so when you're upfront about what went wrong and what you're doing to fix it, it genuinely helps your case. Your legal team will know how to strike that balance between being honest and not exposing you to unnecessary liability—trust their judgement on what to share and when.

Protecting Your Users After a Breach

Right, so the breach is contained and you've notified everyone—but your work isn't done yet. Actually, this is where a lot of apps make their biggest mistake; they think once the technical fix is in place, everything goes back to normal. It doesn't.

Your users are sitting there wondering if their data is being sold on some dodgy forum right now. They need help, not just reassurance. And honestly? You owe them that help.

First up—offer real protection services. I'm talking about credit monitoring if financial data was exposed, or identity theft protection if personal details got out. Yes, it's expensive. Yes, it might cost you thousands or even tens of thousands depending on your user base. But here's the thing; this shows you actually care about the consequences of what happened. Some companies try to cheap out here and offer discount codes or app credits instead, which is honestly a bit insulting when people's personal information is at risk.

Make sure you set up a dedicated support channel just for breach-related questions. Your regular support team is probably overwhelmed already, and users need quick answers about whether their specific data was affected. I usually recommend a separate email address and phone line that goes directly to people who know the full details of what happened.

Force a password reset for all users—not just the ones affected. Sure, it's a pain for everyone, but it closes off any potential access points you might have missed. Enable two-factor authentication by default if you haven't already (and if you haven't, what were you thinking?).
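If your user records live in a database, the forced reset is essentially a bulk flag update. Here's a sketch against hypothetical in-memory records; in production this would be an update against your real user table, followed by invalidating any active sessions so stolen credentials stop working immediately.

```python
# Hypothetical user records; in production this would be a bulk update
# against your real user table, not a list of dictionaries.
users = [
    {"id": 1, "password_reset_required": False, "two_factor_enabled": True},
    {"id": 2, "password_reset_required": False, "two_factor_enabled": False},
]

def force_reset_all(users):
    """Require a new password for every account, affected or not, and turn
    on two-factor authentication by default at the same time."""
    for user in users:
        user["password_reset_required"] = True
        user["two_factor_enabled"] = True
    return users

force_reset_all(users)
```

The app then checks the `password_reset_required` flag at login and refuses entry until a new password is set.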

Keep users updated regularly. Send weekly emails for at least the first month showing what steps you're taking to prevent this happening again. Transparency builds trust back faster than silence ever will.

Rebuilding Trust and Preventing Future Incidents

Right, so you've contained the breach, notified everyone, and dealt with the regulators—now comes the hard part: actually rebuilding trust with your users. It's not something that happens overnight; I mean, people have trusted you with their data and that trust has been broken. You can't just send one apology email and expect everyone to move on.

The first thing you need to do is be visible and accountable. Keep updating your users regularly about what you're doing to prevent this happening again. Show them the specific changes you've made to your security infrastructure; tell them about the new encryption protocols, the additional security audits, the extra team members you've hired. People need to see concrete action, not just promises.

The companies that recover best from data breaches are the ones that treat the incident as a turning point rather than just a crisis to manage.

But here's the thing—prevention is where you really prove yourself. You need to implement proper security testing as part of your development cycle now. Regular penetration testing, code reviews focused on security vulnerabilities, and proper access controls should become standard practice. I've seen too many apps go through a breach, make temporary fixes, then slowly slip back into old habits once the crisis passes.

Consider getting security certifications like ISO 27001 or SOC 2 if you handle sensitive data. Yes, they're expensive and time-consuming, but they demonstrate to users that you're taking security seriously at an organisational level. And look, some users will never come back no matter what you do. That's just the reality of a data breach. Your job is to make sure the ones who do stay can see that your app is genuinely safer than it was before... because if it isn't, you're just waiting for the next incident to finish you off completely.

Conclusion

Look—nobody wants to deal with a data breach. It's stressful, it's expensive, and it can seriously damage everything you've built. But here's the thing; having a plan in place before anything goes wrong makes all the difference between an app that survives a breach and one that doesn't recover.

I've watched apps handle breaches in completely different ways over the years. Some teams panic and try to hide what's happened, which always—and I mean always—makes things worse. Others follow the steps we've covered in this guide: they act quickly, communicate honestly with their users, work with regulators properly, and actually fix the underlying problems. Those apps usually come out stronger on the other side.

The truth is that even big companies with massive security teams get breached sometimes. You can do everything right and still face an incident. What matters most is how you respond when it happens... and being prepared before it does. Actually having a response plan ready, knowing who needs to be contacted, understanding your legal obligations—these things turn a potential disaster into something manageable.

Your users are trusting you with their data. That's not a small thing. When that trust gets broken by a breach, you need to earn it back through transparency, accountability, and real action. No excuses, no corporate speak, just honest communication about what happened and what you're doing to make it right. That's how you build an app that lasts—not by never making mistakes, but by handling them properly when they inevitably occur. And if you haven't already, sit down this week and start building your response plan; you'll sleep better knowing it's there if you need it.
