Why Your App Keeps Getting Rejected Over Privacy Settings
App store rejections for privacy issues have become one of the most frustrating roadblocks for developers, and I've watched this problem grow steadily worse over the past five years as both Apple and Google have tightened their requirements. The rejection email usually arrives within 48 hours of submission (sometimes within minutes if the automated systems catch something obvious), and the explanation often feels vague or confusing, leaving you wondering what exactly went wrong with your privacy implementation. I've probably walked about 50 different clients through this exact situation, and the pattern is always the same... they thought they'd covered everything, ticked all the boxes during submission, but still got that dreaded "we've identified one or more issues with your submission" message. The frustration compounds when you're working to a deadline, when you've got users waiting for features, when every day of delay costs money in development resources and missed market opportunities.
Three out of every four app rejections we handle at the agency now involve some aspect of privacy compliance, compared to maybe one in ten just a few years back.
What makes this particularly difficult is that privacy requirements keep evolving, the documentation from Apple and Google doesn't always match what reviewers actually check, and the automated scanning tools both platforms use can flag issues that seem completely unrelated to what you're actually doing with user data. Back in 2019, you could get away with generic privacy policy language and basic permission requests, but now both stores run sophisticated static analysis on your code, checking for API calls, third-party SDKs, and data handling patterns that might not align with what you've declared in your app privacy details.
The Real Reason App Store Reviewers Flag Privacy Issues
The truth is that Apple and Google aren't just reading your privacy policy anymore; they're scanning your compiled code for specific function calls and framework usage that indicates data collection. I've seen apps rejected where the privacy policy was perfect and the permissions were properly explained, but buried in a third-party analytics SDK was a call to access the advertising identifier that the developer didn't even know was there. The automated systems flag these discrepancies immediately, and then human reviewers double-check the findings before sending that rejection notice. This means you can't just copy privacy documentation from another app or use template language without understanding exactly what data your app actually touches. This is where having proper debugging tools becomes essential for identifying hidden data collection in your codebase.
The review process works in layers now. First pass is automated scanning that checks your binary against your declared privacy manifest, looking for any API usage that doesn't match up with what you've told them you're collecting. Second pass involves manual review where someone actually opens your app, goes through the user journey, and checks whether permission requests appear when and how you've described them. Third pass examines your privacy policy URL to verify it's accessible, readable, and actually covers what your app does rather than just generic legal language.
- Undeclared SDK usage that accesses location, contacts, photos or other protected data
- Permission requests without proper user-facing purpose strings explaining why you need access
- Privacy policy links that lead to 404 errors or generic company policies that don't mention the app
- Discrepancies between what your privacy manifest says and what the code actually does
- Using required permissions when optional ones would work for your use case
The difference between what counts as collection versus what doesn't can be subtle. If your app reads the device timezone, that's not considered data collection by itself, but if you send that timezone to your servers along with other information that could identify a user, suddenly it becomes part of your data fingerprint that needs declaring. I worked with a fintech client who got rejected three times before we realised their fraud detection service was collecting device identifiers in the background without declaring it, even though they never showed that data to users or used it for advertising purposes. Understanding these nuances is particularly crucial when expanding internationally, as our global compliance checklist demonstrates.
Understanding What Data Collection Actually Means to Apple and Google
Data collection triggers the moment information leaves the device or gets stored in a way that connects to a specific user, which sounds straightforward but gets complicated fast when you dig into real-world implementations. If you're using Firebase Analytics, you're collecting data even if you never look at the dashboard, because events are being sent to Google's servers with associated identifiers. If you're storing user preferences locally but syncing them to iCloud or Google Drive, that counts as collection because the data is leaving the device. If you're using a third-party keyboard SDK, a mapping library, a customer support chat widget, or basically any framework that connects to external services, you need to understand exactly what that framework is collecting and declaring it properly.
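If you're shipping an analytics SDK before you've finished mapping your declarations, most of them let you hold collection back until the user consents. Here's a minimal sketch assuming Firebase Analytics; the consent storage key is illustrative, and you'd pair this with setting FirebaseAnalyticsCollectionEnabled to NO in your Info.plist so nothing fires before this code runs:

```swift
import FirebaseCore
import FirebaseAnalytics

// Sketch: keep Firebase Analytics switched off until the user consents,
// so no events leave the device in the meantime. Assumes
// FirebaseAnalyticsCollectionEnabled is set to NO in Info.plist so
// collection is also disabled at launch, before this code runs.
func configureAnalytics() {
    FirebaseApp.configure()
    // Note: reading UserDefaults is itself a required-reason API on iOS 17+,
    // so this access needs a CA92.1 entry in your privacy manifest.
    let hasConsented = UserDefaults.standard.bool(forKey: "analyticsConsent") // illustrative key
    Analytics.setAnalyticsCollectionEnabled(hasConsented)
}
```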
Check the privacy nutrition labels of similar apps in your category to see what they're declaring, then audit every SDK and service you're using against that standard to make sure you've covered everything.
Apple defines data collection as any information that gets linked to a user's identity or device, while Google focuses on what gets sent off the device for any purpose including analytics, advertising, or functionality. Both platforms distinguish between data used for app functionality (like storing a user's preferences), data used for analytics (tracking how features get used), and data used for advertising (building profiles for targeted ads). The same piece of information can fall into different categories depending on how you use it, and you need to declare all applicable uses rather than just the primary one. This complexity around data handling is why many developers overlook essential legal costs during the planning phase.
What Counts As Personal Data
Email addresses are obvious, but device identifiers like IDFA, IDFV, or Android Advertising ID all count even though they're not directly linked to someone's name. IP addresses count, even if you're just using them for basic server logging. Location data counts whether it's precise GPS coordinates or rough location based on IP address. Health and fitness information includes step counts, heart rate, workout data, and even dietary logs, regardless of whether your app is specifically categorised as a health app. Financial information covers not just bank details but also purchase history, credit scores, and payment information even if you're using a third-party processor like Stripe.
The mistake I see most often is developers thinking that because they're using a payment processor or authentication service, they're not collecting financial or contact data. If a user enters their email in your app and you send it to Auth0 or Firebase, you're collecting email addresses regardless of where the actual database lives. If someone makes a purchase through your app using Apple Pay or Google Pay, you're still processing financial information even though the payment credentials never touch your servers. The rule is simple... if the data passes through your app at any point, you need to declare how you handle it. For specialized apps like peer-to-peer payment platforms, this becomes even more critical given the sensitive financial data involved.
Building Privacy Manifests That Pass Review on First Submission
Apple introduced privacy manifest files as a requirement for certain APIs, and this has become one of the fastest paths to rejection if you don't understand what goes in them. The manifest is a property list file that lives in your Xcode project and declares exactly which sensitive APIs your app uses and why, focusing particularly on things like file timestamps, system boot time, disk space, and user defaults. When you call certain functions (like accessing the documents directory or checking available storage), Apple wants to know the specific reason through standardised codes rather than freeform text.
I've built probably 20 of these manifests in the past few months, and the key is being extremely precise about your reasons. Apple provides specific reason codes like "CA92.1" for accessing user defaults to read the app's own configuration, or "85F4.1" for checking disk space to display storage management UI. You can't just pick the code that sounds closest; you need to match your actual implementation to the exact purpose Apple has defined for that code. The automated scanning during review will check whether your code usage matches the declared purpose, so if you say you're checking disk space for storage management but your UI doesn't actually show storage information to users, that's a rejection waiting to happen.
What Goes In Your Privacy Manifest
Start with a complete audit of every API call in your codebase that touches system resources, user data, or device information. Look through your dependencies too, because third-party SDKs might be calling sensitive APIs without you realising it. Popular frameworks like Firebase, Facebook SDK, or analytics libraries often need their own manifest files now, and if those aren't included or properly declared, your app gets rejected even though your own code is fine. Apple provides a list of required reasons APIs that must be declared, covering things like NSUserDefaults access, file timestamp APIs, system boot time APIs, disk space APIs, and active keyboard APIs.
The format is straightforward XML but the specifics matter. Each API category gets its own dictionary entry with an array of reason codes explaining why you need that access. You can't leave it empty or skip APIs you're using, and you can't use reason codes that don't actually match what your app does. Google hasn't implemented anything quite as structured as Apple's manifest system yet, but they're moving in that direction with data safety declarations in the Play Console that serve a similar purpose of making developers explicitly document their data practices before submission. When working with complex database architectures, our database design guide can help ensure your data handling aligns with privacy requirements from the ground up.
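To make that concrete, here's a minimal sketch of a PrivacyInfo.xcprivacy file for a hypothetical app that reads its own user defaults and checks free disk space to drive a storage management screen. The keys and reason codes follow Apple's required reason API documentation, but your categories and codes will depend on what your app actually does:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <!-- Reading the app's own UserDefaults for configuration -->
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>CA92.1</string>
            </array>
        </dict>
        <!-- Checking free disk space for a user-facing storage screen -->
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryDiskSpace</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>85F4.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```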
Common Permission Request Mistakes That Trigger Instant Rejections
Asking for permissions at the wrong moment is probably the single biggest avoidable mistake I see, and it results in immediate rejection because it violates both platforms' guidelines on user experience and data minimisation. If your app asks for location access before explaining why it needs location, that's a rejection. If you request camera permissions on first launch before the user has tried to take a photo, that's a rejection. If you bundle multiple permission requests together without clear individual justification for each one, that's a rejection. Both Apple and Google want users to understand exactly why each permission is needed at the moment they're asked to grant it, not as a generic "we need these to make the app work" dump during onboarding. This is one of the critical onboarding mistakes that can derail your app's success before users even engage with core features.
The permission request should appear exactly when the user tries to use a feature that requires it, not before, not bundled with other requests, just that single permission with a clear explanation.
The purpose string (the text that appears in the permission dialog) needs to be specific and user-focused rather than technical or vague. Saying "We need access to your photos to upload images" isn't specific enough because it doesn't explain what the user gets from uploading images. Better would be "Choose photos from your library to share them with your team" because it explains the user benefit in concrete terms. Saying "Required for app functionality" is almost guaranteed rejection because it tells the user nothing useful about what will happen with their data.
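Purpose strings live in your Info.plist as usage-description keys. Here's a before-and-after sketch for the photo library permission; the two fragments are shown side by side for comparison, but a real Info.plist would contain only one entry for the key:

```xml
<!-- Vague: tells the user nothing and invites rejection -->
<key>NSPhotoLibraryUsageDescription</key>
<string>Required for app functionality</string>

<!-- Specific and user-focused: explains the concrete benefit -->
<key>NSPhotoLibraryUsageDescription</key>
<string>Choose photos from your library to share them with your team.</string>
```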
Timing Your Permission Requests
I worked with an e-commerce client who kept getting rejected because they requested notification permissions during onboarding, before users had even browsed products or added anything to their cart. We moved the request to trigger after someone's first purchase, explained it as "Get notified when your order ships and when items you viewed go on sale," and that sailed through review. The change didn't affect their opt-in rate negatively either, because users who had already made a purchase saw clear value in order updates.
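In code, that change amounts to moving the request out of onboarding and into the post-purchase path. A minimal sketch, where the Order type and flow names are hypothetical:

```swift
import UserNotifications

struct Order { let id: String }  // hypothetical model

final class CheckoutFlow {
    // Ask only after the first successful purchase, when the user has a
    // concrete reason to want shipping and price-drop notifications.
    func orderDidComplete(_ order: Order) {
        let center = UNUserNotificationCenter.current()
        center.getNotificationSettings { settings in
            // Only prompt if the user has never been asked before.
            guard settings.authorizationStatus == .notDetermined else { return }
            center.requestAuthorization(options: [.alert, .sound, .badge]) { granted, _ in
                // Respect the outcome either way; never re-prompt in a loop.
            }
        }
    }
}
```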
Location permissions are particularly sensitive. If your app requests "always" location access when "when in use" would work, that's a likely rejection unless you can demonstrate clear user value that requires background location. I've seen dozens of apps try to request always-on location for analytics or advertising purposes, and that gets shot down immediately because there's no user-facing feature that requires it. Even requesting "when in use" location needs careful timing... wait until the user taps a button to find nearby stores, or opens a map view, or tries to use a location-based feature rather than asking on launch. For apps incorporating location-heavy features like AR experiences, our AR development guide covers privacy considerations specific to augmented reality implementations.
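The same principle in code for a store-finder feature: defer the "when in use" request until the user actually taps the button. A sketch (iOS 14+ for the instance-level authorizationStatus; the view controller and denied-state handling are hypothetical):

```swift
import UIKit
import CoreLocation

final class NearbyStoresViewController: UIViewController {
    private let locationManager = CLLocationManager()

    // Fired by the "Find nearby stores" button -- not viewDidLoad,
    // and definitely not app launch.
    @objc func findNearbyStoresTapped() {
        switch locationManager.authorizationStatus {
        case .notDetermined:
            // "When in use" is enough for a foreground store finder;
            // asking for "always" here would be a rejection risk.
            locationManager.requestWhenInUseAuthorization()
        case .authorizedWhenInUse, .authorizedAlways:
            locationManager.startUpdatingLocation()
        default:
            break // hypothetical: show a settings prompt explaining the benefit
        }
    }
}
```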
Testing Your Privacy Implementation Before Submission
Running your app through the same checks that Apple and Google's automated systems will perform saves you weeks of rejection-resubmission cycles, and there are specific tools and techniques that catch most issues before they reach reviewers. For iOS apps, use Xcode's Privacy Report feature which analyses your compiled binary and shows you exactly which APIs you're calling, which frameworks are accessing sensitive data, and whether your privacy manifest covers everything. For Android apps, use the App Bundle Explorer in Android Studio to examine which permissions your app requests and verify they match what you've declared in your Play Console data safety section.
- Check every third-party SDK version you're using against the developer's documentation for required privacy declarations
- Test your app with a completely fresh install on a device that has never run it before to see exactly what permissions get requested and when
- Verify your privacy policy URL loads quickly and displays properly on mobile devices
- Review your App Store Connect or Play Console privacy declarations against your actual code implementation
- Test with different user journeys to ensure permission requests only appear at appropriate moments
Set up a clean test device that you use only for validation runs (not daily development), and go through your entire user flow from first launch to every major feature. Record the session so you can review exactly when each permission dialog appeared and what the context was. Compare that recording against Apple and Google's guidelines for permission timing and purpose strings. Look for any moment where you're requesting access before the user has encountered the feature that needs it, or where the purpose string doesn't clearly explain what will happen. This systematic approach is especially important when designing forms that handle sensitive data, which is why understanding accessible form design principles helps create better user experiences while meeting privacy requirements.
Static Analysis Tools
The Play Console Pre-Launch Report runs your Android app through automated testing and flags any crashes, performance issues, or privacy problems before real users see them. On iOS, tools like CocoaPods-Keys help keep secrets out of your source, though they won't audit data collection for you, and Apple doesn't offer automated testing quite as comprehensive as the pre-launch report. The validation process in App Store Connect does run similar checks, though, and will show you warnings (yellow flags that don't block submission but might cause review issues) and errors (red flags that prevent submission entirely). Pay attention to both.
I recommend keeping a privacy validation checklist that you run through before every submission, covering things like verifying your privacy policy is current, checking that all purpose strings are specific and user-focused, confirming your privacy manifest matches your API usage, testing permission request timing with fresh installs, and reviewing any new SDKs added since your previous version. This takes maybe 30 minutes but can save you two weeks of review cycles. When building serverless apps, these validation steps become even more important since data flows through multiple services, as covered in our serverless development guide.
What To Do When Your App Gets Rejected for Privacy Violations
The rejection message from App Review or Play Console usually points to the specific guideline you've violated, but the explanation can be frustratingly vague or seem to contradict what you think you've implemented. The reply thread in App Store Connect or the response field in Play Console is your direct line to reviewers, and how you respond makes a big difference in whether your next submission gets approved or triggers another rejection. I've seen developers get defensive in these replies, arguing about interpretation of guidelines or insisting their implementation is correct, and that approach never works because you're essentially telling reviewers they're wrong about their own platform's rules.
When responding to a privacy rejection, acknowledge the specific issue the reviewer identified, explain exactly what you've changed in technical terms, and provide step-by-step instructions for the reviewer to verify your fix.
Start by reading the rejection reason carefully and identifying which specific privacy guideline reference they've cited. Both Apple and Google include these references in rejection messages, like "Guideline 5.1.2 - Data Use and Sharing" for Apple or "Data Safety section does not accurately reflect" for Google. Look up that guideline in the full documentation, not just the summary, and read the entire section to understand the context. Often the rejection is about something adjacent to what you think the problem is, like your privacy manifest being correct but your purpose string being too vague.
| Rejection Type | Action Required | Typical Fix Time |
|---|---|---|
| Missing privacy manifest | Add manifest file with all required API declarations | 2-4 hours |
| Vague purpose strings | Rewrite each permission purpose with specific user-facing benefits | 1-2 hours |
| Undeclared data collection | Audit all SDKs, update privacy labels to match actual usage | 4-8 hours |
| Permission timing issues | Refactor code to delay requests until user initiates relevant feature | 8-16 hours |
When you resubmit after making changes, use the notes field to explain exactly what you modified and why those changes address the rejection. Say something like "We've updated the photo library permission purpose string from 'Required for app functionality' to 'Choose photos from your library to upload as your profile picture or share in messages.' We've also moved this permission request from the onboarding flow to trigger only when users tap the camera button in their profile or message composer." This level of detail shows you understand the issue and have made thoughtful corrections rather than just hoping the next reviewer doesn't notice the same problem.
Breaking the Cycle of Privacy Rejections
The root cause of repeated privacy rejections usually isn't that developers are trying to be sneaky or collect data inappropriately; it's that the privacy requirements have become so detailed and technical that it's hard to keep track of every obligation across both platforms. You need to understand not just what data your own code collects, but what every third-party SDK might be doing, how both Apple and Google interpret different types of data usage, when and how to request permissions, what goes in your privacy manifest versus your privacy policy versus your app store privacy labels, and how to write purpose strings that meet both legal requirements and platform guidelines. That's a lot of moving parts, and missing any single piece can trigger rejection. For specialized sectors, additional considerations apply: agricultural data apps, for example, have specific compliance requirements around sensitive farming information.
The solution isn't trying to memorise every guideline or reading through hundreds of pages of documentation before each submission. Build systems and checklists that catch privacy issues during development rather than at submission time. Add privacy review as a required step in your development workflow, right alongside code review and QA testing. Create a spreadsheet that lists every SDK you use, what data it collects, when it was last audited, and whether it requires privacy manifest entries. Set up automated tests that fail if anyone adds a permission request without updating the corresponding documentation. Make privacy implementation a continuous practice rather than a submission-time scramble.
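One of those automated tests can be as simple as a unit test over the built Info.plist. A minimal sketch, where the banned phrases and length threshold are illustrative, and which assumes the test target is hosted in the app so Bundle.main resolves to the app bundle:

```swift
import XCTest

final class PurposeStringTests: XCTestCase {
    // Phrases that signal a vague, reviewer-unfriendly purpose string.
    private let bannedPhrases = ["required for app functionality", "needed for the app to work"]

    func testPurposeStringsAreSpecific() throws {
        let info = try XCTUnwrap(Bundle.main.infoDictionary)
        // Every permission purpose string key ends in "UsageDescription".
        for key in info.keys where key.hasSuffix("UsageDescription") {
            let value = try XCTUnwrap(info[key] as? String, "\(key) has no purpose string")
            XCTAssertGreaterThan(value.count, 20, "\(key) is too short to explain a user benefit")
            for phrase in bannedPhrases {
                XCTAssertFalse(value.lowercased().contains(phrase),
                               "\(key) still uses boilerplate: \(value)")
            }
        }
    }
}
```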
The apps that pass review consistently aren't necessarily doing anything technically different; they're just being more systematic about documenting and testing their privacy practices. They've mapped out their entire data flow, they know exactly what leaves the device and why, they've audited every dependency, and they've tested the user experience of their permission requests on clean devices. They treat privacy compliance as a core product requirement rather than a checkbox exercise during submission. That mindset shift, more than any specific technical change, is what separates apps that sail through review from apps that get stuck in rejection cycles.
If you're dealing with repeated privacy rejections and need experienced help untangling what reviewers are asking for, get in touch with us and we'll work through the specific issues your app is facing.
Frequently Asked Questions
How quickly will I hear back if my app has privacy issues?
You'll usually receive a privacy rejection within 48 hours of submission, sometimes within minutes if Apple or Google's automated systems catch obvious issues. The quick turnaround happens because both platforms now run sophisticated static analysis on your compiled code before human reviewers even see it.
Can I use a generic privacy policy template for my app?
No, generic privacy policy templates will likely trigger rejections because Apple and Google now scan your actual code to verify it matches what your privacy documentation claims. Your privacy policy must accurately reflect the specific data your app and all its third-party SDKs actually collect, not just use standard legal language.
Do I still need to declare financial data if payments go through Apple Pay, Google Pay, or a payment processor?
Yes. Even when using third-party payment processors, you're still handling financial information that passes through your app, which must be declared in your privacy labels. The rule is simple: if data passes through your app at any point, you need to declare how you handle it, regardless of where the actual processing happens.
What's the difference between a privacy manifest and privacy labels?
Privacy manifests are technical files in your Xcode project that declare specific API usage with standardised reason codes, while privacy labels are the user-facing declarations you make in App Store Connect about data collection. Both must be accurate and consistent with each other, as automated systems check for discrepancies.
When should my app request permissions?
Request permissions exactly when users try to use a feature that requires them, not during onboarding or bundled with other requests. For example, ask for photo access when someone taps a camera button, not when they first launch the app.
How do I find out what data my third-party SDKs collect?
Use Xcode's Privacy Report feature for iOS apps or Android Studio's App Bundle Explorer to see exactly which APIs are being called by your dependencies. Additionally, check each SDK's latest documentation for required privacy declarations, as these change with updates.
How should I respond to a privacy rejection?
Acknowledge the specific guideline violation cited, explain exactly what technical changes you've made, and provide step-by-step instructions for reviewers to verify your fix. Never argue with reviewers or insist your original implementation was correct.
Why does my app keep getting rejected even after I make fixes?
Repeated rejections usually happen because privacy compliance involves multiple interconnected requirements: privacy manifests, purpose strings, permission timing, third-party SDK declarations, and privacy policies must all align perfectly. Missing any single piece can trigger rejection, so use systematic checklists and testing rather than trying to fix issues piecemeal.