Expert Guide Series

How Do Developers Protect Apps During Building?

Developers often worry about app security after launch, but the truth is that some of the worst security breaches happen during the development process itself. I've seen a team spend six months building a brilliant healthcare app only to discover that their entire codebase, including patient data structures and API credentials, had been exposed on GitHub for four of those months because someone forgot to add the environment file to .gitignore. The damage was done long before the app ever reached the app stores, and fixing it cost somewhere around £45k in legal fees and security remediation work alone.

Building security into your development process isn't about adding extra steps at the end; it's about making protection part of how your team works from the very first line of code.

Over the past ten years of building apps for clients across healthcare, fintech and e-commerce, I've learned that securing an app during development is in many ways harder than securing it in production. You've got multiple developers working on different features, test environments with varying levels of protection, third-party libraries being added and updated constantly, and API keys being passed around between team members. Each of these represents a potential entry point for someone with bad intentions, and the scary part is that development environments often have weaker security than production systems precisely when they contain the most valuable intellectual property: the source code itself, database structures, and business logic that took months to develop.

The Reality of Security Threats in Development Environments

The development phase is actually when your app is at its most vulnerable, which seems backwards at first. Production apps have firewalls, monitoring systems, intrusion detection and dedicated security teams watching them. Development environments often run on developer laptops, staging servers with default passwords, and cloud instances that someone spun up on a Thursday afternoon and forgot about. I worked with a fintech startup where we discovered that their staging database (which contained anonymised but still sensitive transaction data) was accessible from the public internet because a junior developer had opened port 5432 to "temporarily" fix a connection issue eight months earlier.

The threats you face during development aren't always external hackers trying to break in. Sometimes the biggest risks come from accidental exposure, like credentials committed to version control or configuration files uploaded to cloud storage. Other times it's about supply chain attacks, where compromised dependencies inject malicious code into your build process. I've probably reviewed about 200 codebases at this point, and I can tell you that roughly half of them had at least one serious security issue that originated during development rather than in production.

  • Exposed credentials in version control repositories (API keys, database passwords, third-party service tokens)
  • Insecure development servers accessible from the public internet without proper authentication
  • Test data that contains real user information instead of properly anonymised datasets
  • Compromised developer machines that leak source code or provide access to internal systems
  • Malicious packages in dependency chains that steal data or inject backdoors during the build process
  • Overly permissive cloud storage buckets containing builds, logs or configuration files

Code Repositories and Version Control Protection

Your code repository holds the crown jewels of your app project, and protecting it during development means thinking about more than just setting a strong password on your GitHub account. Every developer who has access to your repository can see the entire history of your project, including every commit, every branch, and every file that's ever been added (even if it was later deleted). I once helped a client whose former contractor had committed their production database credentials in month two of development, then removed them in month three, but the credentials remained visible in the commit history for anyone who knew where to look.

The first thing you need is branch protection rules that prevent anyone from pushing directly to your main branch without review. This isn't just about catching bugs; it's about making sure that nobody can slip malicious code or exposed credentials into your production codebase without at least one other person reviewing it. Set up required pull request reviews, and make sure that the person reviewing actually has the security knowledge to spot problems. We require at least two approvals for any merge into main branches, and one of those approvals must come from someone with security review training.
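If you're on GitHub, these rules can be codified rather than clicked through in the settings UI. Here's a minimal sketch using GitHub's REST API via Octokit; the owner and repo names are placeholders, and your own policy may differ on the exact settings:

```typescript
import { Octokit } from "@octokit/rest";

// Placeholder org/repo names; the token comes from the environment, never hardcoded.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

await octokit.rest.repos.updateBranchProtection({
  owner: "your-org",
  repo: "your-app",
  branch: "main",
  required_pull_request_reviews: {
    required_approving_review_count: 2, // two approvals, matching the policy above
    dismiss_stale_reviews: true,        // new commits invalidate earlier approvals
  },
  required_status_checks: null, // add your CI checks here once they exist
  enforce_admins: true,         // nobody bypasses the rules, not even admins
  restrictions: null,           // optionally restrict who can push at all
});
```

Keeping this in a script means the policy itself is reviewable, and can be reapplied automatically if someone quietly loosens it.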

| Protection Level | Measures | Protects Against |
|---|---|---|
| Basic | Branch protection, required reviews, signed commits | Accidental exposure, unauthorised changes |
| Intermediate | Automated secrets scanning, pre-commit hooks, access logging | Credential leaks, suspicious patterns |
| Advanced | Code signing, commit verification, audit trails, just-in-time access | Supply chain attacks, insider threats |

Set up automated secrets scanning that runs on every commit and blocks any push that contains patterns matching API keys, passwords, or private keys. Tools like git-secrets or GitGuardian can catch exposed credentials before they ever make it into your repository history, and trust me when I say that preventing the exposure in the first place is infinitely easier than trying to scrub it from the commit history later.
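To give a feel for how these checks work under the hood, here's a deliberately simplified pre-commit script in TypeScript. The regex patterns are illustrative only; a dedicated tool like git-secrets or GitGuardian will catch far more than this sketch does:

```typescript
// pre-commit.ts: a minimal illustrative secrets check for staged files.
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                       // AWS access key IDs
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // private key material
  /(api[_-]?key|password)\s*[:=]\s*['"][^'"]{8,}/i, // generic key/password assignments
];

// List the files staged for this commit (added, copied, or modified).
const staged = execSync("git diff --cached --name-only --diff-filter=ACM")
  .toString()
  .split("\n")
  .filter(Boolean);

const hits = staged.filter((file) => {
  const content = readFileSync(file, "utf8");
  return SECRET_PATTERNS.some((pattern) => pattern.test(content));
});

if (hits.length > 0) {
  console.error(`Possible secrets found in: ${hits.join(", ")}`);
  process.exit(1); // a non-zero exit blocks the commit when run as a pre-commit hook
}
```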

API Keys and Sensitive Data Management

Managing API keys and sensitive configuration data during development is one of those things that seems simple until you actually try to do it properly across a team of five developers working on different features. The naive approach is to put everything in a config file and add it to gitignore, but then how does the new developer who joined last Monday get access to the development API keys they need to run the app locally? I've seen teams email credentials around, post them in Slack channels, or store them in shared Google Docs, and every single one of those approaches is a security disaster waiting to happen. This is especially critical when dealing with user data protection requirements that could result in significant fines for data exposure.

Environment-Specific Configuration

The key is to separate your configuration by environment and ensure that development credentials never have access to production data or systems. Your local development environment should use API keys that point to sandbox services with fake data. Your staging environment might use a different set of credentials that access test systems. Your production environment uses completely separate credentials that only the deployment pipeline and a very small number of senior team members can access. This way, if development credentials leak (which happens more often than you'd think), the blast radius is limited to test systems that don't contain real user data.
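As a rough sketch of what that separation can look like in code (the variable names and prefixes here are my own convention, not a standard):

```typescript
// config.ts: environment-separated configuration with fail-fast validation.
type AppEnv = "development" | "staging" | "production";

interface Config {
  apiBaseUrl: string;
  paymentsKey: string;
}

function requireVar(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

export function loadConfig(): Config {
  const env = (process.env.APP_ENV ?? "development") as AppEnv;
  // Each environment reads its own variables, so a leaked development
  // key can never reach production systems.
  const prefix = { development: "DEV", staging: "STAGING", production: "PROD" }[env];
  return {
    apiBaseUrl: requireVar(`${prefix}_API_BASE_URL`),
    paymentsKey: requireVar(`${prefix}_PAYMENTS_KEY`), // sandbox key in dev, live key only in prod
  };
}
```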

Secret Management Systems

For any app handling sensitive data (and that's most apps these days), you need a proper secret management system rather than environment files floating around. Services like AWS Secrets Manager, Azure Key Vault or HashiCorp Vault let you store sensitive configuration in encrypted vaults that developers access through authenticated APIs rather than having the actual secrets on their machines. The learning curve is steeper than just using a dotenv file, but I can tell you from experience that it's worth it. We had a situation where a developer's laptop was stolen from their car, and because we were using secret management with time-limited tokens, we just revoked that developer's access rather than having to rotate every single API key and password in the entire system. These security patterns are particularly important for enterprise applications where security vulnerabilities can have far-reaching consequences.
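As a rough example of what this looks like in practice with AWS Secrets Manager (the secret name and region are placeholders):

```typescript
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

// Credentials come from the developer's authenticated AWS session
// (SSO or an assumed role), not from files sitting on disk.
const client = new SecretsManagerClient({ region: "eu-west-2" });

export async function getDatabasePassword(): Promise<string> {
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: "staging/app/db-password" }) // placeholder name
  );
  if (!response.SecretString) throw new Error("Secret has no string value");
  return response.SecretString;
}
```

The secret never touches a file on the developer's machine; access rides on their authenticated session and can be revoked centrally, which is exactly what made the stolen-laptop incident a non-event.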

Testing Environments That Mirror Production Security

Your testing and staging environments should be as close to production as possible in terms of functionality, but they need to be completely isolated in terms of data and access. The problem I see constantly is teams building their staging environment as a lighter-weight version of production (to save on hosting costs, which is understandable), only for it to end up with security configurations that don't match production at all. Your app might work perfectly in staging but then have security vulnerabilities in production because the two environments are configured differently. This is where careful feature testing approaches can help identify security issues early in the development cycle.

The best testing environment is one that's identical to production in security configuration but uses completely different credentials, isolated networks, and synthetic test data that matches the structure of real data without containing any actual user information.

I built an e-commerce app where we discovered during pre-launch security testing that our staging environment allowed HTTP connections while production enforced HTTPS, and this meant that our entire QA process had missed a bunch of mixed content warnings and security issues that only appeared in production. Now we use infrastructure-as-code (Terraform files that define our server configurations) to ensure that security settings are identical across environments, with only the credentials and data sources being different. It costs a bit more to run staging environments that are properly configured, maybe an extra £200 per month in our case, but it's caught probably a dozen security issues before they reached users.
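Alongside infrastructure-as-code, a cheap extra safety net is a CI check that compares security-relevant behaviour across environments. A sketch, with placeholder URLs and an illustrative header list:

```typescript
// parity-check.ts: compares security-relevant response headers between
// staging and production so configuration drift fails the build.
const ENVIRONMENTS = {
  staging: "https://staging.example.com",
  production: "https://www.example.com",
};

const SECURITY_HEADERS = [
  "strict-transport-security",
  "content-security-policy",
  "x-content-type-options",
];

const [stagingRes, productionRes] = await Promise.all(
  Object.values(ENVIRONMENTS).map((url) => fetch(url, { redirect: "manual" }))
);

for (const header of SECURITY_HEADERS) {
  const staging = stagingRes.headers.get(header);
  const production = productionRes.headers.get(header);
  if (staging !== production) {
    // Any drift fails CI, so mismatches are caught before launch, not after.
    console.error(`Header mismatch for ${header}: staging=${staging}, production=${production}`);
    process.exitCode = 1;
  }
}
```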

Testing environments also need their own security monitoring, not just application monitoring. Someone attempting to brute force your staging database is worth knowing about, even though it's not production. These attempts often indicate that your infrastructure has been discovered by automated scanners or bad actors who are probing for weaknesses before attempting a real attack on your production systems.

Third-Party Libraries and Dependency Scanning

Modern app development means you're building on top of hundreds of third-party libraries and frameworks, and each one of those dependencies is a potential security risk during development. The React Native app we built last year had 847 dependencies when you counted both direct dependencies and all their sub-dependencies, and keeping track of security vulnerabilities in that many packages is impossible to do manually. I remember sitting in a Wednesday morning planning meeting when we got an alert that one of our authentication libraries had a critical vulnerability that had been present in our codebase for about three weeks. This is particularly challenging when working with cross-platform security frameworks where vulnerabilities can affect multiple platforms simultaneously.

The scary thing about supply chain attacks is that they often target development tools and libraries that never make it into your production app but still run on developer machines or in your build pipeline. A compromised development dependency could steal your source code, inject malicious code into your builds, or exfiltrate credentials from your build environment, all without ever being visible in your final app. There was a well-known case where a popular npm package was compromised and modified to steal cryptocurrency wallet credentials from developer machines during the build process. For teams using containerised development environments, these supply chain risks can be particularly dangerous as compromised packages can escape container boundaries.

  1. Use automated dependency scanning tools that check for known vulnerabilities on every build (npm audit, Snyk, or GitHub's Dependabot are good starting points; a minimal gate script follows this list)
  2. Pin your dependency versions rather than using wildcards that automatically update to newer versions without review
  3. Review the dependencies your dependencies use, because that's where a lot of malicious packages hide
  4. Set up alerts for new vulnerabilities in packages you're already using, so you know when to update
  5. Consider the maintenance status of libraries before adding them (a popular library that hasn't been updated in three years is probably accumulating unpatched vulnerabilities)
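To make the first point concrete, here's a minimal CI gate around npm audit. The severity threshold is a team choice rather than a fixed rule, and the JSON shape read here is the one npm 7+ produces:

```typescript
// audit-gate.ts: fails the build when high or critical vulnerabilities exist.
import { execSync } from "node:child_process";

let raw: string;
try {
  raw = execSync("npm audit --json", { encoding: "utf8" });
} catch (error: any) {
  // npm audit exits non-zero when vulnerabilities exist; the JSON report
  // is still available on stdout.
  raw = error.stdout;
}

const report = JSON.parse(raw);
// npm 7+ reports severity counts under metadata.vulnerabilities.
const { high = 0, critical = 0 } = report.metadata?.vulnerabilities ?? {};

if (high + critical > 0) {
  console.error(`Blocking build: ${critical} critical and ${high} high severity vulnerabilities.`);
  process.exit(1);
}
console.log("Dependency audit passed.");
```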

We implement a policy where no dependency can be added to the project without a brief security review of the package, its maintainers, and its own dependencies. This takes maybe ten minutes per library, but it's caught several suspicious packages that looked useful but were actually typosquatting popular libraries with names that were one letter different. The real express package versus a typosquat like expres (missing an 's') might seem obvious written out like this, but when you're quickly installing packages at the end of a long day, these things slip through.
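Part of that review can be automated. Here's a toy edit-distance check against a deliberately tiny list of popular package names; a real deployment would use a much longer list:

```typescript
// typo-check.ts: flags dependency names one edit away from well-known packages.
const POPULAR = ["express", "lodash", "axios", "react", "moment"]; // illustrative sample

function editDistance(a: string, b: string): number {
  // Classic Levenshtein distance via dynamic programming.
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

export function checkName(candidate: string): string | null {
  for (const known of POPULAR) {
    const distance = editDistance(candidate, known);
    if (distance > 0 && distance <= 1) {
      return `"${candidate}" is one edit away from "${known}"; possible typosquat`;
    }
  }
  return null;
}

console.log(checkName("expres")); // flags the missing-'s' example from above
```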

Team Access Controls and Developer Permissions

Not every developer needs access to everything, and implementing proper access controls during development is about more than just security theatre. It's about reducing the number of people who could accidentally (or intentionally) cause problems, and it's about creating audit trails that let you understand what happened if something does go wrong. The principle of least privilege means giving people the minimum access they need to do their work, and then adding more access only when there's a clear reason for it.

Role-Based Access

Junior developers working on frontend features probably don't need access to your production database credentials or your cloud infrastructure configuration. Contract developers working on a specific feature might not need access to other parts of your codebase that aren't related to their work. I'm not suggesting you create a hostile environment of distrust, but rather that you structure access in a way that protects both your app and your developers. If a contractor's credentials get compromised, you want to limit what an attacker can access through those credentials. For complex applications that require sophisticated security features like card controls or payment systems, these access restrictions become even more critical.

Implement time-limited access for sensitive systems, so that when a developer needs to debug a production issue, they request access that expires after a few hours. This means credentials aren't sitting around indefinitely in places they don't need to be, and you have a clear audit trail of who accessed what and when. We use this for production database access, and it's probably saved us from several potential incidents where developers forgot to close connections or left credentials in their shell history.
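On AWS, this pattern maps naturally onto STS assumed roles. A sketch, with a placeholder role ARN, and assuming the request has already been approved through whatever process your team uses:

```typescript
import { STSClient, AssumeRoleCommand } from "@aws-sdk/client-sts";

const sts = new STSClient({ region: "eu-west-2" });

const { Credentials } = await sts.send(
  new AssumeRoleCommand({
    RoleArn: "arn:aws:iam::123456789012:role/prod-db-debug", // placeholder ARN
    RoleSessionName: "incident-4231-jane", // named sessions show up in the audit trail
    DurationSeconds: 3600, // credentials expire after one hour, no manual cleanup
  })
);

// Credentials.AccessKeyId / SecretAccessKey / SessionToken are temporary
// and simply stop working at Credentials.Expiration.
```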

Regular access reviews are boring but necessary. Every couple of months, go through your list of who has access to what and revoke anything that's no longer needed. That contractor who finished their work four months ago probably still has access to your repository unless you've explicitly removed it. Former employees sometimes retain access to systems for months after leaving because nobody remembered to deprovision all their accounts. Understanding realistic development timelines can help prevent pressure that leads to cutting corners on security reviews.

Pre-Launch Security Audits and Penetration Testing

Security testing needs to happen throughout development, not just at the end, but there's still value in a thorough pre-launch security audit that looks at your complete app from an attacker's perspective. This is different from the ongoing security practices we've been talking about because it's specifically about finding vulnerabilities in how all your components work together, including issues that might not be visible when looking at individual pieces of code or infrastructure. It's particularly important to address these concerns before they become the kind of project disasters that can derail an entire app launch.

Penetration testing for mobile apps involves both testing the app itself and testing all the backend systems it connects to. A pen tester will try to intercept network traffic, reverse engineer your app binary, manipulate API requests, and exploit any weaknesses they can find in your authentication, authorisation, or data handling. I had a pen test on a healthcare app find that we were storing patient identifiers in local storage without encryption, something that had been there since early development and hadn't been caught by code review because it was split across several files and wasn't obviously problematic in any single place.
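For what it's worth, the fix in a React Native project is usually to move that data into the platform keystore rather than plain local storage. A sketch using Expo's SecureStore module; the key names are hypothetical:

```typescript
import * as SecureStore from "expo-secure-store";

// Values are stored in the iOS Keychain / Android Keystore, encrypted at rest,
// rather than in readable app storage.
export async function savePatientId(id: string): Promise<void> {
  await SecureStore.setItemAsync("patient_id", id);
}

export async function loadPatientId(): Promise<string | null> {
  return SecureStore.getItemAsync("patient_id");
}
```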

The timing matters quite a bit. You want to do security testing late enough that your app is relatively complete (no point testing features that will be completely rewritten), but early enough that you have time to fix serious issues before launch. We typically schedule the security audit about 4-6 weeks before the planned launch, which gives enough buffer to address findings without derailing the launch date. Budget somewhere between £5k and £15k for a proper security audit of a mobile app with backend services, depending on complexity. It sounds like a lot, but it's substantially less than dealing with a security breach after launch. Teams should also consider how security requirements might affect decisions about splitting applications into multiple products, as each additional app surface increases the security testing requirements.

Conclusion

Protecting your app during development requires building security into how your team works every single day, from how you manage credentials to how you review code to how you handle third-party dependencies. The development phase is when your app is actually at its most vulnerable, with source code on multiple machines, test environments potentially exposed, and credentials being shared among team members. Getting security right during development means thinking through these scenarios before they become problems and putting systems in place that make the secure approach the easy approach, rather than something developers have to remember to do. The apps that launch securely are the ones where security was part of the development process from the beginning, not something bolted on at the end, and that mindset shift makes all the difference between an app that's genuinely protected and one that just looks secure until someone tests it properly.

If you're building an app and want help implementing these security practices into your development process, or if you need someone to review your existing setup before launch, get in touch with us and we can talk through what would work best for your specific situation.

Frequently Asked Questions

How much should we budget for security measures during app development?

For a typical mobile app, budget around £5k-15k for professional security auditing, plus ongoing costs for secret management tools and secure hosting environments (usually £200-500 per month). The investment is minimal compared to the potential cost of a security breach, which can easily reach £45k+ in legal fees and remediation work alone.

When should we start implementing security measures in our development process?

Security should be built into your development process from the very first line of code, not added at the end. Start with proper repository protection, environment configuration, and access controls before any development begins. Schedule your comprehensive security audit 4-6 weeks before launch to allow time for fixing any issues.

What's the biggest security risk teams overlook during development?

The most common oversight is accidentally committing API keys, passwords, or sensitive credentials to version control repositories. Even if you delete these files later, they remain visible in the commit history forever. Use automated secrets scanning and proper environment configuration from day one to prevent this.

How do we safely share API keys and credentials among our development team?

Never share credentials through email, Slack, or shared documents. Use proper secret management systems like AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault that provide encrypted storage and time-limited access tokens. Set up environment-specific credentials so development keys never access production systems.

Should our testing environment have the same security as production?

Yes, your testing environment should mirror production security configurations exactly, but use completely isolated credentials and synthetic data. This ensures you catch security issues during testing rather than after launch, while protecting against accidental exposure of real user data.

How do we protect against vulnerabilities in third-party libraries?

Implement automated dependency scanning tools (like npm audit or Snyk) that check for vulnerabilities on every build, and pin your dependency versions rather than using auto-updating wildcards. Review each new library before adding it, including checking its maintenance status and sub-dependencies.

What access permissions should different team members have?

Follow the principle of least privilege: give each team member only the minimum access needed for their role. Junior developers don't need production database access, and contractors should only access relevant parts of the codebase. Implement time-limited access for sensitive systems and conduct regular access reviews.

How often should we review our development security practices?

Conduct formal access reviews every 2-3 months to remove unnecessary permissions and offboard former team members. Run dependency scans on every build, and perform comprehensive security audits before major releases. Security should be an ongoing practice, not a one-time check.
