Truffle Security Co. · Research

Leaked and Still Live

Why Developers Fail to Remediate Exposed Credentials

tl;dr We analyzed 22,000+ verified secrets across Bitbucket and GitLab and tracked their lifecycle through git history. When developers attempted remediation, the vast majority used techniques that don’t actually invalidate the credential: overwriting files, deleting repos, rewriting history. We cataloged four secret remediation antipatterns and ran a canary detection experiment to measure how quickly secrets get scraped on each platform.

Join the Truffle Security team as they discuss these findings in more detail — sign up for the webinar here.

22,000 Live and Unique Secrets over 16 years

We scanned all of GitLab and Bitbucket in fall 2025, found 22,000 live secrets, and disclosed them to users and partners to get them rotated and revoked. Six months later, we re-checked these secrets to understand what remediation steps developers took, or didn't.

We chose Bitbucket and GitLab deliberately: GitHub tends to get the lion's share of attention from researchers, scanners, and secret detection programs, while Bitbucket and GitLab seem to fly under the radar. We also believe these platforms skew more corporate, hosting a higher concentration of professional development teams, which makes the remediation gap more significant.

We tracked every secret’s lifecycle through Git, looking at whether developers noticed the leak, how they attempted a fix and whether the fix actually worked. The failed attempts formed a clear taxonomy that we call Secret Remediation Antipatterns.

Total verified secrets
22,000+
Platforms
GitLab & Bitbucket
Detector types
289

80.9%

Still Live

Six months after the initial scan, 80.9% of secrets are still live; fewer than one in five has been revoked. These include GCP keys, AWS IAM keys, OpenAI tokens, Docker credentials, and MongoDB connection strings — credentials that could grant access to cloud infrastructure, databases, and production environments.

This stat sparked the idea for this research: we knew remediation rates were low, so we dug into what developers are actually doing when they try to clean up leaked credentials.

63.7%

Did Nothing

The most significant finding isn't any antipattern: it's that 63.7% of developers made no remediation attempt at all.

Secret remediation antipatterns

33%

File overwrite

File overwrite was the most common antipattern we observed. A developer leaks a secret, realizes the mistake, and pushes a new commit that swaps the hardcoded secret for a placeholder or an environment variable reference.

Git commits are immutable. The original secret sits in history, accessible via simple commands like git log -p.
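The failure mode is easy to reproduce in a throwaway repo. Every file name and key value below is made up for illustration:

```shell
# Throwaway repo reproducing the antipattern; all names/values are hypothetical.
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
echo 'API_KEY = "hypothetical-leaked-key"' > config.py
git add config.py && git commit -qm "add config"
# "Remediate" by overwriting the file with an env-var reference:
echo 'API_KEY = os.environ["API_KEY"]' > config.py
git add config.py && git commit -qm "remove hardcoded key"
# The secret is gone from HEAD, but one command recovers it from history:
git log -p -- config.py | grep "hypothetical-leaked-key"
```

The final `grep` still finds the key in the diff of the first commit, which is exactly what an attacker walking the history would do.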

This antipattern accounted for 33% of all repositories with live secrets.

Repository view illustrating file overwrite remediation
File overwrite: the secret disappears from HEAD but still sits in Git history.
5.9%

Re-introduction

Sometimes developers caught the leak, moved the credential to an environment variable, and moved on. But in the very next commit, they either reverted to the hard-coded secret or introduced a new one. We saw this cycle (secure → insecure → secure → insecure) play out across multiple repos, sometimes within a single feature branch.

5.9% of repositories re-introduced secrets this way, either the same credential type or a different service altogether.

Step 1: hard-coded secret in source
A developer hard-codes a secret.
Step 2: secret removed in amended commit
The secret is removed via an amended commit.
Step 3: secret re-added in next commit
In the very next commit, the secret is re-added.
2.3%

The nuclear option

Deleting or privatizing a repository with a leaked secret feels like a fix. If it’s no longer accessible, the risk of it being discovered is gone, right?

By the time a developer deletes or privatizes a repo, automated scanners have likely already scraped and stored the credentials. This creates a false sense of security and makes remediation harder, especially with deleted repositories: you can no longer see the leaked key itself, so you can't be certain which credential to revoke and rotate.

This showed up in 2.3% of cases.

Repository settings showing delete or privatize
Deleting or hiding the repo does not revoke credentials already copied by scanners.
0.9%

History rewrite

This approach is the most involved of all the antipatterns that we looked at. Developers use tools like BFG Repo-Cleaner or git filter-repo to scrub secrets from the entire commit history.

The scrubbed commit disappears from git log --all, but the platform retains it as a dangling commit: no longer referenced by any branch, yet still accessible via its SHA through the web interface.

This accounted for 0.9% of our total cases.

Terminal: commit hash not available when pulling
Commit hash is not available when pulling the remote repository.
Web UI: same commit still reachable by SHA
But via the UI it’s available if we specifically search for it.
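A local analogue of this behavior is easy to demonstrate: locally, the old objects survive until git gc prunes them, while the platforms keep them reachable indefinitely via the web UI. All names and values below are made up:

```shell
# Throwaway repo; local analogue of the platform's dangling-commit retention.
cd "$(mktemp -d)" && git init -q
git config user.email dev@example.com && git config user.name dev
echo 'readme' > README.md
echo 'TOKEN=hypothetical-secret' > creds.txt
git add . && git commit -qm "initial commit"
SHA=$(git rev-parse HEAD)
# "Scrub" the leak by rewriting the commit without the file:
git rm -q creds.txt && git commit --amend -qm "initial commit, scrubbed"
git log --all --oneline        # only the scrubbed commit is listed...
git show "$SHA:creds.txt"      # ...yet the old commit still serves the secret
```

Anyone who recorded the original SHA before the rewrite can keep reading the secret.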

Secrets leaked per platform, by month

Secrets by detector type

This chart shows the top 20 secrets by detector type. GCP and MongoDB are the clear outliers. These credentials could be used to access infrastructure environments or databases.

Who’s Watching?

We committed AWS canary tokens generated with https://canarytokens.org/ to three public repositories across Bitbucket, GitLab, and GitHub. We monitored how long it took for the tokens to be validated (in most cases via an sts get-caller-identity request) and how many times they were used in the first hour.

GitHub

43 hits in 1 hour

GitLab

2 hits in 1 hour

Bitbucket

0 hits for 48 hrs

GitHub had by far the most hits in the first hour, which makes sense: it's a very popular choice for developers and researchers.

The Bitbucket result surprised us. Some scanners can detect AWS canary tokens and avoid them, but we still expected at least some activity within 48 hours.

We think this ultimately comes down to how easily automation can access each platform via its API.

GitHub

Events API streams new commits in real time. Scanners poll every few minutes to grab the latest commits.

GitLab

Projects API filterable by last activity. Higher processing overhead than GitHub’s real-time stream.

Bitbucket

No activity filter at all. Possible to filter by newly created repos but not by latest activity.
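As a sketch of the GitHub approach above, a scanner might poll the public Events API and pull out pushed commit SHAs to feed into a secret scanner. The function name and the synthetic payload are ours; the event shape follows GitHub's documented PushEvent format:

```python
import json
import urllib.request  # only needed for the live polling shown in the comment

def extract_pushes(events):
    """Collect (repo, commit sha) pairs from GitHub Events API PushEvents."""
    pushes = []
    for event in events:
        if event.get("type") != "PushEvent":
            continue
        repo = event["repo"]["name"]
        for commit in event["payload"].get("commits", []):
            pushes.append((repo, commit["sha"]))
    return pushes

# A scanner would poll the public feed every few minutes, e.g.:
#     with urllib.request.urlopen("https://api.github.com/events") as resp:
#         events = json.load(resp)
# Here we use a tiny synthetic payload instead of a live request:
sample = [
    {"type": "PushEvent",
     "repo": {"name": "octocat/hello"},
     "payload": {"commits": [{"sha": "abc123"}]}},
    {"type": "WatchEvent", "repo": {"name": "octocat/hello"}, "payload": {}},
]
print(extract_pushes(sample))
```

Each (repo, sha) pair can then be cloned or fetched and scanned, which is why GitHub-hosted secrets get hit within minutes.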

The implication is pretty straightforward: secrets on Bitbucket and GitLab have a longer window of undetected exposure and fewer eyes looking at them.

Platform Detection Isn’t Equal

GitHub

Runs a Secret Scanning Partner Program that works with services to register token formats for real-time scanning of every public commit. This works well: we found that AWS keys on GitHub are roughly 4x more likely to be revoked than AWS keys on Bitbucket and GitLab.

GitLab

Has platform-level secret scanning, but it’s not enabled by default and some features sit behind paid tiers.

Bitbucket

Offers secret scanning for Data Center instances, but nothing for cloud repositories.

Rotate it!

Quiz: A developer exposes an AWS key — how do you respond? D. Rotate your key.

The correct remediation for a leaked credential is always the same: rotate the key. Every other approach leaves the credential functional and vulnerable to exploitation by attackers.

There are three simple steps you can take:

  1. Invalidate the exposed key immediately. On GitHub, you've got about five minutes before automated scanners pick it up, so having a defined remediation process is crucial.
  2. Generate a new key.
  3. Update your systems to use the new key.
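For an AWS IAM key, the three steps map onto CLI calls roughly like this. This is a sketch, not a runnable script: the user name and key ID are placeholders, and the commands require live AWS credentials with IAM permissions:

```shell
# 1. Invalidate: disable the leaked key right away (placeholder IDs).
aws iam update-access-key --user-name ci-deployer \
    --access-key-id AKIAEXAMPLEKEYID --status Inactive

# 2. Generate a replacement key for the same user.
aws iam create-access-key --user-name ci-deployer

# 3. Once your systems use the new key, delete the old one for good.
aws iam delete-access-key --user-name ci-deployer \
    --access-key-id AKIAEXAMPLEKEYID
```

Disabling first (rather than deleting immediately) gives you a rollback path if something still depends on the old key, while keeping it unusable by attackers.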

Another way to handle remediation is to combine detection and remediation into a single workflow.

Detection + Remediation

The best protection is prevention. Unfortunately, platforms like GitHub do not open up their push protection to best-in-class secret scanners, and instead force users to use their in-house secret scanner, which is not as strong at detection and validation. That leaves us with the following options:

  • Pre-commit workflows that run on developer workstations
  • GitHub Actions that detect quickly, but are still retroactive and require revocation

We put together a simple GitHub Action that uses TruffleHog's official action and a modified workflow to revoke GitHub PATs when TruffleHog detects them.
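A workflow along these lines could look like the sketch below. This is an illustration, not the exact workflow we ran: the revocation endpoint, the action's failure behavior, and the token extraction are assumptions to verify against current GitHub and TruffleHog documentation:

```yaml
name: scan-and-revoke
on: [push]
jobs:
  trufflehog:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0            # full history, so every commit is scanned
      - name: Scan for verified secrets
        uses: trufflesecurity/trufflehog@main
        with:
          extra_args: --results=verified
      # If the scan step fails on a verified finding, a follow-up step could
      # POST the leaked PAT to GitHub's credential-revocation endpoint:
      - name: Revoke leaked PAT
        if: failure()
        run: |
          curl -sS -X POST https://api.github.com/credentials/revoke \
            -H "Accept: application/vnd.github+json" \
            -d '{"credentials": ["<leaked-token-here>"]}'
```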

Left terminal: a script hitting GitHub’s /users endpoint every half second with a personal access token, printing HTTP 200 on each success.

Animation: push triggers TruffleHog and revocation

Right terminal: pushing that same token to a GitHub repository with a GitHub Action that runs TruffleHog and automatically revokes any verified secret it finds.

Terminal showing HTTP 200 flipping to HTTP 401 after revocation
12 seconds after we pushed the secret to GitHub, the action revoked it.

That’s faster than manual revocation under the best circumstances. In many organizations, the developer who leaked the token doesn’t even have permission to revoke it directly (service account credentials locked in a vault, for example).

Based on our canary data, 12 seconds puts revocation well ahead of every automated scraper we observed across all three platforms. The token is dead before anyone else can detect and use it.

GitHub’s revocation API also makes regeneration straightforward. You get an email with details about what was revoked and a one-click path to regenerate. Revoking a production token has downstream consequences, but a live credential with production access sitting in a public repository is worse by every measure.

What can you do?

SaaS providers

Revocation API endpoints

These are great for researchers and developers: they enable programmatic revocation of API tokens and reduce the friction in the disclosure process needed to get a token remediated.

Proactive scanning

SaaS providers could monitor for leaked credentials themselves across open source ecosystems.

Improving disclosure pathways

Make it easier for people who find leaked credentials to report them. Not every SaaS has a clear channel for this today, and the resulting friction reduces the likelihood that researchers disclose leaked credentials to SaaS providers.

Developers & Security Teams

Deploy pre-commit scanning

Several secret scanners, TruffleHog included, can be used as a pre-commit hook to catch secrets before they ever reach remote repositories.
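One lightweight option is a native git hook. The sketch below installs one in a throwaway repo; it assumes the trufflehog binary is on PATH, and the flag names should be checked against current TruffleHog docs:

```shell
# Throwaway repo; installs a minimal native git pre-commit hook.
cd "$(mktemp -d)" && git init -q
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Scan changes since HEAD; trufflehog exits non-zero on verified findings,
# which aborts the commit. (Assumes trufflehog is installed on PATH.)
exec trufflehog git file://. --since-commit HEAD --results=verified --fail
EOF
chmod +x .git/hooks/pre-commit
echo "hook installed"
```

Native hooks live only in your local clone; for a team-wide setup, a pre-commit framework config checked into the repo is the more durable choice.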

Gate your CI/CD pipelines

Implement secret scanning in your CI/CD pipelines to detect and block PRs that contain secrets before they enter production code.

Bonus: Pair detection and remediation into a single workflow where possible. A scanner that finds a secret but requires a human to manually revoke it leaves a gap. You can use CI/CD pipelines that detect and revoke in one pass.

Takeaways

1

Rotation is the only remediation that works. Every other approach (file overwrites, repo deletion, history rewrites) leaves the credential live and exploitable.

2

The gap between detection and remediation is where breaches happen. You can close that gap by pairing TruffleHog’s scanning with automated revocation in your CI/CD pipelines, turning detection into immediate action rather than a ticket in someone’s backlog.

3

For the current environment to improve, SaaS providers need revocation APIs, proactive scanning of open-source repositories, and clearer disclosure pathways. Developers need pre-commit hooks, pipeline-level scanning, and the discipline to rotate, not overwrite, every leaked credential.

4

The clock starts the moment a secret hits a public repository. On GitHub, you have about five minutes. On Bitbucket and GitLab, you have longer, for now. Neither window is a remediation strategy.

Interested in learning more about the research? Register for our webinar on Thursday, April 9 at 10 AM PT / 1 PM ET / 5 PM GMT.