Truffle Security Co. · Research
Why Developers Fail to Remediate Exposed Credentials
tl;dr We analyzed 22,000+ verified secrets across Bitbucket and GitLab and tracked their lifecycle through git history. When developers attempted remediation, the vast majority used techniques that don’t actually invalidate the credential: overwriting files, deleting repos, rewriting history. We cataloged four secret remediation antipatterns and ran a canary detection experiment to measure how quickly secrets get scraped on each platform.
Join the Truffle Security team as they discuss these findings in more detail — sign up for the webinar here.
The Research

We scanned all of GitLab and Bitbucket in fall 2025, found 22,000+ live secrets, and disclosed them to users and partners to get them rotated and revoked. Six months later, we checked those secrets to understand what remediation steps developers took, or didn't.
We chose Bitbucket and GitLab deliberately: GitHub tends to get the lion's share of attention from researchers, scanners, and secret detection programs, while Bitbucket and GitLab fly under the radar. We also believe these platforms skew more corporate, hosting a higher concentration of professional development teams, which makes the remediation gap more significant.
We tracked every secret’s lifecycle through Git, looking at whether developers noticed the leak, how they attempted a fix and whether the fix actually worked. The failed attempts formed a clear taxonomy that we call Secret Remediation Antipatterns.
80.9%
Still Live
Six months after the initial scan, 80.9% of secrets are still live; fewer than one in five have been revoked. These include GCP keys, AWS IAM keys, OpenAI tokens, Docker credentials, and MongoDB connection strings: credentials that could grant access to cloud infrastructure, databases, and production environments.
This stat sparked the idea for this research: we knew remediation rates were low, so we dug into what developers actually do when they try to clean up leaked credentials.
63.7%
Did Nothing
The most significant finding isn't any single antipattern; it's that 63.7% of developers made no remediation attempt at all.
Antipattern 1: File Overwrite

File overwrite was the most common antipattern we observed. A developer leaks a secret, realizes the mistake, and pushes a new commit that swaps the hardcoded secret for a placeholder or an environment variable reference.

But Git commits are immutable. The original secret sits in history, accessible via commands as simple as git log -p.

This antipattern accounted for 33% of all repositories with live secrets.
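The failure mode is easy to reproduce locally. A minimal sketch, assuming git is on your PATH (the file name and key below are invented for illustration):

```python
import pathlib
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command inside `repo` and return its stdout."""
    return subprocess.run(
        ["git", "-c", "user.name=dev", "-c", "user.email=dev@example.com", *args],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout

repo = tempfile.mkdtemp()
git(repo, "init", "-q")
cfg = pathlib.Path(repo, "config.py")

# Commit 1: the leak -- a hardcoded (fake) key.
cfg.write_text('API_KEY = "AKIAFAKEFAKEFAKEFAKE"\n')
git(repo, "add", "config.py")
git(repo, "commit", "-q", "-m", "add config")

# Commit 2: the "fix" -- swap the literal for an env var reference.
cfg.write_text('import os\nAPI_KEY = os.environ["API_KEY"]\n')
git(repo, "add", "config.py")
git(repo, "commit", "-q", "-m", "remove hardcoded key")

# The working tree is clean, but the secret is one command away.
history = git(repo, "log", "-p")
print("AKIAFAKEFAKEFAKEFAKE" in history)  # True
```

The overwrite only changes the tip of the branch; the diff that removed the key reproduces the key.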
Antipattern 2: Secret Re-introduction

Sometimes developers caught the leak, moved the credential into an environment variable, and moved on. But in the very next commit, they either reverted to the hardcoded secret or introduced a new one. We saw this cycle (secure → insecure → secure → insecure) play out across multiple repos, sometimes within a single feature branch.

5.9% of repositories re-introduced secrets this way, either the same credential type or a different service altogether.
Antipattern 3: Repo Deletion or Privatization

Deleting or privatizing a repository with a leaked secret feels like a fix. If it's no longer accessible, the risk of discovery is gone, right?

By the time a developer deletes or privatizes a repo, automated scanners have likely already scraped and stored the credentials. This creates a false sense of security and makes remediation harder, especially with deleted repositories: you can no longer see the key itself, so you can't be certain which one to revoke and rotate.

This showed up in 2.3% of cases.
Antipattern 4: History Rewrite

This approach is the most involved of all the antipatterns we looked at. Developers use tools like BFG Repo-Cleaner or git filter-repo to scrub secrets from the entire commit history.

The scrubbed commit disappears from git log --all, but the platforms retain it as a dangling commit: no longer referenced by any branch, yet still accessible via its SHA through the web interface.

This accounted for 0.9% of our total cases.
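You can see the same mechanics locally. In the sketch below, `git commit --amend` stands in for a filter-repo rewrite (assuming git is on your PATH; the token is fake): after the rewrite the commit is gone from every ref, but the object still answers to its SHA, which is exactly what the hosting platforms keep serving through their web interfaces.

```python
import pathlib
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command inside `repo` and return its stdout."""
    return subprocess.run(
        ["git", "-c", "user.name=dev", "-c", "user.email=dev@example.com", *args],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout

repo = tempfile.mkdtemp()
git(repo, "init", "-q")
env = pathlib.Path(repo, ".env")

# Leak a (fake) token, then "scrub" it by rewriting history.
env.write_text("TOKEN=ghp_fakeLeakedToken123\n")
git(repo, "add", ".env")
git(repo, "commit", "-q", "-m", "add env")
leaked_sha = git(repo, "rev-parse", "HEAD").strip()

env.write_text("TOKEN=redacted\n")
git(repo, "add", ".env")
git(repo, "commit", "-q", "--amend", "-m", "add env (clean)")

# The leaked commit is gone from every branch and tag...
assert leaked_sha not in git(repo, "log", "--all", "--format=%H")
# ...but the dangling object is still addressable by SHA.
print("ghp_fakeLeakedToken123" in git(repo, "show", leaked_sha))  # True
```

Locally the dangling object survives until garbage collection; on the hosting platforms we tested, it stays reachable through the web interface.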
This chart shows the top 20 secrets by detector type. GCP and MongoDB are the clear outliers. These credentials could be used to access infrastructure environments or databases.
The Canary Experiment

We committed AWS canary tokens (generated with https://canarytokens.org/) to three public repositories across Bitbucket, GitLab, and GitHub. We monitored how long it took for the tokens to be validated (in most cases via an sts get-caller-identity request) and how many times each was used in the first hour.
GitHub: 43 hits in 1 hour
GitLab: 2 hits in 1 hour
Bitbucket: 0 hits for 48 hrs
GitHub had by far the most hits, with activity within the first 30 minutes. That makes sense: it's a very popular choice for developers and researchers.
The Bitbucket result surprised us. Some scanners can detect AWS canary tokens and avoid them, but we still expected at least some activity within 48 hours.
We think this ultimately comes down to how easy each platform's API makes it for automation to find new commits.
GitHub: the Events API streams new commits in near-real time. Scanners poll every few minutes to grab the latest commits.
GitLab: the Projects API is filterable by last activity, with higher processing overhead than GitHub's real-time stream.
Bitbucket: no activity filter at all. It's possible to filter by newly created repos, but not by latest activity.
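To make the GitHub side concrete, the core loop of a commit scraper is only a few lines. The sketch below polls GitHub's documented public events feed (`GET /events`) and deduplicates PushEvents; the function names and the `seen` set are illustrative, not any particular scraper's code:

```python
import json
import time
import urllib.request

def fetch_public_events():
    """Fetch one page of GitHub's public events feed (unauthenticated, rate-limited)."""
    req = urllib.request.Request(
        "https://api.github.com/events",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def new_push_events(events, seen):
    """Return PushEvents we haven't processed yet: the dedupe step of a scraper."""
    fresh = [e for e in events if e.get("type") == "PushEvent" and e["id"] not in seen]
    seen.update(e["id"] for e in fresh)
    return fresh

def poll_forever(sink, interval=60):
    """Main loop: fetch, dedupe, hand each new commit to `sink` (e.g. a secret scanner).

    A real scraper would honor the X-Poll-Interval response header and rate limits.
    """
    seen = set()
    while True:
        for event in new_push_events(fetch_public_events(), seen):
            for commit in event["payload"]["commits"]:
                sink(event["repo"]["name"], commit["sha"])
        time.sleep(interval)

# Quick local check of the dedupe logic with stubbed data (no network):
demo = [{"type": "PushEvent", "id": "1"}, {"type": "WatchEvent", "id": "2"}]
seen = set()
print([e["id"] for e in new_push_events(demo, seen)])  # ['1']
```

With an interface this simple, it's no surprise GitHub canaries are hit within minutes; GitLab requires more work per poll, and Bitbucket offers no equivalent hook at all.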
The implication is straightforward: secrets on Bitbucket and GitLab have a longer window of undetected exposure and fewer eyes looking at them.
GitHub: runs a Secret Scanning Partner Program that lets services register token formats for real-time scanning of every public commit. This works well: we found that AWS keys on GitHub are roughly 4x more likely to be revoked than AWS keys on Bitbucket and GitLab.
GitLab: has platform-level secret scanning, but it's not enabled by default and some features sit behind paid tiers.
Bitbucket: offers secret scanning for Data Center instances, but nothing for cloud repositories.
The correct remediation for a leaked credential is always the same: rotate the key. Every other approach leaves the credential functional and vulnerable to exploitation by attackers.
There are three simple steps you can take:
Another option is to combine detection and remediation into a single automated step.
The best protection is prevention. Unfortunately, platforms like GitHub don't open up their push protection to best-in-class secret scanners, instead forcing users onto their in-house scanner, which is weaker at detection and validation. That leaves us with the following options:
We put together a simple GitHub Action that uses TruffleHog's official action and a modified workflow to revoke GitHub PATs when TruffleHog detects them.
Left terminal: a script hitting GitHub’s /users endpoint every half second with a personal access token, printing HTTP 200 on each success.
Right terminal: pushing that same token to a GitHub repository with a GitHub Action that runs TruffleHog and automatically revokes any verified secret it finds.
That’s faster than manual revocation under the best circumstances. In many organizations, the developer who leaked the token doesn’t even have permission to revoke it directly (service account credentials locked in a vault, for example).
Based on our canary data, 12 seconds puts revocation well ahead of every automated scraper we observed across all three platforms. The token is dead before anyone else can detect and use it.
GitHub’s revocation API also makes regeneration straightforward. You get an email with details about what was revoked and a one-click path to regenerate. Revoking a production token has downstream consequences, but a live credential with production access sitting in a public repository is worse by every measure.
These are great for researchers and developers: they enable programmatic revocation of API tokens and reduce the friction in the disclosure process for getting a token remediated.
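For GitHub specifically, programmatic revocation is a single authenticated POST. The sketch below builds the request for GitHub's published credential-revocation endpoint (`POST /credentials/revoke`); the payload shape should be verified against current GitHub docs before you depend on it, and the token below is fake:

```python
import json
import urllib.request

def build_revocation_request(tokens):
    """Build the POST for GitHub's credential-revocation endpoint.

    Endpoint and payload shape follow GitHub's published credential
    revocation API; double-check current docs before relying on this.
    """
    return urllib.request.Request(
        "https://api.github.com/credentials/revoke",
        data=json.dumps({"credentials": tokens}).encode(),
        headers={
            "Accept": "application/vnd.github+json",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually revoke (not executed here):
#   urllib.request.urlopen(build_revocation_request(["ghp_..."]))
req = build_revocation_request(["ghp_fakeLeakedToken123"])
print(req.full_url)  # https://api.github.com/credentials/revoke
```

This is the kind of call a CI step can fire the moment a scanner verifies a leaked token.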
SaaS providers could themselves monitor for leaked credentials across open-source ecosystems.
Make it easier for people who find leaked credentials to report them. Not every SaaS provider has a clear channel for this today, and the resulting friction makes researchers less likely to disclose leaked credentials.
Several secret scanners, TruffleHog included, can be used as a pre-commit hook to catch secrets before they ever reach remote repositories.
Implement secret scanning in your CI/CD pipelines to detect and block PRs that contain secrets before they reach production code.
Rotation is the only remediation that works. Every other approach (file overwrites, repo deletion, history rewrites) leaves the credential live and exploitable.
The gap between detection and remediation is where breaches happen. You can close that gap by pairing TruffleHog’s scanning with automated revocation in your CI/CD pipelines, turning detection into immediate action rather than a ticket in someone’s backlog.
For the current environment to improve, SaaS providers need revocation APIs, proactive scanning of open-source repositories, and clearer disclosure pathways. Developers need pre-commit hooks, pipeline-level scanning, and the discipline to rotate, not overwrite, every leaked credential.
The clock starts the moment a secret hits a public repository. On GitHub, you have about five minutes. On Bitbucket and GitLab, you have longer, for now. Neither window is a remediation strategy.
Interested in learning more about the research? Register for our webinar on Thursday, April 9 at 10 AM PT / 1 PM ET / 5 PM GMT.