CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private Source Code
Posted by grauenwolf@reddit | programming | View on Reddit | 58 comments
Posted by grauenwolf@reddit | programming | View on Reddit | 58 comments
WillingnessFun2907@reddit
I just assumed that was a feature of it
PurepointDog@reddit
Tldr?
nnomae@reddit
You can prompt inject co-pilot chat just by sending a pull request to another user. Since co-pilot has full access to every users private data such as code repositories, AWS keys etc this basically means none of your private data on github is secure for as long as co-pilot remains enabled. Probably unfixable without literally cutting co-pilot off from access to your data which would utterly neuter it something Microsoft don't want to do.
TLDR: If you care about your private data, get it off of github because there will likely be more of these.
StickiStickman@reddit
... if you put them in plain text into the repository, which is a MASSIVE detail to ignore
nnomae@reddit
It's a private repository. The only people who have access to it should be the projects own developers. You don't need to keep things secret from people you trust. I mean if you used a password manager to share those keys and the password manager company decided to add an AI integration you couldn't disable that was sending the passwords stored within it with third parties you'd be pretty annoyed. Why should trusting Github to protect your private data be any different?
Far_Associate9859@reddit
"Private repository" doesn't mean "personal repository" - its standard practice not to check environment variables into source control, even in private repositories, and even if you trust all the developers who have access to that repository.
grauenwolf@reddit (OP)
Ah, I see you are playing the "blame the victim" card. Always a crowd pleaser.
Far_Associate9859@reddit
🙄 Github is clearly at fault - but you should also try to protect yourself against security failures, and not checking environment variables into source control is one way of doing that
nnomae@reddit
What are you on about. Of course devs should be able to assume a private repositary is a safe place to store things that should remain private. If you can't make that basic assumption you shouldn't be using github for any non-public projects. You're trying to engage in blame transferrence here. Saying it's the devs fault for trusting github with their info and not githubs fault for failing to protect it.
I mean where are they storing these keys if not on github, oh yeah, in another system that promises to keep them private.
hennell@reddit
Storing keys in a private repository is also a bad idea if:
- You want to separate access between code and secrets. Which you should, working on a projects code doesn't mean you need all the secrets that code uses in prod.
- You want to use other tools with your repo. Same as above but tooling, CI/CD runners, code scanners, Ais or whatever may be given access to your code, do they need the secrets?
- You might someday open source or otherwise make your repo public. Or if someone accidentally makes a public fork. Or theres a github bug and all private repos are public for 24 hours.
Security is configured for the most secretive thing and you want to operate on a least permissions possible model. Giving people or tools access they don't need is adding pointless weak-points in your security. And outside a few proprietary algorithms most code is not really a sensitive secret. There's not always much damage people can do with 'private code', but theres a lot of damage you can do with an AWS key etc.
Keys and secrets should be scoped to the minimum possible abilities and given to the minimum possible people. Adding them to a repo is never a good idea.
nnomae@reddit
I'm not saying it was a great idea. I'm saying that it's reasonable to expect that any data - code, keys or other - should be stored securely by github It is perfectly reasonable for a developer to weigh the pros and cons and decide that just uploading the key into a private repository is fine for their circumstances.
We are talking here of a situation where Microsoft gave a known insecure technology, one that has for instance already leaked their own entire salesforce database, full access to customer developers accounts, in many cases against the wishes of those developers and yet some people are trying to argue those developers are to blame here.
Now the next time this happens it will be the developers fault. They know now that as long as copilot has access to their account their data is insecure. If they fail to act on that then they should also be held accountable next time round.
MostCredibleDude@reddit
You do if you follow the principal of least privilege. Any employee or organization member should be considered compromisable (either willingly through malfeasance or unwillingly through negligence). If you care at all about the security of your keys/passwords, you only expose those to the people who absolutely need them to perform their jobs.
I don't need my client's AWS keys or deployment credentials to kick of the CI/CD suite because they're nicely tucked away in environment configurations that I very appropriately have no access to. I don't want to know those things. They expose me to potential professional or legal peril if something goes wrong.
SaxAppeal@reddit
Yeah I’m not seeing how they fixed the fundamental issue here
nnomae@reddit
Indeed, it's not even clear if restricting Co-Pilot to plain ASCII text would even fix the underlying issue. The fundamental problem is that no matter how many times you tell an LLM not to do something stupid if someone asks it to do so a certain percentage of the time it will ignore your instructions and follow theirs.
SaxAppeal@reddit
It wouldn’t! It sounds like they essentially block the singular case where the agent literally steals your data instantaneously without you knowing? But it sounds like you can still inject a phishing scam, or malicious instruction sets that appear genuine….
wrosecrans@reddit
ASCII text isn't the issue. The issue is that they want all of the benefits of LLMs having access to everything, and they want to be in denial about all of the downsides of LLMs having access to everything. And there's just no magic that will make this a good approach. This stuff either has access or it doesn't.
tRfalcore@reddit
can't wait to go to work tomorrow
Nate506411@reddit
Don't let AI do pull requests.
grauenwolf@reddit (OP)
It's not "doing" the pull request. It's responding to one.
Nate506411@reddit
Ok, so after re-read the tldr sounds more like...don't let devs imbed malicious instructions for copilot into PRs as it will expose that Copilot has the same permissions as the implementing user and can exfiltrate the same potential IP?
grauenwolf@reddit (OP)
That's my impression.
And really it's a problem for any "agentic" system. If the AI has permission to do something, then you have to assume anyone who interacts with the AI has the same permissions.
JaggedMetalOs@reddit
An attacker can hide invisible AI prompts in pull requests.Â
If the person at the other end of the pull request is using AI then the AI will follow the hidden prompt.
The AI can read data from private repos and used to be able to post it directly to an attacker via
tags in its chat window.Â
grauenwolf@reddit (OP)
https://pivot-to-ai.com/2025/10/14/its-trivial-to-prompt-inject-githubs-ai-copilot-chat/
awj@reddit
Definitely reassuring to see this with a technology that everyone is racing to shove in everywhere and giving it specialized access to all kinds of data and APIs.
syklemil@reddit
I'm reminded of the way-back-when MS thought executable code everywhere was a good idea, which resulted in ActiveX exploits in anything for ages.
It must have happened at least once before as well, because I'm pretty sure LLM interpretation and execution everywhere is the farce repeat, not the tragedy repeat.
SkoomaDentist@reddit
cough Excel macros cough
knome@reddit
excel is nothing. windows had an image format (WMF) that allowed the image to carry code that would be executed when rendering the image (or failing to? I don't remember). someone noticed in the early 2000s and started spamming banner ads and email images using it to deliver malware (CVE-2005-4560)
funniest part was wine had, as they say, bug-for-bug replicated the feature, so it was also vulnerable.
WaytoomanyUIDs@reddit
WMF wasn't an image file format. It was a general purpose format that for some reason some brainiac decided "yes, general purpose includes executable content". It just ended up being used mostly for images. I believe it was inspired by something similar on Amigas.
CherryLongjump1989@reddit
This is why I started self-hosting my own software forge.
dangerbird2@reddit
Does this vulnerability only expose content in a users' repos, or can it access even more sensitive data like github action secret variables? The example exploit seems it will be of minimal risk unless you already have sensitive values in plaintext in a repo, which is already a massive vulnerability (theoretically, it could be used to dump private source code into the attacker's image server, but it seems like there'd be limit to the length of the compromised urls)
chat-lu@reddit
The latter.
tj-horner@reddit
Nowhere in this article does it demonstrate access to GitHub Actions secrets. I’m pretty sure Copilot can’t even access those; they are only available within an Actions workflow run.
veverkap@reddit
This is correct
dangerbird2@reddit
where does it say that, since OP's article describes the zero-action vulnerability reading the codebase for sensitive info, rather than metadata like secrets and ssh keys which have much stricter protections than the git repo itself. Which is why it seems like this vulnerability is more about making it easier for attackers to exploit existing vulnerabilities (ie committing plaintext secrets to git). Not that this makes it okay of course, considering how difficult it can be to purge a secret accidentally committed and pushed to a remote
tRfalcore@reddit
our github rules and jenkins rules deny, hide, and delete that shit if anyone messes up accidentally. That's all it takes.
chat-lu@reddit
He got the AWS keys.
But in any case copilot do have access to all the variables and you can prompt it.
dangerbird2@reddit
in a git repo, which is an extremely significant vulnerability on the victim's part rather than Microsoft's. For context, outside of copilot, github won't even display your own action secrets, and will redact the secrets from action logs.
altik_0@reddit
From what I could tell in the article, the demonstrated attack was focused on the text content of Pull Requests / comments, so the former. But they did make a compelling case for a significant attack vector here: exposing Zero-Day exploit private repositories.
Short version of the attack: - Craft a prompt to CoPilot that requests recent pull request summaries for the victim - Inject this prompt as hidden content inside a pull request to a popular open source repository with large surface area to attack (i.e. the Linux kernel, openssl, etc.) - Phish for a prominent user of these repositories who is also looped in on significant zero-day investigations, and has private repositories they are working on to patch these without publicly exposing them - Get summaries of these zero-days sent to the attacker, who can then make use of this information to escalate the zero-days from hypothetical to actual attacks.
This isn't as obviously dire as leaking credentials or sensitive user data that CoPilot may or may not have access to, but it's still a VERY serious security issue.
dangerbird2@reddit
yep, that's basically what I gleaned
grauenwolf@reddit (OP)
If I'm reading this correctly, it's exposing information from the user's account, not the repos. But I could be mistaken.
tj-horner@reddit
This is an interesting exploit, but I don't agree with the author's assessment of a CVSS 9.6 because:
grauenwolf@reddit (OP)
So the tool is only a vulnerability if you use the tool? I think the author might agree with that.
tj-horner@reddit
One of the core CVSS metrics is user interaction.
Goron40@reddit
I must be misunderstanding. Seems like in order to pull this off, the malicious user needs to create a PR against a private repo? Isn't that impossible?
altik_0@reddit
Think of it as a phishing attack:
Goron40@reddit
Yeah, I follow all of that. What about what I actually asked about though?
AjayDevs@reddit
The pull request can be done on any repo (the victim doesn't even have to be the owner of it). And then any random user who uses copilot chat with that pull request open will have copilot fetch all of their personal private repo details
straylit@reddit
I know there are settings for actions to not run on PRs from outside/forked repos. Is this different than copilot? When someone who has read access to the repo opens the PR it automatically runs copilot against the PR?
mv1527@reddit
What worries me is that the mentioned fix is to plug this particular exfiltration strategy, but nothing is mentioned regarding fixing the actual injection.
etherealflaim@reddit
Because prompt injection is a "feature" of LLMs. They will probably step up their defenses but they can't promise that it's preventable because it isn't.
audentis@reddit
For the love of god why can't copilot treat context as unsanitized user input with all security risks this implies?
Prompt injection has been around way too long for this to be acceptable in the slightest.
PancAshAsh@reddit
Because that would defeat the whole purpose of copilot, or at the very least make it a lot worse to use.
j1xwnbsr@reddit
Wouldn't a better fix be to totally disable HTML inside the pull request and commit comments? Or am I missing something beyond that?
Shogobg@reddit
Again?
olearyboy@reddit
So copilot is training on private repos?
grauenwolf@reddit (OP)
Jannik2099@reddit
No. Is reading the article really this difficult?
Ironically, you could've even asked your LLM of choice to summarize it for you...
grauenwolf@reddit (OP)
Summary Video: https://www.youtube.com/watch?v=iapCiYBj1bI