kapitanjakc 8 days ago

GitHub Copilot was doing this earlier as well.

I am not talking about giving your token to Claude, GPT, or GitHub Copilot.

It has been reading private repos for a while now.

The reason I know about this is a project we received to create an LMS.

I usually go for Open edX, as that's my area of expertise. The ask was to create a very specific XBlock. Consider XBlocks as plugins; a minimal skeleton is sketched below.
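For anyone unfamiliar, an XBlock is just a Python class that Open edX discovers as a plugin and renders inside a course. A bare-bones sketch, with made-up names for illustration (this is not the client code):

    # Minimal, hypothetical XBlock skeleton -- names are illustrative only.
    from xblock.core import XBlock
    from xblock.fields import Scope, String
    from web_fragments.fragment import Fragment

    class ThirdPartyContentXBlock(XBlock):
        """Embeds content from an external provider in a course unit."""

        # Course staff configure which provider content item to show.
        content_id = String(default="", scope=Scope.settings)

        def student_view(self, context=None):
            # Render whatever markup the provider content maps to.
            return Fragment(f"<div>Provider content: {self.content_id}</div>")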

Now, your Open edX code is usually public, but XBlocks created specifically for clients can be private.

The ask was similar to something I had done earlier: an integration with a third-party content provider (mind you, the content is also in a very specific format).

I know that no one else in the whole world had done this, because when I did it originally I looked for prior art. All I found was the content provider's marketing material. Nothing else.

So I built it from scratch, put the code in the client's private repos, and that was it.

Then recently a new client asked for a similar integration. As I had already done that sort of thing, I was happy to do it.

They said they already had the core part ready and wanted help finishing it.

I was happy and curious: happy that someone else had gone through the process, and curious about their approach.

They mentioned it was done by interns on their in-house team. I was shocked. I am no genius myself, but this was not something a junior engineer, let alone an intern, could do.

So I asked for access to the code and was shocked again. It was the same code I had written earlier, with the comments intact. Variable spellings were changed, but the rest of it was identical.

ZYbCRq22HbJ2y7 8 days ago

> I know that no one else in the whole world had done this, because when I did it originally I looked for prior art.

Not convincing, but plausible. Not many things that humans do are unique, even when humans are certain that they are.

Humans who are certain that the things they do are unique are likely overlooking that prior.

bastardoperator 7 days ago

Agreed. Ask it for its training cutoff date. I did: June 2024...

yellow_lead 8 days ago

It seems you're implying GitHub Copilot trained on your private repo. That's a completely separate concern from the one raised in this post.

6Az4Mj4D 8 days ago

In GitHub Copilot, if we select the "don't use my code for training" option, does it still leak your private code?

ZYbCRq22HbJ2y7 8 days ago

Read the privacy policy and terms of use:

https://docs.github.com/en/site-policy/privacy-policies/gith...

IMO, you'd have to be naive to think Microsoft makes GitHub basically free just for vibes.

josteink 8 days ago

GitHub Copilot is most definitely not free for GitHub Enterprise customers.

ZYbCRq22HbJ2y7 7 days ago

I didn't realize we were talking about that.

Shekelphile 8 days ago

Yes. Opt-outs like that are almost never actually respected in practice.

And as the OP shows, Microsoft is intentionally giving outside actors access to private repos for the purpose of training LLMs.

ikiris 8 days ago

You're completely leaving out the possibility that the client gave others the code.

kapitanjakc 5 days ago

Lol, never thought about that. It's highly possible.

RedCardRef 8 days ago

Which provider is immune to this? GitLab? Bitbucket?

Or is it better to self-host?

digi59404 8 days ago

Self-hosted GitLab with a self-hosted LLM provider connected to it, powering GitLab Duo. This should ensure that the data never leaves your network, is never used as training data, and still allows you and your staff to use LLMs. If you don’t want to self-host an LLM, you could use something like Amazon Q, but then you’re trusting Amazon to do right by you. (A rough sketch of the self-hosted setup follows the link below.)

https://docs.gitlab.com/administration/gitlab_duo_self_hoste...
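To make the "data never leaves your network" point concrete, here is a rough sketch of calling a local OpenAI-compatible endpoint. I'm assuming an Ollama instance on localhost; the endpoint path and model name are examples, not GitLab Duo specifics:

    # Sketch: query a self-hosted model over an OpenAI-compatible API.
    # Assumes an Ollama server on localhost; the model name is an example.
    import requests

    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",  # stays in-network
        json={
            "model": "qwen2.5-coder",
            "messages": [{"role": "user", "content": "Review this diff: ..."}],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])

GitLab Duo's self-hosted mode points at an endpoint like this, so prompts and code context terminate inside your own infrastructure.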

Aurornis 8 days ago

GitHub won’t use private repos for training data. You’d have to believe that they were lying about their policies and coordinating a lot of engineers into a conspiracy in which not a single one of them would blow the whistle.

Copilot won’t send your data down a path that incorporates it into training data. Not unless you do something like Bring Your Own Key and then point it at one of the “free” public APIs that are only free because they use your inputs as training data. (EDIT: Or if you explicitly opt-in to the option to include your data in their training set, as pointed out below, though this shouldn’t be surprising)

It’s somewhere between myth and conspiracy theory that using Copilot, Claude, ChatGPT, etc. subscriptions will take your data and put it into their training set.

kennywinker 8 days ago

“GitHub Copilot for Individual users, however, can opt in and explicitly provide consent for their code to be used as training data. User engagement data is used to improve the performance of the Copilot Service; specifically, it’s used to fine-tune ranking, sort algorithms, and craft prompts.”

- https://github.blog/news-insights/policy-news-and-insights/h...

So it’s a “myth” that GitHub explicitly says is true…

Aurornis 8 days ago

> can opt in and explicitly provide consent for their code to be used as training data.

I guess if you count users explicitly opting in, then that part is true.

I also covered, above, the case where someone opts in to a “free” LLM provider that uses prompts as training data.

There are definitely ways to get your private data into training sets if you opt in to them, but that shouldn’t surprise anyone.

kennywinker 8 days ago

You speak in another comment about how “It would involve thousands or tens of thousands of engineers to execute. All of them would have to keep the conspiracy quiet.” Yet if the pathway exists, it seems to me there is ample opportunity for un-opted-in data to take that pathway, with plausible deniability of “whoops, that’s a bug!” No need for thousands of engineers to be involved.

Aurornis 8 days ago

Or instead of a big conspiracy, maybe this code, which was written for a client, was later used by someone at the client who triggered the pathway, volunteering the code for training?

Or the more likely explanation: this vague internet anecdote from an anonymous person is talking about some simple and obvious code snippets that anyone, or any LLM, would have generated in the same function?

I think people like arguing conspiracy theories because you can jump through enough hoops to claim that it might be possible if enough of the right people coordinated to pull something off and keep it secret from everyone else.

kennywinker 7 days ago

My point is less “it’s all a big conspiracy” and more that this can fall into Hanlon’s razor territory. All it takes is not actually giving a shit about un-opted-in code leaking into the training set for this to happen.

The existence of the AI-generated Studio Ghibli meme proves AI models were trained on copyrighted data. Yet nobody’s been fired or sued. If nobody cares about that, why would anybody care about some random nobody’s code?

https://www.forbes.com/sites/torconstantino/2025/05/06/the-s...

suddenlybananas 8 days ago

Companies lie all the time; I don't know why you have such faith in them.

Aurornis 8 days ago

Anonymous Internet comment section stories are confused and/or lie a lot, too. I’m not sure why you have so much faith in them.

Also, this conspiracy requires coordination across two separate companies (GitHub for the repos and the LLM providers requesting private repos to integrate into training data). It would involve thousands or tens of thousands of engineers to execute. All of them would have to keep the conspiracy quiet.

It would also permanently taint their frontier models, opening them up to millions of lawsuits (across all GitHub users) and making them untouchable in the future, guaranteeing their demise as soon as a single person involved decided to leak the fact that it was happening.

I know some people will never trust any corporation for anything and assume the worst, but this is the type of conspiracy that requires a lot of people from multiple companies to implement and keep quiet. It also has very low payoff for company-destroying levels of risk.

So if you don’t trust any companies (or you make decisions based on vague HN anecdotes claiming conspiracy theories) then I guess the only acceptable provider is to self-host on your own hardware.

Covenant0028 8 days ago

Another thing that would permanently taint models and open their creators to lawsuits is training them on many terabytes of pirated ebooks. Yet that didn't seem to stop Meta with Llama [0]. This industry is rife with such cases; OpenAI's CTO famously could not answer a simple question about whether Sora was trained on YouTube data or not. And now it seems models might be trained on video game content [1], which opens up another avenue for lawsuits.

The key question from the perspective of the company is not whether there will be lawsuits, but whether the company will get away with it. And so far, the answer seems to be: "yes".

The only likely exception is private repos owned by enterprise customers. It's unlikely that GitHub would train LLMs on those, as the customers might walk away if they found out. And Fortune 500 companies have way more legal resources to sue with than random internet activists. But if you are not a paying customer, well, the cliché is that you are the product.

[0]: https://cybernews.com/tech/meta-leeched-82-terabytes-of-pira...

[1]: https://techcrunch.com/2024/12/11/it-sure-looks-like-openai-...

brian-armstrong 8 days ago

With the current admin I don't think they really have any legal exposure here. If they ever do get caught, it's easy enough to just issue some flimsy excuse about ACLs being "accidentally" omitted and then maybe they stop doing it for a little while.

This is going to be the same disruption as Airbnb or Uber. Move fast and break things. Why would you expect otherwise?

suddenlybananas 8 days ago

I really don't see how tens of thousands of engineers would be required.

0_gravitas 8 days ago

I work for <company>. We lie; in fact, many of us in our industry lie, to each other, but most importantly to regulators. I lie for them because I get paid to. I recommend you vote for any representative who is hostile towards the marketing industry.

And companies are conspirators by nature: plenty of large movie/game production companies manage to keep pretty quiet about game details and release dates (and they often don't even pay well!).

I genuinely don't understand why you would "trust" a corporation at all, especially when it comes to them not generating revenue or market share where they otherwise could.

Aurornis 8 days ago

If you found your exact code in another client’s hands then it’s almost certainly because it was shared between them by a person. (EDIT: Or if you’re claiming you used Copilot to generate a section of code for you, it shouldn’t be surprising when another team asking Copilot to solve the same problem gets similar output)

For your story to be true, it would require your GitHub Copilot LLM provider to use your code as training data. That’s technically possible if you went out of your way to use a Bring Your Own Key API, then used a “free” public API that was free because it used prompts as training data, then you used GitHub Copilot on that exact code, then that underlying public API data was used in a new training cycle, then your other client happened to choose that exact same LLM for their code. On top of that, getting verbatim identical output based on a single training fragment is extremely hard, let alone enough times to verbatim duplicate large sections of code with comment idiosyncrasies intact.

Standard GitHub Copilot and paid LLMs don’t even have a path where user data is incorporated into the training set. You have to go out of your way to use a “free” public API that is only free in order to collect training data. It’s a common misconception that merely using Claude or ChatGPT subscriptions will incorporate your prompts into the training data set, but companies have been very careful not to do this. I know many will doubt it and believe the companies are doing it anyway, but that would be a massive scandal in itself (which you’d have to believe nobody has blown the whistle on).

throwaway314155 8 days ago

Indeed. In light of that, it seems this might (!) just be a real instance of "I'm obsolete because interns can get an LLM to output the same code I can."

kapitanjakc 5 days ago

Hmm, could very well be. But with the comments intact?

Anyway, one thing I did not consider, which another comment points out, is that the original client could've provided the same code, as they are also its actual owners.

cmiles74 8 days ago

I believe the issue here is with the tooling provided to the LLM. It looks like GitHub is giving the LLM tools that let it search GitHub repositories. I wouldn't be shocked if this was a bug in some crappy MCP implementation someone whipped up under serious time pressure; a hypothetical sketch of what such a bug could look like is below.

I don't want to let Microsoft off the hook on this, but is this really that surprising?
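To illustrate the kind of scoping bug I have in mind, here's a purely hypothetical tool handler (not GitHub's actual implementation) that searches with a broadly scoped server-side token and never narrows results to what the requesting user is allowed to see:

    # Hypothetical MCP-style tool handler -- illustrative only.
    import requests

    SERVER_TOKEN = "..."  # broadly scoped token held by the tool server

    def search_code_tool(query: str) -> list[str]:
        # BUG: the query is forwarded with no user/visibility qualifier,
        # so results include any repo SERVER_TOKEN can read, private
        # repos included.
        resp = requests.get(
            "https://api.github.com/search/code",
            params={"q": query},
            headers={"Authorization": f"Bearer {SERVER_TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return [item["html_url"] for item in resp.json()["items"]]

Any agent wired to a tool like this could leak private code into a response without the model ever having been trained on it.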

Update: found the company's blog post on this issue.

https://invariantlabs.ai/blog/mcp-github-vulnerability

Shekelphile 8 days ago

No, what you're seeing here is that the underlying model was trained on private repo data from GitHub en masse, which would only have happened if MS had provided it in the first place.

MS never respected this in the first place, either; exposing closed-source and dubiously licensed code used in training Copilot was one of the first things that happened when it was made available.

kapitanjakc 5 days ago

Or, as the other comment points out, the original client might have used it on the code. So my conspiracy theory just came crashing down.

1oooqooq 8 days ago

Thinking a non-enterprise GH repo is out of Microsoft's reach is like giving Facebook your phone number for authentication and thinking they won't add it to their social graph matching.

alfiedotwtf 8 days ago

“With comments intact”

… SCO Unix lawyers have entered the chat