- Impacted versions (v1.82.7, v1.82.8) have been deleted from PyPI
- All maintainer account credentials have been changed
- All keys for GitHub, Docker, CircleCI, and PyPI have been deleted
We are still scanning our project to see if there are any more gaps.
If you're a security expert and want to help, email me - krrish@berri.ai
We just can't trust dependencies and dev setups. I wanted to say "anymore", but we never could. Dev containers were never good enough: too clumsy, too little isolation. We need to start working in full sandboxes with defence in depth, ones that have real guardrails and UIs: VM isolation plus container primitives, allow lists, egress filters, seccomp, gVisor, and more, but with much better usability. These are the same requirements we have for agent runtimes, so let's use this momentum to make our dev environments safer! In such an environment the container would crash, we'd see the violations, delete it, and not have to worry about it. We should treat this as an everyday possibility, not as an isolated security incident.
I made this tool for macOS systems that helps detect when a package accesses something it shouldn't. It's a tiny Go binary (less than 2k LOC) with no dependencies that will mount a WebDAV filesystem (no root) or NFS (root required) with fake secrets and send you a notification when anything accesses it. Very stupid simple. I've always really liked the canary/honeypot approach, and this at least may give some folks a chance to detect (similar to LittleSnitch) when something strange is going on!
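For anyone who wants the gist without reading the binary: the canary idea can be sketched in a few lines. The following is a minimal Python illustration, not the commenter's Go tool; it uses a POSIX FIFO instead of a WebDAV mount, and the path and notification command are made-up placeholders.

    # Minimal canary-file sketch (illustrative, NOT the Go tool above).
    # A FIFO posing as a secrets file: opening it for writing blocks
    # until some other process opens it for reading, so the moment
    # anything reads the "secret" we wake up and can alert.
    import os, subprocess, time

    CANARY = os.path.expanduser("~/.aws/credentials_canary")  # placeholder path

    if not os.path.exists(CANARY):
        os.mkfifo(CANARY, 0o600)

    while True:
        fd = os.open(CANARY, os.O_WRONLY)   # blocks until a reader appears
        os.write(fd, b"[default]\naws_access_key_id = AKIA_FAKE_CANARY\n")
        os.close(fd)
        subprocess.run(["osascript", "-e",
                        'display notification "canary secrets file was opened" '
                        'with title "honeypot"'])   # macOS notification
        time.sleep(1)   # debounce repeated opens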
Next time the attack may not have an obvious performance issue!
This is tied to the TeamPCP activity over the last few weeks. I've been responding and keeping an up-to-date timeline. I hope it might help folks catch up and contextualize this incident: https://ramimac.me/trivy-teampcp/#phase-09
I've been waiting for something like this to happen. It's just too easy to pull off. For a little while now I've been hard-pinning all of my dependency versions and using older versions in any new projects I set up, because they've generally at least been around long enough to vet. But even that has its own set of risks (for example, what if I accidentally pin a vulnerable version). Either that, or I fork everything, including all the deps, and run LLMs over the codebase to vet everything.
Even still, we can't really trust any open-source software with third-party dependencies any more, because the chains can be so complex and long that it's impossible to vet everything.
It's just too easy to spam out open-source software now, which also means it's too easy to create thousands of infected repos with sophisticated and clever supply chain attacks planted deeply inside them. Ones that can be surfaced at any time, too. LLMs have compounded this risk 100x.
It will only take one agent-led compromise to get some Claude-authored underhanded C into LLVM or Linux or something, and then we will all finally need to reflect on trusting trust, at last and forevermore.
I just installed Harbor, and it instantly pegged my CPU. I was lucky to see my processes before the system hard-locked.
Basically it fork-bombed grep -r rpcuser\|rpcpassword processes trying to find crypto wallets or something. I saw that they spawned from harness, and killed it.
Got lucky: no backdoor installed here, from what I could make out of the binary.
This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies. It was the same in Trivy’s case.
This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.
Only tangentially related: is there some joke/meme I'm not aware of? The GitHub comment thread is flooded with identical comments like "Thanks, that helped!", "Thanks for the tip!", and "This was the answer I was looking for."
Since they all seem positive, it doesn't seem like an attack, but I thought the general etiquette for GitHub issues was to use the emoji reactions to show support, so that the comment thread contains only substantive comments.
A general question - how do frontier AI companies handle scenarios like this in their training data? If they train their models naively, then training data injection seems very possible and could make models silently pwn people.
Do the labs tag code versions that have an associated CVE as compromised (telling the model what NOT to do)? Do they build adversarial RL environments to teach what's good/bad? I'm very curious, since it's inevitable that some pwned code ends up as training data no matter what.
> ### Software Supply Chain is a Pain in the A*
> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically
AI is not about fancy models; it's about plain old software engineering [0]. I strongly advised our team of "not-so-senior" devs not to use LiteLLM or LangChain or anything like that, and to just stick to `requests.post(...)`.
[0] https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-soft...
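As a concrete illustration of the "just use requests" route, here's a minimal sketch; it assumes an OpenAI-compatible chat completions endpoint, and the URL and model name are placeholders:

    # Calling an LLM provider directly: one POST, no framework.
    import os, requests

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",   # any OpenAI-compatible endpoint
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",                     # placeholder model name
            "messages": [{"role": "user", "content": "Say hi"}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])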
Maintainers need to keep a wall between the package publishing and public repos. Currently what people are doing is configuring the public repo as a Trusted Publisher directly. This means you can trigger the package publication from the repo itself, and the public repo is a huge surface area.
Configure the CI to make a release with the artefacts attached. Then have an entirely private repo, one that can't be triggered automatically, act as the publisher. The publisher repo fetches the artefacts and does the PyPI/npm/whatever release.
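Roughly, the private publisher step could be as small as this. A sketch only: the repo name is a placeholder, and it assumes artefacts are attached to a GitHub release and twine is configured with credentials that never touch the public repo's CI.

    # Detached publisher: pull artefacts from the public repo's release,
    # then upload to PyPI from a separate, private context.
    import os, subprocess, requests

    OWNER_REPO = "example-org/example-lib"   # placeholder
    release = requests.get(
        f"https://api.github.com/repos/{OWNER_REPO}/releases/latest",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    ).json()

    os.makedirs("dist", exist_ok=True)
    for asset in release["assets"]:
        blob = requests.get(asset["browser_download_url"], timeout=60)
        with open(os.path.join("dist", asset["name"]), "wb") as f:
            f.write(blob.content)

    # Credentials for this step live only in the private publisher repo.
    subprocess.run("twine upload dist/*", shell=True, check=True)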
This is bad, especially from a downstream dependency perspective. DSPy and CrewAI also import LiteLLM, so you might not be using LiteLLM as a gateway but still be importing it via those libraries for agents, etc.
Does anyone know a good alternative project that works similarly (sharing multiple LLMs across a set of users)? LiteLLM has been getting worse and keeps trying to get me to upgrade to a paid version. I also had issues with creating tokens for other users, etc.
I wonder at what point ecosystems just force a credential rotation. Trivy and now LiteLLM have probably cleaned out a sizable number of credentials, and now it's up to each person and/or team to rotate. TeamPCP is sitting on a treasure trove of credentials and based on this, they're probably carefully mapping out what they can exploit and building payloads for each one.
It would be interesting if Python, NPM, Rubygems, etc all just decided to initiate an ecosystem-wide credential reset. On one hand, it would be highly disruptive. On the other hand, it would probably stop the damage from spreading.
I hope that everyone's course of action will be uninstalling this package permanently, and avoiding the installation of packages similar to this.
To reduce supply chain risk, you need to evaluate not only the vendor (even if it's gratis and open source) but also the advantage the dependency provides.
Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it.
Remember that you are programmers and you can just program. You don't need a framework; you are already using the API of an LLM provider. Don't put a hat on a hat; don't get killed for nothing.
And even if you weren't using this specific dependency, check your deps; you might have shit like this in your requirements.txt and were merely saved by chance.
An additional note: the dev will probably post a post-mortem, with what was learned and how it was fixed, and maybe downplay the thing. Ignore that; the only reasonable step after this is closing the repo, but there's no incentive to do that.
Our modern economy/software industry truly runs on eggshells nowadays: engineers' accounts are getting hacked to create supply-chain attacks, all at the same time that threat actors are getting more advanced, partly with the help of LLMs.
First Trivy (which got compromised twice), now LiteLLM.
A question from a non-Python-security-expert: is committing uv.lock files that pin specific versions, and only infrequently updating those versions, a reasonable practice?
If it had not spawned so many Python processes and overwhelmed the system with them (friends noticed it was consuming too much CPU from the fan noise!), it would have been much more successful. So it's similar to the xz attack: it does a lot of CPU-intensive work.
    spawn background python
    decode embedded stage
    run inner collector
    if data collected:
        write attacker public key
        generate random AES key
        encrypt stolen data with AES
        encrypt AES key with attacker RSA pubkey
        tar both encrypted files
        POST archive to remote host
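For readers who don't recognize the pattern in the last few steps: encrypting the data with a fresh AES key and then encrypting that key with the attacker's RSA public key is standard envelope (hybrid) encryption. A generic sketch of the construction, not the payload's actual code:

    # Generic envelope encryption (textbook pattern, NOT the payload's code):
    # bulk data under a fresh AES key, the AES key under an RSA public key.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = rsa_key.public_key()        # an attacker ships only this half

    aes_key = AESGCM.generate_key(bit_length=256)    # fresh key per victim
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, b"stolen data", None)

    wrapped_key = public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # Only the holder of the matching private key can unwrap aes_key,
    # which is why inspecting the exfiltrated archive alone recovers nothing.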
Does the Python ecosystem have anything like pnpm’s minimumReleaseAge setting? Maybe I’m being overly paranoid, but it feels like every internet-facing ecosystem should have something like this.
CrewAI (uses litellm) pinned it to 1.82.6 (last good version) 5 hours ago but the commit message does not say anything about a potential compromise. This seems weird. Is it a coincidence? Shouldn’t users be warned about a potential compromise?
I maintain that GitHub does a piss-poor job of hardening CI so that one compromised step doesn't compromise all possible secrets. There's absolutely no need for the GitHub publishing workflow to run some third-party scanner, and the third-party scanner doesn't need access to your PyPI publishing tokens.
This stupidity is squarely on GitHub CI. Trivy is also at fault here, but the blast radius should have been more limited.
FWIW I vibe coded https://github.com/astrostl/surplies to detect evidence of the Axios and LiteLLM malware, using StepSecurity's writeups as a data source.
I think this gets a lot worse when we look at it from an agentic perspective. When a dev hits a compromised package, there's usually a "hold on, that's weird" moment before a catastrophe. An agent doesn't have that instinct.
Oh boy, supply chain integrity will be an agent governance problem, not just a devops one. If you send out an agent that can autonomously pull packages, write code, or access creds, then the blast radius of a compromise widens. That's why I think there's an argument for least-privilege by default: agents should have scoped, auditable authority over what they can install and execute, plus approval for anything outside the boundaries.
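To make "scoped, auditable authority" concrete, here is a toy sketch of the shape such a gate could take; everything in it (names, policy) is hypothetical:

    # Hypothetical least-privilege gate for an agent runtime: every
    # install/exec request is audited, checked against an allow list,
    # and anything outside the boundary is deferred to a human.
    from datetime import datetime, timezone

    ALLOWED_PACKAGES = {"requests", "numpy"}      # scoped install authority
    audit_log = []

    def request_action(action: str, target: str) -> str:
        audit_log.append((datetime.now(timezone.utc), action, target))
        if action == "install" and target in ALLOWED_PACKAGES:
            return "allow"
        return "needs-human-approval"             # default-deny boundary

    print(request_action("install", "requests"))   # allow
    print(request_action("install", "litellm"))    # needs-human-approval
    print(request_action("exec", "curl evil.sh"))  # needs-human-approval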
I was running it (as a proxy) in my homelab with docker compose, using the litellm/litellm:latest image (https://hub.docker.com/layers/litellm/litellm/latest/images/...). I don't think this was compromised, as it is from 6 months ago, and I checked that it is version 1.77.
I guess I am lucky as I have watchtower automatically update all my containers to the latest image every morning if there are new versions.
I also just added it to my homelab this sunday, I guess that's good timing haha.
Seems that the GitHub account of one of the maintainers has been fully compromised. They closed the GitHub issue for this problem. And all their personal repos have been edited to say "teampcp owns BerriAI". Here's one example: https://github.com/krrishdholakia/blackjack_python/commit/8f...
This, along with some other issues, makes me consider ejecting and building my own LLM shim. The different model providers are bespoke enough, even within litellm, that it sometimes seems like a lot of hassle for not much benefit.
Also the repo is so active that it's very hard to understand the state of issues and PRs, and the 'day 0' support for GPT-5.4-nano took over a week! Still, tough situation for the maintainers who got hacked.
1. Looks like this originated from the trivy scanner used in our ci/cd - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09
2. If you're on the proxy Docker image, you were not impacted. We pin our versions in the requirements.txt.
3. The package is in quarantine on PyPI - this blocks all downloads.
We are investigating the issue, and seeing how we can harden things. I'm sorry for this.
- Krrish
I updated my global configs to set min release age to 7 days:
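(The config snippet didn't survive the copy here. Assuming this refers to pnpm's minimumReleaseAge, which is measured in minutes, 7 days would presumably look something like this in pnpm-workspace.yaml:)

    # pnpm-workspace.yaml (assumed; minimumReleaseAge is in minutes)
    minimumReleaseAge: 10080   # 7 days * 24 h * 60 min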
I would expect a better spam-detection system from GitHub. This is hardly acceptable.
Run all your new dependencies through static analysis and don't install the latest versions.
I implemented a static analysis tool for Python that detects close to 90% of such injections:
https://github.com/rushter/hexora
or pyproject.toml (not possible to filter based on absence of a uv.lock, but at a glance it's missing from many of these): https://github.com/search?q=path%3A*%2Fpyproject.toml+%22%5C...
or setup.py: https://github.com/search?q=path%3A*%2Fsetup.py+%22%5C%22lit...
https://inspector.pypi.io/project/litellm/1.82.8/packages/fd...
We are looking at similar attack vectors (pth injection), signatures etc. in other PyPI packages that we know of.
The package was directly compromised, not “by supply chain attack”.
If you use the compromised package, your supply chain is compromised.
https://github.com/crewAIInc/crewAI/commit/8d1edd5d65c462c3d...
This was taught in the 90s. Sad to see that lesson fading away.
EDIT: here's what I did; I would appreciate some sanity checking from someone who's more familiar with Python than I am, as it's not my language of choice.
    find / -name "litellm_init.pth" -type f 2>/dev/null
    find / -path '*/litellm-1.82.*.dist-info/METADATA' -exec grep -l 'Version: 1.82.[78]' {} \; 2>/dev/null
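And a Python cross-check of the same thing, using only the standard library; note it only inspects the active environment, not every venv on disk (the two bad version numbers come from the maintainer's comment upthread):

    # Check whether a known-bad litellm version is installed in the
    # current environment (versions per the maintainer's comment).
    from importlib.metadata import version, PackageNotFoundError

    try:
        v = version("litellm")
        status = "COMPROMISED" if v in {"1.82.7", "1.82.8"} else "looks ok"
        print(f"litellm {v}: {status}")
    except PackageNotFoundError:
        print("litellm is not installed here")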