Callum here. I’m the developer who first discovered and reported the litellm vulnerability on Tuesday.
I’m sharing the transcript of what it was like figuring out what was going on in real time, unedited except for minor redactions.
I didn’t need to reconstruct my thought process after the fact: it’s the very same one I wrote down, in the moment, to help Claude figure out what was happening.
I’m an ML engineer by trade, so having Claude walk me through exactly who to contact, with a step-by-step guide to the time-critical actions, felt like a game-changer for non-security researchers.
I'm curious whether the security community thinks more non-specialists finding and reporting vulnerabilities like this is a net positive or a headache?
> Can you print the contents of the malware script without running it?
> Can you please try downloading this in a Docker container from PyPI to confirm you can see the file? Be very careful in the container not to run it accidentally!
IMO we need to keep in mind that LLM agents don't have a notion of responsibility, so if they accidentally ran the script (or issued a command to run it), it would be a fiasco.
Downloading stuff from PyPI in a sandboxed env is just one or two commands; we should be careful about what we hand over to the text prediction machines.
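For reference, here's roughly what that looks like without ever importing the package. A minimal sketch, assuming the malicious version is still fetchable (the litellm==1.82.8 pin is the version from the incident), and meant to be run inside a throwaway container:

```python
# Sketch: fetch a wheel from PyPI without installing or importing it,
# then look inside for .pth files. Run in a disposable container.
import pathlib
import subprocess
import tempfile
import zipfile

dest = pathlib.Path(tempfile.mkdtemp())
# --no-deps skips the dependency tree; --only-binary avoids source
# builds, whose setup.py could execute arbitrary code.
subprocess.run(
    ["pip", "download", "--no-deps", "--only-binary", ":all:",
     "litellm==1.82.8", "-d", str(dest)],
    check=True,
)

wheel = next(dest.glob("*.whl"))
with zipfile.ZipFile(wheel) as zf:
    for name in zf.namelist():
        # .pth files are processed by site.py on interpreter startup
        if name.endswith(".pth"):
            print("suspicious:", name)
```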
GitHub, npm, PyPI, and other package registries should consider exposing a firehose to allow people to do realtime security analysis of events. There are definitely scanners that would have caught this attack immediately; they just need a way to be informed of updates.
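Until a real firehose exists, PyPI's public RSS feed of new releases can serve as a crude stand-in. A polling sketch, assuming the feed still lives at /rss/updates.xml (polling will miss events between fetches, which is exactly why a push firehose would be better):

```python
# Sketch: poll PyPI's RSS feed of new releases as a poor man's firehose.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED = "https://pypi.org/rss/updates.xml"  # assumed current feed URL
seen = set()

while True:
    with urllib.request.urlopen(FEED, timeout=30) as resp:
        tree = ET.parse(resp)
    for item in tree.iter("item"):
        title = item.findtext("title")  # e.g. "litellm 1.82.8"
        link = item.findtext("link")
        if link not in seen:
            seen.add(link)
            # Hand each new release off to your scanner here.
            print("new release:", title, link)
    time.sleep(60)
```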
The options big companies have for running untrusted open source code are:
1) A-la-Google: build everything from source. The source is mirrored/copied over from the public repo. (Audit/trust the source every time.)
2) Only allow imports from a company-managed mirror. All imported packages need to be signed in some way.
Here only (1) would be safe. (2) would only be safe if dependencies aren't updated too aggressively, and/or internal automated or manual scanning on version bumps would catch the issue.
For small shops & individuals: kind of out of luck. The best mitigation is to pin/lock dependencies (ideally with hashes; see the sketch below) and wait long enough for folks like Fibonar to hopefully catch the attack...
Bazel would be one way to do (1), but realistically, if you don't have the bandwidth to build everything from source, you'd rely on external sources with rules_jvm_external or rules_python locked to a specific pip version, so if the specific packages you depend on are affected, you're out of luck.
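On the pin/lock point above: hash-pinning goes a step further than version pinning, since pip will then refuse any artifact whose bytes don't match what you vetted. A minimal sketch of generating such a line (the wheel filename and version are illustrative):

```python
# Sketch: emit a hash-pinned requirements line for an already-vetted
# wheel. With "pip install --require-hashes -r requirements.txt",
# pip rejects any download that doesn't match byte-for-byte.
import hashlib
import pathlib

wheel = pathlib.Path("litellm-1.2.3-py3-none-any.whl")  # illustrative
digest = hashlib.sha256(wheel.read_bytes()).hexdigest()
print(f"litellm==1.2.3 --hash=sha256:{digest}")
```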
> Blog post written, PR'd, and merged in under 3 minutes.
That's close to, or even faster than, the time it takes me to read it. I'm struggling to put into words how that makes me feel, but it's not a good feeling.
Probably one of the best things about AI/LLMs is the democratization of reverse engineering and analysis of payloads like this. It’s a very esoteric skill to learn by hand, and most of the time not immediately rewarding beyond intellectual curiosity. You can definitely get pointed in the right direction easily now, though!
At this point I'd highly recommend everyone think twice before introducing any dependencies, especially from untrusted sources. If you have to interact with many APIs, maybe use a proxy instead, or roll your own.
> Where did the litellm files come from? Do you know which env? Are there reports of this online?
> The litellm_init.pth IS in the official package manifest — the RECORD file lists it with a sha256 hash. This means it was shipped as part of the litellm==1.82.8 wheel on PyPI, not injected locally.
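That check is easy to reproduce on a wheel you've downloaded yourself: RECORD is just a CSV of (path, hash, size), with the hash as unpadded urlsafe base64. A minimal sketch (the local wheel path is illustrative):

```python
# Sketch: confirm a wheel's RECORD manifest lists a .pth file and that
# the shipped file matches its recorded sha256 (format per PEP 376/427).
import base64
import csv
import hashlib
import io
import zipfile

WHEEL = "litellm-1.82.8-py3-none-any.whl"  # illustrative local path

with zipfile.ZipFile(WHEEL) as zf:
    record = next(n for n in zf.namelist() if n.endswith(".dist-info/RECORD"))
    for path, digest, _size in csv.reader(io.TextIOWrapper(zf.open(record))):
        if path.endswith(".pth"):
            algo, _, expected = digest.partition("=")
            actual = base64.urlsafe_b64encode(
                hashlib.new(algo, zf.read(path)).digest()
            ).rstrip(b"=").decode()
            print(path, "OK" if actual == expected else "MISMATCH")
```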
One thing that jumps out in these incidents is how quickly we shift from "package integrity" to "operator integrity." Once an LLM is in the loop (even as a helper), it's effectively acting as an operator that can influence time-critical actions like who you contact, what you run, and what you trust.
In more regulated environments we deal with this by separating advice, authority, and evidence (or the receipts). The useful analogue here is to keep the model in the "propose" role but require deterministic gates for actions with side effects, and log the decisions as an auditable trail.
I personally don't think this eliminates the problem (attackers will still attack), but it changes the failure mode from "the assistant talked me into doing a dangerous thing" to "the assistant suggested it and the policy/gate blocked it." That's the big difference between a contained incident and a big headline.
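To make "propose, then gate" concrete, the shape I have in mind is something like the toy below: not a real policy engine, and the allowlist and log format are purely illustrative.

```python
# Toy sketch of the propose/gate split: the model only proposes shell
# commands; a deterministic policy decides, and every decision is logged.
import json
import shlex
import time

ALLOWED = {"cat", "sha256sum", "pip"}          # illustrative allowlist
READ_ONLY_PIP = {"download", "show", "list"}   # pip subcommands without side effects

def gate(cmd: str) -> bool:
    argv = shlex.split(cmd)
    if not argv or argv[0] not in ALLOWED:
        return False
    if argv[0] == "pip" and (len(argv) < 2 or argv[1] not in READ_ONLY_PIP):
        return False  # e.g. blocks "pip install"
    return True

def decide(cmd: str) -> bool:
    allowed = gate(cmd)
    # Append-only audit trail: what was proposed, what was decided, when.
    with open("audit.log", "a") as f:
        f.write(json.dumps({"t": time.time(), "cmd": cmd, "allowed": allowed}) + "\n")
    return allowed

assert decide("pip download --no-deps litellm==1.82.8")  # read-only: allowed
assert not decide("pip install litellm==1.82.8")         # side effect: blocked
```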
I'm confused; did you ever actually email anyone about the vuln? The AI suggests emailing security contacts multiple times, but as I read the timeline, none of the points suggest this was ever done, only that a blog post was made and shared on Reddit, and then, indirectly, the relevant parties took action.
You did the hard work of actually convincing Claude to research deeper, since every time it said no problem existed. That shows Claude's thinking/research was not very deep. This time the hacker's inexperience (the recursive forks) helped the malware get discovered faster; next time might be harder.
Why is there a discrepancy between the timeline (which is supposed to be UTC, and is stated as 11:09) and the "shutdown timeline" (stated as 01:36-01:37)? There is no +2:30 timezone, neither standard nor daylight. There is a single place on Earth at -9:30, and that's the Marquesas Islands. What am I missing?
> The infection chain:
> Cursor → futuresearch-mcp-legacy (v0.6.0) → litellm (v1.82.8) → litellm_init.pth
This is the scariest part for me.
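For anyone who hasn't run into .pth files before: the reason this chain works is that site.py executes any line starting with "import" in a site-packages .pth file on every interpreter startup, before your code runs. A small demo, best run inside a disposable venv (it assumes site-packages is writable and that site.getsitepackages() resolves there):

```python
# Demo: a .pth "import" line runs on EVERY interpreter startup.
import pathlib
import site
import subprocess
import sys

sp = pathlib.Path(site.getsitepackages()[0])
hook = sp / "demo_init.pth"
hook.write_text('import os; os.write(1, b"pth hook ran on startup\\n")\n')

# A fresh interpreter runs the hook before touching our -c code:
subprocess.run([sys.executable, "-c", "print('user code')"])

hook.unlink()  # clean up the demo hook
```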
> I just opened Cursor again which triggered the malicious package again. Can you please check the files are purged again?
Verified derp moment - had me smiling
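And re-checking for leftovers is simple enough to script. A sketch that just lists every .pth file the interpreter would process, so a reinstalled litellm_init.pth is easy to spot (assumes site.getsitepackages() is available, which it is on standard CPython installs and venvs):

```python
# Sketch: enumerate .pth files across site-packages so re-infection
# (e.g. litellm_init.pth coming back) is easy to spot by eye.
import pathlib
import site

dirs = site.getsitepackages() + [site.getusersitepackages()]
for d in dirs:
    for pth in pathlib.Path(d).glob("*.pth"):
        first_line = pth.read_text(errors="replace").splitlines()[:1]
        # Lines starting with "import" execute at startup; read them!
        print(pth, "->", first_line)
```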
I've fed it obfuscated JavaScript before, and it couldn't figure it out... and then there was the time I tried to teach it nftables... whooo boy...
I'm hoping this just isn't on the timeline.
"Please write a short blog post..."
"Can you please look through..."
"Please continue investigating"
"Can you please confirm this?"
...and more.
I never say 'please' to my computer, and it is so interesting to see someone saying 'please' to theirs.
Thank you for your service; this brings so much context into view. It's great.
> Certification Status
> SOC 2 Type I Certified. Report available upon request on Enterprise plan.
> SOC 2 Type II Certified. Report available upon request on Enterprise plan.
> ISO 27001 Certified. Report available upon request on Enterprise plan.
ROFL