Why LLM-Generated Passwords Are Dangerously Insecure (irregular.com)

by zdw 24 comments 19 points

[−] catlifeonmars 41d ago
This article has “why stabbing yourself with a screwdriver is bad” vibes.
[−] randomNumber7 41d ago
Yes. It really makes no sense to take a screwdriver instead of a knife.
[−] ks2048 41d ago
Had me wondering: if you ask an LLM for a random number from 1 to 100, what distribution do you get? Surely many have run this experiment. Here's a link that looks like a good example: https://sanand0.github.io/llmrandom/
[−] RIMR 41d ago
That is interesting data. Just from looking at those graphs, it looks like AIs consistently avoid the number 69, likely because of safeguards meant to keep them from being offensive. Otherwise their training would probably tell them that it was a really nice number.
[−] ks2048 41d ago
I wonder what the human results would be. If a friend asks you, maybe you say 69, but if it's a psych exam, people might avoid it.
[−] Ferret7446 41d ago
I imagine you'd get a similar distribution as when asking humans to come up with a random number on the spot
[−] weare138 41d ago
If anyone is that desperate for a secure random password, here's a Perl one-liner I came up with that generates cryptographically secure random passwords with all unique characters using /dev/urandom. No dependencies:

  perl -E 'while (open($_,"/dev/urandom") && $#p+1 != $ARGV[0]) { $_ = getc $_, push @p, grep !$u{$_}++, /[!-~]/g } say @p' 24

Minified:

  perl -E 'while(open($_,"/dev/urandom")&&$#p+1!=$ARGV[0]){$_=getc$_;push@p,grep!$u{$_}++,/[!-~]/g}say@p' 42
[−] gmuslera 41d ago
This invites a dictionary attack: not one of common words, but of tokens from the training data that carry some weight as being part of a "good password".

At least compared to "normal" text generation, if you instead tell the LLM to generate a Python script that produces a random password, and then use that script, the result may be of better quality.
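For instance, a minimal sketch of the kind of script the LLM could be told to emit, using Python's standard secrets module (the 16-character default length and the charset here are just illustrative assumptions):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Pick each character independently with a CSPRNG (backed by os.urandom)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The key difference from direct LLM generation: every character comes from the OS entropy source rather than from next-token probabilities, so repeated runs don't cluster around a few "likely" strings.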

[−] petcat 41d ago

> LLM-generated passwords (generated directly by the LLM, rather than by an agent using a tool)

This seems like kind of a pointless analysis to me? Humans also generate bad passwords. It's why we use crypto-hardened RNG tools.

[−] jmull 41d ago
It’s pointless if you believe no one is asking LLMs to generate passwords for them.
[−] Pooge 41d ago
Humans will always smash a screw with the handle of a spoon and be proud of themselves when they manage to do it.
[−] RIMR 41d ago
I mean, people are still rotating passwords because they refuse to remember anything. I only know this because I'm in a customer-facing position, and these customers rarely care about revealing their passwords when they need help...
[−] himata4113 41d ago
Huh, for me it just generates 123 when I ask it to generate a password, lol. Sometimes it adds a !; more often it just defaults to changeme rather than producing any real password.
[−] Mordisquitos 41d ago
I only clicked on the article with no intention of reading it (no time), but rather out of morbid curiosity as to why on earth anybody would need to be told that LLMs should absolutely not be used to generate passwords.

> [...] Despite this, LLM-generated passwords appear in the real world – used by real users, and invisibly chosen by coding agents as part of code development tasks, instead of relying on traditional secure password generation methods.

Jesus F'ing Christ. I hope to have time to read the whole thing later.

[−] sowbug 41d ago
The article is a bit of a strawman, and a bit of an advertisement for a security consultancy. If you ask someone else to pick a password for you, then it's a secret known by two people. So don't do that. That was true a thousand* years ago. It's still true today.

*I know, I know, hash functions didn't exist on Earth a thousand years ago. Still true.

[−] RIMR 41d ago
I urge you to actually read the article, because it doesn't say anything about the risks of the LLM knowing your password (e.g., stored in server-side logs); it talks about LLMs generating predictable passwords because they are deterministic pattern-following machines.

While the loss of secrecy between you and the LLM provider is a legitimate risk, the point of the article was that you should only use vetted RNGs to generate passwords, because LLMs will frequently generate identical secure-looking passwords when asked to do so repeatedly, meaning that all a bad actor has to do is collect the most frequent ones and go hunting.

The loss of secrecy between you and the LLM only poses a risk if the LLM logs are compromised, exposing your generated passwords. The harvesting of commonly-generated passwords from LLMs poses a much broader attack surface for anyone who uses this method, because any attacker with access to publicly available LLMs can start mining commonly generated passwords and using them today without having to compromise anything first.
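The harvesting step described above is trivial to sketch. Assuming an attacker has already collected a pile of passwords sampled repeatedly from public LLMs (mocked here as a hardcoded list, since the real values would come from API calls), ranking them by frequency is one line with Python's collections.Counter:

```python
from collections import Counter

# Mock data standing in for passwords sampled repeatedly from public LLMs;
# these strings are made up for illustration.
sampled = [
    "Tr0ub4dor&3x!", "S3cure#Pass99", "Tr0ub4dor&3x!",
    "S3cure#Pass99", "Tr0ub4dor&3x!", "X9$kLm2@pQr7",
]

# The most frequently generated strings become the attacker's dictionary,
# tried first in a guessing attack.
dictionary = [pw for pw, _ in Counter(sampled).most_common()]
print(dictionary)
```

No compromise of any logs is needed; the attacker only needs query access to the same public models the victims used.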

[−] sowbug 41d ago
You're right; I could have phrased the issue better, though I certainly did read the article. Let me try again: letting someone else pick a password for you requires you to trust that they did it well, and you get no benefit in exchange for that trust. That's true for other humans, websites, and now LLMs.
[−] CrzyLngPwd 41d ago
The article reads like it was written by a machine.
[−] camgunz 41d ago
Honest question: how much money would I make off an MCP service to generate passwords for claws and agents? Is there still gas left in the griftmobile? Are prospectors still in need of shovels? Will the gods bless my humble, shameless lunge for my slice of the pie?
[−] TheDong 41d ago
There is a marketplace for free skills (in this case, a markdown file saying "run openssl rand -hex 32").

I do not think there is any money for something that trivial.

Even the irrationally exuberant VCs wouldn't put money in that.

[−] RIMR 41d ago
No, but if those VCs let their AI agents purchase things on their behalf, you could maybe trick those agents into thinking your cloud service was the better option.
[−] throwatdem12311 41d ago
Not much, because if you gain any traction, somebody will make a clone within a day and release it free/open source.

This has been the default answer for all vibe-coded slop business ideas for a while now.

[−] Havoc 41d ago
Why would you LLM-generate a password?!?
[−] stanmancan 41d ago
[−] dfir-lab 41d ago
[flagged]
[−] Terr_ 44d ago
[dead]