- AI-generated passwords follow patterns that hackers can study
- Statistical predictability hides beneath surface complexity
- Entropy gaps in AI-generated passwords expose structural weaknesses
Large language models (LLMs) can make passwords look complex, but recent tests suggest those strings are far from random.
A study from Irregular examined passwords produced by AI systems such as Claude, ChatGPT, and Gemini, asking each to generate 16-character passwords containing symbols, numbers, and upper- and lowercase letters.
At first glance the results seemed solid: they passed common online security tests, with some testers estimating that cracking them would take ages. A closer look at these passwords, however, told a different story.
LLM Passwords Show Repetitive and Guessable Statistical Patterns
When the researchers analyzed 50 passwords generated in separate sessions, many were duplicates and several followed almost identical structural patterns.
Most began and ended with similar character types and none contained repeated characters.
This lack of repetition may seem reassuring, but it actually signals that the output follows learned conventions rather than true randomness: a genuinely random 16-character string usually repeats at least one character.
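A minimal sketch of those checks in Python, assuming a 94-character printable set; the sample strings below are invented stand-ins, not passwords from the study:

```python
from collections import Counter

def has_repeated_char(pw: str) -> bool:
    """True if any character appears more than once in the password."""
    return len(set(pw)) < len(pw)

def prob_any_repeat(length: int = 16, charset: int = 94) -> float:
    """Birthday-problem probability that a uniformly random string
    of `length` characters over `charset` symbols repeats a character."""
    p_no_repeat = 1.0
    for i in range(length):
        p_no_repeat *= (charset - i) / charset
    return 1.0 - p_no_repeat

# Invented stand-ins for the 50 collected passwords.
samples = ["Kp7#mQ2$vX9@wL4!", "Jt5&nR8*bY3%cM6^", "Kp7#mQ2$vX9@wL4!"]

dupes = sum(n - 1 for n in Counter(samples).values() if n > 1)
repeats = sum(has_repeated_char(pw) for pw in samples)

print(f"duplicate passwords: {dupes}")
print(f"with a repeated character: {repeats}/{len(samples)}")
print(f"expected repeat rate if truly random: ~{prob_any_repeat():.0%}")
```

Under true randomness, roughly three-quarters of 16-character passwords should contain at least one repeated character, so seeing none across 50 samples is itself strong evidence of patterning.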
Using entropy calculations based on character statistics and model log probabilities, the researchers estimated that these AI-generated passwords contained approximately 20 to 27 bits of entropy.
A genuinely random 16-character password would typically measure between 98 and 120 bits using the same methods.
The gap is substantial and, in practical terms, could mean that such passwords are vulnerable to brute force attacks within hours, even on outdated hardware.
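A back-of-the-envelope comparison makes the gap concrete; the 94-symbol alphabet and the guess rate below are illustrative assumptions, not figures from the study:

```python
import math

GUESSES_PER_SEC = 1e10  # assumed offline rate for a single modern GPU

def avg_crack_seconds(bits: float) -> float:
    """Average brute-force time: half the keyspace at the assumed rate."""
    return (2 ** bits) / 2 / GUESSES_PER_SEC

uniform_bits = 16 * math.log2(94)  # ~104.9 bits for a truly random password

for bits in (20, 27, uniform_bits):
    secs = avg_crack_seconds(bits)
    print(f"{bits:6.1f} bits -> {secs:.3g} s (~{secs / 31_557_600:.3g} years)")
```

At that rate, the study's upper estimate of 27 bits falls in milliseconds, while a uniformly random password of the same length holds out for trillions of years; even dialing the guess rate down by several orders of magnitude for outdated hardware keeps the low-entropy passwords within easy reach.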
Online password strength meters assess surface complexity, not the statistical patterns hidden behind a string. Because they don't take into account how AI tools generate text, they can classify predictable results as secure.
Attackers who understand those patterns could refine their guessing strategies, dramatically narrowing the search space.
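For illustration, here is a minimal sketch of the surface-level scoring such meters typically rely on; the thresholds and labels are invented for this example:

```python
import string

def naive_strength(pw: str) -> str:
    # Score only on length and character-class coverage, as many
    # online meters do -- no statistical analysis of the string itself.
    classes = sum([
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ])
    score = classes + (len(pw) >= 12) + (len(pw) >= 16)
    labels = ["weak", "weak", "fair", "good", "strong", "strong", "very strong"]
    return labels[score]

# A password following the fixed structural template the study
# describes still scores at the top of the scale.
print(naive_strength("Kp7#mQ2$vX9@wL4!"))  # -> very strong
```

A password that follows the study's structural template sails through, because nothing in the check looks at how the string was produced.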
The study also found that similar sequences appear in documentation and public code repositories, suggesting that AI-generated passwords may already be widely circulating.
If developers rely on these results during testing or deployment, the risk compounds over time. In fact, even the AI systems that generate these passwords do not fully trust them and will issue warnings when pressed.
Gemini 3 Pro, for example, returned password hints along with a warning that chat-generated credentials should not be used for sensitive accounts.
Instead, it recommended passphrases and advised users to rely on a dedicated password manager.
A password generator built into such tools relies on cryptographic randomness rather than language prediction.
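A minimal sketch of that approach using Python's standard library, whose `secrets` module draws from the operating system's CSPRNG (the 94-symbol alphabet is one common choice):

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def generate_password(length: int = 16) -> str:
    """Each character is an independent draw from the OS CSPRNG,
    so entropy is length * log2(len(ALPHABET)) bits by construction."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
print(f"~{16 * math.log2(len(ALPHABET)):.0f} bits")  # ~105 bits for 16 characters
```

Unlike an LLM's sampling, nothing here depends on what "looks like" a password, so no amount of pattern analysis narrows the search space.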
In simple terms, LLMs are trained to produce plausible, repeatable text, not unpredictable sequences, so the broader concern is structural.
The design principles behind LLM text generation conflict with the requirements of strong authentication, meaning the protection such passwords offer comes with a built-in loophole.
“Individuals and encryption agents should not rely on LLMs to generate passwords,” Irregular said.
“Passwords generated via direct LLM output are fundamentally weak, and this cannot be solved by prompts or temperature adjustments: LLMs are optimized to produce predictable and plausible results, which is incompatible with secure password generation.”
Via The Register