OPEN_SOURCE
REDDIT // 8d ago · TUTORIAL
Claude Repeats Same Random Word
A Reddit user says Claude kept returning “Ephemeral” after repeated requests to generate a random word, even across new chats on desktop, while other users reported different outputs. The thread quickly turns into a reminder that LLM “randomness” is sampled behavior, not true randomness.
// ANALYSIS
This looks more like a sampling expectation mismatch than a bug: if the prompt, model snapshot, and hidden system context stay the same, Claude can keep landing on the same high-probability token.
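This behavior is easy to reproduce with a toy sampler. The sketch below is not Claude's actual decoder; it uses a hypothetical four-word vocabulary with made-up logits in which "ephemeral" is modestly favored, and shows how temperature-scaled softmax sampling makes that favorite dominate at low temperature while higher temperature restores variety:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token from a softmax over temperature-scaled logits."""
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    r = rng.random()
    acc = 0.0
    for tok, w in zip(logits, weights):
        acc += w / total
        if r < acc:
            return tok
    return tok  # guard against floating-point rounding

# Hypothetical logits: "ephemeral" gets a modest head start.
logits = {"ephemeral": 2.0, "serendipity": 1.2, "zephyr": 1.0, "quixotic": 0.8}
rng = random.Random(0)

low = [sample_with_temperature(logits, 0.2, rng) for _ in range(20)]
high = [sample_with_temperature(logits, 2.0, rng) for _ in range(20)]
print(low.count("ephemeral"))  # low temperature: nearly every draw is "ephemeral"
print(len(set(high)))          # high temperature: several distinct words appear
```

The point: a small logit gap becomes a near-certain winner once the distribution is sharpened, which is exactly what a user sees as "it keeps saying Ephemeral."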
- Anthropic’s docs say `temperature` controls randomness, but even `temperature: 0.0` is not fully deterministic
- Claude’s defaults lean toward creative output, yet a tiny prompt can still collapse onto a favorite token like “ephemeral”
- Starting a new chat does not necessarily mean a new seed, a new model version, or a meaningfully different context
- The fact that other users got different words suggests subtle differences in model state, settings, or account/session context
- If you want actual variety, force a choice set or ask for multiple candidates instead of a single “random” word
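The last bullet can be sketched in a few lines. The candidate list here stands in for a single model reply (e.g. "give me 10 unrelated random words") that you parse once; the actual randomization then happens client-side, where the entropy source is under your control:

```python
import random

# Assumption: these candidates were parsed from one model response
# asking for several unrelated words in a single reply.
candidates = ["ephemeral", "gossamer", "quasar", "marrow", "tungsten",
              "bramble", "isthmus", "velvet", "cipher", "monsoon"]

# Pick locally instead of re-prompting for "a random word" each time.
word = random.choice(candidates)
print(word)
```

This sidesteps the sampling question entirely: the model only has to produce variety once, and repeat calls draw from a genuine PRNG rather than from whatever token the model happens to favor.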
// TAGS
claude · anthropic · llm · chatbot · prompt-engineering
DISCOVERED
2026-04-04
PUBLISHED
2026-04-04
RELEVANCE
8/10
AUTHOR
Mathemodel