Large Language Models (LLMs) cannot flip a coin. If you ask an AI to "choose at random," it picks the most probable next token based on its training, not chance. It might give you "Heads" because it saw that word often after "choose," or it might just repeat the last result you saw. It cannot surprise itself.
> [!NOTE] Get the Tool
> You can install this tool directly from the Open WebUI registry: [Get the Randomness Tool](https://openwebui.com/t/whogben/randomness)
I built a **Randomness Tool** for Open WebUI to fix this. It offloads chance to Python code, allowing the AI to serve as an unpredictable Dungeon Master or a creative collaborator that breaks its own patterns.
## Why AI Game Masters Cheat
I play tabletop RPGs with friends and wanted to try letting an AI run the game. But I knew that without the right tools, an AI Game Master would cheat.
If you ask an AI to "roll a d20," it doesn't roll anything. It hallucinates a number that fits the context. If the narrative suggests success, you "roll" a 19. If the AI thinks failure is dramatic, you "roll" a 3. The game feels hollow because the risk is fake. To fix this, I needed an impartial arbiter—something that could force the AI to react to an outcome it didn't choose.
## How It Works
I built a tool that gives the LLM access to Python's `random` module. The AI assesses the situation and determines the odds, but the code decides the result.
The tool exposes two key functions:
1. `weighted_random(options)`: The AI provides a list of outcomes and their probabilities (e.g., `{"Success": 0.6, "Fail": 0.4}`). The tool returns one key based on those weights.
2. `roll_dice(dice_ct, sides_ct)`: The AI supplies the number of dice and the number of sides, mirroring standard dice notation (2d6 becomes `roll_dice(2, 6)`). The tool returns the total and the individual rolls.
This reverses the usual flow. Instead of the AI controlling the story, the tool injects a hard fact—"You failed"—and the AI must write the consequences.
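The published tool lives on the Open WebUI registry, but the core logic is small. Here is a minimal standalone sketch of the two functions, not the exact registry code; the names and the return format simply follow the examples in this post:

```python
import random

def weighted_random(options: dict[str, float]) -> str:
    """Return one key from `options`, chosen according to its weight."""
    keys = list(options)
    weights = list(options.values())
    # random.choices normalizes the weights, so they don't need to sum to 1.0
    return random.choices(keys, weights=weights, k=1)[0]

def roll_dice(dice_ct: int, sides_ct: int) -> str:
    """Roll `dice_ct` dice with `sides_ct` sides; report the total and each roll."""
    rolls = [random.randint(1, sides_ct) for _ in range(dice_ct)]
    return f"{dice_ct}d{sides_ct} result: {sum(rolls)} ({', '.join(map(str, rolls))})"
```

Calling `weighted_random({"Make it": 0.6, "Fall": 0.3, "Cling": 0.1})` might return `"Fall"`, and `roll_dice(1, 6)` returns a string like `1d6 result: 5 (5)`, which is the format you'll see in the transcripts below.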
## Seeing It in Action
Here is how it looks in play.
**1. The Skill Check**
I attempt a risky jump. The AI decides the odds but lets the tool decide my fate.
**User**: "I try to leap across the chasm."
**AI Thought**: *This is hard. 60% he makes it, 30% he falls, 10% he clings to the edge.*
**Tool Call**: `weighted_random({"Make it": 0.6, "Fall": 0.3, "Cling": 0.1})`
**Tool Output**: `Fall`
**AI Response**: "You sprint to the edge and push off, but your foot slips. You tumble into the dark."
**2. The Sanity Check**
Delta Green (a Lovecraftian horror RPG) requires fair consequences for seeing the impossible.
**User**: "I examine the ritual geometry on the wall."
**AI Response**: "The angles hurt your eyes. It shouldn't be possible. Roll for Sanity Loss (1d6)."
**Tool Call**: `roll_dice(1, 6)`
**Tool Output**: `1d6 result: 5 (5)`
**AI Response**: "You lose 5 Sanity. The pattern burns into your mind, and you realize the room is folding in on itself."
**3. Probabilistic Personality**
You can also use this to give an agent personality quirks without making them repetitive.
**System Instruction**: "Speak in a haiku in 10% of your responses."
**Tool Call**: `weighted_random({"Haiku": 0.1, "Normal": 0.9})`
**Tool Output**: `Haiku`
**AI Response**: "File saved to the disk. / Data safe and secure now. / Want anything else?"
Use it to build games, break writer's block, or just make your AI agents a little less predictable.