## I Built the Missing Open WebUI Client in One Day
Open WebUI is the best interface for accessing LLMs. It’s slick, powerful, and rapidly evolving. But as I started building AI agents to automate my workflows, I hit a wall: there was no complete Python library to control it.
I found a couple of unofficial projects, but they covered less than 10% of the API. I didn't just want to send a chat message; I wanted full control. I wanted to manage users, upload files, configure system settings, and run complex administrative tasks programmatically—without ever opening a browser.
The solution was to build a custom client. To do it efficiently, I leveraged an AI-assisted workflow.
In 13 hours, I released `owui-client`: a fully typed, async Python client covering every single endpoint in the Open WebUI API. Because the client is fully asynchronous, it can handle high-concurrency workflows—like managing 50 different agents simultaneously or doing bulk user onboarding—without blocking.
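To make that concrete, here's a minimal sketch of a concurrent workload. The constructor arguments are illustrative assumptions for this post; `get_models()` matches the example shown later.

```python
import asyncio

from owui_client import OwuiClient  # illustrative import; see the project README

async def main():
    # Constructor arguments are assumptions for this sketch; check the docs.
    client = OwuiClient(base_url="http://localhost:3000", token="sk-...")

    # Every method is a coroutine, so 50 calls run concurrently
    # instead of blocking one another.
    results = await asyncio.gather(*(client.models.get_models() for _ in range(50)))
    print(f"Completed {len(results)} concurrent requests")

asyncio.run(main())
```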

## The Vibe Coding Experiment
This project served as a test case for "Vibe Coding"—using an advanced LLM (Gemini 3 via Cursor) to do the heavy lifting while I focused on architecture and direction.
Project statistics:
* **Time**: 13 hours from empty folder to PyPI release.
* **Output**: 18,000 lines of code generated (10,000 accepted into the final build).
* **Cost**: ~$45 (86 million tokens).
* **Coverage**: 100% of the API (Auth, Users, Chats, Images, Audio, System).
## Architecture is Everything
Generating 10,000 lines of reliable code requires more than just prompting; it requires preparation. If I had simply asked Gemini to "make an API client," I would have ended up with a messy, hallucinated disaster.
The first hour was spent writing a "Constitution" for the project (an `AGENTS.md` file). This file defined the strict rules the AI had to follow:
```markdown
# Architecture & Organization Rules
1. **Mirror Backend Structure**:
- The client structure must exactly match the Open WebUI backend source structure...
- **Models**: If a Pydantic model is defined in `backend/open_webui/models/auths.py`, its client counterpart must reside in `owui_client/models/auths.py`.
2. **Strict Naming**:
- Maintain the same file names and roughly the same class/variable names as the backend to ensure easy navigation and updates.
```
Crucially, the workflow included a script to download the *actual* source code of the latest Open WebUI release. Instead of asking the AI to guess what the API looked like, I fed it the backend router files and said, "Translate this FastAPI endpoint into a Python client method."
This turned the process into an assembly line. The workflow involved highlighting a backend router (e.g., `auths.py`), typing "Implement the client for this," and letting Gemini generate the code. Two minutes later, I'd review the code, run the tests, and move to the next module.
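Here's roughly what one of those translations produces. This is a simplified sketch, not the actual generated code: the backend route is paraphrased, and the model fields are illustrative.

```python
import httpx
from pydantic import BaseModel

# Backend sketch (paraphrased from a FastAPI router like routers/auths.py):
#
#   @router.get("/session")
#   async def get_session_user(user=Depends(get_current_user)) -> SessionUserResponse:
#       ...

class SessionUserResponse(BaseModel):
    # Illustrative fields; the real model mirrors backend/open_webui/models/auths.py.
    id: str
    email: str
    role: str

class AuthsClient:
    """Client counterpart of the backend's auths router (sketch)."""

    def __init__(self, http: httpx.AsyncClient):
        self._http = http

    async def get_session_user(self) -> SessionUserResponse:
        # Same path and method name as the backend endpoint, per the AGENTS.md rules.
        resp = await self._http.get("/api/v1/auths/session")
        resp.raise_for_status()
        return SessionUserResponse.model_validate(resp.json())
```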

## The "Self-Healing" Client & Rigorous Testing
Generating code is easy; validating it is hard. To ensure reliability, I didn't rely on simple mocks. I built a test suite that spins up a live Docker container with the latest Open WebUI, alongside a real LDAP server and a mock OpenAI provider.
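A minimal sketch of such a fixture, assuming Docker is driven via `subprocess` and using the published `ghcr.io/open-webui/open-webui:main` image (the real suite also wires in the LDAP and mock OpenAI containers):

```python
import subprocess
import time

import httpx
import pytest

@pytest.fixture(scope="session")
def openwebui_url():
    # Launch a throwaway Open WebUI container; 8080 is the port inside the image.
    container_id = subprocess.check_output(
        ["docker", "run", "-d", "--rm", "-p", "3000:8080",
         "ghcr.io/open-webui/open-webui:main"],
        text=True,
    ).strip()
    try:
        # Poll the health endpoint until the app is ready to serve tests.
        deadline = time.time() + 120
        while True:
            try:
                if httpx.get("http://localhost:3000/health").status_code == 200:
                    break
            except httpx.TransportError:
                pass
            if time.time() > deadline:
                raise RuntimeError("Open WebUI never became healthy")
            time.sleep(2)
        yield "http://localhost:3000"
    finally:
        subprocess.run(["docker", "stop", container_id], check=False)
```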
A key validation of the system occurred halfway through the day. As I was building the `users` module, Open WebUI released a new version.
Suddenly, my integration tests started failing because the API signature for user creation had changed.
In a traditional workflow, this would require significant debugging time. In this workflow, I ran the "drift check" script—a static analysis tool I wrote to compare my client models against the live Open WebUI source code. It flagged exactly where the mismatch occurred.
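The core idea can be sketched in a few lines: statically parse both files and diff the annotated field names of the matching Pydantic models. The paths and class name below are hypothetical stand-ins, not the real script.

```python
import ast
from pathlib import Path

def annotated_fields(source: Path, class_name: str) -> set[str]:
    """Statically collect a class's annotated field names (no imports needed)."""
    for node in ast.walk(ast.parse(source.read_text())):
        if isinstance(node, ast.ClassDef) and node.name == class_name:
            return {
                stmt.target.id
                for stmt in node.body
                if isinstance(stmt, ast.AnnAssign) and isinstance(stmt.target, ast.Name)
            }
    return set()

# Hypothetical paths: the downloaded backend source vs. the mirrored client module.
backend = annotated_fields(Path("openwebui_src/models/users.py"), "UserModel")
client = annotated_fields(Path("owui_client/models/users.py"), "UserModel")

if backend != client:
    print(f"DRIFT in UserModel: missing={backend - client}, extra={client - backend}")
```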
The error was fed back into the chat with the instruction, "Fix the drift." Gemini analyzed the new backend code, updated my client library to match the new version, and fixed the tests. It took 30 seconds. This demonstrated that the system could maintain itself against Open WebUI's rapid release cycle.
## What `owui-client` Actually Does
The library feels native because it mirrors the backend structure 1:1.
If you want to get a user's session, the Open WebUI backend path is `auths/session`. So, in my client, you write:
```python
user = await client.auths.get_session_user()
```
If you want to list models, the backend is `models/`. The client is:
```python
models = await client.models.get_models()
```
It supports everything (a quick tour follows the list):
* **Auth & Users**: Signups, logins, permissions, groups.
* **Inference**: OpenAI-compatible chat completions, Ollama config, image generation.
* **Content**: Chat history management, file uploads, prompt management.
* **System**: Global configurations, model imports, tool management.
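The 1:1 mirroring rule means you can usually predict a method's location from the backend router that implements it. In the tour below, everything except `get_models` is a hypothetical name that merely follows that convention:

```python
import asyncio

from owui_client import OwuiClient  # constructor arguments are illustrative

async def tour():
    client = OwuiClient(base_url="http://localhost:3000", token="sk-admin-...")

    models = await client.models.get_models()  # shown earlier in this post

    # Hypothetical calls, each mirroring a backend router of the same name:
    # await client.users.get_users()        # routers/users.py
    # await client.files.upload_file(...)   # routers/files.py
    # await client.configs.get_config()     # routers/configs.py
    print(f"{len(models)} models available")

asyncio.run(tour())
```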
## Why This Matters
This library unlocks "Headless Agents." You can now treat Open WebUI not just as a chat interface for humans, but as a backend engine for autonomous workers. Your Python programs can easily create their own knowledge bases, onboard new users, or reconfigure the server—all programmatically.
You can install it right now:
```bash
pip install owui-client
```
Check out the code on GitHub: [https://github.com/whogben/owui_client](https://github.com/whogben/owui_client)