
Open WebUI: Self-hosted AI chat interface

Extensible multi-LLM chat platform with RAG pipeline.

LIVE RANKINGS • 11:32 AM • STEADY
Overall rank: #36 • AI & ML rank: #21
Stars: 125.1K (+973 over the last 7 days)
Forks: 17.7K (+175 over the last 7 days)
[30-day ranking trend chart: overall #36, AI & ML #21]

Learn more about Open WebUI

Open WebUI is a self-hosted web application that provides a chat interface for multiple LLM backends and services. It operates as a containerized application deployable via Docker or Kubernetes, supporting both CPU and GPU execution through tagged image variants. The platform includes a built-in RAG engine for document processing, integrates with external APIs for web search, and offers administrative controls for user management and permissions. Common deployment scenarios include local Ollama instances, cloud-hosted API services, and enterprise environments with identity provider integration.
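Because a deployed instance exposes an OpenAI-compatible HTTP API, chat requests can be scripted against it. A minimal sketch, assuming the instance listens on localhost:3000 and accepts bearer-token authentication at /api/chat/completions (both the path and the auth scheme are assumptions to verify against your version):

```python
import json
import urllib.request

# Hypothetical values for illustration; adjust host, port, and key
# to match your own deployment.
BASE_URL = "http://localhost:3000"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request against the assumed OpenAI-compatible chat endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/api/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send the request:
# response = urllib.request.urlopen(build_chat_request("sk-...", "llama3", "Hello"))
```

The send step is left commented out since it requires a running instance and a valid API key.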

1. Unified Multi-Backend Interface

Connects to Ollama, OpenAI, and OpenAI-compatible endpoints through configurable API URLs. Switch between local and remote LLM sources without changing the chat interface or redeploying.
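As a sketch of how this configuration works, backend endpoints are supplied through environment variables and resolved at startup, so switching between local and remote sources is a config change rather than a code change. The variable names below (OLLAMA_BASE_URL, OPENAI_API_BASE_URL) follow Open WebUI's documented configuration but should be treated as assumptions:

```python
def resolve_backends(env: dict) -> dict:
    """Map backend names to base URLs found in the environment.

    Illustrative only: the real application reads these variables itself;
    this just shows the endpoints-as-configuration idea.
    """
    backends = {}
    if env.get("OLLAMA_BASE_URL"):
        backends["ollama"] = env["OLLAMA_BASE_URL"]
    if env.get("OPENAI_API_BASE_URL"):
        backends["openai"] = env["OPENAI_API_BASE_URL"]
    return backends
```

For example, setting only OLLAMA_BASE_URL yields a local-only deployment, while adding OPENAI_API_BASE_URL layers in a remote provider without touching the chat interface.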

2. Built-in RAG Pipeline

Processes local documents and web search results directly within the application, injecting retrieved context into a chat through in-chat commands (for example, referencing an uploaded document with the "#" syntax) without requiring an external vector database or retrieval API.
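The retrieve-then-inject flow behind a RAG pipeline can be sketched in a few lines. The toy keyword-overlap scorer below stands in for Open WebUI's actual embedding-based retrieval and is purely illustrative:

```python
def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    """Rank document chunks by word overlap with the question (toy scorer)."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(chunks: list[str], question: str) -> str:
    """Inject the top-ranked chunks into the prompt as context."""
    context = "\n".join(retrieve(chunks, question))
    return f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {question}"
```

A production pipeline swaps the overlap scorer for embedding similarity and a document store, but the shape of the prompt assembly is the same.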

3. SCIM 2.0 Provisioning

Automates user and group synchronization with enterprise identity providers like Okta and Azure AD. Provides role-based access control for multi-tenant deployments without manual account management.
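During provisioning, the identity provider POSTs SCIM 2.0 user resources to the application's SCIM endpoint. A sketch of such a payload, using the standard SCIM core schema field names (the exact attributes Open WebUI consumes are an assumption to check against its docs):

```python
def scim_user(user_name: str, given: str, family: str,
              email: str, active: bool = True) -> dict:
    """Build a SCIM 2.0 User resource as an identity provider would send it.

    Field names follow the SCIM core schema (RFC 7643); which of them the
    receiving application honors is deployment-specific.
    """
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": active,
    }
```

Deprovisioning typically works the same way in reverse: the provider PATCHes the resource with "active": false rather than deleting the account.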


Example: defining a custom tool in Python:

from open_webui.tools import Tool

class WeatherTool(Tool):
    def __init__(self):
        self.name = "weather"
        self.description = "Get current weather for a location"

    async def run(self, location: str) -> dict:
        # Placeholder data; a real implementation would query a weather API
        return {"location": location, "temp": 72, "condition": "sunny"}

v0.7.2

Fixes workspace prompts editor, improves local Whisper STT support, and optimizes evaluations page performance.

  • Users can now create and save prompts in the workspace prompts editor without encountering errors
  • Users can now use local Whisper for speech-to-text when STT_ENGINE is left empty (the default for local mode)
  • The Evaluations page now loads faster by eliminating duplicate API calls to the leaderboard and feedbacks endpoints
  • Fixed missing Settings tab i18n label keys
v0.7.1

Improved reliability for low-spec and SQLite deployments by disabling database session sharing by default.

  • Improved reliability for low-spec and SQLite deployments. Fixed page timeouts by disabling database session sharing by default
  • Users can re-enable via 'DATABASE_ENABLE_SESSION_SHARING=true' if needed
v0.7.0

Introduces native function calling with built-in tools for multi-step tasks combining web research, knowledge bases, and memory.

  • Users with models that support interleaved thinking now get more refined results from multi-step workflows
  • When models invoke web search, search results appear as clickable citations in real-time for full source verification
  • Users can selectively disable specific built-in tools (timestamps, memory, chat history, notes, web search, knowledge bases) per model
  • Pending tool calls are now displayed during response generation, so users know which tools are being invoked
  • Administrators can now limit the number of files that can be uploaded to folders using the FOLDER_MAX_FILE_COUNT setting


