
Stable Diffusion WebUI: a Gradio-based browser interface for Stable Diffusion, enabling AI image generation and editing without writing code.

OVERALL RANK: #66 (112) · AI & ML RANK: #33 (35)
30-DAY RANKING TREND: overall #66 · AI #33
STARS: 161.3K · FORKS: 30.1K
7D STARS: +780 · 7D FORKS: +118

Learn more about Stable Diffusion WebUI

Stable Diffusion WebUI is a comprehensive Gradio-based web interface for Stable Diffusion models, enabling text-to-image and image-to-image generation without coding. The platform supports multiple Stable Diffusion checkpoints, LoRA models, embeddings, and VAEs. Key features include img2img transformation, inpainting, outpainting, prompt weighting, highres fix, attention mechanisms, and batch processing. It integrates extensions for custom scripts, supports multiple samplers (Euler, DPM, DDIM), includes CLIP interrogator for reverse prompt engineering, and offers training capabilities for embeddings and hypernetworks. The architecture leverages PyTorch for model inference with optimizations for CUDA, DirectML, and CPU execution.

Stable Diffusion WebUI

1. Extensible Plugin Architecture

Built-in extension system allows community-developed plugins to add functionality like ControlNet, dynamic prompts, and custom preprocessors. Extensions integrate seamlessly into the UI, enabling users to customize their workflow without modifying core code. The marketplace includes hundreds of community extensions for specialized generation techniques, model management, and workflow automation.
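In practice, WebUI extensions live as git clones under the extensions/ folder of the install. The sketch below, a minimal illustration assuming that standard layout, maps an extension's repository URL to its destination folder and clones it there; the ControlNet URL in the comment is just an example, and the helper names are ours, not part of WebUI.

```python
import subprocess
from pathlib import Path


def extension_dir(repo_url: str, webui_root: str = ".") -> Path:
    """Map an extension's git URL to its folder under extensions/."""
    name = repo_url.rstrip("/").split("/")[-1].removesuffix(".git")
    return Path(webui_root) / "extensions" / name


def install_extension(repo_url: str, webui_root: str = ".") -> Path:
    """Clone a community extension into the WebUI's extensions/ folder."""
    dest = extension_dir(repo_url, webui_root)
    if not dest.exists():
        subprocess.run(["git", "clone", repo_url, str(dest)], check=True)
    return dest


# Example (hypothetical usage):
# install_extension("https://github.com/Mikubill/sd-webui-controlnet", "/opt/webui")
```

After a restart (or a reload from the Extensions tab), the WebUI picks up anything found under extensions/.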

2. Advanced Prompt Engineering

Supports sophisticated prompt syntax including attention/emphasis mechanisms with weighted tokens, prompt scheduling across generation steps, and alternating words for variation. Features CLIP interrogator to reverse-engineer prompts from existing images, prompt templates, and styles library for consistent aesthetic generation. Enables fine-grained control over composition through regional prompting and negative prompt capabilities.
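As a rough illustration of that syntax inside an API payload (the prompt text and weights are made up for the example): (token:1.3) boosts attention on a phrase, and [a:b:0.5] schedules a swap from one word to another halfway through the steps.

```python
# Illustrative txt2img payload exercising WebUI prompt syntax:
#   (golden light:1.3)  -- emphasis: weight this phrase 1.3x in attention
#   [oak:birch:0.5]     -- scheduling: "oak" for the first half of steps, then "birch"
payload = {
    "prompt": "a forest clearing, (golden light:1.3), [oak:birch:0.5] trees",
    "negative_prompt": "blurry, low quality, watermark",
    "steps": 20,
}

# This dict would be POSTed to the same endpoint as the example below,
# e.g. requests.post("http://localhost:7860/sdapi/v1/txt2img", json=payload)
```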

3. Comprehensive Model Management

Seamlessly handles multiple model formats including Safetensors, CKPT, and Diffusers. Built-in support for LoRA, LyCORIS, hypernetworks, textual inversion embeddings, and VAE models with hot-swapping capabilities. Includes model merging, pruning, and conversion tools. Automatic model downloading from CivitAI and HuggingFace with hash verification for reproducible results.
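Hot-swapping can also be driven per-request through the API's override_settings field; a minimal sketch (the checkpoint filename is a placeholder, not a file the reader necessarily has):

```python
# Swap the active checkpoint for a single request via override_settings.
payload = {
    "prompt": "portrait photo, studio lighting",
    "steps": 20,
    "override_settings": {
        # Placeholder filename -- list real ones via GET /sdapi/v1/sd-models
        "sd_model_checkpoint": "sd_xl_base_1.0.safetensors",
    },
    # Revert to the previously loaded checkpoint after this request completes.
    "override_settings_restore_afterwards": True,
}
```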


import base64
import requests

# Minimal txt2img request to a locally running WebUI started with the --api flag.
payload = {
    "prompt": "a serene mountain landscape at sunset",
    "steps": 20,
    "width": 512,
    "height": 512
}

response = requests.post('http://localhost:7860/sdapi/v1/txt2img', json=payload)
response.raise_for_status()

# Images are returned as base64-encoded PNG strings; decode and save the first one.
image_data = response.json()['images'][0]
with open('output.png', 'wb') as f:
    f.write(base64.b64decode(image_data))


v1.10.1

Patch release fixing image upscaling functionality when running on CPU.

  • Fix image upscale on CPU
v1.10.0

Major release adding Stable Diffusion 3 support with new schedulers and performance improvements.

  • Stable Diffusion 3 support with Euler sampler recommended
  • T5 text model is disabled by default, enable it in settings
  • Significant performance improvements
  • New schedulers and samplers added
v1.10.0-RC

Release candidate with Stable Diffusion 3 support and performance improvements.

  • Stable Diffusion 3 support with Euler sampler recommended
  • T5 text model is disabled by default, enable it in settings
  • Significant performance improvements
  • New schedulers added

See how people are using Stable Diffusion WebUI


