ComfyUI: Node-based diffusion model interface
Visual graph-based diffusion model workflow builder.
ComfyUI pairs a Python backend with a web-based frontend to provide a node-graph interface for building diffusion model workflows. It implements a modular architecture in which individual operations (model loading, sampling, image processing) are represented as nodes that connect to form execution graphs. The system supports multiple diffusion model families, including Stable Diffusion variants, SDXL, Flux, and specialized models for video and audio generation, with backend support for NVIDIA, AMD, Intel, and Apple Silicon GPUs. Workflows are constructed visually without requiring code, though the underlying system can also be driven programmatically through its HTTP API.
Node-Based Workflow Construction
Each operation is a discrete, connectable node; linked together, the nodes form an execution graph. Complex pipelines are built visually without code, making workflow logic explicit, modular, and reusable across projects.
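Under the hood, the graph is plain JSON in ComfyUI's API format: each top-level key is a node id, and any input written as a [node_id, output_index] pair is an edge. A minimal text-to-image sketch (node ids, prompt text, and the checkpoint filename are illustrative):

# Each key is a node id; list-valued inputs are edges to other nodes' outputs.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a watercolor fox", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "api_demo"}},
}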
Multi-Model Family Support
Supports Stable Diffusion 1.x, SDXL, Flux, and Hunyuan, plus specialized video, audio, and 3D models. Nodes for different model families can coexist in a single workflow, so one pipeline can, for example, generate an image with one family and refine or animate it with another.
Abstracted GPU Backends
Unified interface across NVIDIA CUDA, AMD ROCm, Intel Arc, Apple Metal, and Ascend NPUs. Platform-specific optimizations are handled internally, eliminating manual backend configuration.
import json
import requests

# Load a workflow exported from ComfyUI in API format
with open('workflow.json') as f:
    workflow = json.load(f)

# Queue it on a locally running ComfyUI server
response = requests.post(
    'http://127.0.0.1:8188/prompt',
    json={'prompt': workflow},
)
response.raise_for_status()

prompt_id = response.json()['prompt_id']
print(f"Queued: {prompt_id}")
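Once queued, results can be retrieved by polling the server's /history endpoint, which lists a prompt's outputs after execution finishes. A minimal sketch continuing from the snippet above (default server address assumed):

import time
import requests

# Poll the history endpoint until the prompt finishes executing
while True:
    history = requests.get(
        f'http://127.0.0.1:8188/history/{prompt_id}').json()
    if prompt_id in history:
        outputs = history[prompt_id]['outputs']
        print(f"Done: {len(outputs)} output node(s)")
        break
    time.sleep(1)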
print(f"Queued: {prompt_id}")Portable build now requires CUDA 13.0 and Python 3.13.9; update NVIDIA drivers if portable fails to start. Multiple fp8 torch.compile regressions fixed; pinned memory optimizations improve offload speed but are limited on Windows.
- Update NVIDIA drivers and ensure PyTorch is current to avoid portable startup failures and torch.compile performance issues.
- New RAM Pressure cache mode, ScaleROPE node for WAN/Lumina models, and a mixed-precision quantization system are now available.

Fixes --cache-none with loops, adds async network client v2 for API nodes, and bumps portable deps to torch cu130 + Python 3.13.9.
- Update portable environments to torch cu130 and Python 3.13.9; use the new run_without_api_nodes.bat if API nodes cause issues.
- Dependency-aware caching now works correctly with --cache-none in loops and lazy execution; async network client v2 supports cancellation.

Adds Python 3.14 support, fixes VAE memory bloat on PyTorch 2.9/CUDA, and reverts a caching change that broke workflows with loops.
- Apply the VAE memory workaround if using PyTorch 2.9+ with CUDA; affects torch.compile and cudnn 91200+.
- Note that the dependency-aware caching fix (#10368) was reverted due to issues; --cache-none with loops remains broken (launch example below).
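The --cache-none mentioned above is a server launch flag; a minimal sketch of starting ComfyUI with it, assuming a source checkout where main.py is the entry point:

python main.py --cache-none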
Related Repositories
Discover similar tools and frameworks used by developers
TTS
PyTorch toolkit for deep learning text-to-speech synthesis.
crawl4ai
Async browser automation extracting web content for LLMs.
PaddleOCR
Multilingual OCR toolkit with document structure extraction.
StabilityMatrix
Multi-backend inference UI manager with embedded dependencies.
DeepSpeed
PyTorch library for training billion-parameter models efficiently.