
ComfyUI: Node-based diffusion model interface

Visual graph-based diffusion model workflow builder.

LIVE RANKINGS
OVERALL RANK: #71
AI & ML RANK: #36
[30-day ranking trend chart: overall #71, AI & ML #36]
STARS: 99.6K
FORKS: 11.3K
DOWNLOADS: 5
7D STARS: +91
7D FORKS: +37

Learn more about ComfyUI

ComfyUI is a Python-based backend and frontend application that provides a node-graph interface for building diffusion model workflows. It implements a modular architecture where individual operations (model loading, sampling, image processing) are represented as nodes that connect to form execution graphs. The system supports multiple diffusion model families including Stable Diffusion variants, SDXL, Flux, and specialized models for video and audio generation, with backend support for NVIDIA, AMD, Intel, and Apple Silicon GPUs. Workflows are constructed visually without requiring code, though the underlying system can be accessed programmatically via API.


1. Node-Based Workflow Construction

Each operation is a discrete, connectable node forming an execution graph. Complex pipelines are built visually without code, making workflow logic explicit, modular, and reusable across projects. (A minimal graph sketch in the API format follows this list.)

2. Multi-Model Family Support

Supports Stable Diffusion 1.x, SDXL, Flux, and Hunyuan, plus specialized video, audio, and 3D models. Different model families can be mixed within a single workflow without compatibility barriers.

3. Abstracted GPU Backends

Unified interface across NVIDIA CUDA, AMD ROCm, Intel Arc, Apple Metal, and Ascend NPUs. Platform-specific optimizations are handled internally, eliminating manual backend configuration; a simplified sketch of this kind of device selection also follows the list.
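
To make the graph structure concrete, here is a minimal text-to-image workflow sketched in ComfyUI's API (JSON) format: each node is keyed by an ID and declares a class_type plus inputs, where an input can reference another node's output as [node_id, output_index]. The checkpoint filename and prompt text are placeholders, and the node set mirrors the stock template; treat the exact fields as illustrative rather than authoritative.

# Minimal text-to-image graph in ComfyUI's API format (illustrative;
# verify node classes and input names against your install).
workflow = {
    "1": {  # load model weights; outputs: MODEL (0), CLIP (1), VAE (2)
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "model.safetensors"},  # placeholder filename
    },
    "2": {  # encode the positive prompt with the checkpoint's CLIP
        "class_type": "CLIPTextEncode",
        "inputs": {"clip": ["1", 1], "text": "a watercolor fox"},
    },
    "3": {  # encode an empty negative prompt
        "class_type": "CLIPTextEncode",
        "inputs": {"clip": ["1", 1], "text": ""},
    },
    "4": {  # blank latent canvas to denoise into
        "class_type": "EmptyLatentImage",
        "inputs": {"width": 1024, "height": 1024, "batch_size": 1},
    },
    "5": {  # diffusion sampling step
        "class_type": "KSampler",
        "inputs": {
            "model": ["1", 0],
            "positive": ["2", 0],
            "negative": ["3", 0],
            "latent_image": ["4", 0],
            "seed": 42, "steps": 20, "cfg": 7.0,
            "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0,
        },
    },
    "6": {  # decode latents back to pixels with the checkpoint's VAE
        "class_type": "VAEDecode",
        "inputs": {"samples": ["5", 0], "vae": ["1", 2]},
    },
    "7": {  # write the result to ComfyUI's output directory
        "class_type": "SaveImage",
        "inputs": {"images": ["6", 0], "filename_prefix": "api_demo"},
    },
}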
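
The backend abstraction ultimately comes down to device selection. The following is a simplified sketch of the kind of check ComfyUI's model management performs internally, not its actual code:

import torch

# Simplified sketch of accelerator selection (not ComfyUI's actual code):
# pick the best available backend and fall back to CPU.
def pick_device() -> torch.device:
    if torch.cuda.is_available():          # NVIDIA CUDA (AMD ROCm builds also report here)
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon (Metal)
        return torch.device("mps")
    if hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel Arc
        return torch.device("xpu")
    return torch.device("cpu")

device = pick_device()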


Queueing a workflow via the HTTP API:

import requests
import json

# Load a workflow exported from ComfyUI in API format
# ("Save (API Format)" in the UI).
with open('workflow.json') as f:
    workflow = json.load(f)

# Queue the workflow on a locally running ComfyUI server
# (default address is 127.0.0.1:8188).
response = requests.post(
    'http://127.0.0.1:8188/prompt',
    json={'prompt': workflow}
)
response.raise_for_status()

prompt_id = response.json()['prompt_id']
print(f"Queued: {prompt_id}")
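
The response only confirms queuing; results are fetched separately. A minimal polling sketch, continuing from the snippet above and assuming the default local server (the /history/<prompt_id> endpoint returns an empty object until execution finishes; verify the exact response shape against your ComfyUI version):

import time

# Poll the execution history until our prompt_id appears,
# which indicates the workflow has finished running.
while True:
    history = requests.get(
        f'http://127.0.0.1:8188/history/{prompt_id}'
    ).json()
    if prompt_id in history:
        break
    time.sleep(1)

# Output nodes (e.g. SaveImage) list their results here,
# including saved filenames.
print(history[prompt_id]['outputs'])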

v0.3.68

The portable build now requires CUDA 13.0 and Python 3.13.9; update NVIDIA drivers if the portable build fails to start. Multiple fp8 torch.compile regressions are fixed; pinned-memory optimizations improve offload speed but are limited on Windows.

  • Update NVIDIA drivers and ensure PyTorch is current to avoid portable startup failures and torch.compile performance issues.
  • New RAM Pressure cache mode, ScaleROPE node for WAN/Lumina models, and mixed precision quantization system are now available.
v0.3.67

Fixes --cache-none with loops, adds async network client v2 for API nodes, and bumps portable deps to torch cu130 + Python 3.13.9.

  • Update portable environments to torch cu130 and Python 3.13.9; use new run_without_api_nodes.bat if API nodes cause issues.
  • Dependency-aware caching now works correctly with --cache-none in loops and lazy execution; async network client v2 supports cancellation.
v0.3.66

Adds Python 3.14 support, fixes VAE memory bloat on PyTorch 2.9/CUDA, and reverts a caching change that broke workflows with loops.

  • Apply VAE memory workaround if using PyTorch 2.9+ with CUDA; affects torch.compile and cudnn 91200+.
  • Note that dependency-aware caching fix (#10368) was reverted due to issues; --cache-none with loops remains broken.


