Real-ESRGAN: Image and video restoration tool
PyTorch framework for blind super-resolution using GANs.
Learn more about Real-ESRGAN
Real-ESRGAN is a PyTorch-based image restoration framework designed to handle multiple degradation types in a unified approach. The implementation uses neural network models trained exclusively on synthetic data to generalize across real-world image quality issues. It provides multiple model variants optimized for different use cases, including general image restoration and specialized anime content processing. The framework supports inference through multiple backends including PyTorch, NCNN with Vulkan acceleration, and cloud deployment options.
Synthetic Training Data
Models train exclusively on synthetically degraded images without requiring paired real-world datasets. Eliminates costly data collection while generalizing effectively to real image restoration scenarios.
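The idea can be illustrated with a toy degradation pipeline. This is a minimal sketch in plain NumPy, not the framework's actual implementation: the real Real-ESRGAN pipeline applies randomized blur kernels, resizing, several noise models, and JPEG compression, in two rounds ("second-order" degradation). The function name and parameters here are illustrative only.

```python
import numpy as np

def degrade(img, scale=2, noise_sigma=5.0, quant_step=8, seed=0):
    """Toy synthetic degradation: blur -> downsample -> noise -> quantize.
    (Stand-in for the randomized blur/resize/noise/JPEG stages used in
    the actual Real-ESRGAN training pipeline.)"""
    rng = np.random.default_rng(seed)
    x = img.astype(np.float64)
    # 3x3 box blur via padded neighborhood averaging
    p = np.pad(x, 1, mode="edge")
    x = sum(p[i:i + x.shape[0], j:j + x.shape[1]]
            for i in range(3) for j in range(3)) / 9.0
    # downsample by striding
    x = x[::scale, ::scale]
    # additive Gaussian noise, roughly modeling sensor noise
    x = x + rng.normal(0.0, noise_sigma, x.shape)
    # coarse value quantization as a crude proxy for JPEG artifacts
    x = np.round(x / quant_step) * quant_step
    return np.clip(x, 0, 255).astype(np.uint8)

# A synthetic 64x64 grayscale "high-resolution" image
hr = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
lr = degrade(hr)  # degraded 32x32 low-resolution counterpart
```

Training pairs generated this way give the network a (degraded, clean) supervision signal without any real-world paired data.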
Unified Degradation Handling
Single model addresses super-resolution, denoising, and JPEG artifact removal simultaneously. Replaces multiple specialized networks with one inference pass for combined restoration tasks.
Multi-Backend Deployment
Supports multiple inference backends including PyTorch, NCNN, and Vulkan for broad hardware compatibility. Run on NVIDIA GPUs with CUDA, AMD GPUs with Vulkan, or CPU-only environments, enabling deployment across diverse infrastructure setups.
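For the NCNN/Vulkan backend, the portable executable from the project's release page can be run directly without a Python environment; the file names below are placeholders:

```shell
# Upscale a single image on any Vulkan-capable GPU (Intel/AMD/NVIDIA).
# -i input path, -o output path, -n model name
./realesrgan-ncnn-vulkan -i input.jpg -o output.png -n realesrgan-x4plus
```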
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer
import cv2

# Build the RRDB backbone matching the x4plus pretrained weights.
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32)
upsampler = RealESRGANer(scale=4, model_path='RealESRGAN_x4plus.pth', model=model)

img = cv2.imread('input.jpg', cv2.IMREAD_COLOR)
output, _ = upsampler.enhance(img, outscale=4)  # returns (image, image-mode flag)
cv2.imwrite('output.jpg', output)

Adds two tiny general-purpose upscaling models with optional denoise control and auto-download; fixes a colorspace bug and adds multi-GPU support.
- Use the new realesr-general-x4v3 or realesr-general-wdn-x4v3 models for general scenes; set the -dn flag (0-1) to control denoise strength.
- Leverage multi-GPU and multi-process video inference; models now auto-download if missing, and ffmpeg streaming is supported.
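Putting these together, a typical invocation of the repository's inference script looks like the following; the input file name is a placeholder:

```shell
# General-scene model with denoise strength 0.5;
# weights auto-download if not present locally.
python inference_realesrgan.py -n realesr-general-x4v3 -i input.jpg -dn 0.5
```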
Ships RealESRGAN AnimeVideo-v3 model with improved naturalness, fewer artifacts, and faster inference; updates ncnn executables for all platforms.
- Download realesr-animevideov3.pth for better texture/background restoration and more faithful color reproduction in anime video upscaling.
- Use the updated ncnn Vulkan executables (Windows/Linux/macOS) for Intel/AMD/NVIDIA GPU acceleration with the new model.
Release notes do not specify breaking changes, new requirements, or functional updates; only a logo addition is mentioned.
- No code changes, migrations, or dependency updates are documented in this release.
- A new logo was added to the repository README; no action required for existing deployments.
Related Repositories
Discover similar tools and frameworks used by developers
ByteTrack
Multi-object tracker associating low-confidence detections across frames.
nanoGPT
Minimal PyTorch implementation for training GPT models.
open_clip
PyTorch library for contrastive language-image pretraining.
onnxruntime
Cross-platform engine for optimized ONNX model execution.
llama_index
Connect LLMs to external data via RAG workflows.