
Real-ESRGAN: Image and video restoration tool

PyTorch framework for blind super-resolution using GANs.

Live rankings (06:52 AM, steady): Overall #93 · AI & ML #43
Stars: 33.8K (+36 in 7 days) · Forks: 4.2K (+1 in 7 days)

Learn more about Real-ESRGAN

Real-ESRGAN is a PyTorch-based image restoration framework designed to handle multiple degradation types in a unified approach. The implementation uses neural network models trained exclusively on synthetic data to generalize across real-world image quality issues. It provides multiple model variants optimized for different use cases, including general image restoration and specialized anime content processing. The framework supports inference through multiple backends including PyTorch, NCNN with Vulkan acceleration, and cloud deployment options.


1. Synthetic Training Data

Models train exclusively on synthetically degraded images without requiring paired real-world datasets. Eliminates costly data collection while generalizing effectively to real image restoration scenarios.
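As a rough illustration of the idea (not Real-ESRGAN's actual degradation model, which is a randomized second-order pipeline that also simulates varied blur kernels and JPEG compression), a low-quality training input can be synthesized from a clean image like this:

```python
import numpy as np

def degrade(hr: np.ndarray, scale: int = 4, rng=None) -> np.ndarray:
    """Simplified sketch of synthetic degradation: blur -> downsample -> noise.
    Real-ESRGAN's real pipeline is a randomized second-order process."""
    rng = rng or np.random.default_rng(0)
    img = hr.astype(np.float64)
    # Crude 3x3 box blur per channel (np.roll wraps at edges; fine for a sketch)
    blurred = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blurred += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    img = blurred / 9.0
    # Downsample by striding
    img = img[::scale, ::scale]
    # Additive Gaussian noise
    img += rng.normal(0, 5, img.shape)
    return np.clip(img, 0, 255).astype(np.uint8)

hr = np.random.default_rng(1).integers(0, 256, (128, 128, 3), dtype=np.uint8)
lr = degrade(hr)  # 32x32 degraded counterpart, paired with hr for training
```

The network then learns the inverse mapping from `lr` back to `hr`, so no real-world paired data is ever collected.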

2. Unified Degradation Handling

Single model addresses super-resolution, denoising, and JPEG artifact removal simultaneously. Replaces multiple specialized networks with one inference pass for combined restoration tasks.

3. Multi-Backend Deployment

Supports multiple inference backends including PyTorch, NCNN, and Vulkan for broad hardware compatibility. Run on NVIDIA GPUs with CUDA, AMD GPUs with Vulkan, or CPU-only environments, enabling deployment across diverse infrastructure setups.
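A deployment script might pick a backend along these lines. The selection order and the probing helpers below are our own sketch, not part of the library; only the `realesrgan-ncnn-vulkan` executable name comes from the project's releases:

```python
import importlib.util
import shutil

def choose_backend(has_torch: bool, has_cuda: bool, has_ncnn_cli: bool) -> str:
    """Assumed preference order: CUDA PyTorch, then the ncnn-Vulkan CLI
    (covers AMD/Intel GPUs), then CPU-only PyTorch as the slow fallback."""
    if has_torch and has_cuda:
        return 'pytorch-cuda'
    if has_ncnn_cli:
        return 'ncnn-vulkan'
    if has_torch:
        return 'pytorch-cpu'
    return 'unsupported'

def detect_backend() -> str:
    """Probe the current environment (the probes are illustrative)."""
    has_torch = importlib.util.find_spec('torch') is not None
    has_cuda = False
    if has_torch:
        import torch
        has_cuda = torch.cuda.is_available()
    has_cli = shutil.which('realesrgan-ncnn-vulkan') is not None
    return choose_backend(has_torch, has_cuda, has_cli)
```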


import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# RRDB backbone matching the pretrained x4plus weights
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32)
upsampler = RealESRGANer(scale=4, model_path='RealESRGAN_x4plus.pth', model=model)

# Read a BGR image, upscale 4x, and save the result
img = cv2.imread('input.jpg', cv2.IMREAD_COLOR)
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite('output.jpg', output)


v0.3.0

Adds two tiny general-purpose upscaling models with optional denoise control and auto-download; fixes colorspace bug and adds multi-GPU support.

  • Use new realesr-general-x4v3 or realesr-general-wdn-x4v3 models for general scenes; set -dn flag (0-1) to control denoise strength.
  • Leverage multi-GPU and multi-processing for video inference; models now auto-download if missing, and ffmpeg streaming is supported.
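Per the repo's inference script, the `-dn` control works by interpolating between the standard and strong-denoise ("wdn") checkpoints rather than by retraining. A minimal numpy sketch of that weight-blending idea, with toy state dicts standing in for real checkpoints (the dict contents are hypothetical):

```python
import numpy as np

def interpolate_weights(net_a: dict, net_b: dict, alpha: float) -> dict:
    """Blend two same-architecture state dicts: alpha weights net_a,
    (1 - alpha) weights net_b. This mirrors the interpolation trick behind
    -dn, where the two nets are the general and 'wdn' model variants."""
    return {k: alpha * net_a[k] + (1 - alpha) * net_b[k] for k in net_a}

# Hypothetical single-layer checkpoints
a = {'conv.weight': np.array([1.0, 2.0])}
b = {'conv.weight': np.array([3.0, 6.0])}
mid = interpolate_weights(a, b, alpha=0.5)  # an even mix of the two variants
```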
v0.2.5.0

Ships RealESRGAN AnimeVideo-v3 model with improved naturalness, fewer artifacts, and faster inference; updates ncnn executables for all platforms.

  • Download realesr-animevideov3.pth for better texture/background restoration and more faithful color reproduction in anime video upscaling.
  • Use updated ncnn Vulkan executables (Windows/Linux/MacOS) for Intel/AMD/Nvidia GPU acceleration with the new model.
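For scripted batch jobs, the portable executable can be driven from Python. The flags below (`-i`/`-o`/`-n`/`-s`) follow the repo README; the paths are examples, and the command should be run on a machine with a Vulkan-capable GPU:

```python
def ncnn_command(inp: str, out: str, model: str = 'realesr-animevideov3',
                 scale: int = 4) -> list:
    """Build an argv list for the standalone ncnn-Vulkan executable."""
    return ['realesrgan-ncnn-vulkan', '-i', inp, '-o', out,
            '-n', model, '-s', str(scale)]

cmd = ncnn_command('frames_in', 'frames_out')
# hand cmd to subprocess.run(cmd, check=True) to perform the upscale
```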
v0.2.4.0

Release notes do not specify breaking changes, new requirements, or functional updates; only a logo addition is mentioned.

  • No code changes, migrations, or dependency updates are documented in this release.
  • A new logo was added to the repository README; no action required for existing deployments.

