CodeFormer: Blind face restoration with transformer codebook lookup
Transformer-based face restoration using vector-quantized codebook lookup.
Codebook-based restoration
Uses vector quantized codebook lookup to map degraded facial regions to learned high-quality representations, rather than direct pixel-level regression. This approach constrains outputs to realistic face distributions learned during training.
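The core lookup can be illustrated with a toy nearest-neighbor quantization step: each degraded feature vector is replaced by its closest entry in a fixed codebook, so the output is always composed of learned high-quality codes. This is a minimal sketch in NumPy with made-up shapes (a 1024-entry, 256-dim codebook), not CodeFormer's actual implementation, which predicts code indices with a transformer rather than by raw distance.

```python
import numpy as np

# Illustrative codebook lookup (assumed sizes, not from the CodeFormer source):
# 1024 learned high-quality codes of dimension 256.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((1024, 256))
features = rng.standard_normal((8, 256))   # 8 degraded-region feature vectors

# Squared L2 distance from every feature to every code, via the expansion
# ||f - c||^2 = ||f||^2 - 2 f.c + ||c||^2
d = ((features ** 2).sum(axis=1, keepdims=True)
     - 2.0 * features @ codebook.T
     + (codebook ** 2).sum(axis=1))

indices = d.argmin(axis=1)        # index of the nearest code per feature
quantized = codebook[indices]     # features replaced by clean codebook entries
```

Because `quantized` rows are drawn only from `codebook`, the decoder downstream can only produce outputs inside the learned face distribution, which is the constraint the paragraph above describes.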
Blind restoration capability
Operates without explicit knowledge of degradation type or severity, handling mixed degradation scenarios including compression artifacts, noise, blur, and missing regions. The model generalizes across different degradation patterns without task-specific fine-tuning.
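To make "mixed degradation" concrete, here is a small hypothetical helper that stacks several corruptions (blur, additive noise, and dropped pixels) onto a clean image, the kind of unknown combination a blind restorer must undo. The function name and parameters are illustrative and not part of the CodeFormer codebase.

```python
import numpy as np

def degrade(img, blur=3, noise_sigma=0.05, dropout=0.1, seed=0):
    """Apply a mix of degradations to a float image in [0, 1].

    Illustrative only: real training pipelines randomize the type and
    severity of each corruption so the model never sees labels for them.
    """
    rng = np.random.default_rng(seed)
    pad = blur // 2
    padded = np.pad(img, pad, mode="edge")
    # Simple box blur: average each blur x blur neighborhood
    out = np.array([[padded[i:i + blur, j:j + blur].mean()
                     for j in range(img.shape[1])]
                    for i in range(img.shape[0])])
    out = out + rng.normal(0.0, noise_sigma, out.shape)  # sensor-style noise
    mask = rng.random(out.shape) < dropout               # missing regions
    out[mask] = 0.0
    return np.clip(out, 0.0, 1.0)

clean = np.full((16, 16), 0.5)
lq = degrade(clean)   # blurred, noisy, partially zeroed low-quality input
```

A blind model receives only `lq`; it must produce something close to `clean` without being told which of the three corruptions were applied or how strongly.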
Multi-task framework
Supports face restoration, inpainting, and colorization through a unified architecture with task-specific checkpoints. Includes video processing support for temporal consistency and integration with multiple face detection backends including dlib and RetinaFace.
from basicsr.archs.codeformer_arch import CodeFormer
import torch

net = CodeFormer().cuda()
net.eval()
with torch.no_grad():
    # input_face_tensor: a preprocessed, aligned face tensor on the GPU.
    # w is the fidelity weight: lower values favor quality, higher values
    # favor fidelity to the degraded input.
    restored_face = net(input_face_tensor, w=0.5)
result = restored_face[0].cpu().clamp(0, 1)

Related Repositories
Discover similar tools and frameworks used by developers
Video2X
ML-powered video upscaling, frame interpolation, and restoration with multiple backend support.
Codex CLI
OpenAI's command-line coding assistant that runs locally with ChatGPT integration for terminal use.
DALL-E
Official PyTorch package implementing the discrete VAE component for image tokenization used in OpenAI's DALL-E system.
OpenAI.fm
Web demo showcasing OpenAI's Speech API text-to-speech capabilities with an interactive Next.js interface.
Stanford Alpaca
Research project that fine-tunes LLaMA models to follow instructions using self-generated training data.