LivePortrait: Portrait Animation with Motion Control
A PyTorch implementation for animating static portraits by transferring expressions and head movements from driving videos.
LivePortrait is a deep learning framework for portrait animation that transfers motion from driving videos to static portrait images. The system uses neural networks with stitching and retargeting control mechanisms to maintain portrait identity while applying facial expressions and head movements. It supports both human and animal portraits, with separate models trained for different subject types. The framework includes features for regional control, pose editing, and video-to-video portrait editing workflows.
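At a high level, the transfer loop extracts motion from each driving frame and applies it to the source portrait's representation before rendering. The sketch below illustrates that idea with 2-D keypoint offsets and toy data; the function names and the keypoint-offset formulation are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch of the portrait-animation flow: motion (here, per-keypoint
# 2-D offsets relative to a neutral pose) is read off the driving frame and
# added to the source portrait's keypoints. All names are illustrative.

def extract_motion(neutral_kps, driving_kps):
    """Motion = offset of each driving keypoint from its neutral position."""
    return [(dx - nx, dy - ny) for (nx, ny), (dx, dy) in zip(neutral_kps, driving_kps)]

def apply_motion(source_kps, motion):
    """Shift the source portrait's keypoints by the driving motion."""
    return [(sx + mx, sy + my) for (sx, sy), (mx, my) in zip(source_kps, motion)]

# Toy data: two keypoints standing in for the mouth corners.
neutral = [(10.0, 20.0), (30.0, 20.0)]
driving = [(10.0, 22.0), (30.0, 22.0)]   # mouth corners moved down by 2 px
source  = [(12.0, 25.0), (28.0, 25.0)]

animated = apply_motion(source, extract_motion(neutral, driving))
print(animated)  # [(12.0, 27.0), (28.0, 27.0)]
```

In the real framework the "motion" is a learned implicit representation rather than raw keypoint offsets, but the transfer pattern is the same: motion comes from the driving video, appearance and identity come from the source image.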
Stitching Control
Implements neural stitching mechanisms to maintain portrait identity and visual consistency during animation. The system preserves facial features while applying motion transformations.
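The core intuition behind stitching can be shown with a soft-mask blend: the animated face region is composited back into the untouched source frame, so everything outside the mask keeps the original pixels exactly. This is a minimal sketch with 1-D "images" and assumed names, not the network's learned stitching module.

```python
# Minimal sketch of the stitching idea: blend the animated region into the
# original frame with a soft mask. mask=1 takes the animated pixel, mask=0
# keeps the original, fractional values feather the seam. Names are illustrative.

def stitch(original, animated, mask):
    """Per-pixel convex blend of original and animated pixels."""
    return [m * a + (1 - m) * o for o, a, m in zip(original, animated, mask)]

original = [10, 10, 10, 10, 10]
animated = [50, 50, 50, 50, 50]
mask     = [0.0, 0.5, 1.0, 0.5, 0.0]   # soft edge around the animated region

print(stitch(original, animated, mask))  # [10.0, 30.0, 50.0, 30.0, 10.0]
```

LivePortrait learns this correction rather than hand-coding a mask, which is what lets it avoid visible seams while leaving the surrounding identity untouched.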
Multi-Subject Support
Includes separate models for human and animal portrait animation. Both models handle facial expressions and head movements with subject-specific optimizations.
Regional Control
Provides granular control over different facial regions during animation. Users can selectively apply motion to specific areas like eyes, mouth, or head pose independently.
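Conceptually, regional control amounts to gating which regions' motion deltas get applied. The toy sketch below uses per-region scalars and boolean flags to show that gating; the region names and the `enabled` flag dictionary are assumptions for illustration, not the framework's real parameters.

```python
# Hypothetical sketch of regional control: each facial region has a motion
# delta, and a per-region flag decides whether it is applied. Scalars stand
# in for the real motion parameters; all names are illustrative.

def apply_regional_motion(source, deltas, enabled):
    """Add a region's motion delta only when that region is enabled."""
    return {region: source[region] + (deltas[region] if enabled.get(region) else 0.0)
            for region in source}

source  = {"eyes": 0.0, "mouth": 0.0, "pose": 0.0}
deltas  = {"eyes": 0.3, "mouth": 0.8, "pose": 5.0}
enabled = {"mouth": True}                # animate only the mouth

print(apply_regional_motion(source, deltas, enabled))
# {'eyes': 0.0, 'mouth': 0.8, 'pose': 0.0}
```

This is the pattern behind use cases like lip-syncing a portrait while freezing its gaze and head pose: disable the regions you want to preserve and let the rest follow the driving video.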