LivePortrait: Portrait Animation with Motion Control
PyTorch implementation for animating static portraits by transferring facial expressions and head movements from driving videos.
LivePortrait is a deep learning framework for portrait animation that transfers motion from driving videos to static portrait images. The system uses neural networks with stitching and retargeting control mechanisms to maintain portrait identity while applying facial expressions and head movements. It supports both human and animal portraits, with separate models trained for different subject types. The framework includes features for regional control, pose editing, and video-to-video portrait editing workflows.
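The core idea of motion transfer can be sketched as extracting per-frame motion from the driving video and applying it to the source portrait's representation. The sketch below is a minimal, hypothetical illustration using keypoint displacements; the function names and the keypoint formulation are assumptions for exposition, not the repository's actual API.

```python
import numpy as np

def extract_motion(frame_keypoints, neutral_keypoints):
    # Hypothetical sketch: motion is the keypoint displacement
    # of a driving frame relative to a neutral reference pose.
    return frame_keypoints - neutral_keypoints

def animate(source_keypoints, driving_frames, driving_neutral):
    # Apply each driving frame's motion to the source portrait's
    # keypoints, producing one animated keypoint set per frame.
    animated = []
    for frame in driving_frames:
        delta = extract_motion(frame, driving_neutral)
        animated.append(source_keypoints + delta)
    return animated

# Toy example: 3 keypoints in 2D, 2 driving frames.
source = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
neutral = np.zeros_like(source)
frames = [neutral + 0.1, neutral + 0.2]
out = animate(source, frames, neutral)
```

In the real system the motion representation is learned by neural networks and the final frames are rendered by a generator, but the transfer structure (driving motion applied to a static source) is the same.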
Stitching Control
Implements neural stitching mechanisms to maintain portrait identity and visual consistency during animation. The system preserves facial features while applying motion transformations.
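One common way to realize stitching is to blend the animated face region back into the original portrait with a soft mask, so pixels outside the face (hair, background, shoulders) stay untouched. This is a simplified sketch of that blending idea, not the repository's learned stitching module; the `stitch` function and toy mask are assumptions.

```python
import numpy as np

def stitch(original, generated, mask):
    # Alpha-blend the generated (animated) region into the original
    # portrait. mask is 1.0 inside the animated region and 0.0 outside,
    # so identity-carrying pixels outside the face are preserved exactly.
    m = mask[..., None]  # broadcast (H, W) mask over color channels
    return m * generated + (1.0 - m) * original

# Toy example: a 4x4 RGB image with a 2x2 animated region in the center.
original = np.zeros((4, 4, 3))
generated = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
result = stitch(original, generated, mask)
```

In practice the mask edges are feathered and the stitching network also corrects shoulder and boundary misalignment, but the preservation principle is the same: motion is applied inside the region, identity outside it.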
Multi-Subject Support
Includes separate models for human and animal portrait animation. Both models handle facial expressions and head movements with subject-specific optimizations.
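Supporting separate subject types typically reduces to selecting the matching checkpoint at load time. The snippet below sketches that dispatch; the weight file names are hypothetical placeholders, not the repository's actual checkpoint names.

```python
# Hypothetical checkpoint paths; the real file names may differ.
MODEL_WEIGHTS = {
    "human": "weights/liveportrait_human.pth",
    "animal": "weights/liveportrait_animal.pth",
}

def select_weights(subject_type):
    # Pick the subject-specific checkpoint, failing loudly on
    # unsupported subject types rather than silently defaulting.
    if subject_type not in MODEL_WEIGHTS:
        raise ValueError(f"unsupported subject type: {subject_type}")
    return MODEL_WEIGHTS[subject_type]
```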
Regional Control
Provides granular control over different facial regions during animation. Users can selectively apply motion to specific areas like eyes, mouth, or head pose independently.
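Selective regional control can be thought of as gating the motion update per facial region: only keypoints belonging to enabled regions receive their driving displacement. The region-to-keypoint index mapping and function below are illustrative assumptions, not the framework's actual interface.

```python
import numpy as np

# Hypothetical keypoint index groups for facial regions.
REGIONS = {"eyes": [0, 1], "mouth": [2, 3], "pose": [4]}

def apply_regional_motion(source_kp, delta, enabled_regions):
    # Apply the motion delta only to keypoints in enabled regions;
    # all other keypoints keep their source positions.
    out = source_kp.copy()
    for name, indices in REGIONS.items():
        if name in enabled_regions:
            out[indices] += delta[indices]
    return out

# Toy example: animate only the mouth, leaving eyes and head pose fixed.
source_kp = np.zeros((5, 2))
delta = np.ones((5, 2))
moved = apply_regional_motion(source_kp, delta, {"mouth"})
```

This gating structure is what lets a user, for example, retarget lip motion from the driving video while keeping the source portrait's gaze and head pose unchanged.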
Related Repositories
PaddleOCR
Multilingual OCR toolkit with document structure extraction.
X Recommendation Algorithm
Open source implementation of X's recommendation algorithm for timeline and notification ranking.
DeepSpeed
PyTorch library for training billion-parameter models efficiently.
OpenAI Python
Type-safe Python client for OpenAI's REST API.
Video2X
ML-powered video upscaling, frame interpolation, and restoration with multiple backend support.