- If you find PersonaLive useful or interesting, please give us a Star🌟! Your support drives us to keep improving.
- Fix bugs (If you encounter any issues, please feel free to open an issue or contact me! 🙏)
- Release training code.
- [2026.02.21] 🥳 PersonaLive is accepted by CVPR 2026 🎉.
- [2025.12.29] 🔥 Enhance WebUI (Support reference image replacement).
- [2025.12.22] 🔥 Supported streaming strategy in offline inference to generate long videos on 12GB VRAM!
- [2025.12.17] 🔥 ComfyUI-PersonaLive is now supported! (Thanks to @okdalto)
- [2025.12.15] 🔥 Release paper!
- [2025.12.12] 🔥 Release inference code, config, and pretrained weights!
- This project is released for academic research only.
- Users must not use this repository to generate harmful, defamatory, or illegal content.
- The authors bear no responsibility for any misuse or legal consequences arising from the use of this tool.
- By using this code, you agree that you are solely responsible for any content generated.
We present PersonaLive, a real-time and streamable diffusion framework capable of generating infinite-length portrait animations.
```bash
# Clone this repo
git clone https://github.com/GVCLab/PersonaLive
cd PersonaLive

# Create conda environment
conda create -n personalive python=3.10
conda activate personalive

# Install packages with pip
pip install -r requirements_base.txt
```

Option 1: Download the pre-trained weights of the base models and other components (sd-image-variations-diffusers and sd-vae-ft-mse). You can run the following command to download the weights automatically:
```bash
python tools/download_weights.py
```

Option 2: Download the pre-trained weights into the `./pretrained_weights` folder from one of the URLs below:
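For reference, the two public base components are also available on the Hugging Face Hub. Below is a hedged sketch using `huggingface-cli` (the repo IDs are assumptions based on the component names and are not taken from this README; the PersonaLive-specific checkpoints still have to come from the official download links):

```bash
# Sketch only: the repo IDs below are assumed, not confirmed by this README.
huggingface-cli download lambdalabs/sd-image-variations-diffusers \
    --local-dir pretrained_weights/sd-image-variations-diffusers
huggingface-cli download stabilityai/sd-vae-ft-mse \
    --local-dir pretrained_weights/sd-vae-ft-mse
# The PersonaLive checkpoints (denoising_unet.pth, pose_guider.pth, ...) and the
# onnx/tensorrt artifacts still go under pretrained_weights/ as shown below.
```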
Finally, these weights should be organized as follows:
```text
pretrained_weights
├── onnx
│   ├── unet_opt
│   │   ├── unet_opt.onnx
│   │   └── unet_opt.onnx.data
│   └── unet
├── personalive
│   ├── denoising_unet.pth
│   ├── motion_encoder.pth
│   ├── motion_extractor.pth
│   ├── pose_guider.pth
│   ├── reference_unet.pth
│   └── temporal_module.pth
├── sd-vae-ft-mse
│   ├── diffusion_pytorch_model.bin
│   └── config.json
├── sd-image-variations-diffusers
│   ├── image_encoder
│   │   ├── pytorch_model.bin
│   │   └── config.json
│   ├── unet
│   │   ├── diffusion_pytorch_model.bin
│   │   └── config.json
│   └── model_index.json
└── tensorrt
    └── unet_work.engine
```
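Before moving on, an optional shell check (a minimal sketch based on the tree above, not part of the original instructions) can confirm the core checkpoints are in place:

```bash
# Report any missing PersonaLive checkpoints (file names taken from the tree above).
for f in denoising_unet.pth motion_encoder.pth motion_extractor.pth \
         pose_guider.pth reference_unet.pth temporal_module.pth; do
  [ -f "pretrained_weights/personalive/$f" ] || echo "missing: pretrained_weights/personalive/$f"
done
```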
Run offline inference with the default configuration:

```bash
python inference_offline.py
```

Useful arguments (an example invocation follows this list):

- `-L`: Max number of frames to generate. (Default: 100)
- `--use_xformers`: Enable xFormers memory-efficient attention. (Default: True)
- `--stream_gen`: Enable the streaming generation strategy. (Default: True)
- `--reference_image`: Path to a specific reference image. Overrides the setting in the config.
- `--driving_video`: Path to a specific driving video. Overrides the setting in the config.
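For instance, a possible invocation that combines these flags (the image and video paths below are hypothetical placeholders, not files shipped with the repo):

```bash
# Hypothetical example: replace the paths with your own reference image and driving video.
python inference_offline.py \
    -L 300 \
    --reference_image path/to/reference.png \
    --driving_video path/to/driving.mp4
```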
For example, to run without xFormers:

```bash
python inference_offline.py --use_xformers False
```

For the WebUI, install Node.js 18+ and run the start script:

```bash
# Install Node.js 18+
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
nvm install 18

# Start the WebUI
source web_start.sh
```

Converting the model to TensorRT can significantly speed up inference (~2x ⚡️). Building the engine may take about 20 minutes depending on your device. Note that TensorRT optimizations may lead to slight variations or a small drop in output quality.
```bash
# Install packages with pip
pip install -r requirements_trt.txt

# Convert the model to TensorRT
python torch2trt.py
```

💡 PyCUDA Installation Issues: If you encounter a "Failed to build wheel for pycuda" error during the installation above, please follow these steps:
```bash
# Install PyCUDA manually using Conda (avoids compilation issues):
conda install -c conda-forge pycuda "numpy<2.0"

# Open requirements_trt.txt and comment out or remove the line "pycuda==2024.1.2"

# Install the other packages with pip
pip install -r requirements_trt.txt

# Convert the model to TensorRT
python torch2trt.py
```

Note: the provided TensorRT engine was built on an H100. We recommend ALL users (including H100 users) re-run `python torch2trt.py` locally to ensure the best compatibility.
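As an optional sanity check before building the engine (not part of the original instructions), you can confirm that PyCUDA and TensorRT import cleanly in the current environment:

```bash
# Verify that PyCUDA and TensorRT are importable before running torch2trt.py.
python -c "import pycuda.autoinit, tensorrt; print('TensorRT', tensorrt.__version__)"
```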
```bash
python inference_online.py --acceleration none   # or: xformers / tensorrt
```

Use `--acceleration none` for RTX 50-Series; otherwise choose `xformers` or `tensorrt`.

Then open http://0.0.0.0:7860 in your browser. (If http://0.0.0.0:7860 does not work well, try http://localhost:7860.)
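If the server runs on a remote machine, one option (not covered in the original instructions; the user and host below are placeholders) is to forward the port over SSH and then open http://localhost:7860 locally:

```bash
# Forward the remote WebUI port 7860 to this machine (placeholder user/host).
ssh -L 7860:localhost:7860 user@remote-host
```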
How to use: Upload Image ➡️ Fuse Reference ➡️ Start Animation ➡️ Enjoy! 🎉
Regarding Latency: Latency varies depending on your device's computing power. You can try the following methods to optimize it:
- Lower the "Driving FPS" setting in the WebUI to reduce the computational workload.
- You can increase the multiplier (e.g., set it to `num_frames_needed * 4` or higher) to better match your device's inference speed (see line 73 at commit `6953d1a`).
Special thanks to the community for providing helpful setups! 🥂
- Windows + RTX 50-Series Guide: Thanks to @dknos for providing a detailed guide on running this project on Windows with Blackwell GPUs.
- TensorRT on Windows: If you are trying to convert TensorRT models on Windows, this discussion might be helpful. Special thanks to @MaraScott and @Jeremy8776 for their insights.
- ComfyUI: Thanks to @okdalto for helping implement the ComfyUI-PersonaLive support.
- Useful Scripts: Thanks to @suruoxi for implementing `download_weights.py`, and to @andchir for adding audio merging functionality.
Demo videos: demo_1.mp4, demo_2.mp4, demo_3.mp4, demo_4.mp4, demo_5.mp4, demo_6.mp4, demo_7.mp4, demo_8.mp4, demo_9.mp4, demo_0.mp4, same_id.mp4, cross_id_1.mp4, cross_id_2.mp4.
If you find PersonaLive useful for your research, please consider citing our work using the following BibTeX:
```bibtex
@article{li2025personalive,
  title={PersonaLive! Expressive Portrait Image Animation for Live Streaming},
  author={Li, Zhiyuan and Pun, Chi-Man and Fang, Chen and Wang, Jue and Cun, Xiaodong},
  journal={arXiv preprint arXiv:2512.11253},
  year={2025}
}
```

This code is mainly built upon Moore-AnimateAnyone, X-NeMo, StreamDiffusion, RAIN, and LivePortrait; we thank them for their invaluable contributions.



