AnimateDiff

This repository is the official implementation of AnimateDiff.

AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning

Yuwei Guo, Ceyuan Yang*, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai

*Corresponding Author

arXiv Report | Project Page

Todo

  • Code Release
  • arXiv Report
  • GPU Memory Optimization
  • Gradio Interface

Common Issues

Installation: Please ensure that xformers is installed; it is used to reduce inference memory.

Various resolution or number of frames: We currently recommend generating animations with 16 frames at 512 resolution, which matches our training settings. Other resolutions or frame counts may affect quality to varying degrees.

Animating a given image: We agree that animating a given image is an appealing feature, and we plan to support it officially in the future. For now, you may enjoy the related efforts from talesofai.

Contributions from community: Contributions are always welcome! We will create a separate branch to which the community can contribute; we would like to keep the main branch aligned with the original technical report. :)

Setup for Inference

Prepare Environment

Our approach originally required around 60 GB of GPU memory for inference, for which an NVIDIA A100 was recommended.

We have since updated the inference code with xformers and a sequential decoding trick. AnimateDiff now needs only ~12 GB of VRAM for inference and runs on a single RTX 3090!
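
For reference, the two optimizations can be sketched on a diffusers-style pipeline (a minimal illustration under that assumption, not the repository's actual implementation):

import torch
from diffusers import StableDiffusionPipeline

# Stand-in pipeline (the base T2I checkpoint downloaded in the next step).
pipe = StableDiffusionPipeline.from_pretrained(
    "models/StableDiffusion/", torch_dtype=torch.float16
).to("cuda")

# 1. xformers memory-efficient attention (requires the xformers package).
pipe.enable_xformers_memory_efficient_attention()

# 2. Sequential decoding: decode video latents one frame at a time instead of
#    in a single batched VAE call, trading a little speed for peak VRAM.
@torch.no_grad()
def decode_latents_sequentially(vae, latents):
    # latents: [batch, channels, frames, height, width]
    frames = [vae.decode(latents[:, :, t] / 0.18215).sample
              for t in range(latents.shape[2])]
    return torch.stack(frames, dim=2)  # back to [B, C, F, H, W]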

git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff

conda env create -f environment.yaml
conda activate animatediff

Download Base T2I & Motion Module Checkpoints

We provide two versions of our Motion Module, trained on stable-diffusion-v1-4 and fine-tuned on v1-5 separately. We recommend trying both for best results.

git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/

bash download_bashscripts/0-MotionModule.sh

You may also directly download the motion module checkpoints from Google Drive, then put them in models/Motion_Module/ folder.
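
To verify that a download succeeded, you can load the checkpoint and inspect it (a minimal sketch; pick either of the two motion modules):

# Quick sanity check of a downloaded motion module checkpoint.
import torch

ckpt = torch.load("models/Motion_Module/mm_sd_v14.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # unwrap if the dict is nested
n_params = sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))
print(f"{len(state_dict)} entries, {n_params / 1e6:.1f}M parameters")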

Prepare Personalized T2I

Here we provide inference configs for the demo T2I models on CivitAI listed below. Run the following bash scripts to download their checkpoints.

bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh

Inference

After downloading the personalized T2I checkpoints above, run the following commands to generate animations. The results will be saved automatically to the samples/ folder.

python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/4-MajicMix.yaml
python -m scripts.animate --config configs/prompts/5-RealisticVision.yaml
python -m scripts.animate --config configs/prompts/6-Tusun.yaml
python -m scripts.animate --config configs/prompts/7-FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/8-GhibliBackground.yaml
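
If you want all eight demos, the commands above can also be driven by a small loop (a convenience sketch based on the configs/prompts/ layout shown in this README):

# Run every demo prompt config in sequence; equivalent to the commands above.
import subprocess
from pathlib import Path

for cfg in sorted(Path("configs/prompts").glob("*.yaml")):
    subprocess.run(["python", "-m", "scripts.animate", "--config", str(cfg)],
                   check=True)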

To generate animations with a new DreamBooth/LoRA model, you may create a new config .yaml file in the following format:

NewModel:
  path: "[path to your DreamBooth/LoRA model .safetensors file]"
  base: "[path to LoRA base model .safetensors file; leave as an empty string if not needed]"

  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"
    
  steps:          25
  guidance_scale: 7.5

  prompt:
    - "[positive prompt]"

  n_prompt:
    - "[negative prompt]"

Then run the following commands:

python -m scripts.animate --config [path to the config file]
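
Since the config is plain YAML, you can load and sanity-check a new file before animating (a minimal sketch; OmegaConf is assumed here, but any YAML loader would do, and the file name is hypothetical):

# Load a new prompt config and print a summary of each model entry.
from omegaconf import OmegaConf

config = OmegaConf.load("configs/prompts/my_new_model.yaml")  # hypothetical path
for name, model in config.items():
    print(f"{name}: {len(model.motion_module)} motion module(s), "
          f"{len(model.prompt)} prompt(s), steps={model.steps}, "
          f"guidance_scale={model.guidance_scale}")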

Gradio Demo

We have created a Gradio demo to make AnimateDiff easier to use. To launch the demo, please run the following commands:

conda activate animatediff
python app.py

By default, the demo will run at localhost:7860.
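
If port 7860 is already in use, Gradio's standard launch arguments let you change the host and port (a sketch of the general Gradio API, not necessarily how app.py is wired):

# Launch a toy Gradio interface on a custom host/port, shown for reference;
# app.py builds the real AnimateDiff UI.
import gradio as gr

demo = gr.Interface(fn=lambda prompt: prompt, inputs="text", outputs="text")
demo.launch(server_name="0.0.0.0", server_port=7861)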

Gallery

Here we demonstrate some of the best results from our experiments.

Model: ToonYou (animations: __assets__/animations/model_01/)

Model: Counterfeit V3.0 (animations: __assets__/animations/model_02/)

Model: Realistic Vision V2.0 (animations: __assets__/animations/model_03/)

Model: majicMIX Realistic (animations: __assets__/animations/model_04/)

Model: RCNZ Cartoon (animations: __assets__/animations/model_05/)

Model: FilmVelvia (animations: __assets__/animations/model_06/)

Community Cases

Here are some samples contributed by community artists. Create a pull request if you would like to show your results here. 😚

Character Model: Yoimiya (animations: __assets__/animations/model_07/, with an initial reference image init.jpg; see the WIP fork for the extended implementation)

Character Model: Paimon; Pose Model: Hold Sign (animations: __assets__/animations/model_08/)

BibTeX

@article{guo2023animatediff,
  title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
  author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Wang, Yaohui and Qiao, Yu and Lin, Dahua and Dai, Bo},
  journal={arXiv preprint arXiv:2307.04725},
  year={2023}
}

Contact Us

Yuwei Guo: guoyuwei@pjlab.org.cn
Ceyuan Yang: yangceyuan@pjlab.org.cn
Bo Dai: daibo@pjlab.org.cn

Acknowledgements

Codebase built upon Tune-a-Video.