This repository is the official implementation of [AnimateDiff]().
[AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning]()
Yuwei Guo, Ceyuan Yang*, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai*
(*Corresponding Author)
[arXiv Report]() | [Project Page]()
Our approach requires around 60 GB of GPU memory for inference; an NVIDIA A100 is recommended.
```bash
git clone https://github.com/guoyww/animatediff.git
cd animatediff

conda create -n animatediff python=3.8
conda activate animatediff
pip install -r requirements.txt
```
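Before downloading the (large) checkpoints, it may help to confirm that PyTorch can see your GPU. This is just a quick sanity check, not part of the official setup:

```bash
# sanity check: PyTorch is installed and CUDA is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```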
We provide two versions of our Motion Module, trained on Stable Diffusion v1.4 and finetuned on v1.5 respectively. We recommend trying both for the best results.
```bash
# download the Stable Diffusion v1.5 base weights (requires git-lfs)
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/

# download the motion module checkpoints
bash download_bashscripts/0-MotionModule.sh
```
You may also download the motion module checkpoints directly from Google Drive and put them in the models/Motion_Module/ folder.
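Once the downloads finish, the weights should end up under models/. The checks below are a rough sketch of the expected layout; the exact checkpoint filenames depend on what the download scripts fetch:

```bash
ls models/StableDiffusion/   # Stable Diffusion v1.5 weights cloned via git-lfs
ls models/Motion_Module/     # motion module checkpoints, e.g. mm_sd_v14.ckpt, mm_sd_v15.ckpt
```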
Here we provide inference configs for the demo personalized T2I models below, hosted on CivitAI. Run the following bash scripts to download their checkpoints.
```bash
bash download_bashscripts/1-ToonYou.sh
bash download_bashscripts/2-Lyriel.sh
bash download_bashscripts/3-RcnzCartoon.sh
bash download_bashscripts/4-MajicMix.sh
bash download_bashscripts/5-RealisticVision.sh
bash download_bashscripts/6-Tusun.sh
bash download_bashscripts/7-FilmVelvia.sh
bash download_bashscripts/8-GhibliBackground.sh
```
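Each script fetches one personalized T2I checkpoint (plus any accompanying LoRA weights). As an assumption based on the provided configs, the checkpoints are expected to land in models/DreamBooth_LoRA/; if a script fails, you can download the checkpoint manually from CivitAI and place it there yourself:

```bash
# the download scripts are expected to place the personalized checkpoints here
ls models/DreamBooth_LoRA/
```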
After downloading the above personalized T2I checkpoints, run the following commands to generate animations.
```bash
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
python -m scripts.animate --config configs/prompts/4-MajicMix.yaml
python -m scripts.animate --config configs/prompts/5-RealisticVision.yaml
python -m scripts.animate --config configs/prompts/6-Tusun.yaml
python -m scripts.animate --config configs/prompts/7-FilmVelvia.yaml
python -m scripts.animate --config configs/prompts/8-GhibliBackground.yaml
```
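The --config argument points to a YAML file under configs/prompts/. The snippet below is a minimal sketch of how a custom prompt config might be written: the field names (path, motion_module, steps, guidance_scale, prompt, n_prompt) are copied from the demo configs and should be verified against e.g. configs/prompts/1-ToonYou.yaml, and the checkpoint paths and prompts are placeholders only.

```bash
# a minimal sketch of a custom prompt config, modeled on the demo configs in
# configs/prompts/ -- verify the field names against e.g. 1-ToonYou.yaml first
cat > configs/prompts/my-demo.yaml <<'EOF'
MyDemo:
  base: ""                                                    # optional; left empty in the demo configs
  path: "models/DreamBooth_LoRA/toonyou_beta3.safetensors"    # personalized T2I checkpoint (placeholder)
  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"
  steps: 25
  guidance_scale: 7.5
  prompt:
    - "masterpiece, best quality, 1girl, walking in a garden, sunlight"
  n_prompt:
    - "worst quality, low quality, deformed"
EOF

python -m scripts.animate --config configs/prompts/my-demo.yaml
```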
Here we showcase several of the best results obtained in our early experiments.
<table>
    <tr>
    <td><img src="__assets__/animations/model_01/01.gif"></td>
    <td><img src="__assets__/animations/model_01/02.gif"></td>
    <td><img src="__assets__/animations/model_01/03.gif"></td>
    <td><img src="__assets__/animations/model_01/04.gif"></td>
    </tr>
</table>

Model: ToonYou
<table>
    <tr>
    <td><img src="__assets__/animations/model_02/01.gif"></td>
    <td><img src="__assets__/animations/model_02/02.gif"></td>
    <td><img src="__assets__/animations/model_02/03.gif"></td>
    <td><img src="__assets__/animations/model_02/04.gif"></td>
    </tr>
</table>

Model: Counterfeit V3.0
<table>
    <tr>
    <td><img src="__assets__/animations/model_03/01.gif"></td>
    <td><img src="__assets__/animations/model_03/02.gif"></td>
    <td><img src="__assets__/animations/model_03/03.gif"></td>
    <td><img src="__assets__/animations/model_03/04.gif"></td>
    </tr>
</table>

Model: Realistic Vision V2.0
<table>
    <tr>
    <td><img src="__assets__/animations/model_04/01.gif"></td>
    <td><img src="__assets__/animations/model_04/02.gif"></td>
    <td><img src="__assets__/animations/model_04/03.gif"></td>
    <td><img src="__assets__/animations/model_04/04.gif"></td>
    </tr>
</table>

Model: majicMIX Realistic
<table>
    <tr>
    <td><img src="__assets__/animations/model_05/01.gif"></td>
    <td><img src="__assets__/animations/model_05/02.gif"></td>
    <td><img src="__assets__/animations/model_05/03.gif"></td>
    <td><img src="__assets__/animations/model_05/04.gif"></td>
    </tr>
</table>

Model: RCNZ Cartoon
<table>
    <tr>
    <td><img src="__assets__/animations/model_06/01.gif"></td>
    <td><img src="__assets__/animations/model_06/02.gif"></td>
    <td><img src="__assets__/animations/model_06/03.gif"></td>
    <td><img src="__assets__/animations/model_06/04.gif"></td>
    </tr>
</table>

Model: FilmVelvia
Coming soon.
Codebase built upon Tune-a-Video.