@@ -31,12 +31,10 @@ Our approach takes around 60 GB GPU memory to inference. NVIDIA A100 is recomman

```
git clone https://github.com/guoyww/animatediff.git
-cd animatediff
+cd AnimateDiff

-conda create -n animatediff python=3.8
+conda env create -f environment.yaml
conda activate animatediff
-
-pip install -r requirements.txt
```

### Download Base T2I & Motion Module Checkpoints
@@ -65,7 +63,7 @@ bash download_bashscripts/8-GhibliBackground.sh
```

### Inference
-After downloading the above peronalized T2I checkpoints, run the following commands to generate animations.
+After downloading the above personalized T2I checkpoints, run the following commands to generate animations. The results will automatically be saved to the `samples/` folder.
```
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/2-Lyriel.yaml
@@ -100,7 +98,16 @@ python -m scripts.animate --prompt configs/prompts/lora.yaml
``` -->

## Gallery
-Here we demonstrate several best results we got in previous experiments.
+Here we demonstrate some of the best results we found in our experiments, as well as animations generated by other artists.
+<table class="center">
+ <tr>
+ <td><img src="__assets__/animations/model_07/01.gif"></td>
+ <td><img src="__assets__/animations/model_07/02.gif"></td>
+ <td><img src="__assets__/animations/model_07/03.gif"></td>
+ <td><img src="__assets__/animations/model_07/04.gif"></td>
+ </tr>
+</table>
+<p style="margin-left: 2em; margin-top: -1em">Model: <a href="https://civitai.com/models/107295?modelVersionId=115371">holding_sign</a> (samples contributed by CivitAI artists)</p>

<table class="center">
<tr>