@@ -61,13 +61,15 @@ Resources:
To generate a video from a prompt, run the following Python code:
```python
import torch
+ import imageio
from diffusers import TextToVideoZeroPipeline
model_id = "runwayml/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt).images
+ result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)
```
You can change these parameters in the pipeline call:
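The parameter list this sentence introduces falls outside the hunk. As a hedged sketch (the names below are assumed from `TextToVideoZeroPipeline.__call__` and should be checked against the installed diffusers version), a call overriding a few of them might look like:

```python
# Minimal sketch, assuming video_length, motion_field_strength_x/y, t0, and t1
# are accepted by TextToVideoZeroPipeline.__call__ (verify for your version).
result = pipe(
    prompt=prompt,
    video_length=8,              # number of frames to generate
    motion_field_strength_x=12,  # strength of per-frame latent motion along x
    motion_field_strength_y=12,  # strength of per-frame latent motion along y
    t0=44,                       # start of the timestep interval used when
    t1=47,                       # propagating motion across frames
).images
```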
@@ -95,6 +97,7 @@ To generate a video from prompt with additional pose control
2. Read video containing extracted pose images
```python
+ from PIL import Image
import imageio
reader = imageio.get_reader(video_path, "ffmpeg")
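# Hedged sketch of the rest of this step (the hunk cuts off here): collect the
# pose frames as PIL images, which is what the newly added `from PIL import
# Image` is for. `frame_count` is an assumed name, not taken from the diff.
frame_count = 8
pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]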
@@ -151,6 +154,7 @@ To perform text-guided video editing (with [InstructPix2Pix](./stable_diffusion/
2. Read video from path
```python
+ from PIL import Image
import imageio
reader = imageio.get_reader(video_path, "ffmpeg")
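# Hedged sketch of the editing step that follows this hunk. The prompt is
# illustrative, and the cross-frame attention import path is an assumption
# (it varies across diffusers versions); verify both before relying on this.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import (
    CrossFrameAttnProcessor,
)
video = [Image.fromarray(reader.get_data(i)) for i in range(8)]
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
# Cross-frame attention keeps the edit consistent across frames; batch_size=3
# is assumed to cover InstructPix2Pix's classifier-free guidance batches.
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3))
edit_prompt = "make it Van Gogh Starry Night style"  # illustrative prompt
result = pipe(prompt=[edit_prompt] * len(video), image=video).images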
@@ -174,14 +178,14 @@ To perform text-guided video editing (with [InstructPix2Pix](./stable_diffusion/
```
- ### Dreambooth specialization
+ ### DreamBooth specialization
Methods **Text-To-Video**, **Text-To-Video with Pose Control** and **Text-To-Video with Edge Control**
can run with custom [DreamBooth](../training/dreambooth) models, as shown below for
the [Canny edge ControlNet model](https://huggingface.co/lllyasviel/sd-controlnet-canny) and
the [Avatar style DreamBooth](https://huggingface.co/PAIR/text2video-zero-controlnet-canny-avatar) model.
- 1. Download demo video from huggingface
+ 1. Download a demo video
```python
from huggingface_hub import hf_hub_download
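# Hedged sketch of the download that follows this hunk. The repo id and
# filename below are illustrative placeholders, not taken from the diff.
video_path = hf_hub_download(
    repo_type="space",
    repo_id="PAIR/Text2Video-Zero",
    filename="__assets__/canny_videos_mp4/girl_turning.mp4",
)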
@@ -193,6 +197,7 @@ can run with custom [DreamBooth](../training/dreambooth) models, as shown below
2. Read video from path
```python
+ from PIL import Image
import imageio
reader = imageio.get_reader(video_path, "ffmpeg")
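# Hedged sketch of the step after this hunk: load the Canny ControlNet and the
# Avatar style DreamBooth checkpoint named earlier in this section (both model
# ids come from the doc; the pipeline wiring itself is an assumption).
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "PAIR/text2video-zero-controlnet-canny-avatar",  # DreamBooth model from above
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")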