[MS Text To Video] Add first text to video #2738

Merged · 48 commits · Mar 22, 2023
Commits (48)
d6912ac
[MS Text To Video} Add first text to video
patrickvonplaten Mar 19, 2023
bf1c935
upload
patrickvonplaten Mar 19, 2023
8a29fe6
make first model example
patrickvonplaten Mar 20, 2023
5973584
match unet3d params
patrickvonplaten Mar 20, 2023
d91862d
make sure weights are correcctly converted
patrickvonplaten Mar 20, 2023
aeab5ad
improve
patrickvonplaten Mar 21, 2023
d9dd98c
forward pass works, but diff result
patrickvonplaten Mar 21, 2023
40c80e2
make forward work
patrickvonplaten Mar 21, 2023
c4f0aeb
fix more
patrickvonplaten Mar 21, 2023
faa4e6d
finish
patrickvonplaten Mar 21, 2023
e9d4340
Merge branch 'main' of https://github.com/huggingface/diffusers into …
patrickvonplaten Mar 21, 2023
e27769b
refactor video output class.
sayakpaul Mar 22, 2023
d5e544f
feat: add support for a video export utility.
sayakpaul Mar 22, 2023
5945729
fix: opencv availability check.
sayakpaul Mar 22, 2023
5251c3a
run make fix-copies.
sayakpaul Mar 22, 2023
cf8ac80
add: docs for the model components.
sayakpaul Mar 22, 2023
7a80764
add: standalone pipeline doc.
sayakpaul Mar 22, 2023
f5b3fe4
edit docstring of the pipeline.
sayakpaul Mar 22, 2023
fb916ba
add: right path to TransformerTempModel
sayakpaul Mar 22, 2023
880cfce
add: first set of tests.
sayakpaul Mar 22, 2023
6f0f5e3
complete fast tests for text to video.
sayakpaul Mar 22, 2023
d58cb7f
fix bug
patrickvonplaten Mar 22, 2023
60503b1
Merge branch 'text_to_video' of https://github.com/huggingface/diffus…
patrickvonplaten Mar 22, 2023
387181c
up
patrickvonplaten Mar 22, 2023
0a9c495
three fast tests failing.
sayakpaul Mar 22, 2023
50e8950
add: note on slow tests
sayakpaul Mar 22, 2023
4799670
make work with all schedulers
patrickvonplaten Mar 22, 2023
b131d48
apply styling.
sayakpaul Mar 22, 2023
f3c13ab
Merge branch 'text_to_video' of https://github.com/huggingface/diffus…
patrickvonplaten Mar 22, 2023
bd50840
add slow tests
patrickvonplaten Mar 22, 2023
4a5267a
change file name
patrickvonplaten Mar 22, 2023
7b3c48d
update
patrickvonplaten Mar 22, 2023
48d05a4
more correction
patrickvonplaten Mar 22, 2023
fb060ab
more fixes
patrickvonplaten Mar 22, 2023
e47969d
finish
patrickvonplaten Mar 22, 2023
436babe
up
patrickvonplaten Mar 22, 2023
03275f5
Apply suggestions from code review
patrickvonplaten Mar 22, 2023
e700c08
up
patrickvonplaten Mar 22, 2023
49b6c3c
Merge branch 'text_to_video' of https://github.com/huggingface/diffu…
patrickvonplaten Mar 22, 2023
b7bebeb
finish
patrickvonplaten Mar 22, 2023
fc832f9
make copies
patrickvonplaten Mar 22, 2023
975b02d
fix pipeline tests
patrickvonplaten Mar 22, 2023
5b6be9b
fix more tests
patrickvonplaten Mar 22, 2023
9d7cd2d
Apply suggestions from code review
patrickvonplaten Mar 22, 2023
522f3ae
apply suggestions
patrickvonplaten Mar 22, 2023
9795ff9
apply suggestions
patrickvonplaten Mar 22, 2023
d4a11a3
up
patrickvonplaten Mar 22, 2023
04ac574
revert
patrickvonplaten Mar 22, 2023
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -192,6 +192,8 @@
title: Stable unCLIP
- local: api/pipelines/stochastic_karras_ve
title: Stochastic Karras VE
- local: api/pipelines/text_to_video
title: Text-to-Video
- local: api/pipelines/unclip
title: UnCLIP
- local: api/pipelines/latent_diffusion_uncond
12 changes: 12 additions & 0 deletions docs/source/en/api/models.mdx
@@ -37,6 +37,12 @@ The models are built on the base class [`ModelMixin`] that is a `torch.nn.Module`
## UNet2DConditionModel
[[autodoc]] UNet2DConditionModel

## UNet3DConditionOutput
[[autodoc]] models.unet_3d_condition.UNet3DConditionOutput

## UNet3DConditionModel
[[autodoc]] UNet3DConditionModel

## DecoderOutput
[[autodoc]] models.vae.DecoderOutput

@@ -58,6 +64,12 @@ The models are built on the base class [`ModelMixin`] that is a `torch.nn.Module`
## Transformer2DModelOutput
[[autodoc]] models.transformer_2d.Transformer2DModelOutput

## TransformerTemporalModel
[[autodoc]] models.transformer_temporal.TransformerTemporalModel

## TransformerTemporalModelOutput
[[autodoc]] models.transformer_temporal.TransformerTemporalModelOutput

## PriorTransformer
[[autodoc]] models.prior_transformer.PriorTransformer

1 change: 1 addition & 0 deletions docs/source/en/api/pipelines/overview.mdx
@@ -77,6 +77,7 @@ available a colab notebook to directly try them out.
| [stable_unclip](./stable_unclip) | **Stable unCLIP** | Text-to-Image Generation |
| [stable_unclip](./stable_unclip) | **Stable unCLIP** | Image-to-Image Text-Guided Generation |
| [stochastic_karras_ve](./stochastic_karras_ve) | [**Elucidating the Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
| [text_to_video_sd](./text_to_video) | [Modelscope's Text-to-video-synthesis Model in Open Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) | Text-to-Video Generation |
| [unclip](./unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) | Text-to-Image Generation |
| [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
| [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
122 changes: 122 additions & 0 deletions docs/source/en/api/pipelines/text_to_video.mdx
@@ -0,0 +1,122 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-to-video synthesis

Text-to-video synthesis from [ModelScope](https://modelscope.cn/) shares the overall structure of Stable Diffusion, but is extended to videos instead of static images. More specifically, this system generates videos from a natural language text prompt.

From the [model summary](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis):

*This model is based on a multi-stage text-to-video generation diffusion model, which inputs a description text and returns a video that matches the text description. Only English input is supported.*

Resources:

* [Website](https://modelscope.cn/models/damo/text-to-video-synthesis/summary)
* [GitHub repository](https://github.com/modelscope/modelscope/)
* [Spaces] (TODO)

## Available Pipelines:

| Pipeline | Tasks | Demo |
|---|---|:---:|
| [DiffusionPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py) | *Text-to-Video Generation* | [Spaces] (TODO) |

## Usage example

Let's start by generating a short video with the default length of 16 frames (2s at 8 fps):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe = pipe.to("cuda")

prompt = "Spiderman is surfing"
video_frames = pipe(prompt).frames
video_path = export_to_video(video_frames)
video_path
```
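
By default, `export_to_video` writes the frames to a temporary `.mp4` file and returns its path. A minimal sketch of saving to a chosen location instead — assuming the utility accepts an `output_video_path` argument, and reusing `video_frames` from the snippet above:

```python
# Sketch: write the video to an explicit path instead of a temporary file.
# Assumes `export_to_video` accepts `output_video_path`; check your diffusers version.
video_path = export_to_video(video_frames, output_video_path="spiderman_surfing.mp4")
print(video_path)  # "spiderman_surfing.mp4"
```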

Diffusers supports different optimization techniques to improve the latency
and memory footprint of a pipeline. Since videos are often more memory-intensive than images,
we can enable CPU offloading and VAE slicing to keep memory usage in check.

Let's generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.enable_model_cpu_offload()

# memory optimization
pipe.enable_vae_slicing()

prompt = "Darth Vader surfing a wave"
video_frames = pipe(prompt, num_frames=64).frames
video_path = export_to_video(video_frames)
video_path
```

It takes just **7 GB of GPU memory** to generate the 64 video frames using PyTorch 2.0, `fp16` precision, and the techniques mentioned above.
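
If you'd like to verify the memory footprint on your own hardware, PyTorch's built-in CUDA statistics give a quick check. A minimal sketch, reusing `pipe` and `prompt` from the snippet above:

```python
import torch

# Reset the peak counter, run a generation, then read the peak allocation.
torch.cuda.reset_peak_memory_stats()
video_frames = pipe(prompt, num_frames=64).frames
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```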

We can also easily swap in a different scheduler, using the same method we'd use for Stable Diffusion:

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

prompt = "Spiderman is surfing"
video_frames = pipe(prompt, num_inference_steps=25).frames
video_path = export_to_video(video_frames)
video_path
```
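
Not sure which schedulers can be swapped in? Each loaded scheduler exposes the classes it is interchangeable with. A minimal sketch, reusing `pipe` from above:

```python
# List scheduler classes that are drop-in compatible with the current one.
for scheduler_class in pipe.scheduler.compatibles:
    print(scheduler_class.__name__)
```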

Here are some sample outputs:

<table>
<tr>
<td><center>
An astronaut riding a horse.
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astr.gif"
alt="An astronaut riding a horse."
style="width: 300px;" />
</center></td>
<td ><center>
Darth Vader surfing in waves.
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vader.gif"
alt="Darth Vader surfing in waves."
style="width: 300px;" />
</center></td>
</tr>
</table>

## Available checkpoints

* [damo-vilab/text-to-video-ms-1.7b](https://huggingface.co/damo-vilab/text-to-video-ms-1.7b/)
* [damo-vilab/text-to-video-ms-1.7b-legacy](https://huggingface.co/damo-vilab/text-to-video-ms-1.7b-legacy)

## DiffusionPipeline
[[autodoc]] DiffusionPipeline
- all
- __call__
3 changes: 2 additions & 1 deletion docs/source/en/index.mdx
@@ -84,8 +84,9 @@ The library has three main components:
| [stable_unclip](./stable_unclip) | Stable unCLIP | Text-to-Image Generation |
| [stable_unclip](./stable_unclip) | Stable unCLIP | Image-to-Image Text-Guided Generation |
| [stochastic_karras_ve](./api/pipelines/stochastic_karras_ve) | [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
| [text_to_video_sd](./api/pipelines/text_to_video) | [Modelscope's Text-to-video-synthesis Model in Open Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) | Text-to-Video Generation |
| [unclip](./api/pipelines/unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) (implementation by [kakaobrain](https://github.com/kakaobrain/karlo)) | Text-to-Image Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
| [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
2 changes: 1 addition & 1 deletion examples/community/stable_diffusion_controlnet_img2img.py
@@ -216,7 +216,7 @@ def enable_model_cpu_offload(self, gpu_id=0):
if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
from accelerate import cpu_offload_with_hook
else:
- raise ImportError("`enable_model_offload` requires `accelerate v0.17.0` or higher.")
+ raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")

device = torch.device(f"cuda:{gpu_id}")

2 changes: 1 addition & 1 deletion examples/community/stable_diffusion_controlnet_inpaint.py
@@ -314,7 +314,7 @@ def enable_model_cpu_offload(self, gpu_id=0):
if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
from accelerate import cpu_offload_with_hook
else:
- raise ImportError("`enable_model_offload` requires `accelerate v0.17.0` or higher.")
+ raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")

device = torch.device(f"cuda:{gpu_id}")

@@ -314,7 +314,7 @@ def enable_model_cpu_offload(self, gpu_id=0):
if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
from accelerate import cpu_offload_with_hook
else:
- raise ImportError("`enable_model_offload` requires `accelerate v0.17.0` or higher.")
+ raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")

device = torch.device(f"cuda:{gpu_id}")

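
All three community pipelines guard the optional `accelerate` import the same way; only the error message named the wrong method. For context, a minimal self-contained sketch of that guard pattern, using diffusers' `is_accelerate_available` / `is_accelerate_version` helpers (the helper name `get_cpu_offload_hook` is hypothetical):

```python
from diffusers.utils import is_accelerate_available, is_accelerate_version


def get_cpu_offload_hook():
    # Import the optional dependency only when the installed version
    # supports the feature; otherwise fail with a message that names
    # the public method the user actually called.
    if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
        from accelerate import cpu_offload_with_hook

        return cpu_offload_with_hook
    raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
```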