
Commit d7bbea7

lawrence-cj, sayakpaul, stevhliu, glegendre01, and asomoza committed
[docs] add doc for PixArtSigmaPipeline (#7857)
* 1. add doc for PixArtSigmaPipeline;

---------

Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
Co-authored-by: Guillaume LEGENDRE <[email protected]>
Co-authored-by: Álvaro Somoza <[email protected]>
Co-authored-by: Bagheera <[email protected]>
Co-authored-by: bghira <[email protected]>
Co-authored-by: Hyoungwon Cho <[email protected]>
Co-authored-by: yiyixuxu <[email protected]>
Co-authored-by: Tolga Cangöz <[email protected]>
Co-authored-by: Philip Pham <[email protected]>
1 parent 6da9e63 commit d7bbea7

File tree

4 files changed: +160 additions, −7 deletions

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -305,6 +305,8 @@
       title: Personalized Image Animator (PIA)
     - local: api/pipelines/pixart
       title: PixArt-α
+    - local: api/pipelines/pixart_sigma
+      title: PixArt-Σ
     - local: api/pipelines/self_attention_guidance
       title: Self-Attention Guidance
     - local: api/pipelines/semantic_stable_diffusion
```

docs/source/en/api/pipelines/pixart.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@ Some notes about this pipeline:
 
 <Tip>
 
-Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
+Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
 
 </Tip>
 
```

docs/source/en/api/pipelines/pixart_sigma.md

Lines changed: 151 additions & 0 deletions
@@ -0,0 +1,151 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# PixArt-Σ

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pixart/header_collage_sigma.jpg)

[PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation](https://huggingface.co/papers/2403.04692) is by Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li.

The abstract from the paper is:

*In this paper, we introduce PixArt-Σ, a Diffusion Transformer model (DiT) capable of directly generating images at 4K resolution. PixArt-Σ represents a significant advancement over its predecessor, PixArt-α, offering images of markedly higher fidelity and improved alignment with text prompts. A key feature of PixArt-Σ is its training efficiency. Leveraging the foundational pre-training of PixArt-α, it evolves from the ‘weaker’ baseline to a ‘stronger’ model via incorporating higher quality data, a process we term “weak-to-strong training”. The advancements in PixArt-Σ are twofold: (1) High-Quality Training Data: PixArt-Σ incorporates superior-quality image data, paired with more precise and detailed image captions. (2) Efficient Token Compression: we propose a novel attention module within the DiT framework that compresses both keys and values, significantly improving efficiency and facilitating ultra-high-resolution image generation. Thanks to these improvements, PixArt-Σ achieves superior image quality and user prompt adherence capabilities with significantly smaller model size (0.6B parameters) than existing text-to-image diffusion models, such as SDXL (2.6B parameters) and SD Cascade (5.1B parameters). Moreover, PixArt-Σ’s capability to generate 4K images supports the creation of high-resolution posters and wallpapers, efficiently bolstering the production of high-quality visual content in industries such as film and gaming.*

You can find the original codebase at [PixArt-alpha/PixArt-sigma](https://github.com/PixArt-alpha/PixArt-sigma) and all the available checkpoints at [PixArt-alpha](https://huggingface.co/PixArt-alpha).

Some notes about this pipeline:

* It uses a Transformer backbone (instead of a UNet) for denoising. As such, it has a similar architecture to [DiT](https://hf.co/docs/transformers/model_doc/dit).
* It was trained using text conditions computed from T5, which makes the pipeline better at following complex text prompts with intricate details.
* It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets, which can be found [here](https://github.com/PixArt-alpha/PixArt-sigma/blob/master/diffusion/data/datasets/utils.py).
* It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as PixArt-α, Stable Diffusion XL, Playground V2.0, and DALL-E 3, while being more efficient than them.
* It can generate very high-resolution images, such as 2048px or even 4K.
* It shows that text-to-image models can grow from a weak model to a stronger one through several improvements (VAEs, datasets, and so on).
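The size-bracket recommendation above boils down to a nearest-aspect-ratio lookup: the requested height and width are mapped to the closest resolution the model was trained on. The following is a minimal sketch of that idea; the bin table here is a tiny, hypothetical subset for illustration, not the real `ASPECT_RATIO_1024_BIN` from the repository linked above:

```python
# Illustrative subset of aspect-ratio buckets, keyed by height/width ratio.
# The real table in the PixArt-sigma repository has many more entries.
ASPECT_RATIO_BINS = {
    "0.5": (704, 1408),   # portrait
    "1.0": (1024, 1024),  # square
    "2.0": (1408, 704),   # landscape
}

def closest_bin(height: int, width: int) -> tuple:
    """Map a requested size to the nearest trained resolution bucket."""
    ratio = height / width
    key = min(ASPECT_RATIO_BINS, key=lambda k: abs(float(k) - ratio))
    return ASPECT_RATIO_BINS[key]

print(closest_bin(1000, 1000))  # square request -> (1024, 1024)
print(closest_bin(1280, 720))   # wide request -> (1408, 704)
```

The pipeline exposes this behavior through its `use_resolution_binning` option, which snaps sizes to the nearest bucket before denoising and resizes the decoded image back afterwards.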
<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>
## Inference with under 8GB GPU VRAM

Run the [`PixArtSigmaPipeline`] with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let's walk through a full-fledged example.

First, install the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library:

```bash
pip install -U bitsandbytes
```
Then load the text encoder in 8-bit:

```python
import torch
from transformers import T5EncoderModel

from diffusers import PixArtSigmaPipeline

text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_8bit=True,
    device_map="auto",
)
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    text_encoder=text_encoder,
    transformer=None,
    device_map="balanced",
)
```
Now, use the `pipe` to encode a prompt:

```python
with torch.no_grad():
    prompt = "cute cat"
    prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt)
```
Since the text embeddings have been computed, remove the `text_encoder` and `pipe` from memory and free up some GPU VRAM:

```python
import gc

def flush():
    gc.collect()
    torch.cuda.empty_cache()

del text_encoder
del pipe
flush()
```
Then compute the latents with the prompt embeddings as inputs:

```python
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    text_encoder=None,
    torch_dtype=torch.float16,
).to("cuda")

latents = pipe(
    negative_prompt=None,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
    num_images_per_prompt=1,
    output_type="latent",
).images

del pipe.transformer
flush()
```
<Tip>

Notice that while initializing `pipe`, you're setting `text_encoder` to `None` so that it's not loaded.

</Tip>
Once the latents are computed, pass them to the VAE to decode into a real image:

```python
with torch.no_grad():
    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0]

image = pipe.image_processor.postprocess(image, output_type="pil")[0]
image.save("cat.png")
```
By deleting components you aren't using and flushing the GPU VRAM, you should be able to run [`PixArtSigmaPipeline`] with under 8GB GPU VRAM.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pixart/8bits_cat.png)

If you want a report of your memory usage, run this [script](https://gist.github.com/sayakpaul/3ae0f847001d342af27018a96f467e4e).
<Tip warning={true}>

Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It's recommended to compare the outputs with and without 8-bit.

</Tip>
While loading the `text_encoder`, you set `load_in_8bit` to `True`. You could also specify `load_in_4bit` to bring your memory requirements down even further to under 7GB.
## PixArtSigmaPipeline

[[autodoc]] PixArtSigmaPipeline
  - all
  - __call__

src/diffusers/pipelines/pixart_alpha/pipeline_pixart_sigma.py

Lines changed: 6 additions & 6 deletions
```diff
@@ -23,7 +23,7 @@
 
 from ...image_processor import PixArtImageProcessor
 from ...models import AutoencoderKL, Transformer2DModel
-from ...schedulers import DPMSolverMultistepScheduler
+from ...schedulers import KarrasDiffusionSchedulers
 from ...utils import (
     BACKENDS_MAPPING,
     deprecate,
@@ -203,7 +203,7 @@ def __init__(
         text_encoder: T5EncoderModel,
         vae: AutoencoderKL,
         transformer: Transformer2DModel,
-        scheduler: DPMSolverMultistepScheduler,
+        scheduler: KarrasDiffusionSchedulers,
     ):
         super().__init__()
 
@@ -214,7 +214,7 @@ def __init__(
         self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
         self.image_processor = PixArtImageProcessor(vae_scale_factor=self.vae_scale_factor)
 
-    # Copied from diffusers.pipelines.pixart_alpha.pipeline_pixart_alpha.PixArtAlphaPipeline.encode_prompt
+    # Copied from diffusers.pipelines.pixart_alpha.pipeline_pixart_alpha.PixArtAlphaPipeline.encode_prompt with 120->300
     def encode_prompt(
         self,
         prompt: Union[str, List[str]],
@@ -227,7 +227,7 @@ def encode_prompt(
         prompt_attention_mask: Optional[torch.Tensor] = None,
         negative_prompt_attention_mask: Optional[torch.Tensor] = None,
         clean_caption: bool = False,
-        max_sequence_length: int = 120,
+        max_sequence_length: int = 300,
         **kwargs,
     ):
         r"""
@@ -254,7 +254,7 @@ def encode_prompt(
                 string.
             clean_caption (`bool`, defaults to `False`):
                 If `True`, the function will preprocess and clean the provided caption before encoding.
-            max_sequence_length (`int`, defaults to 120): Maximum sequence length to use for the prompt.
+            max_sequence_length (`int`, defaults to 300): Maximum sequence length to use for the prompt.
         """
 
         if "mask_feature" in kwargs:
@@ -707,7 +707,7 @@ def __call__(
                 If set to `True`, the requested height and width are first mapped to the closest resolutions using
                 `ASPECT_RATIO_1024_BIN`. After the produced latents are decoded into images, they are resized back to
                 the requested resolution. Useful for generating non-square images.
-            max_sequence_length (`int` defaults to 120): Maximum sequence length to use with the `prompt`.
+            max_sequence_length (`int` defaults to 300): Maximum sequence length to use with the `prompt`.
 
         Examples:
```