
Commit 3e5d960

asfiyab-nvidia and patrickvonplaten authored and committed
Add TensorRT SD/txt2img Community Pipeline to diffusers along with TensorRT utils (huggingface#2974)
* Add SD/txt2img Community Pipeline to diffusers along with TensorRT utils
* update installation command
* update tensorrt installation
* changes
  1. Update setting of cache directory
  2. Address comments: merge utils and pipeline code.
  3. Address comments: Add section in README
* apply make style

Signed-off-by: Asfiya Baig <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
1 parent ace441e commit 3e5d960

File tree

2 files changed: +958, -1 lines changed


examples/community/README.md

Lines changed: 32 additions & 1 deletion
@@ -31,7 +31,7 @@ MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt
| UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | - | [Aengus (Duc-Anh)](https://github.com/aengusng8) |
| CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
+| TensorRT Stable Diffusion Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |

To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
@@ -1130,3 +1130,34 @@ Init Image
Output Image

![img2img_clip_guidance](https://huggingface.co/datasets/njindal/images/resolve/main/clip_guided_img2img.jpg)

### TensorRT Text2Image Stable Diffusion Pipeline

The TensorRT Pipeline can be used to accelerate the Text2Image Stable Diffusion inference run.

NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.

```python
import torch
from diffusers import DDIMScheduler
from diffusers.pipelines.stable_diffusion import StableDiffusionPipeline

# Use the DDIMScheduler scheduler here instead
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
                                          subfolder="scheduler")

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
                                               custom_pipeline="stable_diffusion_tensorrt_txt2img",
                                               revision='fp16',
                                               torch_dtype=torch.float16,
                                               scheduler=scheduler)

# re-use cached folder to save ONNX models and TensorRT Engines
pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16')

pipe = pipe.to("cuda")

prompt = "a beautiful photograph of Mt. Fuji during cherry blossom"
image = pipe(prompt).images[0]
image.save('tensorrt_mt_fuji.png')
```
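The call to `set_cached_folder` reuses a cached folder so the expensive ONNX export and TensorRT engine build (the up-to-30-minute step noted above) only happens once. As a minimal, self-contained sketch of that build-once-then-reuse pattern, using only the standard library; the file name `unet.plan` and the `build_fn` callback are illustrative assumptions, not the pipeline's actual API:

```python
import os
import tempfile

def get_engine(cache_dir, name, build_fn):
    """Return a cached artifact if present; otherwise build and cache it.

    Mimics, in spirit, how the TensorRT pipeline reuses a cached folder so the
    slow engine build runs only once. `build_fn` stands in for the build step.
    """
    path = os.path.join(cache_dir, name)
    if os.path.exists(path):
        with open(path) as f:
            return f.read(), True    # cache hit: read from disk
    artifact = build_fn()            # slow path: build the artifact
    with open(path, "w") as f:
        f.write(artifact)            # store it for the next run
    return artifact, False           # cache miss: built and cached

cache = tempfile.mkdtemp()
first, hit1 = get_engine(cache, "unet.plan", lambda: "engine-bytes")
second, hit2 = get_engine(cache, "unet.plan", lambda: "engine-bytes")
print(hit1, hit2)  # → False True
```

On the second call the artifact is read from disk instead of rebuilt, which is why pointing the pipeline at a persistent cached folder makes repeated runs fast.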

0 commit comments
