# Community Examples

> **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**

**Community** examples consist of both inference and training examples that have been added by the community.

| Example | Description | Author | Colab |
|:----------|:----------------------|:-----------------|----------:|
| CLIP Guided Stable Diffusion | Doing CLIP guidance for text-to-image generation with Stable Diffusion | [Suraj Patil](https://github.com/patil-suraj/) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/CLIP_Guided_Stable_diffusion_with_diffusers.ipynb) |
| One Step U-Net (Dummy) | Example showcasing how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | [Patrick von Platen](https://github.com/patrickvonplaten/) | - |
| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | [Nate Raw](https://github.com/nateraw/) | - |

## Example usages

### CLIP Guided Stable Diffusion

CLIP guided stable diffusion can help to generate more realistic images
by guiding stable diffusion at every denoising step with an additional CLIP model.
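
Conceptually, at each step the pipeline scores the partially denoised image with CLIP against the prompt and nudges the latents along the gradient of that score. The snippet below is only a minimal sketch of this idea; `get_clip_image_embed` and the exact loss are illustrative assumptions, not the community pipeline's actual code:

```python
import torch
import torch.nn.functional as F


def clip_guidance_step(latents, get_clip_image_embed, text_embed, clip_guidance_scale):
    # get_clip_image_embed is a hypothetical helper that decodes the latents
    # and runs CLIP's image encoder; the real pipeline inlines these steps.
    latents = latents.detach().requires_grad_(True)
    image_embed = get_clip_image_embed(latents)
    # lower loss means the image and text embeddings agree better
    loss = -F.cosine_similarity(image_embed, text_embed, dim=-1).mean()
    grad = torch.autograd.grad(loss, latents)[0]
    # step the latents against the gradient, scaled by clip_guidance_scale
    return latents.detach() - clip_guidance_scale * grad
```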

The following code requires roughly 12GB of GPU RAM.

```python
import os

import torch
from diffusers import DiffusionPipeline
from transformers import CLIPFeatureExtractor, CLIPModel

# load the CLIP model and feature extractor that will guide the diffusion process
feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)

guided_pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    revision="fp16",
    torch_dtype=torch.float16,
)
guided_pipeline.enable_attention_slicing()
guided_pipeline = guided_pipeline.to("cuda")

prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece"

# generate four images with a fixed seed for reproducibility
generator = torch.Generator(device="cuda").manual_seed(0)
images = []
for i in range(4):
    image = guided_pipeline(
        prompt,
        num_inference_steps=50,
        guidance_scale=7.5,
        clip_guidance_scale=100,
        num_cutouts=4,
        use_cutouts=False,
        generator=generator,
    ).images[0]
    images.append(image)

# save images locally
os.makedirs("./clip_guided_sd", exist_ok=True)
for i, img in enumerate(images):
    img.save(f"./clip_guided_sd/image_{i}.png")
```

The `images` list contains the generated PIL images, which can be saved locally or displayed directly in a Google Colab.
Generated images tend to be of higher quality than those generated natively with Stable Diffusion; e.g. the above script generates the following images:

![clip_guidance](https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/clip_guidance/merged_clip_guidance.jpg)

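To inspect all four results at once, a small helper like the following can tile them into a single grid (a sketch using plain PIL; `image_grid` is not part of the pipeline):

```python
from PIL import Image


def image_grid(imgs, rows, cols):
    # paste the images side by side into one canvas
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid


grid = image_grid(images, rows=2, cols=2)
grid.save("./clip_guided_sd/grid.png")
```
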
### One Step U-Net (Dummy)

The dummy "one-step-unet" can be run as follows:

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")
pipe()
```

**Note**: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841).

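Under the hood, a community pipeline is just a Python file that exposes a `DiffusionPipeline` subclass. A rough sketch of what such a file could look like is shown below; the class name and call logic here are illustrative assumptions, not the actual `one_step_unet` source:

```python
import torch
from diffusers import DiffusionPipeline


class OneStepUNetPipeline(DiffusionPipeline):  # illustrative name, not the real class
    def __init__(self, unet, scheduler):
        super().__init__()
        # register_modules makes the components load/save with the pipeline
        self.register_modules(unet=unet, scheduler=scheduler)

    @torch.no_grad()
    def __call__(self):
        # sample random noise shaped like the U-Net's expected input
        sample = torch.randn(
            1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size
        )
        # one denoising forward pass followed by one scheduler step
        model_output = self.unet(sample, timestep=1).sample
        return self.scheduler.step(model_output, 1, sample).prev_sample
```
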
### Stable Diffusion Interpolation

The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes.

```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    safety_checker=None,  # Very important for videos...lots of false positives while interpolating
    custom_pipeline="interpolate_stable_diffusion",
).to("cuda")
pipe.enable_attention_slicing()

frame_filepaths = pipe.walk(
    prompts=["a dog", "a cat", "a horse"],
    seeds=[42, 1337, 1234],
    num_interpolation_steps=16,
    output_dir="./dreams",
    batch_size=4,
    height=512,
    width=512,
    guidance_scale=8.5,
    num_inference_steps=50,
)
```

The `walk(...)` function returns a list of file paths to the images it saved under the folder defined in `output_dir`. You can use these images to create videos of Stable Diffusion, as sketched below.
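
For example, the frames can be stitched into a short clip with `imageio` (a minimal sketch; it assumes `imageio` and `imageio-ffmpeg` are installed, neither of which is required by the pipeline itself):

```python
# assumes: pip install imageio imageio-ffmpeg
import imageio

# read the frames written by walk(...) and write them out as an mp4
frames = [imageio.imread(path) for path in frame_filepaths]
imageio.mimsave("./dreams/interpolation.mp4", frames, fps=8)
```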

> **Please have a look at https://github.com/nateraw/stable-diffusion-videos for more detailed information on how to create videos using Stable Diffusion, as well as for more feature-complete functionality.**