The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
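This excerpt uses a `generator` object created with [`DiffusionPipeline.from_pretrained`]; as a minimal sketch (the checkpoint name here is an assumption for illustration, not fixed by this guide):

```python
>>> from diffusers import DiffusionPipeline

>>> # Checkpoint name is illustrative; swap in any text-to-image checkpoint from the Hub
>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
```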
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU. You can move the generator object to a GPU, just like you would in PyTorch:
```python
>>> generator.to("cuda")
```

Now you can use the `generator` on your text prompt:

```python
>>> image = generator("An image of a squirrel in Picasso style").images[0]
```
The output is by default wrapped into a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.
You can save the image by calling:
```python
>>> image.save("image_of_squirrel_painting.png")
```
Try out the Spaces below, and feel free to play around with the guidance scale parameter to see how it affects the image quality!
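The guidance scale is passed per call; a minimal sketch (the value 7.5 is just a common choice, not prescribed by this guide):

```python
>>> # Higher guidance_scale sticks closer to the prompt; lower values give more varied images
>>> image = generator("An image of a squirrel in Picasso style", guidance_scale=7.5).images[0]
```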
💡 `strength` is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input.
Define the prompt (for this checkpoint finetuned on Ghibli-style art, you need to prefix the prompt with the `ghibli style` tokens) and run the pipeline:
```python
prompt = "ghibli style, a fantasy landscape with castles"
```
Check out the Spaces below, and try generating images with different values for `strength`. You'll notice that using lower values for `strength` produces images that are more similar to the original image.
Feel free to also switch the scheduler to the [`LMSDiscreteScheduler`] and see how that affects the output.
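Swapping schedulers is a one-line change; a sketch, assuming your img2img pipeline object is named `pipeline`:

```python
from diffusers import LMSDiscreteScheduler

# Reuse the current scheduler's configuration so the swap is drop-in
pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
```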
<imgsrc="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"alt="drawing"width="250"/> | <imgsrc="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"alt="drawing"width="250"/> | ***Face of a yellow cat, high resolution, sitting on a park bench*** | <imgsrc="https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/in_paint/yellow_cat_sitting_on_a_park_bench.png"alt="drawing"width="250"/> |
<Tipwarning={true}>
A previous experimental implementation of inpainting used a different, lower-quality process. To ensure backwards compatibility, loading a pretrained pipeline that doesn't contain the new model will still apply the old inpainting method.
</Tip>
Check out the Spaces below to try out image inpainting yourself!
The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
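For this unconditional example the pipeline is loaded the same way; a minimal sketch (the checkpoint name is an assumption, any unconditional image-generation checkpoint works):

```python
>>> from diffusers import DiffusionPipeline

>>> # Checkpoint name is illustrative; this section assumes an unconditional image model
>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-celebahq-256")
```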
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU. You can move the generator object to a GPU, just like you would in PyTorch:
```python
>>> generator.to("cuda")
```
Now you can use the `generator` to generate an image:
```python
>>> image = generator().images[0]
```
The output is by default wrapped into a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.
You can save the image by calling:
```python
>>> image.save("generated_image.png")
```
Try out the Spaces below, and feel free to play around with the inference steps parameter to see how it affects the image quality!
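The number of inference steps is passed per call; more denoising steps generally trades speed for quality (a sketch, reusing the `generator` from above):

```python
>>> # Fewer steps is faster but coarser; more steps refines the image further
>>> image = generator(num_inference_steps=50).images[0]
```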