
Commit 37a44bb

Add ModelEditing pipeline (#2721)
* TIME first commit
* styling.
* styling 2.
* fixes; tests
* apply styling and doc fix.
* remove sups.
* fixes
* remove temp file
* move augmentations to const
* added doc entry
* code quality
* customize augmentations
* quality
* quality

---------

Co-authored-by: Sayak Paul <[email protected]>
1 parent 4a98d6e commit 37a44bb

10 files changed: +1105 -1 lines changed

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -191,6 +191,8 @@
       title: MultiDiffusion Panorama
     - local: api/pipelines/stable_diffusion/controlnet
       title: Text-to-Image Generation with ControlNet Conditioning
+    - local: api/pipelines/stable_diffusion/model_editing
+      title: Text-to-Image Model Editing
     title: Stable Diffusion
   - local: api/pipelines/stable_diffusion_2
     title: Stable Diffusion 2
docs/source/en/api/pipelines/stable_diffusion/model_editing.mdx

Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Editing Implicit Assumptions in Text-to-Image Diffusion Models

## Overview

[Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://arxiv.org/abs/2303.08084) by Hadas Orgad, Bahjat Kawar, and Yonatan Belinkov.

The abstract of the paper is the following:

*Text-to-image diffusion models often make implicit assumptions about the world when generating images. While some assumptions are useful (e.g., the sky is blue), they can also be outdated, incorrect, or reflective of social biases present in the training data. Thus, there is a need to control these assumptions without requiring explicit user input or costly re-training. In this work, we aim to edit a given implicit assumption in a pre-trained diffusion model. Our Text-to-Image Model Editing method, TIME for short, receives a pair of inputs: a "source" under-specified prompt for which the model makes an implicit assumption (e.g., "a pack of roses"), and a "destination" prompt that describes the same setting, but with a specified desired attribute (e.g., "a pack of blue roses"). TIME then updates the model's cross-attention layers, as these layers assign visual meaning to textual tokens. We edit the projection matrices in these layers such that the source prompt is projected close to the destination prompt. Our method is highly efficient, as it modifies a mere 2.2% of the model's parameters in under one second. To evaluate model editing approaches, we introduce TIMED (TIME Dataset), containing 147 source and destination prompt pairs from various domains. Our experiments (using Stable Diffusion) show that TIME is successful in model editing, generalizes well for related prompts unseen during editing, and imposes minimal effect on unrelated generations.*
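
The projection-matrix edit described in the abstract has a closed-form solution. Below is a minimal sketch of that regularized least-squares update for a single cross-attention projection matrix, written against the objective stated in the paper; the function name, tensor shapes, and the `lamb` default are illustrative assumptions, not the pipeline's actual internals:

```python
import torch


def edit_projection(w_old: torch.Tensor, src_embeds: torch.Tensor,
                    dst_embeds: torch.Tensor, lamb: float = 0.1) -> torch.Tensor:
    """Sketch of a TIME-style edit of one cross-attention key/value projection.

    Minimizes  sum_i ||W u_i - W_old u*_i||^2 + lamb * ||W - W_old||_F^2
    over W, where u_i are the source-prompt token embeddings and u*_i the
    aligned destination-prompt token embeddings.

    w_old:      (out_dim, in_dim) frozen projection matrix
    src_embeds: (n, in_dim) source token embeddings u_i
    dst_embeds: (n, in_dim) destination token embeddings u*_i
    """
    u = src_embeds
    v = dst_embeds @ w_old.T  # targets: W_old applied to the destination tokens
    eye = torch.eye(u.shape[1], dtype=u.dtype, device=u.device)
    # Closed form: W = (sum_i v_i u_i^T + lamb * W_old) @ (sum_i u_i u_i^T + lamb * I)^-1
    return (v.T @ u + lamb * w_old) @ torch.linalg.inv(u.T @ u + lamb * eye)
```

Since only the cross-attention key and value projections are touched, an update of this form is consistent with the abstract's figure of modifying roughly 2.2% of the model's parameters.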

Resources:

* [Project Page](https://time-diffusion.github.io/).
* [Paper](https://arxiv.org/abs/2303.08084).
* [Original Code](https://github.com/bahjat-kawar/time-diffusion).
* [Demo](https://huggingface.co/spaces/bahjat-kawar/time-diffusion).

## Available Pipelines:

| Pipeline | Tasks | Demo |
|---|---|:---:|
| [StableDiffusionModelEditingPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py) | *Text-to-Image Model Editing* | [🤗 Space](https://huggingface.co/spaces/bahjat-kawar/time-diffusion) |

This pipeline edits the weights of a diffusion model so that its implicit assumptions about a given concept change. The edit is expected to take effect in all generations whose prompts pertain to the edited concept.

## Usage example

```python
from diffusers import StableDiffusionModelEditingPipeline

model_ckpt = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionModelEditingPipeline.from_pretrained(model_ckpt)
pipe = pipe.to("cuda")

# Edit the model so that "roses" implies blue roses.
source_prompt = "A pack of roses"
destination_prompt = "A pack of blue roses"
pipe.edit_model(source_prompt, destination_prompt)

# Prompts related to the edited concept now reflect the new assumption.
prompt = "A field of roses"
image = pipe(prompt).images[0]
image.save("field_of_roses.png")
```
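
To see what an edit changed, it can help to fix the random seed and compare generations before and after the call to `edit_model`. A small sketch using only the API shown above (the seed value and file names are arbitrary):

```python
import torch
from diffusers import StableDiffusionModelEditingPipeline

pipe = StableDiffusionModelEditingPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")
prompt = "A field of roses"

# Fix the seed so the only difference between the two images is the weight edit.
generator = torch.Generator(device="cuda").manual_seed(0)
before = pipe(prompt, generator=generator).images[0]
before.save("roses_before_edit.png")

pipe.edit_model("A pack of roses", "A pack of blue roses")

generator = torch.Generator(device="cuda").manual_seed(0)
after = pipe(prompt, generator=generator).images[0]
after.save("roses_after_edit.png")
```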

## StableDiffusionModelEditingPipeline

[[autodoc]] StableDiffusionModelEditingPipeline
	- __call__
	- all

docs/source/en/api/pipelines/stable_diffusion/overview.mdx

Lines changed: 1 addition & 0 deletions
@@ -35,6 +35,7 @@ For more details about how Stable Diffusion works and how it differs from the ba
 | [StableDiffusionInstructPix2PixPipeline](./pix2pix) | **Experimental** *Text-Based Image Editing* | | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/spaces/timbrooks/instruct-pix2pix)
 | [StableDiffusionAttendAndExcitePipeline](./attend_and_excite) | **Experimental** *Text-to-Image Generation* | | [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite)
 | [StableDiffusionPix2PixZeroPipeline](./pix2pix_zero) | **Experimental** *Text-Based Image Editing* | | [Zero-shot Image-to-Image Translation](https://arxiv.org/abs/2302.03027)
+| [StableDiffusionModelEditingPipeline](./model_editing) | **Experimental** *Text-to-Image Model Editing* | | [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://arxiv.org/abs/2303.08084)
docs/source/en/index.mdx

Lines changed: 2 additions & 1 deletion
@@ -76,6 +76,7 @@ The library has three main components:
 | [stable_diffusion_self_attention_guidance](./api/pipelines/stable_diffusion/self_attention_guidance) | [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://arxiv.org/abs/2210.00939) | Text-to-Image Generation |
 | [stable_diffusion_image_variation](./stable_diffusion/image_variation) | [Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation |
 | [stable_diffusion_latent_upscale](./stable_diffusion/latent_upscale) | [Stable Diffusion Latent Upscaler](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image |
+| [stable_diffusion_model_editing](./api/pipelines/stable_diffusion/model_editing) | [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://time-diffusion.github.io/) | Text-to-Image Model Editing |
 | [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
 | [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Stable Diffusion 2](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
 | [stable_diffusion_2](./api/pipelines/stable_diffusion_2) | [Depth-Conditional Stable Diffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion) | Depth-to-Image Generation |
@@ -89,4 +90,4 @@ The library has three main components:
 | [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
 | [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
 | [versatile_diffusion](./api/pipelines/versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
-| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
+| [vq_diffusion](./api/pipelines/vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |

src/diffusers/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -126,6 +126,7 @@
     StableDiffusionInpaintPipelineLegacy,
     StableDiffusionInstructPix2PixPipeline,
     StableDiffusionLatentUpscalePipeline,
+    StableDiffusionModelEditingPipeline,
     StableDiffusionPanoramaPipeline,
     StableDiffusionPipeline,
     StableDiffusionPipelineSafe,

src/diffusers/pipelines/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -59,6 +59,7 @@
     StableDiffusionInpaintPipelineLegacy,
     StableDiffusionInstructPix2PixPipeline,
     StableDiffusionLatentUpscalePipeline,
+    StableDiffusionModelEditingPipeline,
     StableDiffusionPanoramaPipeline,
     StableDiffusionPipeline,
     StableDiffusionPix2PixZeroPipeline,

src/diffusers/pipelines/stable_diffusion/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -51,6 +51,7 @@ class StableDiffusionPipelineOutput(BaseOutput):
 from .pipeline_stable_diffusion_inpaint_legacy import StableDiffusionInpaintPipelineLegacy
 from .pipeline_stable_diffusion_instruct_pix2pix import StableDiffusionInstructPix2PixPipeline
 from .pipeline_stable_diffusion_latent_upscale import StableDiffusionLatentUpscalePipeline
+from .pipeline_stable_diffusion_model_editing import StableDiffusionModelEditingPipeline
 from .pipeline_stable_diffusion_panorama import StableDiffusionPanoramaPipeline
 from .pipeline_stable_diffusion_sag import StableDiffusionSAGPipeline
 from .pipeline_stable_diffusion_upscale import StableDiffusionUpscalePipeline
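
Taken together, the three `__init__.py` additions above re-export the new pipeline at each package level, so after this commit all of the following imports should resolve to the same class (a quick sanity check, assuming diffusers is installed from this revision):

```python
# Each import path is exposed by one of the __init__.py additions above.
from diffusers import StableDiffusionModelEditingPipeline
from diffusers.pipelines import StableDiffusionModelEditingPipeline
from diffusers.pipelines.stable_diffusion import StableDiffusionModelEditingPipeline
```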
