Update dreambooth.mdx #2742

Merged 1 commit on Mar 20, 2023

6 changes: 3 additions & 3 deletions docs/source/en/training/dreambooth.mdx
@@ -118,7 +118,7 @@ python train_dreambooth_flax.py \

Prior preservation is used to avoid overfitting and language-drift (check out the [paper](https://arxiv.org/abs/2208.12242) to learn more if you're interested). For prior preservation, you use other images of the same class as part of the training process. The nice thing is that you can generate those images using the Stable Diffusion model itself! The training script will save the generated images to a local path you specify.

- The author's recommend generating `num_epochs * num_samples` images for prior preservation. In most cases, 200-300 images work well.
+ The authors recommend generating `num_epochs * num_samples` images for prior preservation. In most cases, 200-300 images work well.

<frameworkcontent>
<pt>
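As a worked illustration of the prior-preservation paragraph in this hunk: a minimal sketch of the relevant `train_dreambooth.py` flags. The model ID, paths, and prompts below are illustrative placeholders, not values taken from this PR.

```bash
# Prior preservation sketch: class images are read from --class_data_dir;
# if fewer than --num_class_images exist there, the script generates the
# remainder with the base model before training starts.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="path_to_instance_images" \
  --instance_prompt="a photo of sks dog" \
  --with_prior_preservation \
  --prior_loss_weight=1.0 \
  --class_data_dir="path_to_class_images" \
  --class_prompt="a photo of dog" \
  --num_class_images=200 \
  --output_dir="dreambooth_model"
```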
@@ -321,7 +321,7 @@ Depending on your hardware, there are a few different ways to optimize DreamBooth

### xFormers

- [xFormers](https://github.com/facebookresearch/xformers) is a toolbox for optimizing Transformers, and it include a [memory-efficient attention](https://facebookresearch.github.io/xformers/components/ops.html#module-xformers.ops) mechanism that is used in 🧨 Diffusers. You'll need to [install xFormers](./optimization/xformers) and then add the following argument to your training script:
+ [xFormers](https://github.com/facebookresearch/xformers) is a toolbox for optimizing Transformers, and it includes a [memory-efficient attention](https://facebookresearch.github.io/xformers/components/ops.html#module-xformers.ops) mechanism that is used in 🧨 Diffusers. You'll need to [install xFormers](./optimization/xformers) and then add the following argument to your training script:

```bash
--enable_xformers_memory_efficient_attention
```
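Relatedly, the same memory-efficient attention can be toggled at inference time through the pipeline API. A minimal Python sketch, assuming xFormers is installed and a CUDA GPU is available; the model ID is illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pipeline in half precision to save memory (illustrative model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap the default attention for xFormers' memory-efficient implementation.
pipe.enable_xformers_memory_efficient_attention()
```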
@@ -469,4 +469,4 @@ image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```
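For context, the hunk above shows only the tail of the docs' inference example. A self-contained sketch, where `path_to_saved_model` stands in for the `--output_dir` used during training:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth-trained weights saved by the training script.
pipe = StableDiffusionPipeline.from_pretrained(
    "path_to_saved_model", torch_dtype=torch.float16
).to("cuda")

prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```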

- You may also run inference from any of the [saved training checkpoints](#inference-from-a-saved-checkpoint).
\ No newline at end of file
+ You may also run inference from any of the [saved training checkpoints](#inference-from-a-saved-checkpoint).