Commit 25ed7cb

Update dreambooth.mdx (#2742)
Fix typos
1 parent: af86b0c

1 file changed: +3 −3

docs/source/en/training/dreambooth.mdx

Lines changed: 3 additions & 3 deletions
@@ -118,7 +118,7 @@ python train_dreambooth_flax.py \

Prior preservation is used to avoid overfitting and language-drift (check out the [paper](https://arxiv.org/abs/2208.12242) to learn more if you're interested). For prior preservation, you use other images of the same class as part of the training process. The nice thing is that you can generate those images using the Stable Diffusion model itself! The training script will save the generated images to a local path you specify.

-The author's recommend generating `num_epochs * num_samples` images for prior preservation. In most cases, 200-300 images work well.
+The authors recommend generating `num_epochs * num_samples` images for prior preservation. In most cases, 200-300 images work well.

<frameworkcontent>
<pt>
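The paragraph changed in this hunk refers to class images that the training script generates and saves locally. As a minimal sketch of how prior preservation is typically switched on (flag names follow diffusers' `train_dreambooth.py` CLI; the model ID and directories are placeholders):

```bash
# Sketch: enabling prior preservation in diffusers' train_dreambooth.py.
# The model ID and directories below are placeholders.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./dog" \
  --instance_prompt="a photo of sks dog" \
  --with_prior_preservation \
  --prior_loss_weight=1.0 \
  --class_data_dir="./class_images" \
  --class_prompt="a photo of dog" \
  --num_class_images=200 \
  --output_dir="./dreambooth-model"
```

Here `--class_data_dir` is the local path the generated class images are saved to, and `--num_class_images` sits in the 200-300 range recommended in the paragraph above.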
@@ -321,7 +321,7 @@ Depending on your hardware, there are a few different ways to optimize DreamBoot

### xFormers

-[xFormers](https://github.com/facebookresearch/xformers) is a toolbox for optimizing Transformers, and it include a [memory-efficient attention](https://facebookresearch.github.io/xformers/components/ops.html#module-xformers.ops) mechanism that is used in 🧨 Diffusers. You'll need to [install xFormers](./optimization/xformers) and then add the following argument to your training script:
+[xFormers](https://github.com/facebookresearch/xformers) is a toolbox for optimizing Transformers, and it includes a [memory-efficient attention](https://facebookresearch.github.io/xformers/components/ops.html#module-xformers.ops) mechanism that is used in 🧨 Diffusers. You'll need to [install xFormers](./optimization/xformers) and then add the following argument to your training script:

```bash
--enable_xformers_memory_efficient_attention
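In context, the flag is simply appended to an otherwise ordinary launch command. A sketch (the other flags are placeholders, and required arguments are abbreviated here):

```bash
# Sketch: appending the xFormers flag to a DreamBooth launch command.
# Other flags are placeholders; required arguments are abbreviated.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./dog" \
  --output_dir="./dreambooth-model" \
  --enable_xformers_memory_efficient_attention
```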
@@ -469,4 +469,4 @@ image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```

-You may also run inference from any of the [saved training checkpoints](#inference-from-a-saved-checkpoint).
+You may also run inference from any of the [saved training checkpoints](#inference-from-a-saved-checkpoint).
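The closing lines of this hunk belong to the inference snippet at the end of the doc. A minimal, self-contained version of that snippet (the model path is a placeholder for the training run's output directory; the API calls are standard diffusers usage):

```python
# Sketch of the surrounding inference code: load the fine-tuned model from
# the training output directory (placeholder path) and generate an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-model",  # placeholder: the training run's --output_dir
    torch_dtype=torch.float16,
).to("cuda")

prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")
```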
