
Commit 78de3b7

Author: bghira

7879 - adjust documentation to use naruto dataset, since pokemon is now gated

1 parent: 23e0915

30 files changed: 57 additions, 57 deletions

docs/source/en/training/kandinsky.md

Lines changed: 3 additions & 3 deletions
@@ -205,7 +205,7 @@ model_pred = unet(noisy_latents, timesteps, None, added_cond_kwargs=added_cond_k

 Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀

-You'll train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the [Create a dataset for training](create_dataset) guide. Set the environment variable `DATASET_NAME` to the name of the dataset on the Hub or if you're training on your own files, set the environment variable `TRAIN_DIR` to a path to your dataset.
+You'll train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters, but you can also create and train on your own dataset by following the [Create a dataset for training](create_dataset) guide. Set the environment variable `DATASET_NAME` to the name of the dataset on the Hub, or if you're training on your own files, set the environment variable `TRAIN_DIR` to the path to your dataset.

 If you’re training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.


@@ -219,7 +219,7 @@ To monitor training progress with Weights & Biases, add the `--report_to=wandb`
 <hfoption id="prior model">

 ```bash
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \
   --dataset_name=$DATASET_NAME \
@@ -242,7 +242,7 @@ accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \
 <hfoption id="decoder model">

 ```bash
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" train_text_to_image_decoder.py \
   --dataset_name=$DATASET_NAME \
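
For context on the `DATASET_NAME`/`TRAIN_DIR` choice described above, here is a minimal, illustrative Python sketch (not the training script's actual code) of how a Hub dataset name and a local image folder can both end up as a `datasets` dataset:

```python
# Illustrative sketch only: resolve either a Hub dataset or a local image folder.
import os

from datasets import load_dataset

dataset_name = os.environ.get("DATASET_NAME")  # e.g. "lambdalabs/naruto-blip-captions"
train_dir = os.environ.get("TRAIN_DIR")        # e.g. "/path/to/your/images"

if dataset_name is not None:
    # Pull the captioned dataset straight from the Hugging Face Hub.
    dataset = load_dataset(dataset_name, split="train")
else:
    # Load local files laid out for the generic `imagefolder` loader.
    dataset = load_dataset("imagefolder", data_dir=train_dir, split="train")

print(dataset)
```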

docs/source/en/training/lora.md

Lines changed: 2 additions & 2 deletions
@@ -170,7 +170,7 @@ Aside from setting up the LoRA layers, the training script is more or less the s

 Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀

-Let's train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate our own Pokémon. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and dataset respectively. You should also specify where to save the model in `OUTPUT_DIR`, and the name of the model to save to on the Hub with `HUB_MODEL_ID`. The script creates and saves the following files to your repository:
+Let's train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate our own Naruto characters. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and dataset respectively. You should also specify where to save the model in `OUTPUT_DIR`, and the name of the model to save to on the Hub with `HUB_MODEL_ID`. The script creates and saves the following files to your repository:

 - saved model checkpoints
 - `pytorch_lora_weights.safetensors` (the trained LoRA weights)
@@ -187,7 +187,7 @@ A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM.
 export MODEL_NAME="runwayml/stable-diffusion-v1-5"
 export OUTPUT_DIR="/sddata/finetune/lora/pokemon"
 export HUB_MODEL_ID="pokemon-lora"
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
   --pretrained_model_name_or_path=$MODEL_NAME \
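
Since this guide notes that the script saves `pytorch_lora_weights.safetensors` to the `HUB_MODEL_ID` repository, here is a hedged sketch of loading that file for inference; the repo id below is a placeholder for your own Hub repository, not something this commit creates:

```python
# Illustrative sketch: load the LoRA weights that training pushes to the Hub.
# "your-username/pokemon-lora" is a placeholder for your own HUB_MODEL_ID.
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("your-username/pokemon-lora")  # picks up pytorch_lora_weights.safetensors

image = pipeline("a ninja with orange hair, anime style").images[0]
image.save("sample.png")
```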

docs/source/en/training/sdxl.md

Lines changed: 2 additions & 2 deletions
@@ -176,7 +176,7 @@ If you want to learn more about how the training loop works, check out the [Unde

 Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀

-Let’s train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate your own Pokémon. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with `VAE_NAME` to avoid numerical instabilities.
+Let’s train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with `VAE_NAME` to avoid numerical instabilities.

 <Tip>

@@ -187,7 +187,7 @@ To monitor training progress with Weights & Biases, add the `--report_to=wandb`
 ```bash
 export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
 export VAE_NAME="madebyollin/sdxl-vae-fp16-fix"
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch train_text_to_image_sdxl.py \
   --pretrained_model_name_or_path=$MODEL_NAME \
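
The `VAE_NAME` note above also applies at inference time; a minimal sketch of swapping in the fp16-fix VAE (standard diffusers usage, not part of this commit):

```python
# Illustrative sketch: use the fp16-fix VAE to avoid fp16 numerical issues
# with the stock SDXL VAE, mirroring the VAE_NAME choice used for training.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipeline("a ninja portrait, anime style").images[0]
```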

docs/source/en/training/text2image.md

Lines changed: 3 additions & 3 deletions
@@ -158,7 +158,7 @@ Once you've made all your changes or you're okay with the default configuration,
 <hfoptions id="training-inference">
 <hfoption id="PyTorch">

-Let's train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate your own Pokémon. Set the environment variables `MODEL_NAME` and `dataset_name` to the model and the dataset (either from the Hub or a local path). If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.
+Let's train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters. Set the environment variables `MODEL_NAME` and `dataset_name` to the model and the dataset (either from the Hub or a local path). If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.

 <Tip>

@@ -168,7 +168,7 @@ To train on a local dataset, set the `TRAIN_DIR` and `OUTPUT_DIR` environment va

 ```bash
 export MODEL_NAME="runwayml/stable-diffusion-v1-5"
-export dataset_name="lambdalabs/pokemon-blip-captions"
+export dataset_name="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" train_text_to_image.py \
   --pretrained_model_name_or_path=$MODEL_NAME \
@@ -202,7 +202,7 @@ To train on a local dataset, set the `TRAIN_DIR` and `OUTPUT_DIR` environment va

 ```bash
 export MODEL_NAME="runwayml/stable-diffusion-v1-5"
-export dataset_name="lambdalabs/pokemon-blip-captions"
+export dataset_name="lambdalabs/naruto-blip-captions"

 python train_text_to_image_flax.py \
   --pretrained_model_name_or_path=$MODEL_NAME \
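
A quick way to confirm that the replacement dataset exposes the `image`/`text` columns the training scripts expect (a sketch using the `datasets` library, not part of the commit):

```python
# Illustrative sketch: inspect the dataset and its caption column.
from datasets import load_dataset

dataset = load_dataset("lambdalabs/naruto-blip-captions", split="train")
print(dataset)             # expect features: image, text
print(dataset[0]["text"])  # one example BLIP caption
```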

docs/source/en/training/wuerstchen.md

Lines changed: 2 additions & 2 deletions
@@ -131,7 +131,7 @@ If you want to learn more about how the training loop works, check out the [Unde

 Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀

-Set the `DATASET_NAME` environment variable to the dataset name from the Hub. This guide uses the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset, but you can create and train on your own datasets as well (see the [Create a dataset for training](create_dataset) guide).
+Set the `DATASET_NAME` environment variable to the dataset name from the Hub. This guide uses the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset, but you can create and train on your own datasets as well (see the [Create a dataset for training](create_dataset) guide).

 <Tip>

@@ -140,7 +140,7 @@ To monitor training progress with Weights & Biases, add the `--report_to=wandb`
 </Tip>

 ```bash
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch train_text_to_image_prior.py \
   --mixed_precision="fp16" \
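
For the "create and train on your own datasets" path mentioned above, a hedged sketch following the generic `imagefolder` + `push_to_hub` pattern (paths and the repo id are placeholders, not anything defined by this commit):

```python
# Illustrative sketch: turn a local captioned image folder into a Hub dataset
# that can then be referenced through DATASET_NAME. Paths/repo id are placeholders.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="/path/to/your/images", split="train")
dataset.push_to_hub("your-username/your-captioned-dataset")
```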

docs/source/ko/training/lora.md

Lines changed: 2 additions & 2 deletions
@@ -49,15 +49,15 @@ huggingface-cli login

 ### Training[[dreambooth-training]]

-Let's fine-tune [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate your own Pokémon.
+Let's fine-tune [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters.

 To get started, make sure the `MODEL_NAME` and `DATASET_NAME` environment variables are set. The `OUTPUT_DIR` and `HUB_MODEL_ID` variables are optional and specify where to save the model on the Hub.

 ```bash
 export MODEL_NAME="runwayml/stable-diffusion-v1-5"
 export OUTPUT_DIR="/sddata/finetune/lora/pokemon"
 export HUB_MODEL_ID="pokemon-lora"
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"
 ```

 There are a few flags to be aware of before you start training.

docs/source/ko/training/text2image.md

Lines changed: 3 additions & 3 deletions
@@ -73,12 +73,12 @@ xFormers is not available for Flax.

 <frameworkcontent>
 <pt>
-Run the [PyTorch training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) to fine-tune on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset as follows:
+Run the [PyTorch training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) to fine-tune on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset as follows:


 ```bash
 export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export dataset_name="lambdalabs/pokemon-blip-captions"
+export dataset_name="lambdalabs/naruto-blip-captions"

 accelerate launch train_text_to_image.py \
   --pretrained_model_name_or_path=$MODEL_NAME \
@@ -136,7 +136,7 @@ pip install -U -r requirements_flax.txt

 ```bash
 export MODEL_NAME="runwayml/stable-diffusion-v1-5"
-export dataset_name="lambdalabs/pokemon-blip-captions"
+export dataset_name="lambdalabs/naruto-blip-captions"

 python train_text_to_image_flax.py \
   --pretrained_model_name_or_path=$MODEL_NAME \

examples/consistency_distillation/README_sdxl.md

Lines changed: 2 additions & 2 deletions
@@ -115,11 +115,11 @@ accelerate launch train_lcm_distill_lora_sdxl_wds.py \

 We provide another version for LCM LoRA SDXL that follows best practices of `peft` and leverages the `datasets` library for quick experimentation. The script doesn't load two UNets unlike `train_lcm_distill_lora_sdxl_wds.py` which reduces the memory requirements quite a bit.

-Below is an example training command that trains an LCM LoRA on the [Pokemons dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions):
+Below is an example training command that trains an LCM LoRA on the [Naruto dataset](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions):

 ```bash
 export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"
 export VAE_PATH="madebyollin/sdxl-vae-fp16-fix"

 accelerate launch train_lcm_distill_lora_sdxl.py \
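
Once the distilled LoRA is trained, the usual LCM inference pattern applies; a sketch follows, where the LoRA repo id is a placeholder and the few-step, low-guidance settings are the common LCM defaults rather than something this commit defines:

```python
# Illustrative sketch: few-step inference with a distilled LCM LoRA for SDXL.
import torch
from diffusers import LCMScheduler, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("your-username/your-lcm-lora-sdxl")  # placeholder repo id

image = pipe("a ninja portrait, anime style", num_inference_steps=4, guidance_scale=1.0).images[0]
```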

examples/consistency_distillation/train_lcm_distill_lora_sdxl.py

Lines changed: 1 addition & 1 deletion
@@ -71,7 +71,7 @@
 logger = get_logger(__name__)

 DATASET_NAME_MAPPING = {
-    "lambdalabs/pokemon-blip-captions": ("image", "text"),
+    "lambdalabs/naruto-blip-captions": ("image", "text"),
 }
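
For readers unfamiliar with this dictionary, a rough sketch of how such a mapping is typically consumed (the script's actual argument handling may differ): it supplies default image and caption column names for known datasets, which can otherwise be set explicitly.

```python
# Illustrative sketch: resolve default column names for a known dataset,
# falling back to "image"/"text" when the dataset is not in the mapping.
columns = DATASET_NAME_MAPPING.get("lambdalabs/naruto-blip-captions")
image_column = columns[0] if columns is not None else "image"
caption_column = columns[1] if columns is not None else "text"
```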
examples/kandinsky2_2/text_to_image/README.md

Lines changed: 6 additions & 6 deletions
@@ -57,7 +57,7 @@ To disable wandb logging, remove the `--report_to=="wandb"` and `--validation_pr

 <!-- accelerate_snippet_start -->
 ```bash
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" train_text_to_image_decoder.py \
   --dataset_name=$DATASET_NAME \
@@ -139,7 +139,7 @@ You can fine-tune the Kandinsky prior model with `train_text_to_image_prior.py`

 <!-- accelerate_snippet_start -->
 ```bash
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \
   --dataset_name=$DATASET_NAME \
@@ -183,7 +183,7 @@ If you want to use a fine-tuned decoder checkpoint along with your fine-tuned pr
 for running distributed training with `accelerate`. Here is an example command:

 ```bash
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" --multi_gpu train_text_to_image_decoder.py \
   --dataset_name=$DATASET_NAME \
@@ -227,13 +227,13 @@ on consumer GPUs like Tesla T4, Tesla V100.

 ### Training

-First, you need to set up your development environment as explained in the [installation](#installing-the-dependencies). Make sure to set the `MODEL_NAME` and `DATASET_NAME` environment variables. Here, we will use [Kandinsky 2.2](https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder) and the [Pokemons dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
+First, you need to set up your development environment as explained in the [installation](#installing-the-dependencies). Make sure to set the `MODEL_NAME` and `DATASET_NAME` environment variables. Here, we will use [Kandinsky 2.2](https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder) and the [Naruto dataset](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions).


 #### Train decoder

 ```bash
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" train_text_to_image_decoder_lora.py \
   --dataset_name=$DATASET_NAME --caption_column="text" \
@@ -252,7 +252,7 @@ accelerate launch --mixed_precision="fp16" train_text_to_image_decoder_lora.py \
 #### Train prior

 ```bash
-export DATASET_NAME="lambdalabs/pokemon-blip-captions"
+export DATASET_NAME="lambdalabs/naruto-blip-captions"

 accelerate launch --mixed_precision="fp16" train_text_to_image_prior_lora.py \
   --dataset_name=$DATASET_NAME --caption_column="text" \
