
Commit 5ef3b5c

Unifies tabs/spaces in README.
1 parent cd084f5 commit 5ef3b5c

1 file changed: +20 -20 lines changed

examples/controlnet/README.md

Lines changed: 20 additions & 20 deletions
````diff
@@ -284,9 +284,9 @@ TPU_TYPE=v4-8
 VM_NAME=hg_flax
 
 gcloud alpha compute tpus tpu-vm create $VM_NAME \
-	--zone $ZONE \
-	--accelerator-type $TPU_TYPE \
-	--version tpu-vm-v4-base
+    --zone $ZONE \
+    --accelerator-type $TPU_TYPE \
+    --version tpu-vm-v4-base
 
 gcloud alpha compute tpus tpu-vm ssh $VM_NAME --zone $ZONE -- \
 ```
````
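This first hunk only reflows the indentation of the TPU VM setup snippet. For completeness, the same `gcloud alpha compute tpus tpu-vm` command group also covers listing and tearing down the VM once training is done; a minimal sketch reusing the snippet's variables (these cleanup commands are standard `gcloud`, not part of this commit):

```bash
# Sketch: inspect and clean up the TPU VM created in the hunk above.
# Reuses $VM_NAME and $ZONE from the README snippet.
gcloud alpha compute tpus tpu-vm list --zone $ZONE
gcloud alpha compute tpus tpu-vm delete $VM_NAME --zone $ZONE
```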
````diff
@@ -380,20 +380,20 @@ export OUTPUT_DIR="runs/uncanny-faces-{timestamp}"
 export HUB_MODEL_ID="controlnet-uncanny-faces"
 
 python3 train_controlnet_flax.py \
-	--pretrained_model_name_or_path=$MODEL_DIR \
-	--output_dir=$OUTPUT_DIR \
-	--dataset_name=multimodalart/facesyntheticsspigacaptioned \
-	--streaming \
-	--conditioning_image_column=spiga_seg \
-	--image_column=image \
-	--caption_column=image_caption \
-	--resolution=512 \
-	--max_train_samples 100000 \
-	--learning_rate=1e-5 \
-	--train_batch_size=1 \
-	--revision="flax" \
-	--report_to="wandb" \
-	--tracker_project_name=$HUB_MODEL_ID
+    --pretrained_model_name_or_path=$MODEL_DIR \
+    --output_dir=$OUTPUT_DIR \
+    --dataset_name=multimodalart/facesyntheticsspigacaptioned \
+    --streaming \
+    --conditioning_image_column=spiga_seg \
+    --image_column=image \
+    --caption_column=image_caption \
+    --resolution=512 \
+    --max_train_samples 100000 \
+    --learning_rate=1e-5 \
+    --train_batch_size=1 \
+    --revision="flax" \
+    --report_to="wandb" \
+    --tracker_project_name=$HUB_MODEL_ID
 ```
 
 Note, however, that the performance of the TPUs might get bottlenecked as streaming with `datasets` is not optimized for images. For ensuring maximum throughput, we encourage you to explore the following options:
````
````diff
@@ -405,14 +405,14 @@ Note, however, that the performance of the TPUs might get bottlenecked as stream
 When work with a larger dataset, you may need to run training process for a long time and it’s useful to save regular checkpoints during the process. You can use the following argument to enable intermediate checkpointing:
 
 ```bash
-	--checkpointing_steps=500
+    --checkpointing_steps=500
 ```
 This will save the trained model in subfolders of your output_dir. Subfolder names is the number of steps performed so far; for example: a checkpoint saved after 500 training steps would be saved in a subfolder named 500
 
 You can then start your training from this saved checkpoint with
 
 ```bash
-	--controlnet_model_name_or_path="./control_out/500"
+    --controlnet_model_name_or_path="./control_out/500"
 ```
 
 We support training with the Min-SNR weighting strategy proposed in [Efficient Diffusion Training via Min-SNR Weighting Strategy](https://arxiv.org/abs/2303.09556) which helps to achieve faster convergence by rebalancing the loss. To use it, one needs to set the `--snr_gamma` argument. The recommended value when using it is `5.0`.
````
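The flags touched in this hunk plug into the same `train_controlnet_flax.py` invocation shown in the previous hunk. A minimal sketch of resuming from the 500-step checkpoint with Min-SNR weighting enabled, combining only flags quoted in the README (paths and values are illustrative, not part of this commit):

```bash
# Sketch: resume training from an intermediate checkpoint with Min-SNR weighting.
# $MODEL_DIR and $OUTPUT_DIR as exported earlier in the README;
# "./control_out/500" is the example checkpoint subfolder named after its step count.
python3 train_controlnet_flax.py \
  --pretrained_model_name_or_path=$MODEL_DIR \
  --output_dir=$OUTPUT_DIR \
  --checkpointing_steps=500 \
  --controlnet_model_name_or_path="./control_out/500" \
  --snr_gamma=5.0
```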
````diff
@@ -422,7 +422,7 @@ We also support gradient accumulation - it is a technique that lets you use a bi
 You can **profile your code** with:
 
 ```bash
-	--profile_steps==5
+    --profile_steps==5
 ```
 
 Refer to the [JAX documentation on profiling](https://jax.readthedocs.io/en/latest/profiling.html). To inspect the profile trace, you'll have to install and start Tensorboard with the profile plugin:
````
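The context line above ends by pointing at TensorBoard's profile plugin. For reference, "install and start Tensorboard with the profile plugin" typically looks like the following; a minimal sketch in which the package names are an assumption, not quoted from this README:

```bash
# Sketch: inspect a JAX profile trace in TensorBoard (assumed package names).
pip install tensorboard tensorboard-plugin-profile
tensorboard --logdir runs/   # OUTPUT_DIR in the README lives under runs/
```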
