diff --git a/docs/source/en/training/controlnet.mdx b/docs/source/en/training/controlnet.mdx
index 94e3d969b80a..1c91298477c7 100644
--- a/docs/source/en/training/controlnet.mdx
+++ b/docs/source/en/training/controlnet.mdx
@@ -33,7 +33,12 @@ cd diffusers
 pip install -e .
 ```
 
-Then navigate into the example folder and run:
+Then navigate into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/controlnet):
+```bash
+cd examples/controlnet
+```
+
+Now run:
 ```bash
 pip install -r requirements.txt
 ```
diff --git a/docs/source/en/training/custom_diffusion.mdx b/docs/source/en/training/custom_diffusion.mdx
index 08604f101ea2..ee8fb19bd18c 100644
--- a/docs/source/en/training/custom_diffusion.mdx
+++ b/docs/source/en/training/custom_diffusion.mdx
@@ -33,7 +33,13 @@ cd diffusers
 pip install -e .
 ```
 
-Then cd in the example folder and run
+Then cd into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion):
+
+```bash
+cd examples/custom_diffusion
+```
+
+Now run:
 
 ```bash
 pip install -r requirements.txt
diff --git a/docs/source/en/training/instructpix2pix.mdx b/docs/source/en/training/instructpix2pix.mdx
index ff34ec335656..6b6d4d908673 100644
--- a/docs/source/en/training/instructpix2pix.mdx
+++ b/docs/source/en/training/instructpix2pix.mdx
@@ -24,7 +24,7 @@ The output is an "edited" image that reflects the edit instruction applied on th
     <img src="..." alt="instructpix2pix-output"/>
 </div>
 
-The `train_instruct_pix2pix.py` script shows how to implement the training procedure and adapt it for Stable Diffusion.
+The `train_instruct_pix2pix.py` script (you can find it [here](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py)) shows how to implement the training procedure and adapt it for Stable Diffusion.
 
 ***Disclaimer: Even though `train_instruct_pix2pix.py` implements the InstructPix2Pix training procedure while being faithful to the [original implementation](https://github.com/timothybrooks/instruct-pix2pix) we have only tested it on a [small-scale dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples). This can impact the end results. For better results, we recommend longer training runs with a larger dataset. [Here](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) you can find a large dataset for InstructPix2Pix training.***
 
@@ -44,7 +44,12 @@ cd diffusers
 pip install -e .
 ```
 
-Then cd in the example folder and run
+Then cd into the [example folder](https://github.com/huggingface/diffusers/tree/main/examples/instruct_pix2pix):
+```bash
+cd examples/instruct_pix2pix
+```
+
+Now run:
 ```bash
 pip install -r requirements.txt
 ```
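Taken together, these changes give all three examples the same setup flow. For reference, here is a minimal sketch of the resulting sequence for the ControlNet example; the clone step is assumed from the docs' earlier setup instructions (the diff only shows `cd diffusers` and `pip install -e .` as context), and the other two examples differ only in the folder name:

```bash
# Install diffusers from source (assumed prerequisite from the docs' earlier setup step)
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .

# Navigate into the example folder, as the edited instructions describe
cd examples/controlnet

# Install the example-specific requirements
pip install -r requirements.txt
```

The same pattern applies with `examples/custom_diffusion` or `examples/instruct_pix2pix` in place of `examples/controlnet`.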