
[Tests] parallelize #3078


Merged: 8 commits, Apr 13, 2023
33 changes: 23 additions & 10 deletions .github/workflows/pr_tests.yml
@@ -21,22 +21,27 @@ jobs:
fail-fast: false
matrix:
config:
- name: Fast PyTorch CPU tests on Ubuntu
framework: pytorch
- name: Fast PyTorch Pipeline CPU tests
framework: pytorch_pipelines
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu
- name: Fast Flax CPU tests on Ubuntu
report: torch_cpu_pipelines
- name: Fast PyTorch Models & Schedulers CPU tests
framework: pytorch_models
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
report: torch_cpu_models_schedulers
- name: Fast Flax CPU tests
framework: flax
runner: docker-cpu
image: diffusers/diffusers-flax-cpu
report: flax_cpu
- name: Fast ONNXRuntime CPU tests on Ubuntu
- name: Fast ONNXRuntime CPU tests
framework: onnxruntime
runner: docker-cpu
image: diffusers/diffusers-onnxruntime-cpu
report: onnx_cpu
- name: PyTorch Example CPU tests on Ubuntu
- name: PyTorch Example CPU tests
framework: pytorch_examples
runner: docker-cpu
image: diffusers/diffusers-pytorch-cpu
@@ -71,21 +76,29 @@ jobs:
run: |
python utils/print_env.py

- name: Run fast PyTorch CPU tests
if: ${{ matrix.config.framework == 'pytorch' }}
- name: Run fast PyTorch Pipeline CPU tests
if: ${{ matrix.config.framework == 'pytorch_pipelines' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
tests/pipelines

- name: Run fast PyTorch Model Scheduler CPU tests
if: ${{ matrix.config.framework == 'pytorch_models' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_${{ matrix.config.report }} \
tests/models tests/schedulers tests/others
Comment on lines +79 to +93
Member:

These two should now run in parallel, right?


- name: Run fast Flax TPU tests
if: ${{ matrix.config.framework == 'flax' }}
run: |
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Flax" \
--make-reports=tests_${{ matrix.config.report }} \
tests/
tests

- name: Run fast ONNXRuntime CPU tests
if: ${{ matrix.config.framework == 'onnxruntime' }}
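For anyone reproducing the split locally, here is a rough equivalent of the two new PyTorch CPU steps driven through pytest's Python entry point. This is a sketch, not part of the PR: it assumes pytest and pytest-xdist are installed and that it runs from the repository root.

import pytest

# Pipeline tests, mirroring the "pytorch_pipelines" matrix entry; swap the last
# argument for "tests/models", "tests/schedulers", "tests/others" to mirror the
# "pytorch_models" entry instead.
pytest.main([
    "-n", "2",                       # two xdist workers, as in the workflow
    "--max-worker-restart=0",
    "--dist=loadfile",               # all tests from one file stay on the same worker
    "-k", "not Flax and not Onnx",
    "tests/pipelines",
])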
File renamed without changes.
File renamed without changes.
File renamed without changes.
2 changes: 1 addition & 1 deletion tests/models/test_models_unet_1d.py
@@ -20,7 +20,7 @@
from diffusers import UNet1DModel
from diffusers.utils import floats_tensor, slow, torch_device

from ..test_modeling_common import ModelTesterMixin
from .test_modeling_common import ModelTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
2 changes: 1 addition & 1 deletion tests/models/test_models_unet_2d.py
@@ -22,7 +22,7 @@
from diffusers import UNet2DModel
from diffusers.utils import floats_tensor, logging, slow, torch_all_close, torch_device

from ..test_modeling_common import ModelTesterMixin
from .test_modeling_common import ModelTesterMixin


logger = logging.get_logger(__name__)
2 changes: 1 addition & 1 deletion tests/models/test_models_unet_2d_condition.py
@@ -34,7 +34,7 @@
)
from diffusers.utils.import_utils import is_xformers_available

from ..test_modeling_common import ModelTesterMixin
from .test_modeling_common import ModelTesterMixin


logger = logging.get_logger(__name__)
2 changes: 1 addition & 1 deletion tests/models/test_models_unet_3d_condition.py
@@ -30,7 +30,7 @@
)
from diffusers.utils.import_utils import is_xformers_available

from ..test_modeling_common import ModelTesterMixin
from .test_modeling_common import ModelTesterMixin


logger = logging.get_logger(__name__)
2 changes: 1 addition & 1 deletion tests/models/test_models_vae.py
@@ -22,7 +22,7 @@
from diffusers import AutoencoderKL
from diffusers.utils import floats_tensor, load_hf_numpy, require_torch_gpu, slow, torch_all_close, torch_device

from ..test_modeling_common import ModelTesterMixin
from .test_modeling_common import ModelTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
2 changes: 1 addition & 1 deletion tests/models/test_models_vae_flax.py
@@ -4,7 +4,7 @@
from diffusers.utils import is_flax_available
from diffusers.utils.testing_utils import require_flax

from ..test_modeling_common_flax import FlaxModelTesterMixin
from .test_modeling_common_flax import FlaxModelTesterMixin


if is_flax_available():
2 changes: 1 addition & 1 deletion tests/models/test_models_vq.py
@@ -20,7 +20,7 @@
from diffusers import VQModel
from diffusers.utils import floats_tensor, torch_device

from ..test_modeling_common import ModelTesterMixin
from .test_modeling_common import ModelTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
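The one-dot imports above follow from the shared helper moving into the same sub-package as the tests that use it: the change from "from ..test_modeling_common" to "from .test_modeling_common" implies test_modeling_common.py now lives in tests/models/ rather than one level up. A minimal standard-library sketch of how the two forms resolve (package names taken from the file paths in this diff); the same pattern shows up in the pipeline tests further down, where three leading dots become two:

import importlib.util

# After the move: a single leading dot resolves inside tests.models itself.
print(importlib.util.resolve_name(".test_modeling_common", package="tests.models"))
# -> tests.models.test_modeling_common

# Before the move: two leading dots resolved against the parent package, tests.
print(importlib.util.resolve_name("..test_modeling_common", package="tests.models"))
# -> tests.test_modeling_common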
File renamed without changes.
12 changes: 12 additions & 0 deletions tests/test_config.py → tests/others/test_config.py
@@ -141,6 +141,8 @@ def test_save_load(self):

def test_load_ddim_from_pndm(self):
logger = logging.get_logger("diffusers.configuration_utils")
# 30 for warning
logger.setLevel(30)

with CaptureLogger(logger) as cap_logger:
ddim = DDIMScheduler.from_pretrained(
@@ -153,6 +155,8 @@ def test_load_ddim_from_pndm(self):

def test_load_euler_from_pndm(self):
logger = logging.get_logger("diffusers.configuration_utils")
# 30 for warning
logger.setLevel(30)

with CaptureLogger(logger) as cap_logger:
euler = EulerDiscreteScheduler.from_pretrained(
@@ -165,6 +169,8 @@ def test_load_euler_from_pndm(self):

def test_load_euler_ancestral_from_pndm(self):
logger = logging.get_logger("diffusers.configuration_utils")
# 30 for warning
logger.setLevel(30)

with CaptureLogger(logger) as cap_logger:
euler = EulerAncestralDiscreteScheduler.from_pretrained(
@@ -177,6 +183,8 @@ def test_load_euler_ancestral_from_pndm(self):

def test_load_pndm(self):
logger = logging.get_logger("diffusers.configuration_utils")
# 30 for warning
logger.setLevel(30)

with CaptureLogger(logger) as cap_logger:
pndm = PNDMScheduler.from_pretrained(
@@ -189,6 +197,8 @@ def test_load_pndm(self):

def test_overwrite_config_on_load(self):
logger = logging.get_logger("diffusers.configuration_utils")
# 30 for warning
logger.setLevel(30)

with CaptureLogger(logger) as cap_logger:
ddpm = DDPMScheduler.from_pretrained(
@@ -212,6 +222,8 @@ def test_overwrite_config_on_load(self):

def test_load_dpmsolver(self):
logger = logging.get_logger("diffusers.configuration_utils")
# 30 for warning
logger.setLevel(30)

with CaptureLogger(logger) as cap_logger:
dpm = DPMSolverMultistepScheduler.from_pretrained(
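On the "# 30 for warning" comments added above: 30 is the numeric value of logging.WARNING in the standard library, so setLevel(30) pins the logger to warning level before its output is captured, presumably to keep these tests deterministic when other tests sharing a worker have touched the same logger. A small standard-library-only illustration:

import logging

# Numeric levels: DEBUG=10, INFO=20, WARNING=30, ERROR=40, CRITICAL=50
assert logging.WARNING == 30

logger = logging.getLogger("diffusers.configuration_utils")
logger.setLevel(30)  # identical to logger.setLevel(logging.WARNING)

print(logger.isEnabledFor(logging.WARNING))  # True  -> warnings are emitted and can be captured
print(logger.isEnabledFor(logging.INFO))     # False -> info messages stay suppressed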
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
2 changes: 1 addition & 1 deletion tests/test_utils.py → tests/others/test_utils.py
@@ -167,4 +167,4 @@ def test_deprecate_stacklevel(self):
with self.assertWarns(FutureWarning) as warning:
deprecate(("deprecated_arg", self.higher_version, "This message is better!!!"), standard_warn=False)
assert str(warning.warning) == "This message is better!!!"
assert "diffusers/tests/test_utils.py" in warning.filename
assert "diffusers/tests/others/test_utils.py" in warning.filename
4 changes: 2 additions & 2 deletions tests/pipelines/altdiffusion/test_alt_diffusion.py
@@ -28,8 +28,8 @@
from diffusers.utils import slow, torch_device
from diffusers.utils.testing_utils import require_torch_gpu

from ...pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
4 changes: 2 additions & 2 deletions tests/pipelines/audioldm/test_audioldm.py
@@ -38,8 +38,8 @@
)
from diffusers.utils import slow, torch_device

from ...pipeline_params import TEXT_TO_AUDIO_BATCH_PARAMS, TEXT_TO_AUDIO_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TEXT_TO_AUDIO_BATCH_PARAMS, TEXT_TO_AUDIO_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


class AudioLDMPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
4 changes: 2 additions & 2 deletions tests/pipelines/dance_diffusion/test_dance_diffusion.py
@@ -23,8 +23,8 @@
from diffusers.utils import slow, torch_device
from diffusers.utils.testing_utils import require_torch_gpu, skip_mps

from ...pipeline_params import UNCONDITIONAL_AUDIO_GENERATION_BATCH_PARAMS, UNCONDITIONAL_AUDIO_GENERATION_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import UNCONDITIONAL_AUDIO_GENERATION_BATCH_PARAMS, UNCONDITIONAL_AUDIO_GENERATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
4 changes: 2 additions & 2 deletions tests/pipelines/ddim/test_ddim.py
@@ -21,8 +21,8 @@
from diffusers import DDIMPipeline, DDIMScheduler, UNet2DModel
from diffusers.utils.testing_utils import require_torch_gpu, slow, torch_device

from ...pipeline_params import UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS, UNCONDITIONAL_IMAGE_GENERATION_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS, UNCONDITIONAL_IMAGE_GENERATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
4 changes: 2 additions & 2 deletions tests/pipelines/dit/test_dit.py
@@ -23,11 +23,11 @@
from diffusers.utils import is_xformers_available, load_numpy, slow, torch_device
from diffusers.utils.testing_utils import require_torch_gpu

from ...pipeline_params import (
from ..pipeline_params import (
CLASS_CONDITIONED_IMAGE_GENERATION_BATCH_PARAMS,
CLASS_CONDITIONED_IMAGE_GENERATION_PARAMS,
)
from ...test_pipelines_common import PipelineTesterMixin
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
4 changes: 2 additions & 2 deletions tests/pipelines/latent_diffusion/test_latent_diffusion.py
@@ -23,8 +23,8 @@
from diffusers import AutoencoderKL, DDIMScheduler, LDMTextToImagePipeline, UNet2DConditionModel
from diffusers.utils.testing_utils import load_numpy, nightly, require_torch_gpu, slow, torch_device

from ...pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
4 changes: 2 additions & 2 deletions tests/pipelines/paint_by_example/test_paint_by_example.py
@@ -27,8 +27,8 @@
from diffusers.utils import floats_tensor, load_image, slow, torch_device
from diffusers.utils.testing_utils import require_torch_gpu

from ...pipeline_params import IMAGE_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, IMAGE_GUIDED_IMAGE_INPAINTING_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import IMAGE_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, IMAGE_GUIDED_IMAGE_INPAINTING_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
File renamed without changes.
4 changes: 2 additions & 2 deletions tests/pipelines/repaint/test_repaint.py
@@ -22,8 +22,8 @@
from diffusers import RePaintPipeline, RePaintScheduler, UNet2DModel
from diffusers.utils.testing_utils import load_image, load_numpy, nightly, require_torch_gpu, skip_mps, torch_device

from ...pipeline_params import IMAGE_INPAINTING_BATCH_PARAMS, IMAGE_INPAINTING_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import IMAGE_INPAINTING_BATCH_PARAMS, IMAGE_INPAINTING_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
@@ -24,8 +24,8 @@
from diffusers.utils import require_torch_gpu, skip_mps, slow, torch_device
from diffusers.utils.testing_utils import require_note_seq, require_onnxruntime

from ...pipeline_params import TOKENS_TO_AUDIO_GENERATION_BATCH_PARAMS, TOKENS_TO_AUDIO_GENERATION_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TOKENS_TO_AUDIO_GENERATION_BATCH_PARAMS, TOKENS_TO_AUDIO_GENERATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
4 changes: 2 additions & 2 deletions tests/pipelines/stable_diffusion/test_cycle_diffusion.py
@@ -25,8 +25,8 @@
from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
from diffusers.utils.testing_utils import require_torch_gpu, skip_mps

from ...pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
@@ -29,7 +29,7 @@
)
from diffusers.utils.testing_utils import is_onnx_available, nightly, require_onnxruntime, require_torch_gpu

from ...test_pipelines_onnx_common import OnnxPipelineTesterMixin
from ..test_pipelines_onnx_common import OnnxPipelineTesterMixin


if is_onnx_available():
@@ -35,7 +35,7 @@
require_torch_gpu,
)

from ...test_pipelines_onnx_common import OnnxPipelineTesterMixin
from ..test_pipelines_onnx_common import OnnxPipelineTesterMixin


if is_onnx_available():
@@ -26,7 +26,7 @@
require_torch_gpu,
)

from ...test_pipelines_onnx_common import OnnxPipelineTesterMixin
from ..test_pipelines_onnx_common import OnnxPipelineTesterMixin


if is_onnx_available():
@@ -36,7 +36,7 @@
require_torch_gpu,
)

from ...test_pipelines_onnx_common import OnnxPipelineTesterMixin
from ..test_pipelines_onnx_common import OnnxPipelineTesterMixin


if is_onnx_available():
4 changes: 2 additions & 2 deletions tests/pipelines/stable_diffusion/test_stable_diffusion.py
@@ -40,8 +40,8 @@
from diffusers.utils.testing_utils import CaptureLogger, require_torch_gpu

from ...models.test_models_unet_2d_condition import create_lora_layers
from ...pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
@@ -33,8 +33,8 @@
from diffusers.utils.import_utils import is_xformers_available
from diffusers.utils.testing_utils import require_torch_gpu

from ...pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


class StableDiffusionControlNetPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
@@ -32,8 +32,8 @@
from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, slow, torch_device
from diffusers.utils.testing_utils import require_torch_gpu

from ...pipeline_params import IMAGE_VARIATION_BATCH_PARAMS, IMAGE_VARIATION_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import IMAGE_VARIATION_BATCH_PARAMS, IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
@@ -34,8 +34,8 @@
from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, slow, torch_device
from diffusers.utils.testing_utils import require_torch_gpu, skip_mps

from ...pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
@@ -34,8 +34,8 @@
from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, slow, torch_device
from diffusers.utils.testing_utils import require_torch_gpu

from ...pipeline_params import TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, TEXT_GUIDED_IMAGE_INPAINTING_PARAMS
from ...test_pipelines_common import PipelineTesterMixin
from ..pipeline_params import TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, TEXT_GUIDED_IMAGE_INPAINTING_PARAMS
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False