
[Tests] better determinism #3374

Merged — 82 commits, merged on May 11, 2023

Commits (changes from all commits)
50ae297
enable deterministic pytorch and cuda operations.
sayakpaul May 9, 2023
44b0ad6
disable manual seeding.
sayakpaul May 9, 2023
1d88907
make style && make quality for unet_2d tests.
sayakpaul May 9, 2023
9ef07c6
enable determinism for the unet2dconditional model.
sayakpaul May 9, 2023
ba8f9c8
add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
sayakpaul May 9, 2023
56ee5d0
relax tolerance (very weird issue, though).
sayakpaul May 9, 2023
8b1e927
revert to torch manual_seed() where needed.
sayakpaul May 9, 2023
a57abd9
relax more tolerance.
sayakpaul May 9, 2023
30ee9e1
better placement of the cuda variable and relax more tolerance.
sayakpaul May 9, 2023
a1fc9fa
Merge branch 'main' into tests/fix-determinism
sayakpaul May 10, 2023
1684c11
enable determinism for 3d condition model.
sayakpaul May 10, 2023
df6c0ad
relax tolerance.
sayakpaul May 10, 2023
1760fbc
add: determinism to alt_diffusion.
sayakpaul May 10, 2023
d709b19
relax tolerance for alt diffusion.
sayakpaul May 10, 2023
9f40ef1
dance diffusion.
sayakpaul May 10, 2023
ae884b7
dance diffusion is flaky.
sayakpaul May 10, 2023
7738519
test_dict_tuple_outputs_equivalent edit.
sayakpaul May 10, 2023
ba3a893
fix two more tests.
sayakpaul May 10, 2023
a7dfbea
fix more ddim tests.
sayakpaul May 10, 2023
4142669
fix: argument.
sayakpaul May 10, 2023
dc564f7
change to diff in place of difference.
sayakpaul May 10, 2023
4ca382d
fix: test_save_load call.
sayakpaul May 10, 2023
0789933
test_save_load_float16 call.
sayakpaul May 10, 2023
f120d7a
fix: expected_max_diff
sayakpaul May 10, 2023
202d76d
fix: paint by example.
sayakpaul May 10, 2023
81e287a
relax tolerance.
sayakpaul May 10, 2023
6f9a6f0
add determinism to 1d unet model.
sayakpaul May 10, 2023
6a19ce3
torch 2.0 regressions seem to be brutal
sayakpaul May 10, 2023
71b0782
determinism to vae.
sayakpaul May 11, 2023
ce3d25f
add reason to skipping.
sayakpaul May 11, 2023
061a179
up tolerance.
sayakpaul May 11, 2023
438353c
determinism to vq.
sayakpaul May 11, 2023
063a5b7
determinism to cuda.
sayakpaul May 11, 2023
864e2bc
determinism to the generic test pipeline file.
sayakpaul May 11, 2023
a344861
refactor general pipelines testing a bit.
sayakpaul May 11, 2023
37fb81b
determinism to alt diffusion i2i
sayakpaul May 11, 2023
46495f9
up tolerance for alt diff i2i and audio diff
sayakpaul May 11, 2023
5df5445
up tolerance.
sayakpaul May 11, 2023
5c700aa
determinism to audioldm
sayakpaul May 11, 2023
288c2cf
increase tolerance for audioldm lms.
sayakpaul May 11, 2023
440f2ae
increase tolerance for paint by paint.
sayakpaul May 11, 2023
5cd316d
increase tolerance for repaint.
sayakpaul May 11, 2023
21b8f7a
determinism to cycle diffusion and sd 1.
sayakpaul May 11, 2023
99269f0
relax tol for cycle diffusion 🚲
sayakpaul May 11, 2023
9f2616c
relax tol for sd 1.0
sayakpaul May 11, 2023
6538392
relax tol for controlnet.
sayakpaul May 11, 2023
9f47481
determinism to img var.
sayakpaul May 11, 2023
306a9ce
relax tol for img variation.
sayakpaul May 11, 2023
0a863bc
tolerance to i2i sd
sayakpaul May 11, 2023
1c89025
make style
sayakpaul May 11, 2023
6e5e518
determinism to inpaint.
sayakpaul May 11, 2023
47c583a
relax tolerance for inpaiting.
sayakpaul May 11, 2023
8b9d5b8
determinism for inpainting legacy
sayakpaul May 11, 2023
a6a6532
relax tolerance.
sayakpaul May 11, 2023
89dd26b
determinism to instruct pix2pix
sayakpaul May 11, 2023
acad10f
determinism to model editing.
sayakpaul May 11, 2023
3176160
model editing tolerance.
sayakpaul May 11, 2023
221f0eb
panorama determinism
sayakpaul May 11, 2023
edd0837
determinism to pix2pix zero.
sayakpaul May 11, 2023
a323939
determinism to sag.
sayakpaul May 11, 2023
fa50f12
sd 2. determinism
sayakpaul May 11, 2023
0080889
sd. tolerance
sayakpaul May 11, 2023
19fce17
disallow tf32 matmul.
sayakpaul May 11, 2023
70d5de0
relax tolerance is all you need.
sayakpaul May 11, 2023
74d5bae
make style and determinism to sd 2 depth
sayakpaul May 11, 2023
6c56f09
relax tolerance for depth.
sayakpaul May 11, 2023
5cd391a
tolerance to diffedit.
sayakpaul May 11, 2023
1b44420
tolerance to sd 2 inpaint.
sayakpaul May 11, 2023
12ec5c8
up tolerance.
sayakpaul May 11, 2023
25525e3
determinism in upscaling.
sayakpaul May 11, 2023
d98e296
tolerance in upscaler.
sayakpaul May 11, 2023
06f94bd
more tolerance relaxation.
sayakpaul May 11, 2023
b78dee6
determinism to v pred.
sayakpaul May 11, 2023
9805f15
up tol for v_pred
sayakpaul May 11, 2023
2db2296
unclip determinism
sayakpaul May 11, 2023
8008687
determinism to unclip img2img
sayakpaul May 11, 2023
05f52b2
determinism to text to video.
sayakpaul May 11, 2023
db9eef6
determinism to last set of tests
sayakpaul May 11, 2023
05612f0
up tol.
sayakpaul May 11, 2023
08320a3
vq cumsum doesn't have a deterministic kernel
sayakpaul May 11, 2023
8c09cf0
relax tol
sayakpaul May 11, 2023
3afc0c0
relax tol
sayakpaul May 11, 2023
3 changes: 3 additions & 0 deletions .github/workflows/push_tests.yml
@@ -72,6 +72,9 @@ jobs:
if: ${{ matrix.config.framework == 'pytorch' }}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8

run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
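For reference, a minimal sketch (not part of the diff itself) of the determinism setup these tests rely on; the environment variable needs to be in place before cuBLAS is initialized, which is presumably why the workflow exports it rather than setting it inside the test code:

import os

# cuBLAS needs a workspace configuration to behave deterministically on CUDA; see
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":16:8")

import torch

# Mirror what the test modules in this PR do at import time: disallow TF32
# matmuls and require deterministic kernels.
torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)
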
4 changes: 2 additions & 2 deletions tests/models/test_modeling_common.py
@@ -268,7 +268,7 @@ def test_from_save_pretrained_dtype(self):
new_model = self.model_class.from_pretrained(tmpdirname, low_cpu_mem_usage=False, torch_dtype=dtype)
assert new_model.dtype == dtype

def test_determinism(self):
def test_determinism(self, expected_max_diff=1e-5):
init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
model = self.model_class(**init_dict)
model.to(torch_device)
@@ -288,7 +288,7 @@ def test_determinism(self):
out_1 = out_1[~np.isnan(out_1)]
out_2 = out_2[~np.isnan(out_2)]
max_diff = np.amax(np.abs(out_1 - out_2))
self.assertLessEqual(max_diff, 1e-5)
self.assertLessEqual(max_diff, expected_max_diff)

def test_output(self):
init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
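Parameterizing the tolerance lets individual test classes relax the bound without duplicating the check. A hypothetical subclass (MyModelTests is illustrative, not a class from this PR, and assumes the usual model_class / prepare_init_args_and_inputs_for_common plumbing is in place) could look like:

import unittest

from .test_modeling_common import ModelTesterMixin


class MyModelTests(ModelTesterMixin, unittest.TestCase):
    # Hypothetical override: reuse the shared determinism check with a looser
    # bound, mirroring the expected_max_diff overrides used elsewhere in this PR.
    def test_determinism(self):
        super().test_determinism(expected_max_diff=3e-3)
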
2 changes: 1 addition & 1 deletion tests/models/test_models_unet_1d.py
@@ -152,7 +152,7 @@ def test_unet_1d_maestro(self):
output_sum = output.abs().sum()
output_max = output.abs().max()

assert (output_sum - 224.0896).abs() < 4e-2
assert (output_sum - 224.0896).abs() < 0.5
Contributor comment: totally fine for me!

assert (output_max - 0.0607).abs() < 4e-4


11 changes: 2 additions & 9 deletions tests/models/test_models_unet_2d.py
@@ -27,6 +27,7 @@

logger = logging.get_logger(__name__)
torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


class Unet2DModelTests(ModelTesterMixin, unittest.TestCase):
@@ -246,10 +247,6 @@ def test_output_pretrained_ve_mid(self):
model = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256")
model.to(torch_device)

torch.manual_seed(0)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(0)

Member Author comment on lines -249 to -252: Not needed with torch.use_deterministic_algorithms(True).

batch_size = 4
num_channels = 3
sizes = (256, 256)
@@ -262,7 +259,7 @@

output_slice = output[0, -3:, -3:, -1].flatten().cpu()
# fmt: off
expected_output_slice = torch.tensor([-4836.2231, -6487.1387, -3816.7969, -7964.9253, -10966.2842, -20043.6016, 8137.0571, 2340.3499, 544.6114])
expected_output_slice = torch.tensor([-4842.8691, -6499.6631, -3800.1953, -7978.2686, -10980.7129, -20028.8535, 8148.2822, 2342.2905, 567.7608])
Member Author comment: With PyTorch 2.0, this had to be changed.

# fmt: on

self.assertTrue(torch_all_close(output_slice, expected_output_slice, rtol=1e-2))
@@ -271,10 +268,6 @@ def test_output_pretrained_ve_large(self):
model = UNet2DModel.from_pretrained("fusing/ncsnpp-ffhq-ve-dummy-update")
model.to(torch_device)

torch.manual_seed(0)
Contributor comment: nice clean up!

if torch.cuda.is_available():
torch.cuda.manual_seed_all(0)

batch_size = 4
num_channels = 3
sizes = (32, 32)
13 changes: 7 additions & 6 deletions tests/models/test_models_unet_2d_condition.py
@@ -39,6 +39,7 @@

logger = logging.get_logger(__name__)
torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


def create_lora_layers(model, mock_weights: bool = True):
@@ -442,8 +443,8 @@ def test_lora_processors(self):
sample3 = model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample
sample4 = model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample

assert (sample1 - sample2).abs().max() < 1e-4
assert (sample3 - sample4).abs().max() < 1e-4
assert (sample1 - sample2).abs().max() < 3e-3
assert (sample3 - sample4).abs().max() < 3e-3
Member Author comment on lines +446 to +447: Explained in the PR description why I had to relax the tolerance.


# sample 2 and sample 3 should be different
assert (sample2 - sample3).abs().max() > 1e-4
@@ -587,7 +588,7 @@ def test_lora_on_off(self):
new_sample = model(**inputs_dict).sample

assert (sample - new_sample).abs().max() < 1e-4
assert (sample - old_sample).abs().max() < 1e-4
assert (sample - old_sample).abs().max() < 3e-3

@unittest.skipIf(
torch_device != "cuda" or not is_xformers_available(),
@@ -642,7 +643,7 @@ def test_custom_diffusion_processors(self):
with torch.no_grad():
sample2 = model(**inputs_dict).sample

assert (sample1 - sample2).abs().max() < 1e-4
assert (sample1 - sample2).abs().max() < 3e-3

def test_custom_diffusion_save_load(self):
# enable deterministic behavior for gradient checkpointing
Expand Down Expand Up @@ -677,7 +678,7 @@ def test_custom_diffusion_save_load(self):
assert (sample - new_sample).abs().max() < 1e-4

# custom diffusion and no custom diffusion should be the same
assert (sample - old_sample).abs().max() < 1e-4
assert (sample - old_sample).abs().max() < 3e-3

@unittest.skipIf(
torch_device != "cuda" or not is_xformers_available(),
@@ -957,7 +958,7 @@ def test_compvis_sd_inpaint(self, seed, timestep, expected_slice):
output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
expected_output_slice = torch.tensor(expected_slice)

assert torch_all_close(output_slice, expected_output_slice, atol=1e-3)
assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)

@parameterized.expand(
[
9 changes: 5 additions & 4 deletions tests/models/test_models_unet_3d_condition.py
@@ -35,6 +35,7 @@

logger = logging.get_logger(__name__)
torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


def create_lora_layers(model, mock_weights: bool = True):
@@ -224,11 +225,11 @@ def test_lora_processors(self):
sample3 = model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample
sample4 = model(**inputs_dict, cross_attention_kwargs={"scale": 0.5}).sample

assert (sample1 - sample2).abs().max() < 1e-4
assert (sample3 - sample4).abs().max() < 1e-4
assert (sample1 - sample2).abs().max() < 3e-3
assert (sample3 - sample4).abs().max() < 3e-3

# sample 2 and sample 3 should be different
assert (sample2 - sample3).abs().max() > 1e-4
assert (sample2 - sample3).abs().max() > 3e-3

def test_lora_save_load(self):
init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
@@ -365,7 +366,7 @@ def test_lora_on_off(self):
new_sample = model(**inputs_dict).sample

assert (sample - new_sample).abs().max() < 1e-4
assert (sample - old_sample).abs().max() < 1e-4
assert (sample - old_sample).abs().max() < 3e-3

@unittest.skipIf(
torch_device != "cuda" or not is_xformers_available(),
10 changes: 7 additions & 3 deletions tests/models/test_models_vae.py
@@ -21,11 +21,13 @@

from diffusers import AutoencoderKL
from diffusers.utils import floats_tensor, load_hf_numpy, require_torch_gpu, slow, torch_all_close, torch_device
from diffusers.utils.import_utils import is_xformers_available

from .test_modeling_common import ModelTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


class AutoencoderKLTests(ModelTesterMixin, unittest.TestCase):
@@ -225,7 +227,7 @@ def test_stable_diffusion(self, seed, expected_slice, expected_slice_mps):
output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)

assert torch_all_close(output_slice, expected_output_slice, atol=1e-3)
assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)

@parameterized.expand(
[
@@ -271,7 +273,7 @@ def test_stable_diffusion_mode(self, seed, expected_slice, expected_slice_mps):
output_slice = sample[-1, -2:, -2:, :2].flatten().float().cpu()
expected_output_slice = torch.tensor(expected_slice_mps if torch_device == "mps" else expected_slice)

assert torch_all_close(output_slice, expected_output_slice, atol=1e-3)
assert torch_all_close(output_slice, expected_output_slice, atol=3e-3)

@parameterized.expand(
[
@@ -321,6 +323,7 @@ def test_stable_diffusion_decode_fp16(self, seed, expected_slice):

@parameterized.expand([13, 16, 27])
@require_torch_gpu
@unittest.skipIf(not is_xformers_available(), reason="xformers is not required when using PyTorch 2.0.")
def test_stable_diffusion_decode_xformers_vs_2_0_fp16(self, seed):
model = self.get_sd_vae_model(fp16=True)
encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64), fp16=True)
@@ -338,6 +341,7 @@ def test_stable_diffusion_decode_xformers_vs_2_0_fp16(self, seed):

@parameterized.expand([13, 16, 37])
@require_torch_gpu
@unittest.skipIf(not is_xformers_available(), reason="xformers is not required when using PyTorch 2.0.")
def test_stable_diffusion_decode_xformers_vs_2_0(self, seed):
model = self.get_sd_vae_model()
encoding = self.get_sd_image(seed, shape=(3, 4, 64, 64))
@@ -375,5 +379,5 @@ def test_stable_diffusion_encode_sample(self, seed, expected_slice):
output_slice = sample[0, -1, -3:, -3:].flatten().cpu()
expected_output_slice = torch.tensor(expected_slice)

tolerance = 1e-3 if torch_device != "mps" else 1e-2
tolerance = 3e-3 if torch_device != "mps" else 1e-2
assert torch_all_close(output_slice, expected_output_slice, atol=tolerance)
1 change: 1 addition & 0 deletions tests/models/test_models_vq.py
@@ -24,6 +24,7 @@


torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


class VQModelTests(ModelTesterMixin, unittest.TestCase):
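Commit 08320a3 above notes that the cumsum op used here has no deterministic CUDA kernel. One possible workaround for such ops (a sketch only, not necessarily what this PR does) is PyTorch's warn-only mode:

import torch

# warn_only=True keeps deterministic kernels where they exist but only emits a
# warning, instead of raising, for ops like cumsum that have no deterministic
# CUDA implementation.
torch.use_deterministic_algorithms(True, warn_only=True)
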
4 changes: 4 additions & 0 deletions tests/others/test_ema.py
@@ -23,6 +23,10 @@
from diffusers.utils.testing_utils import skip_mps, torch_device


torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


class EMAModelTests(unittest.TestCase):
model_id = "hf-internal-testing/tiny-stable-diffusion-pipe"
batch_size = 1
7 changes: 7 additions & 0 deletions tests/pipelines/altdiffusion/test_alt_diffusion.py
@@ -33,6 +33,7 @@


torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


class AltDiffusionPipelineFastTests(PipelineLatentTesterMixin, PipelineTesterMixin, unittest.TestCase):
@@ -126,6 +127,12 @@ def get_dummy_inputs(self, device, seed=0):
}
return inputs

def test_attention_slicing_forward_pass(self):
super().test_attention_slicing_forward_pass(expected_max_diff=3e-3)

def test_inference_batch_single_identical(self):
super().test_inference_batch_single_identical(expected_max_diff=3e-3)

def test_alt_diffusion_ddim(self):
device = "cpu" # ensure determinism for the device-dependent torch.Generator

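The fast test above pins the generator device to "cpu" because torch.Generator output is device dependent. A minimal sketch of that pattern (the seed and shape here are illustrative, not taken from the tests):

import torch

# A CPU generator produces the same pseudo-random latents regardless of which
# device the model itself runs on, keeping fast-test outputs comparable.
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn((1, 4, 64, 64), generator=generator)
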
5 changes: 3 additions & 2 deletions tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py
@@ -37,6 +37,7 @@


torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


class AltDiffusionImg2ImgPipelineFastTests(unittest.TestCase):
@@ -251,7 +252,7 @@ def test_stable_diffusion_img2img_pipeline_multiple_of_8(self):
assert image.shape == (504, 760, 3)
expected_slice = np.array([0.9358, 0.9397, 0.9599, 0.9901, 1.0000, 1.0000, 0.9882, 1.0000, 1.0000])

assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2


@slow
@@ -297,4 +298,4 @@ def test_stable_diffusion_img2img_pipeline_default(self):

assert image.shape == (512, 768, 3)
# img2img is flaky across GPUs even in fp32, so using MAE here
assert np.abs(expected_image - image).max() < 1e-3
assert np.abs(expected_image - image).max() < 1e-2
1 change: 1 addition & 0 deletions tests/pipelines/audio_diffusion/test_audio_diffusion.py
@@ -34,6 +34,7 @@


torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


class PipelineFastTests(unittest.TestCase):
6 changes: 5 additions & 1 deletion tests/pipelines/audioldm/test_audioldm.py
@@ -42,6 +42,10 @@
from ..test_pipelines_common import PipelineTesterMixin


torch.backends.cuda.matmul.allow_tf32 = False
torch.use_deterministic_algorithms(True)


class AudioLDMPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
pipeline_class = AudioLDMPipeline
params = TEXT_TO_AUDIO_PARAMS
@@ -413,4 +417,4 @@ def test_audioldm_lms(self):
audio_slice = audio[27780:27790]
expected_slice = np.array([-0.2131, -0.0873, -0.0124, -0.0189, 0.0569, 0.1373, 0.1883, 0.2886, 0.3297, 0.2212])
max_diff = np.abs(expected_slice - audio_slice).max()
assert max_diff < 1e-2
assert max_diff < 3e-2
5 changes: 4 additions & 1 deletion tests/pipelines/dance_diffusion/test_dance_diffusion.py
@@ -103,7 +103,7 @@ def test_save_load_local(self):

@skip_mps
def test_dict_tuple_outputs_equivalent(self):
return super().test_dict_tuple_outputs_equivalent()
return super().test_dict_tuple_outputs_equivalent(expected_max_difference=3e-3)

@skip_mps
def test_save_load_optional_components(self):
@@ -113,6 +113,9 @@ def test_save_load_optional_components(self):
def test_attention_slicing_forward_pass(self):
return super().test_attention_slicing_forward_pass()

def test_inference_batch_single_identical(self):
super().test_inference_batch_single_identical(expected_max_diff=3e-3)


@slow
@require_torch_gpu
12 changes: 12 additions & 0 deletions tests/pipelines/ddim/test_ddim.py
@@ -87,6 +87,18 @@ def test_inference(self):
max_diff = np.abs(image_slice.flatten() - expected_slice).max()
self.assertLessEqual(max_diff, 1e-3)

def test_dict_tuple_outputs_equivalent(self):
super().test_dict_tuple_outputs_equivalent(expected_max_difference=3e-3)

def test_save_load_local(self):
super().test_save_load_local(expected_max_difference=3e-3)

def test_save_load_optional_components(self):
super().test_save_load_optional_components(expected_max_difference=3e-3)

def test_inference_batch_single_identical(self):
super().test_inference_batch_single_identical(expected_max_diff=3e-3)


@slow
@require_torch_gpu
2 changes: 1 addition & 1 deletion tests/pipelines/deepfloyd_if/test_if.py
@@ -68,7 +68,7 @@ def test_save_load_optional_components(self):
@unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
def test_save_load_float16(self):
# Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder
self._test_save_load_float16(expected_max_diff=1e-1)
super().test_save_load_float16(expected_max_diff=1e-1)

def test_attention_slicing_forward_pass(self):
self._test_attention_slicing_forward_pass(expected_max_diff=1e-2)
4 changes: 2 additions & 2 deletions tests/pipelines/deepfloyd_if/test_if_img2img.py
@@ -66,11 +66,11 @@ def test_save_load_optional_components(self):
@unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
def test_save_load_float16(self):
# Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder
self._test_save_load_float16(expected_max_diff=1e-1)
super().test_save_load_float16(expected_max_diff=1e-1)

@unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
def test_float16_inference(self):
self._test_float16_inference(expected_max_diff=1e-1)
super().test_float16_inference(expected_max_diff=1e-1)

def test_attention_slicing_forward_pass(self):
self._test_attention_slicing_forward_pass(expected_max_diff=1e-2)
@@ -65,7 +65,7 @@ def test_save_load_optional_components(self):
@unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
def test_save_load_float16(self):
# Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder
self._test_save_load_float16(expected_max_diff=1e-1)
super().test_save_load_float16(expected_max_diff=1e-1)

def test_attention_slicing_forward_pass(self):
self._test_attention_slicing_forward_pass(expected_max_diff=1e-2)
2 changes: 1 addition & 1 deletion tests/pipelines/deepfloyd_if/test_if_inpainting.py
@@ -68,7 +68,7 @@ def test_save_load_optional_components(self):
@unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
def test_save_load_float16(self):
# Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder
self._test_save_load_float16(expected_max_diff=1e-1)
super().test_save_load_float16(expected_max_diff=1e-1)

def test_attention_slicing_forward_pass(self):
self._test_attention_slicing_forward_pass(expected_max_diff=1e-2)
@@ -70,7 +70,7 @@ def test_save_load_optional_components(self):
@unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
def test_save_load_float16(self):
# Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder
self._test_save_load_float16(expected_max_diff=1e-1)
super().test_save_load_float16(expected_max_diff=1e-1)

def test_attention_slicing_forward_pass(self):
self._test_attention_slicing_forward_pass(expected_max_diff=1e-2)