
Conversation

@justusschock (Member) commented Nov 11, 2025

What does this PR do?

Fixes #<issue_number>

Before submitting
  • Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following checklist:

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

📚 Documentation preview 📚: https://pytorch-lightning--21354.org.readthedocs.build/en/21354/

@github-actions bot added labels on Nov 11, 2025: fabric (lightning.fabric.Fabric), pl (Generic label for PyTorch Lightning package), dependencies (Pull requests that update a dependency file)

github-actions bot (Contributor) commented Nov 11, 2025

⛈️ Required checks status: Has failure 🔴

Warning
This job will need to be re-run to merge your PR. If you do not have write access to the repository, you can ask Lightning-AI/lai-frameworks to re-run it. If you push a new commit, all of CI will re-trigger.

Groups summary

🔴 pytorch_lightning: Tests workflow
Check ID | Status
pl-cpu-guardian | failure

These checks are required after the changes to requirements/fabric/base.txt, src/lightning/fabric/accelerators/xla.py, src/lightning/fabric/plugins/environments/kubeflow.py, src/lightning/fabric/plugins/environments/lsf.py, src/lightning/fabric/plugins/environments/slurm.py, src/lightning/fabric/plugins/environments/torchelastic.py, src/lightning/fabric/plugins/environments/xla.py, src/lightning/fabric/plugins/io/xla.py, src/lightning/fabric/plugins/precision/bitsandbytes.py, src/lightning/fabric/plugins/precision/deepspeed.py, src/lightning/fabric/plugins/precision/transformer_engine.py, src/lightning/fabric/plugins/precision/xla.py, src/lightning/fabric/strategies/deepspeed.py, src/lightning/fabric/strategies/launchers/xla.py, src/lightning/fabric/strategies/single_xla.py, src/lightning/fabric/strategies/xla.py, src/lightning/fabric/strategies/xla_fsdp.py, src/lightning/fabric/utilities/imports.py, src/lightning/fabric/utilities/testing/_runif.py, requirements/pytorch/base.txt, src/lightning/pytorch/accelerators/xla.py, src/lightning/pytorch/loggers/comet.py, src/lightning/pytorch/loggers/mlflow.py, src/lightning/pytorch/loggers/neptune.py, src/lightning/pytorch/loggers/wandb.py, src/lightning/pytorch/plugins/precision/deepspeed.py, src/lightning/pytorch/plugins/precision/xla.py, src/lightning/pytorch/strategies/deepspeed.py, src/lightning/pytorch/strategies/launchers/xla.py, src/lightning/pytorch/strategies/single_xla.py, src/lightning/pytorch/strategies/xla.py, src/lightning/pytorch/utilities/deepspeed.py, tests/tests_pytorch/accelerators/test_xla.py, tests/tests_pytorch/callbacks/progress/test_tqdm_progress_bar.py, tests/tests_pytorch/conftest.py, tests/tests_pytorch/deprecated_api/test_no_removal_version.py, tests/tests_pytorch/graveyard/test_tpu.py, tests/tests_pytorch/loggers/conftest.py, tests/tests_pytorch/loggers/test_all.py, tests/tests_pytorch/loggers/test_comet.py, tests/tests_pytorch/loggers/test_mlflow.py, tests/tests_pytorch/loggers/test_neptune.py, tests/tests_pytorch/loggers/test_wandb.py, tests/tests_pytorch/models/test_tpu.py, tests/tests_pytorch/plugins/precision/test_deepspeed_precision.py, tests/tests_pytorch/plugins/precision/test_transformer_engine.py, tests/tests_pytorch/strategies/test_deepspeed.py, tests/tests_pytorch/strategies/test_xla.py, tests/tests_pytorch/utilities/migration/test_utils.py.

🔴 pytorch_lightning: lit GPU
Check ID | Status
pytorch.yml / Lit Job (nvidia/cuda:12.1.1-devel-ubuntu22.04, pytorch, 3.10) | success
pytorch.yml / Lit Job (lightning, 3.12) | failure
pytorch.yml / Lit Job (pytorch, 3.12) | failure

These checks are required after the changes to requirements/pytorch/base.txt, src/lightning/pytorch/accelerators/xla.py, src/lightning/pytorch/loggers/comet.py, src/lightning/pytorch/loggers/mlflow.py, src/lightning/pytorch/loggers/neptune.py, src/lightning/pytorch/loggers/wandb.py, src/lightning/pytorch/plugins/precision/deepspeed.py, src/lightning/pytorch/plugins/precision/xla.py, src/lightning/pytorch/strategies/deepspeed.py, src/lightning/pytorch/strategies/launchers/xla.py, src/lightning/pytorch/strategies/single_xla.py, src/lightning/pytorch/strategies/xla.py, src/lightning/pytorch/utilities/deepspeed.py, tests/tests_pytorch/accelerators/test_xla.py, tests/tests_pytorch/callbacks/progress/test_tqdm_progress_bar.py, tests/tests_pytorch/conftest.py, tests/tests_pytorch/deprecated_api/test_no_removal_version.py, tests/tests_pytorch/graveyard/test_tpu.py, tests/tests_pytorch/loggers/conftest.py, tests/tests_pytorch/loggers/test_all.py, tests/tests_pytorch/loggers/test_comet.py, tests/tests_pytorch/loggers/test_mlflow.py, tests/tests_pytorch/loggers/test_neptune.py, tests/tests_pytorch/loggers/test_wandb.py, tests/tests_pytorch/models/test_tpu.py, tests/tests_pytorch/plugins/precision/test_deepspeed_precision.py, tests/tests_pytorch/plugins/precision/test_transformer_engine.py, tests/tests_pytorch/strategies/test_deepspeed.py, tests/tests_pytorch/strategies/test_xla.py, tests/tests_pytorch/utilities/migration/test_utils.py, requirements/fabric/base.txt, src/lightning/fabric/accelerators/xla.py, src/lightning/fabric/plugins/environments/kubeflow.py, src/lightning/fabric/plugins/environments/lsf.py, src/lightning/fabric/plugins/environments/slurm.py, src/lightning/fabric/plugins/environments/torchelastic.py, src/lightning/fabric/plugins/environments/xla.py, src/lightning/fabric/plugins/io/xla.py, src/lightning/fabric/plugins/precision/bitsandbytes.py, src/lightning/fabric/plugins/precision/deepspeed.py, src/lightning/fabric/plugins/precision/transformer_engine.py, src/lightning/fabric/plugins/precision/xla.py, src/lightning/fabric/strategies/deepspeed.py, src/lightning/fabric/strategies/launchers/xla.py, src/lightning/fabric/strategies/single_xla.py, src/lightning/fabric/strategies/xla.py, src/lightning/fabric/strategies/xla_fsdp.py, src/lightning/fabric/utilities/imports.py, src/lightning/fabric/utilities/testing/_runif.py.

🟢 Benchmarks
Check ID | Status
benchmark.yml / Lit Job (fabric) | success
benchmark.yml / Lit Job (pytorch) | success

These checks are required after the changes to requirements/fabric/base.txt, requirements/pytorch/base.txt, src/lightning/fabric/accelerators/xla.py, src/lightning/fabric/plugins/environments/kubeflow.py, src/lightning/fabric/plugins/environments/lsf.py, src/lightning/fabric/plugins/environments/slurm.py, src/lightning/fabric/plugins/environments/torchelastic.py, src/lightning/fabric/plugins/environments/xla.py, src/lightning/fabric/plugins/io/xla.py, src/lightning/fabric/plugins/precision/bitsandbytes.py, src/lightning/fabric/plugins/precision/deepspeed.py, src/lightning/fabric/plugins/precision/transformer_engine.py, src/lightning/fabric/plugins/precision/xla.py, src/lightning/fabric/strategies/deepspeed.py, src/lightning/fabric/strategies/launchers/xla.py, src/lightning/fabric/strategies/single_xla.py, src/lightning/fabric/strategies/xla.py, src/lightning/fabric/strategies/xla_fsdp.py, src/lightning/fabric/utilities/imports.py, src/lightning/fabric/utilities/testing/_runif.py, src/lightning/pytorch/accelerators/xla.py, src/lightning/pytorch/loggers/comet.py, src/lightning/pytorch/loggers/mlflow.py, src/lightning/pytorch/loggers/neptune.py, src/lightning/pytorch/loggers/wandb.py, src/lightning/pytorch/plugins/precision/deepspeed.py, src/lightning/pytorch/plugins/precision/xla.py, src/lightning/pytorch/strategies/deepspeed.py, src/lightning/pytorch/strategies/launchers/xla.py, src/lightning/pytorch/strategies/single_xla.py, src/lightning/pytorch/strategies/xla.py, src/lightning/pytorch/utilities/deepspeed.py.

🟢 fabric: Docs
Check ID | Status
docs-make (fabric, doctest) | success
docs-make (fabric, html) | success

These checks are required after the changes to src/lightning/fabric/accelerators/xla.py, src/lightning/fabric/plugins/environments/kubeflow.py, src/lightning/fabric/plugins/environments/lsf.py, src/lightning/fabric/plugins/environments/slurm.py, src/lightning/fabric/plugins/environments/torchelastic.py, src/lightning/fabric/plugins/environments/xla.py, src/lightning/fabric/plugins/io/xla.py, src/lightning/fabric/plugins/precision/bitsandbytes.py, src/lightning/fabric/plugins/precision/deepspeed.py, src/lightning/fabric/plugins/precision/transformer_engine.py, src/lightning/fabric/plugins/precision/xla.py, src/lightning/fabric/strategies/deepspeed.py, src/lightning/fabric/strategies/launchers/xla.py, src/lightning/fabric/strategies/single_xla.py, src/lightning/fabric/strategies/xla.py, src/lightning/fabric/strategies/xla_fsdp.py, src/lightning/fabric/utilities/imports.py, src/lightning/fabric/utilities/testing/_runif.py, .github/workflows/docs-build.yml, requirements/fabric/base.txt.

🟢 pytorch_lightning: Docs
Check ID | Status
docs-make (pytorch, doctest) | success
docs-make (pytorch, html) | success

These checks are required after the changes to src/lightning/pytorch/accelerators/xla.py, src/lightning/pytorch/loggers/comet.py, src/lightning/pytorch/loggers/mlflow.py, src/lightning/pytorch/loggers/neptune.py, src/lightning/pytorch/loggers/wandb.py, src/lightning/pytorch/plugins/precision/deepspeed.py, src/lightning/pytorch/plugins/precision/xla.py, src/lightning/pytorch/strategies/deepspeed.py, src/lightning/pytorch/strategies/launchers/xla.py, src/lightning/pytorch/strategies/single_xla.py, src/lightning/pytorch/strategies/xla.py, src/lightning/pytorch/utilities/deepspeed.py, docs/source-pytorch/conf.py, .github/workflows/docs-build.yml, requirements/pytorch/base.txt.

🟢 lightning_fabric: CPU workflow
Check ID | Status
fabric-cpu-guardian | success

These checks are required after the changes to requirements/fabric/base.txt, src/lightning/fabric/accelerators/xla.py, src/lightning/fabric/plugins/environments/kubeflow.py, src/lightning/fabric/plugins/environments/lsf.py, src/lightning/fabric/plugins/environments/slurm.py, src/lightning/fabric/plugins/environments/torchelastic.py, src/lightning/fabric/plugins/environments/xla.py, src/lightning/fabric/plugins/io/xla.py, src/lightning/fabric/plugins/precision/bitsandbytes.py, src/lightning/fabric/plugins/precision/deepspeed.py, src/lightning/fabric/plugins/precision/transformer_engine.py, src/lightning/fabric/plugins/precision/xla.py, src/lightning/fabric/strategies/deepspeed.py, src/lightning/fabric/strategies/launchers/xla.py, src/lightning/fabric/strategies/single_xla.py, src/lightning/fabric/strategies/xla.py, src/lightning/fabric/strategies/xla_fsdp.py, src/lightning/fabric/utilities/imports.py, src/lightning/fabric/utilities/testing/_runif.py, tests/tests_fabric/conftest.py, tests/tests_fabric/graveyard/test_tpu.py, tests/tests_fabric/plugins/environments/test_kubeflow.py, tests/tests_fabric/plugins/environments/test_slurm.py, tests/tests_fabric/plugins/environments/test_torchelastic.py, tests/tests_fabric/plugins/environments/test_xla.py, tests/tests_fabric/plugins/precision/test_transformer_engine.py, tests/tests_fabric/strategies/test_deepspeed.py, tests/tests_fabric/strategies/test_deepspeed_integration.py, tests/tests_fabric/strategies/test_xla.py, tests/tests_fabric/strategies/test_xla_fsdp.py, tests/tests_fabric/test_connector.py, tests/tests_fabric/utilities/test_throughput.py.

🔴 lightning_fabric: lit GPU
Check ID | Status
fabric.yml / Lit Job (nvidia/cuda:12.1.1-devel-ubuntu22.04, fabric, 3.10) | success
fabric.yml / Lit Job (fabric, 3.12) | failure
fabric.yml / Lit Job (lightning, 3.12) | failure

These checks are required after the changes to requirements/fabric/base.txt, src/lightning/fabric/accelerators/xla.py, src/lightning/fabric/plugins/environments/kubeflow.py, src/lightning/fabric/plugins/environments/lsf.py, src/lightning/fabric/plugins/environments/slurm.py, src/lightning/fabric/plugins/environments/torchelastic.py, src/lightning/fabric/plugins/environments/xla.py, src/lightning/fabric/plugins/io/xla.py, src/lightning/fabric/plugins/precision/bitsandbytes.py, src/lightning/fabric/plugins/precision/deepspeed.py, src/lightning/fabric/plugins/precision/transformer_engine.py, src/lightning/fabric/plugins/precision/xla.py, src/lightning/fabric/strategies/deepspeed.py, src/lightning/fabric/strategies/launchers/xla.py, src/lightning/fabric/strategies/single_xla.py, src/lightning/fabric/strategies/xla.py, src/lightning/fabric/strategies/xla_fsdp.py, src/lightning/fabric/utilities/imports.py, src/lightning/fabric/utilities/testing/_runif.py, tests/tests_fabric/conftest.py, tests/tests_fabric/graveyard/test_tpu.py, tests/tests_fabric/plugins/environments/test_kubeflow.py, tests/tests_fabric/plugins/environments/test_slurm.py, tests/tests_fabric/plugins/environments/test_torchelastic.py, tests/tests_fabric/plugins/environments/test_xla.py, tests/tests_fabric/plugins/precision/test_transformer_engine.py, tests/tests_fabric/strategies/test_deepspeed.py, tests/tests_fabric/strategies/test_deepspeed_integration.py, tests/tests_fabric/strategies/test_xla.py, tests/tests_fabric/strategies/test_xla_fsdp.py, tests/tests_fabric/test_connector.py, tests/tests_fabric/utilities/test_throughput.py.

🟢 mypy
Check ID | Status
mypy | success

These checks are required after the changes to requirements/fabric/base.txt, requirements/pytorch/base.txt, src/lightning/fabric/accelerators/xla.py, src/lightning/fabric/plugins/environments/kubeflow.py, src/lightning/fabric/plugins/environments/lsf.py, src/lightning/fabric/plugins/environments/slurm.py, src/lightning/fabric/plugins/environments/torchelastic.py, src/lightning/fabric/plugins/environments/xla.py, src/lightning/fabric/plugins/io/xla.py, src/lightning/fabric/plugins/precision/bitsandbytes.py, src/lightning/fabric/plugins/precision/deepspeed.py, src/lightning/fabric/plugins/precision/transformer_engine.py, src/lightning/fabric/plugins/precision/xla.py, src/lightning/fabric/strategies/deepspeed.py, src/lightning/fabric/strategies/launchers/xla.py, src/lightning/fabric/strategies/single_xla.py, src/lightning/fabric/strategies/xla.py, src/lightning/fabric/strategies/xla_fsdp.py, src/lightning/fabric/utilities/imports.py, src/lightning/fabric/utilities/testing/_runif.py, src/lightning/pytorch/accelerators/xla.py, src/lightning/pytorch/loggers/comet.py, src/lightning/pytorch/loggers/mlflow.py, src/lightning/pytorch/loggers/neptune.py, src/lightning/pytorch/loggers/wandb.py, src/lightning/pytorch/plugins/precision/deepspeed.py, src/lightning/pytorch/plugins/precision/xla.py, src/lightning/pytorch/strategies/deepspeed.py, src/lightning/pytorch/strategies/launchers/xla.py, src/lightning/pytorch/strategies/single_xla.py, src/lightning/pytorch/strategies/xla.py, src/lightning/pytorch/utilities/deepspeed.py.

🟢 install
Check ID | Status
install-pkg-guardian | success

These checks are required after the changes to src/lightning/fabric/accelerators/xla.py, src/lightning/fabric/plugins/environments/kubeflow.py, src/lightning/fabric/plugins/environments/lsf.py, src/lightning/fabric/plugins/environments/slurm.py, src/lightning/fabric/plugins/environments/torchelastic.py, src/lightning/fabric/plugins/environments/xla.py, src/lightning/fabric/plugins/io/xla.py, src/lightning/fabric/plugins/precision/bitsandbytes.py, src/lightning/fabric/plugins/precision/deepspeed.py, src/lightning/fabric/plugins/precision/transformer_engine.py, src/lightning/fabric/plugins/precision/xla.py, src/lightning/fabric/strategies/deepspeed.py, src/lightning/fabric/strategies/launchers/xla.py, src/lightning/fabric/strategies/single_xla.py, src/lightning/fabric/strategies/xla.py, src/lightning/fabric/strategies/xla_fsdp.py, src/lightning/fabric/utilities/imports.py, src/lightning/fabric/utilities/testing/_runif.py, src/lightning/pytorch/accelerators/xla.py, src/lightning/pytorch/loggers/comet.py, src/lightning/pytorch/loggers/mlflow.py, src/lightning/pytorch/loggers/neptune.py, src/lightning/pytorch/loggers/wandb.py, src/lightning/pytorch/plugins/precision/deepspeed.py, src/lightning/pytorch/plugins/precision/xla.py, src/lightning/pytorch/strategies/deepspeed.py, src/lightning/pytorch/strategies/launchers/xla.py, src/lightning/pytorch/strategies/single_xla.py, src/lightning/pytorch/strategies/xla.py, src/lightning/pytorch/utilities/deepspeed.py, requirements/fabric/base.txt, requirements/pytorch/base.txt.


Thank you for your contribution! 💜

Note
This comment is automatically generated and updates every 180 seconds for 70 minutes. If you have any other questions, contact carmocca for help.

codecov bot commented Nov 11, 2025

❌ 20 Tests Failed:

Tests completed | Failed | Passed | Skipped
3241 | 20 | 3221 | 520
Top failed test(s) by shortest run time:
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[lightning.pytorch.plugins-TPUPrecisionPlugin]
Stack Traces | 0s run time
import_path = 'lightning.pytorch.plugins', name = 'TPUPrecisionPlugin'

    @pytest.mark.parametrize(
        ("import_path", "name"),
        [
            ("lightning.pytorch.accelerators", "TPUAccelerator"),
            ("lightning.pytorch.accelerators.tpu", "TPUAccelerator"),
            ("lightning.pytorch.plugins", "TPUPrecisionPlugin"),
            ("lightning.pytorch.plugins.precision", "TPUPrecisionPlugin"),
            ("lightning.pytorch.plugins.precision.tpu", "TPUPrecisionPlugin"),
            ("lightning.pytorch.plugins", "TPUBf16PrecisionPlugin"),
            ("lightning.pytorch.plugins.precision", "TPUBf16PrecisionPlugin"),
            ("lightning.pytorch.plugins.precision.tpu_bf16", "TPUBf16PrecisionPlugin"),
            ("lightning.pytorch.plugins.precision", "XLABf16PrecisionPlugin"),
            ("lightning.pytorch.plugins.precision.xlabf16", "XLABf16PrecisionPlugin"),
        ],
    )
    def test_graveyard_no_device(import_path, name):
        module = import_module(import_path)
        cls = getattr(module, name)
>       with pytest.deprecated_call(match="is deprecated"), pytest.raises(ModuleNotFoundError, match="torch_xla"):
                                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       Failed: DID NOT RAISE <class 'ModuleNotFoundError'>

graveyard/test_tpu.py:41: Failed
The remaining 15 parametrizations of the same test fail identically, each with
`Failed: DID NOT RAISE <class 'ModuleNotFoundError'>` at graveyard/test_tpu.py:41.
The pytorch_lightning.* cases run the mirrored parametrize list with
pytorch_lightning import paths. All run in 0s except the last two (0.001s):

tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[lightning.pytorch.plugins.precision-TPUBf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[lightning.pytorch.plugins.precision-TPUPrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[lightning.pytorch.plugins.precision-XLABf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[lightning.pytorch.plugins.precision.tpu-TPUPrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[lightning.pytorch.plugins.precision.tpu_bf16-TPUBf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[lightning.pytorch.plugins.precision.xlabf16-XLABf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[pytorch_lightning.plugins-TPUPrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[pytorch_lightning.plugins.precision-TPUBf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[pytorch_lightning.plugins.precision-TPUPrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[pytorch_lightning.plugins.precision-XLABf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[pytorch_lightning.plugins.precision.tpu-TPUPrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[pytorch_lightning.plugins.precision.tpu_bf16-TPUBf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[pytorch_lightning.plugins.precision.xlabf16-XLABf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[lightning.pytorch.plugins-TPUBf16PrecisionPlugin]
tests/tests_pytorch/graveyard/test_tpu.py::test_graveyard_no_device[pytorch_lightning.plugins-TPUBf16PrecisionPlugin]
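
All 16 graveyard failures share one signature: the deprecated TPU/XLA shim classes no longer raise ModuleNotFoundError when torch_xla is absent, so the second context manager in the test never fires. A minimal sketch of the contract the test asserts (the class name is real; the body is an assumption about how such a shim behaves, not the actual Lightning source):

    # Sketch of the shim behavior test_graveyard_no_device expects.
    import warnings


    class TPUPrecisionPlugin:
        def __init__(self, *args, **kwargs):
            # First expectation: a DeprecationWarning matching "is deprecated".
            warnings.warn(
                "`TPUPrecisionPlugin` is deprecated. Use the XLA equivalent instead.",
                DeprecationWarning,
            )
            # Second expectation: importing torch_xla here raises
            # ModuleNotFoundError("No module named 'torch_xla'") on machines
            # without the package -- the error pytest.raises() waits for.
            import torch_xla  # noqa: F401

The "DID NOT RAISE" outcome suggests the torch_xla import has become guarded or lazy under this PR, so instantiating the shim now succeeds without the package and the test's second expectation no longer holds.
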
❄️ Flaky test(s), 12 in total:
tests/tests_pytorch/callbacks/progress/test_tqdm_progress_bar.py::test_tqdm_progress_bar_print

Flake rate in main: 4.00% (Passed 24 times, Failed 1 time)

Stack Traces | 0.251s run time
tqdm_write = <MagicMock name='write' id='140025804981504'>
tmp_path = PosixPath('.../pytest-of-runner/pytest-0/test_tqdm_progress_bar_print0')

    @mock.patch("tqdm.tqdm.write")
    def test_tqdm_progress_bar_print(tqdm_write, tmp_path):
        """Test that printing in the LightningModule redirects arguments to the progress bar."""
        model = PrintModel()
        bar = TQDMProgressBar()
        trainer = Trainer(
            default_root_dir=tmp_path,
            num_sanity_val_steps=0,
            limit_train_batches=1,
            limit_val_batches=1,
            limit_test_batches=1,
            limit_predict_batches=1,
            max_steps=1,
            callbacks=[bar],
        )
        trainer.fit(model)
        trainer.test(model)
        trainer.predict(model)
>       assert tqdm_write.call_args_list == [
            call("training_step", end=""),
            call("validation_step", file=sys.stderr),
            call("test_step"),
            call("predict_step"),
        ]
E       assert [] == [call('traini...redict_step')]
E         
E         Right contains 4 more items, first extra item: call('training_step', end='')
E         
E         Full diff:
E         + []
E         - [
E         -     call('training_step', end=''),
E         -     call('validation_step', file=<_io.TextIOWrapper name="<_io.FileIO name=9 mode='rb+' closefd=True>" mode='r+' encoding='utf-8'>),
E         -     call('test_step'),
E         -     call('predict_step'),
E         - ]

callbacks/progress/test_tqdm_progress_bar.py:501: AssertionError
tests/tests_pytorch/callbacks/progress/test_tqdm_progress_bar.py::test_tqdm_progress_bar_print_no_train

Flake rate in main: 4.00% (Passed 24 times, Failed 1 time)

Stack Traces | 0.044s run time
tqdm_write = <MagicMock name='write' id='140025799204672'>
tmp_path = PosixPath('.../pytest-of-runner/pytest-0/test_tqdm_progress_bar_print_n0')

    @mock.patch("tqdm.tqdm.write")
    def test_tqdm_progress_bar_print_no_train(tqdm_write, tmp_path):
        """Test that printing in the LightningModule redirects arguments to the progress bar without training."""
        model = PrintModel()
        bar = TQDMProgressBar()
        trainer = Trainer(
            default_root_dir=tmp_path,
            num_sanity_val_steps=0,
            limit_val_batches=1,
            limit_test_batches=1,
            limit_predict_batches=1,
            max_steps=1,
            callbacks=[bar],
            devices=1,
        )
    
        trainer.validate(model)
        trainer.test(model)
        trainer.predict(model)
>       assert tqdm_write.call_args_list == [
            call("validation_step", file=sys.stderr),
            call("test_step"),
            call("predict_step"),
        ]
E       assert [] == [call('valida...redict_step')]
E         
E         Right contains 3 more items, first extra item: call('validation_step', file=<_io.TextIOWrapper name="<_io.FileIO name=9 mode='rb+' closefd=True>" mode='r+' encoding='utf-8'>)
E         
E         Full diff:
E         + []
E         - [
E         -     call('validation_step', file=<_io.TextIOWrapper name="<_io.FileIO name=9 mode='rb+' closefd=True>" mode='r+' encoding='utf-8'>),
E         -     call('test_step'),
E         -     call('predict_step'),
E         - ]

callbacks/progress/test_tqdm_progress_bar.py:528: AssertionError
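
Both tqdm failures show an empty call list on the patched `tqdm.tqdm.write`, i.e. nothing printed from the LightningModule ever reached tqdm. A self-contained sketch of the redirection the tests exercise (the `print_via_bar` helper is hypothetical; the patch target and the tqdm.write keyword arguments are real):

    import sys
    from unittest import mock

    from tqdm import tqdm


    def print_via_bar(*args, **kwargs):
        # Stand-in for LightningModule.print routing through the progress bar.
        tqdm.write(" ".join(str(a) for a in args), **kwargs)


    with mock.patch("tqdm.tqdm.write") as tqdm_write:
        print_via_bar("training_step", end="")
        print_via_bar("validation_step", file=sys.stderr)

    # If the production code falls back to plain print() instead of tqdm.write
    # (one plausible cause of the empty call list above), this assertion fails.
    assert tqdm_write.call_args_list == [
        mock.call("training_step", end=""),
        mock.call("validation_step", file=sys.stderr),
    ]
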
tests/tests_pytorch/loggers/test_all.py::test_logger_default_name

Flake rate in main: 4.00% (Passed 24 times, Failed 1 time)

Stack Traces | 0.002s run time
mlflow_mock = <module 'mlflow'>
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f5a4de3b800>
tmp_path = PosixPath('.../pytest-of-runner/pytest-0/test_logger_default_name0')

    @mock.patch("pytorch_lightning_enterprise.loggers.mlflow._get_resolve_tags", Mock())
    def test_logger_default_name(mlflow_mock, monkeypatch, tmp_path):
        """Test that the default logger name is lightning_logs."""
        # CSV
        logger = CSVLogger(save_dir=tmp_path)
        assert logger.name == "lightning_logs"
    
        # TensorBoard
        if _TENSORBOARD_AVAILABLE:
            import torch.utils.tensorboard as tb
        else:
            import tensorboardX as tb
    
        monkeypatch.setattr(tb, "SummaryWriter", Mock())
        logger = _instantiate_logger(TensorBoardLogger, save_dir=tmp_path)
        assert logger.name == "lightning_logs"
    
        # MLflow
        client = mlflow_mock.tracking.MlflowClient()
        client.get_experiment_by_name.return_value = None
        logger = _instantiate_logger(MLFlowLogger, save_dir=tmp_path)
    
        _ = logger.experiment
>       logger._mlflow_client.create_experiment.assert_called_with(name="lightning_logs", artifact_location=ANY)
        ^^^^^^^^^^^^^^^^^^^^^
E       AttributeError: 'MLFlowLogger' object has no attribute '_mlflow_client'

loggers/test_all.py:361: AttributeError
tests/tests_pytorch/loggers/test_all.py::test_logger_with_prefix_all

Flake rate in main: 4.00% (Passed 24 times, Failed 1 time)

Stack Traces | 0.008s run time
mlflow_mock = <module 'mlflow'>, wandb_mock = <module 'wandb'>
comet_mock = <module 'comet_ml'>, neptune_mock = <module 'neptune'>
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7fbdf59cce60>
tmp_path = PosixPath('.../pytest-of-runner/pytest-0/test_logger_with_prefix_all0')

    @mock.patch.dict(os.environ, {})
    @mock.patch("pytorch_lightning_enterprise.loggers.mlflow._get_resolve_tags", Mock())
    def test_logger_with_prefix_all(mlflow_mock, wandb_mock, comet_mock, neptune_mock, monkeypatch, tmp_path):
        """Test that prefix is added at the beginning of the metric keys."""
        prefix = "tmp"
    
        # Comet
        _patch_comet_atexit(monkeypatch)
        logger = _instantiate_logger(CometLogger, save_dir=tmp_path, prefix=prefix)
        logger.log_metrics({"test": 1.0}, step=0)
        logger.experiment.__internal_api__log_metrics__.assert_called_once_with(
            {"test": 1.0}, epoch=None, step=0, prefix=prefix, framework="pytorch-lightning"
        )
    
        # MLflow
        Metric = mlflow_mock.entities.Metric
        logger = _instantiate_logger(MLFlowLogger, save_dir=tmp_path, prefix=prefix)
        logger.log_metrics({"test": 1.0}, step=0)
        logger.experiment.log_batch.assert_called_once_with(
            run_id=ANY, metrics=[Metric(key="tmp-test", value=1.0, timestamp=ANY, step=0)]
        )
    
        # Neptune
        logger = _instantiate_logger(NeptuneLogger, api_key="test", project="project", save_dir=tmp_path, prefix=prefix)
>       assert logger.experiment.__getitem__.call_count == 0
               ^^^^^^^^^^^^^^^^^

loggers/test_all.py:313: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.12.../lightning_fabric/loggers/logger.py:118: in experiment
    return fn(self)
           ^^^^^^^^
../../.venv/lib/python3.12.../pytorch_lightning/loggers/neptune.py:260: in experiment
    return self.logger_impl.experiment
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
pytorch_lightning_enterprise/loggers/logger.py:197: in experiment
    ???
pytorch_lightning_enterprise/loggers/neptune.py:368: in experiment
    ???
pytorch_lightning_enterprise/loggers/logger.py:197: in experiment
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytorch_lightning_enterprise.loggers.neptune.NeptuneLogger object at 0x7fbdf5801fd0>

>   ???
E   ModuleNotFoundError: No module named 'lightning'

pytorch_lightning_enterprise/loggers/neptune.py:379: ModuleNotFoundError
tests/tests_pytorch/loggers/test_all.py::test_loggers_fit_test_all[CometLogger]

Flake rate in main: 4.00% (Passed 24 times, Failed 1 time)

Stack Traces | 0.004s run time
logger_class = <class 'lightning.pytorch.loggers.comet.CometLogger'>
mlflow_mock = <module 'mlflow'>, wandb_mock = <module 'wandb'>
comet_mock = <module 'comet_ml'>, neptune_mock = <module 'neptune'>
tmp_path = PosixPath('.../pytest-of-runner/pytest-0/test_loggers_fit_test_all_Come0')
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7f5a4de06600>

    @mock.patch.dict(os.environ, {})
    @mock.patch("pytorch_lightning_enterprise.loggers.mlflow._get_resolve_tags", Mock())
    @pytest.mark.parametrize("logger_class", ALL_LOGGER_CLASSES)
    def test_loggers_fit_test_all(logger_class, mlflow_mock, wandb_mock, comet_mock, neptune_mock, tmp_path, monkeypatch):
        """Verify that basic functionality of all loggers."""
        monkeypatch.chdir(tmp_path)
    
        class CustomModel(BoringModel):
            def training_step(self, batch, batch_idx):
                loss = self.step(batch)
                self.log("train_some_val", loss)
                return {"loss": loss}
    
            def on_validation_epoch_end(self):
                self.log_dict({"early_stop_on": torch.tensor(1), "val_loss": torch.tensor(0.5)})
    
            def on_test_epoch_end(self):
                self.log("test_loss", torch.tensor(2))
    
        class StoreHistoryLogger(logger_class):
            def __init__(self, *args, **kwargs) -> None:
                super().__init__(*args, **kwargs)
                self.history = []
    
            def log_metrics(self, metrics, step):
                super().log_metrics(metrics, step)
                self.history.append((step, metrics))
    
        logger_args = _get_logger_args(logger_class, tmp_path)
        logger = StoreHistoryLogger(**logger_args)
    
        if logger_class == WandbLogger:
            # required mocks for Trainer
            logger.experiment.id = "foo"
            logger.experiment.name = "bar"
    
        if logger_class == CometLogger:
            logger.experiment.id = "foo"
>           logger._comet_config.offline_directory = None
            ^^^^^^^^^^^^^^^^^^^^
E           AttributeError: 'StoreHistoryLogger' object has no attribute '_comet_config'

.../tests_pytorch/loggers/test_all.py:110: AttributeError
tests/tests_pytorch/loggers/test_all.py::test_loggers_fit_test_all[NeptuneLogger]

Flake rate in main: 4.00% (Passed 24 times, Failed 1 time)

Stack Traces | 0.028s run time
logger_class = <class 'pytorch_lightning.loggers.neptune.NeptuneLogger'>
mlflow_mock = <module 'mlflow'>, wandb_mock = <module 'wandb'>
comet_mock = <module 'comet_ml'>, neptune_mock = <module 'neptune'>
tmp_path = PosixPath('.../pytest-of-runner/pytest-0/test_loggers_fit_test_all_Nept0')
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x7fbdf59cc380>

    @mock.patch.dict(os.environ, {})
    @mock.patch("pytorch_lightning_enterprise.loggers.mlflow._get_resolve_tags", Mock())
    @pytest.mark.parametrize("logger_class", ALL_LOGGER_CLASSES)
    def test_loggers_fit_test_all(logger_class, mlflow_mock, wandb_mock, comet_mock, neptune_mock, tmp_path, monkeypatch):
        """Verify that basic functionality of all loggers."""
        monkeypatch.chdir(tmp_path)
    
        class CustomModel(BoringModel):
            def training_step(self, batch, batch_idx):
                loss = self.step(batch)
                self.log("train_some_val", loss)
                return {"loss": loss}
    
            def on_validation_epoch_end(self):
                self.log_dict({"early_stop_on": torch.tensor(1), "val_loss": torch.tensor(0.5)})
    
            def on_test_epoch_end(self):
                self.log("test_loss", torch.tensor(2))
    
        class StoreHistoryLogger(logger_class):
            def __init__(self, *args, **kwargs) -> None:
                super().__init__(*args, **kwargs)
                self.history = []
    
            def log_metrics(self, metrics, step):
                super().log_metrics(metrics, step)
                self.history.append((step, metrics))
    
        logger_args = _get_logger_args(logger_class, tmp_path)
        logger = StoreHistoryLogger(**logger_args)
    
        if logger_class == WandbLogger:
            # required mocks for Trainer
            logger.experiment.id = "foo"
            logger.experiment.name = "bar"
    
        if logger_class == CometLogger:
            logger.experiment.id = "foo"
            logger._comet_config.offline_directory = None
            logger._project_name = "bar"
            logger.experiment.get_key.return_value = "SOME_KEY"
    
        if logger_class == NeptuneLogger:
            logger._retrieve_run_data = Mock()
            logger._run_short_id = "foo"
            logger._run_name = "bar"
    
        if logger_class == MLFlowLogger:
            logger = mock_mlflow_run_creation(logger, experiment_id="foo", run_id="bar")
    
        model = CustomModel()
        trainer = Trainer(
            default_root_dir=tmp_path,
            max_epochs=1,
            logger=logger,
            limit_train_batches=1,
            limit_val_batches=1,
            log_every_n_steps=1,
        )
>       trainer.fit(model)

.../tests_pytorch/loggers/test_all.py:131: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.../pytorch-lightning/pytorch-lightning/.venv/lib/python3.12.../pytorch_lightning/trainer/trainer.py:582: in fit
    call._call_and_handle_interrupt(
.../pytorch-lightning/pytorch-lightning/.venv/lib/python3.12.../pytorch_lightning/trainer/call.py:49: in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../pytorch-lightning/pytorch-lightning/.venv/lib/python3.12.../pytorch_lightning/trainer/trainer.py:628: in _fit_impl
    self._run(model, ckpt_path=ckpt_path, weights_only=weights_only)
.../pytorch-lightning/pytorch-lightning/.venv/lib/python3.12.../pytorch_lightning/trainer/trainer.py:1037: in _run
    call._call_setup_hook(self)  # allow user to set up LightningModule in accelerator environment
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../pytorch-lightning/pytorch-lightning/.venv/lib/python3.12.../pytorch_lightning/trainer/call.py:102: in _call_setup_hook
    if hasattr(logger, "experiment"):
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../pytorch-lightning/pytorch-lightning/.venv/lib/python3.12.../lightning_fabric/loggers/logger.py:118: in experiment
    return fn(self)
           ^^^^^^^^
.../pytorch-lightning/pytorch-lightning/.venv/lib/python3.12.../pytorch_lightning/loggers/neptune.py:260: in experiment
    return self.logger_impl.experiment
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
pytorch_lightning_enterprise/loggers/logger.py:197: in experiment
    ???
pytorch_lightning_enterprise/loggers/neptune.py:368: in experiment
    ???
pytorch_lightning_enterprise/loggers/logger.py:197: in experiment
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytorch_lightning_enterprise.loggers.neptune.NeptuneLogger object at 0x7fbdf5800800>

>   ???
E   ModuleNotFoundError: No module named 'lightning'

pytorch_lightning_enterprise/loggers/neptune.py:379: ModuleNotFoundError
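Every Neptune failure in this run is the same ModuleNotFoundError: the enterprise Neptune logger reaches for the unified `lightning` package when `experiment` is accessed, but the CI environment installs only the standalone `pytorch_lightning`/`lightning_fabric` mirrors. A hedged sketch of the conventional guard (the imported symbol is an illustrative pick that exists in both package layouts; the actual imports inside `pytorch_lightning_enterprise` are not visible in the trace):

    from lightning_utilities.core.imports import module_available

    # Prefer the unified package when present, otherwise the standalone mirror.
    if module_available("lightning"):
        from lightning.fabric.loggers.logger import rank_zero_experiment
    else:
        from lightning_fabric.loggers.logger import rank_zero_experiment

The same `module_available` helper already shows up in the bitsandbytes trace further down, so no new dependency would be needed.
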
tests/tests_pytorch/loggers/test_neptune.py::test_neptune_offline

Flake rate in main: 4.00% (Passed 24 times, Failed 1 times)

Stack Traces | 0.001s run time
neptune_mock = <module 'neptune'>

    def test_neptune_offline(neptune_mock):
        neptune_mock.init_run.return_value.exists.return_value = False
    
        logger = NeptuneLogger(mode="offline")
>       created_run_mock = logger.run
                           ^^^^^^^^^^

loggers/test_neptune.py:77: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.12.../lightning_fabric/loggers/logger.py:118: in experiment
    return fn(self)
           ^^^^^^^^
../../.venv/lib/python3.12.../pytorch_lightning/loggers/neptune.py:265: in run
    return self.logger_impl.run
           ^^^^^^^^^^^^^^^^^^^^
pytorch_lightning_enterprise/loggers/logger.py:197: in experiment
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytorch_lightning_enterprise.loggers.neptune.NeptuneLogger object at 0x7fbdf61db6b0>

>   ???
E   ModuleNotFoundError: No module named 'lightning'

pytorch_lightning_enterprise/loggers/neptune.py:379: ModuleNotFoundError
tests/tests_pytorch/loggers/test_neptune.py::test_neptune_online

Flake rate in main: 4.00% (Passed 24 times, Failed 1 times)

Stack Traces | 0.002s run time
neptune_mock = <module 'neptune'>

    def test_neptune_online(neptune_mock):
        neptune_mock.init_run.return_value.exists.return_value = True
        neptune_mock.init_run.return_value.__getitem__.side_effect = _fetchable_paths
    
        logger = NeptuneLogger(api_key="test", project="project")
>       created_run_mock = logger.run
                           ^^^^^^^^^^

loggers/test_neptune.py:60: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.12.../lightning_fabric/loggers/logger.py:118: in experiment
    return fn(self)
           ^^^^^^^^
../../.venv/lib/python3.12.../pytorch_lightning/loggers/neptune.py:265: in run
    return self.logger_impl.run
           ^^^^^^^^^^^^^^^^^^^^
pytorch_lightning_enterprise/loggers/logger.py:197: in experiment
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytorch_lightning_enterprise.loggers.neptune.NeptuneLogger object at 0x7fbdf62ba3c0>

>   ???
E   ModuleNotFoundError: No module named 'lightning'

pytorch_lightning_enterprise/loggers/neptune.py:379: ModuleNotFoundError
tests/tests_pytorch/loggers/test_neptune.py::test_neptune_pickling

Flake rate in main: 4.00% (Passed 24 times, Failed 1 times)

Stack Traces | 0.002s run time
neptune_mock = <module 'neptune'>

    def test_neptune_pickling(neptune_mock):
        neptune_mock.init_run.return_value.exists.return_value = True
        neptune_mock.init_run.return_value.__getitem__.side_effect = _fetchable_paths
    
        unpickleable_run = neptune_mock.init_run()
        with pytest.raises(pickle.PicklingError):
            pickle.dumps(unpickleable_run)
        neptune_mock.init_run.reset_mock()
    
>       logger = NeptuneLogger(run=unpickleable_run)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

loggers/test_neptune.py:108: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.12.../pytorch_lightning/loggers/neptune.py:222: in __init__
    self.logger_impl = EnterpriseNeptuneLogger(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytorch_lightning_enterprise.loggers.neptune.NeptuneLogger object at 0x7fbdf612f170>
api_key = None, project = None, name = None
run = <MagicMock spec='RunType' id='140453854715376'>
log_model_checkpoints = True, prefix = 'training', neptune_run_kwargs = {}
Handler = <class 'tests_pytorch.loggers.conftest.neptune_mock.<locals>.RunType'>

>   ???
E   ModuleNotFoundError: No module named 'lightning'

pytorch_lightning_enterprise/loggers/neptune.py:256: ModuleNotFoundError
tests/tests_pytorch/loggers/test_neptune.py::test_online_with_custom_run

Flake rate in main: 3.85% (Passed 25 times, Failed 1 times)

Stack Traces | 0.003s run time
neptune_mock = <module 'neptune'>

    def test_online_with_custom_run(neptune_mock):
        neptune_mock.init_run.return_value.exists.return_value = True
        neptune_mock.init_run.return_value.__getitem__.side_effect = _fetchable_paths
    
        created_run = neptune_mock.init_run()
        neptune_mock.init_run.reset_mock()
    
>       logger = NeptuneLogger(run=created_run)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

loggers/test_neptune.py:92: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.12.../pytorch_lightning/loggers/neptune.py:222: in __init__
    self.logger_impl = EnterpriseNeptuneLogger(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytorch_lightning_enterprise.loggers.neptune.NeptuneLogger object at 0x7fbdf628d9d0>
api_key = None, project = None, name = None
run = <MagicMock spec='RunType' id='140453855423488'>
log_model_checkpoints = True, prefix = 'training', neptune_run_kwargs = {}
Handler = <class 'tests_pytorch.loggers.conftest.neptune_mock.<locals>.RunType'>

>   ???
E   ModuleNotFoundError: No module named 'lightning'

pytorch_lightning_enterprise/loggers/neptune.py:256: ModuleNotFoundError
tests/tests_pytorch/loggers/test_neptune.py::test_online_with_wrong_kwargs

Flake rate in main: 4.00% (Passed 24 times, Failed 1 times)

Stack Traces | 0.004s run time
neptune_mock = <module 'neptune'>

    def test_online_with_wrong_kwargs(neptune_mock):
        """Tests combinations of kwargs together with `run` kwarg which makes some of other parameters unavailable in
        init."""
        run = neptune_mock.init_run()
    
        with pytest.raises(ValueError, match="Run parameter expected to be of type `neptune.Run`*"):
            NeptuneLogger(run="some string")
    
        with pytest.raises(ValueError, match="When an already initialized run object is provided*"):
            NeptuneLogger(run=run, project="redundant project")
    
        with pytest.raises(ValueError, match="When an already initialized run object is provided*"):
            NeptuneLogger(run=run, api_key="redundant api key")
    
        with pytest.raises(ValueError, match="When an already initialized run object is provided*"):
            NeptuneLogger(run=run, name="redundant api name")
    
        with pytest.raises(ValueError, match="When an already initialized run object is provided*"):
            NeptuneLogger(run=run, foo="random **kwarg")
    
        # this should work
>       NeptuneLogger(run=run)

loggers/test_neptune.py:139: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.12.../pytorch_lightning/loggers/neptune.py:222: in __init__
    self.logger_impl = EnterpriseNeptuneLogger(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytorch_lightning_enterprise.loggers.neptune.NeptuneLogger object at 0x7fbdf61e3740>
api_key = None, project = None, name = None
run = <MagicMock spec='RunType' id='140453854417536'>
log_model_checkpoints = True, prefix = 'training', neptune_run_kwargs = {}
Handler = <class 'tests_pytorch.loggers.conftest.neptune_mock.<locals>.RunType'>

>   ???
E   ModuleNotFoundError: No module named 'lightning'

pytorch_lightning_enterprise/loggers/neptune.py:256: ModuleNotFoundError
tests/tests_pytorch/trainer/connectors/test_accelerator_connector.py::test_bitsandbytes_precision_cuda_required

Flake rate in main: 4.00% (Passed 24 times, Failed 1 times)

Stack Traces | 0.004s run time
monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x14e9cce60>

    def test_bitsandbytes_precision_cuda_required(monkeypatch):
        monkeypatch.setattr(lightning.fabric.plugins.precision.bitsandbytes, "_BITSANDBYTES_AVAILABLE", True)
        monkeypatch.setitem(sys.modules, "bitsandbytes", Mock())
        with pytest.raises(RuntimeError, match="Bitsandbytes is only supported on CUDA GPUs"):
>           _AcceleratorConnector(accelerator="cpu", plugins=BitsandbytesPrecision(mode="int8"))
                                                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

trainer/connectors/test_accelerator_connector.py:973: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../.venv/lib/python3.12.../plugins/precision/bitsandbytes.py:63: in __init__
    self.bitsandbytes_impl = EnterpriseBitsandbytesPrecision(mode=mode, dtype=dtype, ignore_modules=ignore_modules)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../plugins/precision/bitsandbytes.py:65: in __init__
    ???
.../plugins/precision/bitsandbytes.py:190: in _import_bitsandbytes
    ???
../../.venv/lib/python3.12.../lightning_utilities/core/imports.py:200: in __bool__
    self._check_available()
../../.venv/lib/python3.12.../lightning_utilities/core/imports.py:166: in _check_available
    self._check_requirement()
../../.venv/lib/python3.12.../lightning_utilities/core/imports.py:146: in _check_requirement
    self.available = module_available(module)
                     ^^^^^^^^^^^^^^^^^^^^^^^^
../../.venv/lib/python3.12.../lightning_utilities/core/imports.py:59: in module_available
    if not package_available(module_names[0]):
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../.venv/lib/python3.12.../lightning_utilities/core/imports.py:41: in package_available
    return find_spec(package_name) is not None
           ^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

name = 'bitsandbytes', package = None

>   ???
E   ValueError: bitsandbytes.__spec__ is not set

<frozen importlib.util>:108: ValueError
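Unlike the logger failures, this one is self-inflicted by the test's mocking: `monkeypatch.setitem(sys.modules, "bitsandbytes", Mock())` plants a module stand-in whose `__spec__` cannot be read (a plain `Mock` refuses to auto-create dunder attributes), and `importlib.util.find_spec` converts that `AttributeError` into the `ValueError` above. A minimal reproduction and fix, using only the standard library:

    import sys
    from importlib.machinery import ModuleSpec
    from importlib.util import find_spec
    from unittest.mock import Mock

    fake = Mock()  # a bare Mock raises AttributeError for dunder lookups like __spec__
    sys.modules["bitsandbytes"] = fake
    try:
        find_spec("bitsandbytes")
    except ValueError as err:
        print(err)  # bitsandbytes.__spec__ is not set

    # Attaching a real ModuleSpec makes the stand-in look importable again:
    fake.__spec__ = ModuleSpec("bitsandbytes", loader=None)
    assert find_spec("bitsandbytes") is not None

    del sys.modules["bitsandbytes"]  # clean up

In the test itself, the equivalent fix is to hand `monkeypatch.setitem` a mock that already carries such a spec.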


@github-actions github-actions bot added the docs Documentation related label Nov 14, 2025
@deependujha
Collaborator

It seems the submodule was mistakenly deleted.

@github-actions github-actions bot added the ci Continuous Integration label Nov 17, 2025