
Commit e7f05a6

Merge branch 'master' into bugfix/SWA_scheduler_step
2 parents 1581527 + 7f91c5e commit e7f05a6

File tree

11 files changed (+51, -51 lines)


CHANGELOG.md

Lines changed: 6 additions & 0 deletions
@@ -170,6 +170,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

 ### Fixed

+- Sanitize `None` params during pruning ([#6836](https://github.com/PyTorchLightning/pytorch-lightning/pull/6836))
+
+
 - Made the `Plugin.reduce` method more consistent across all Plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))


@@ -200,6 +203,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Enforce an epoch scheduler interval when using SWA ([#6588](https://github.com/PyTorchLightning/pytorch-lightning/pull/6588))


+- Fixed an issue with `IterableDataset` when `__len__` is not defined ([#6828](https://github.com/PyTorchLightning/pytorch-lightning/pull/6828))
+
+
 ## [1.2.6] - 2021-03-30

 ### Changed

docs/source/advanced/tpu.rst

Lines changed: 1 addition & 2 deletions
@@ -64,8 +64,7 @@ To get a TPU on colab, follow these steps:

 .. code-block::

-    !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
-    !python pytorch-xla-env-setup.py --version 1.7 --apt-packages libomp5 libopenblas-dev
+    !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl

 5. Once the above is done, install PyTorch Lightning (v 0.7.0+).


docs/source/common/lightning_module.rst

Lines changed: 0 additions & 24 deletions
@@ -912,30 +912,6 @@ use_amp
 ~~~~~~~
 True if using Automatic Mixed Precision (AMP)

-------------
-
-use_ddp
-~~~~~~~
-True if using ddp
-
-------------
-
-use_ddp2
-~~~~~~~~
-True if using ddp2
-
-------------
-
-use_dp
-~~~~~~
-True if using dp
-
-------------
-
-use_tpu
-~~~~~~~
-True if using TPUs
-
 --------------

 automatic_optimization

docs/source/starter/introduction_guide.rst

Lines changed: 1 addition & 3 deletions
@@ -572,9 +572,7 @@ Next, install the required xla library (adds support for PyTorch on TPUs)

 .. code-block:: shell

-    !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
-
-    !python pytorch-xla-env-setup.py --version nightly --apt-packages libomp5 libopenblas-dev
+    !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl

 In distributed training (multiple GPUs and multiple TPU cores) each GPU or TPU core will run a copy
 of this program. This means that without taking any care you will download the dataset N times which

pytorch_lightning/accelerators/accelerator.py

Lines changed: 1 addition & 1 deletion
@@ -480,7 +480,7 @@ def connect_precision_plugin(self, plugin: PrecisionPlugin) -> None:
         )
         self.setup_precision_plugin(plugin)

-    def save_checkpoint(self, checkpoint: Dict[str, Any], filepath) -> None:
+    def save_checkpoint(self, checkpoint: Dict[str, Any], filepath: str) -> None:
         """Save model/training states as a checkpoint file through state-dump and file-write.

         Args:

pytorch_lightning/callbacks/finetuning.py

Lines changed: 1 addition & 1 deletion
@@ -77,7 +77,7 @@ def finetune_function(self, pl_module, current_epoch, optimizer, optimizer_idx):
             # When `current_epoch` is 10, feature_extractor will start training.
             if current_epoch == self._unfreeze_at_epoch:
                 self.unfreeze_and_add_param_group(
-                    module=pl_module.feature_extractor,
+                    modules=pl_module.feature_extractor,
                     optimizer=optimizer,
                     train_bn=True,
                 )
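
This hunk only corrects the usage example in the `BaseFinetuning` docstring: the keyword is `modules=` (a module or a list of modules), not `module=`. Below is a minimal sketch of such a callback with the corrected keyword; the `feature_extractor` attribute and the unfreeze epoch are illustrative assumptions, not part of this commit.

from pytorch_lightning.callbacks.finetuning import BaseFinetuning


class FeatureExtractorFreezeUnfreeze(BaseFinetuning):
    def __init__(self, unfreeze_at_epoch: int = 10):
        super().__init__()
        self._unfreeze_at_epoch = unfreeze_at_epoch

    def freeze_before_training(self, pl_module):
        # Keep the backbone frozen until `_unfreeze_at_epoch`.
        self.freeze(pl_module.feature_extractor)

    def finetune_function(self, pl_module, current_epoch, optimizer, optimizer_idx):
        if current_epoch == self._unfreeze_at_epoch:
            self.unfreeze_and_add_param_group(
                modules=pl_module.feature_extractor,  # `modules=`, as fixed above
                optimizer=optimizer,
                train_bn=True,
            )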

pytorch_lightning/callbacks/pruning.py

Lines changed: 3 additions & 1 deletion
@@ -422,7 +422,9 @@ def sanitize_parameters_to_prune(
         current_modules = [m for m in pl_module.modules() if not isinstance(m, _MODULE_CONTAINERS)]

         if parameters_to_prune is None:
-            parameters_to_prune = [(m, p) for p in parameters for m in current_modules if hasattr(m, p)]
+            parameters_to_prune = [
+                (m, p) for p in parameters for m in current_modules if getattr(m, p, None) is not None
+            ]
         elif (
             isinstance(parameters_to_prune, (list, tuple)) and len(parameters_to_prune) > 0
             and all(len(p) == 2 for p in parameters_to_prune)
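
The `getattr(m, p, None) is not None` check matters because modules such as `nn.Linear(..., bias=False)` still have a `bias` attribute, it is just set to `None`, so `hasattr` alone lets unusable `(module, "bias")` pairs through to `torch.nn.utils.prune`. A small standalone sketch of the difference (illustrative only, not the library code):

import torch.nn as nn

modules = [nn.Linear(4, 4), nn.Linear(4, 4, bias=False)]
parameters = ["weight", "bias"]

# Old behaviour: a `bias=False` layer still *has* a `bias` attribute (set to None),
# so (module, "bias") sneaks into the list and pruning later fails on it.
with_hasattr = [(m, p) for p in parameters for m in modules if hasattr(m, p)]
assert len(with_hasattr) == 4

# New behaviour: only keep attributes that are actual (non-None) parameters.
sanitized = [(m, p) for p in parameters for m in modules if getattr(m, p, None) is not None]
assert len(sanitized) == 3  # the None bias is dropped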

pytorch_lightning/plugins/training_type/training_type_plugin.py

Lines changed: 4 additions & 2 deletions
@@ -13,7 +13,7 @@
 # limitations under the License.
 import contextlib
 from abc import ABC, abstractmethod
-from typing import Any, Callable, Dict, Generator, Iterable, Optional, Tuple, TYPE_CHECKING, Union
+from typing import Any, Callable, Dict, Generator, Iterable, Optional, Tuple, TYPE_CHECKING, TypeVar, Union

 import torch
 from torch.nn import Module
@@ -30,6 +30,8 @@
 if TYPE_CHECKING:
     from pytorch_lightning.trainer.trainer import Trainer

+TBroadcast = TypeVar("T")
+

 class TrainingTypePlugin(Plugin, ABC):
     """A Plugin to change the behaviour of the training, validation and test-loop."""
@@ -88,7 +90,7 @@ def barrier(self, name: Optional[str] = None) -> None:
         """Forces all possibly joined processes to wait for each other"""

     @abstractmethod
-    def broadcast(self, obj: object, src: int = 0) -> object:
+    def broadcast(self, obj: TBroadcast, src: int = 0) -> TBroadcast:
         """Broadcasts an object to all processes"""

     @abstractmethod
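
Annotating `broadcast` with a `TypeVar` tells a type checker that the return type matches whatever type was passed in, instead of collapsing everything to `object`. A minimal single-process sketch of the idea (not the actual plugin implementation):

from typing import TypeVar

TBroadcast = TypeVar("TBroadcast")


def broadcast(obj: TBroadcast, src: int = 0) -> TBroadcast:
    # Single-process stand-in: a real plugin would send `obj` from rank `src`
    # to every other process before returning it.
    return obj


message = broadcast({"best_model_path": "model.ckpt"})
# With `obj: object -> object`, indexing `message` would require a cast;
# with the TypeVar, the checker knows `message` is still a dict.
print(message["best_model_path"])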

pytorch_lightning/trainer/trainer.py

Lines changed: 3 additions & 3 deletions
@@ -866,7 +866,7 @@ def validate(
         self.validating = True

         # If you supply a datamodule you can't supply val_dataloaders
-        if val_dataloaders and datamodule:
+        if val_dataloaders is not None and datamodule:
             raise MisconfigurationException(
                 'You cannot pass both `trainer.validate(val_dataloaders=..., datamodule=...)`'
             )
@@ -928,7 +928,7 @@ def test(
         self.testing = True

         # If you supply a datamodule you can't supply test_dataloaders
-        if test_dataloaders and datamodule:
+        if test_dataloaders is not None and datamodule:
             raise MisconfigurationException('You cannot pass both `trainer.test(test_dataloaders=..., datamodule=...)`')

         model_provided = model is not None
@@ -1024,7 +1024,7 @@ def predict(
         self.state = TrainerState.PREDICTING
         self.predicting = True

-        if dataloaders and datamodule:
+        if dataloaders is not None and datamodule:
             raise MisconfigurationException(
                 'You cannot pass dataloaders to trainer.predict if you supply a datamodule.'
             )
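
`val_dataloaders and datamodule` evaluates the truthiness of the dataloader, which calls `DataLoader.__len__` and so raises `TypeError` when the underlying `IterableDataset` defines no `__len__`; `is not None` never touches `__len__`. This is the failure mode the `IterableDataset` CHANGELOG entry above describes. A small reproduction sketch (illustrative, not Trainer code):

import torch
from torch.utils.data import DataLoader, IterableDataset


class Stream(IterableDataset):
    # Deliberately no __len__: many streaming datasets cannot know their size.
    def __iter__(self):
        return iter(torch.randn(8, 32))


loader = DataLoader(Stream(), batch_size=4)

# `if loader and datamodule:` implicitly calls bool(loader) -> len(loader),
# which fails for an IterableDataset without __len__.
try:
    bool(loader)
except TypeError as err:
    print("truthiness check failed:", err)

# Comparing against None is always safe:
if loader is not None:
    print("dataloader was provided")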

tests/callbacks/test_pruning.py

Lines changed: 11 additions & 8 deletions
@@ -36,7 +36,7 @@ def __init__(self):
         self.layer = Sequential(
             OrderedDict([
                 ("mlp_1", nn.Linear(32, 32)),
-                ("mlp_2", nn.Linear(32, 32)),
+                ("mlp_2", nn.Linear(32, 32, bias=False)),
                 ("mlp_3", nn.Linear(32, 2)),
             ])
         )
@@ -85,7 +85,10 @@ def train_with_pruning_callback(
     if parameters_to_prune:
         pruning_kwargs["parameters_to_prune"] = [(model.layer.mlp_1, "weight"), (model.layer.mlp_2, "weight")]
     else:
-        pruning_kwargs["parameter_names"] = ["weight"]
+        if isinstance(pruning_fn, str) and pruning_fn.endswith("_structured"):
+            pruning_kwargs["parameter_names"] = ["weight"]
+        else:
+            pruning_kwargs["parameter_names"] = ["weight", "bias"]
     if isinstance(pruning_fn, str) and pruning_fn.endswith("_structured"):
         pruning_kwargs["pruning_dim"] = 0
     if pruning_fn == "ln_structured":
@@ -249,14 +252,14 @@ def test_multiple_pruning_callbacks(tmpdir, caplog, make_pruning_permanent: bool
     actual = [m for m in actual if m.startswith("Applied")]
     assert actual == [
         "Applied `L1Unstructured`. Pruned: 0/1122 (0.00%) -> 544/1122 (48.48%)",
-        "Applied `L1Unstructured` to `Linear(in_features=32, out_features=32, bias=True).weight` with amount=0.5. Pruned: 0 (0.00%) -> 506 (49.41%)",  # noqa: E501
-        "Applied `L1Unstructured` to `Linear(in_features=32, out_features=2, bias=True).weight` with amount=0.5. Pruned: 0 (0.00%) -> 38 (59.38%)",  # noqa: E501
+        "Applied `L1Unstructured` to `Linear(in_features=32, out_features=32, bias=True).weight` with amount=0.5. Pruned: 0 (0.00%) -> 500 (48.83%)",  # noqa: E501
+        "Applied `L1Unstructured` to `Linear(in_features=32, out_features=2, bias=True).weight` with amount=0.5. Pruned: 0 (0.00%) -> 44 (68.75%)",  # noqa: E501
         "Applied `RandomUnstructured`. Pruned: 544/1122 (48.48%) -> 680/1122 (60.61%)",
-        "Applied `RandomUnstructured` to `Linear(in_features=32, out_features=32, bias=True).weight` with amount=0.25. Pruned: 506 (49.41%) -> 633 (61.82%)",  # noqa: E501
-        "Applied `RandomUnstructured` to `Linear(in_features=32, out_features=2, bias=True).weight` with amount=0.25. Pruned: 38 (59.38%) -> 47 (73.44%)",  # noqa: E501
+        "Applied `RandomUnstructured` to `Linear(in_features=32, out_features=32, bias=True).weight` with amount=0.25. Pruned: 500 (48.83%) -> 635 (62.01%)",  # noqa: E501
+        "Applied `RandomUnstructured` to `Linear(in_features=32, out_features=2, bias=True).weight` with amount=0.25. Pruned: 44 (68.75%) -> 45 (70.31%)",  # noqa: E501
         "Applied `L1Unstructured`. Pruned: 680/1122 (60.61%) -> 884/1122 (78.79%)",
-        "Applied `L1Unstructured` to `Linear(in_features=32, out_features=32, bias=True).weight` with amount=0.5. Pruned: 633 (61.82%) -> 828 (80.86%)",  # noqa: E501
-        "Applied `L1Unstructured` to `Linear(in_features=32, out_features=2, bias=True).weight` with amount=0.5. Pruned: 47 (73.44%) -> 56 (87.50%)",  # noqa: E501
+        "Applied `L1Unstructured` to `Linear(in_features=32, out_features=32, bias=True).weight` with amount=0.5. Pruned: 635 (62.01%) -> 830 (81.05%)",  # noqa: E501
+        "Applied `L1Unstructured` to `Linear(in_features=32, out_features=2, bias=True).weight` with amount=0.5. Pruned: 45 (70.31%) -> 54 (84.38%)",  # noqa: E501
     ]

     filepath = str(tmpdir / "foo.ckpt")
