Fixed `TypeError` on 1.7.6 when `torch.distributed` unavailable (#14809)

* Fixed TypeError on 1.7.6 when distributed unavailable
* changelog
otaj 2022-09-20 22:35:40 +02:00 committed by lexierule
parent e1a4370835
commit 085b5b38e8
2 changed files with 2 additions and 1 deletion

CHANGELOG.md

@@ -13,6 +13,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Break HPU Graphs into two parts (forward + backward as one and optimizer as another) for better performance ([#14656](https://github.com/Lightning-AI/lightning/pull/14656))
 - Fixed torchscript error with ensembles of LightningModules ([#14657](https://github.com/Lightning-AI/lightning/pull/14657), [#14724](https://github.com/Lightning-AI/lightning/pull/14724))
 - Fixed an issue with `TensorBoardLogger.finalize` creating a new experiment when none was created during the Trainer's execution ([#14762](https://github.com/Lightning-AI/lightning/pull/14762))
+- Fixed `TypeError` on import when `torch.distributed` is not available ([#14809](https://github.com/Lightning-AI/lightning/pull/14809))
 
 
 ## [1.7.6] - 2022-09-13

src/pytorch_lightning/utilities/types.py

@@ -31,7 +31,7 @@ from typing_extensions import Protocol, runtime_checkable
 if torch.distributed.is_available():
     from torch._C._distributed_c10d import ProcessGroup
 else:
-    ProcessGroup = ...  # type: ignore[assignment,misc]
+    ProcessGroup = Any  # type: ignore[assignment,misc]
 
 _NUMBER = Union[int, float]
 _METRIC = Union[Metric, Tensor, _NUMBER]
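
Why the one-line change works: `...` is the `Ellipsis` singleton, not a type, so the first typing construct that consumed the fallback alias (for example an `Optional[ProcessGroup]` annotation) raised at import time, whereas `Any` is accepted anywhere a type is expected. A minimal sketch of the failure mode; the `Optional[ProcessGroup]` call site below is illustrative, not the library's actual code:

```python
from typing import Any, Optional

# Old fallback: binds the name to the Ellipsis singleton, which is not a type.
ProcessGroup = ...

try:
    # Any typing construct that consumes the alias raises immediately;
    # in the real module this surfaced as a TypeError on import.
    _ = Optional[ProcessGroup]
except TypeError as err:
    print(err)  # e.g. "typing.Optional requires a single type. Got Ellipsis."

# New fallback: Any is a valid argument to every typing construct.
ProcessGroup = Any
group: Optional[ProcessGroup] = None  # evaluates cleanly at import time
```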