Update trainer profiler typehint to use `Profiler` instead of the deprecated `BaseProfiler` (#13084)

* Fix trainer profiler typehint

* Remove unused import of deprecated BaseProfiler

* Update CHANGELOG.md

Co-authored-by: Akihiro Nitta <nitta@akihironitta.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Author: mads-oestergaard
Date:   2022-05-23 12:09:47 +02:00 (committed by GitHub)
parent 874ae50870
commit 11e289ad9f
2 changed files with 4 additions and 2 deletions

CHANGELOG.md

@@ -226,6 +226,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed an issue wrt unnecessary usage of habana mixed precision package for fp32 types ([#13028](https://github.com/PyTorchLightning/pytorch-lightning/pull/13028))
+- Fixed issue where the CLI could not pass a `Profiler` to the `Trainer` ([#13084](https://github.com/PyTorchLightning/pytorch-lightning/pull/13084))
 -

pytorch_lightning/trainer/trainer.py

@@ -56,7 +56,6 @@ from pytorch_lightning.plugins import (
 from pytorch_lightning.plugins.environments.slurm_environment import SLURMEnvironment
 from pytorch_lightning.profiler import (
     AdvancedProfiler,
-    BaseProfiler,
     PassThroughProfiler,
     Profiler,
     PyTorchProfiler,
@@ -171,7 +170,7 @@ class Trainer(
         weights_save_path: Optional[str] = None,  # TODO: Remove in 1.8
         num_sanity_val_steps: int = 2,
         resume_from_checkpoint: Optional[Union[Path, str]] = None,
-        profiler: Optional[Union[BaseProfiler, str]] = None,
+        profiler: Optional[Union[Profiler, str]] = None,
         benchmark: Optional[bool] = None,
         deterministic: Union[bool, _LITERAL_WARN] = False,
         reload_dataloaders_every_n_epochs: int = 0,
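
For context, a minimal sketch of the usage the widened typehint describes: passing a concrete `Profiler` instance, rather than a string name, to `Trainer`. The `AdvancedProfiler` arguments below are illustrative, and the import path assumes the pytorch_lightning version this commit targets (where `pytorch_lightning.profiler` exposes `Profiler` and its subclasses):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.profiler import AdvancedProfiler

# With the typehint `Optional[Union[Profiler, str]]`, any Profiler subclass
# instance (not only string shorthands like "simple" or "advanced") is a valid
# value for `profiler`, which also lets the CLI instantiate one and pass it in.
profiler = AdvancedProfiler(dirpath=".", filename="perf_logs")  # illustrative args
trainer = Trainer(profiler=profiler, max_epochs=1)
```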