diff --git a/docs/source-pytorch/common/trainer.rst b/docs/source-pytorch/common/trainer.rst
index 759292a3bd..1eb3f270fa 100644
--- a/docs/source-pytorch/common/trainer.rst
+++ b/docs/source-pytorch/common/trainer.rst
@@ -262,9 +262,6 @@ Example::

     # Training with GPU Accelerator using total number of gpus available on the system
     Trainer(accelerator="gpu")

-.. warning:: Passing training strategies (e.g., ``"ddp"``) to ``accelerator`` has been deprecated in v1.5.0
-    and will be removed in v1.7.0. Please use the ``strategy`` argument instead.
-
 accumulate_grad_batches
 ^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/src/pytorch_lightning/trainer/trainer.py b/src/pytorch_lightning/trainer/trainer.py
index 35606d7128..f6789314d7 100644
--- a/src/pytorch_lightning/trainer/trainer.py
+++ b/src/pytorch_lightning/trainer/trainer.py
@@ -172,10 -172,6 @@ class Trainer:
         accelerator: Supports passing different accelerator types ("cpu", "gpu", "tpu", "ipu", "hpu", "mps", "auto")
             as well as custom accelerator instances.

-            .. deprecated:: v1.5
-                Passing training strategies (e.g., 'ddp') to ``accelerator`` has been deprecated in v1.5.0
-                and will be removed in v1.7.0. Please use the ``strategy`` argument instead.
-
         accumulate_grad_batches: Accumulates grads every k batches or as set up in the dict.
             Default: ``None``.
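
For context, the migration the removed notices described looks like the following. This is a minimal sketch, not part of the diff: ``devices=2`` is an illustrative value, and it assumes a PyTorch Lightning release (v1.5 or later) in which the ``strategy`` argument exists::

    from pytorch_lightning import Trainer

    # Deprecated in v1.5.0, removed in v1.7.0:
    # trainer = Trainer(accelerator="ddp", gpus=2)

    # Current form: ``accelerator`` names the hardware, while
    # ``strategy`` names the distribution approach.
    trainer = Trainer(accelerator="gpu", devices=2, strategy="ddp")

Splitting the two concerns lets any strategy pair with any accelerator (e.g., ``strategy="ddp"`` on CPU or GPU) instead of encoding both in a single string.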