diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py
index 9c87836b44..58d045044d 100644
--- a/pytorch_lightning/core/lightning.py
+++ b/pytorch_lightning/core/lightning.py
@@ -1313,7 +1313,7 @@ class LightningModule(
         By default, Lightning calls ``step()`` and ``zero_grad()`` as shown in the example
         once per optimizer.

-        .. tip:: With `Trainer(enable_pl_optimizer=True)`, you can user `optimizer.step()` directly
+        .. tip:: With ``Trainer(enable_pl_optimizer=True)``, you can use ``optimizer.step()`` directly
             and it will handle zero_grad, accumulated gradients, AMP, TPU and more automatically for you.

         Warning:
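
For context (not part of the patch): below is a minimal sketch of the usage the patched tip describes, assuming PyTorch Lightning ~1.1, where ``Trainer(enable_pl_optimizer=True)`` and this ``optimizer_step`` signature exist. ``SketchModel`` is a hypothetical example model, not code from the repository.

    import torch
    import torch.nn.functional as F
    import pytorch_lightning as pl


    class SketchModel(pl.LightningModule):
        """Hypothetical model used only to illustrate the tip above."""

        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return F.cross_entropy(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)

        def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                           optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
            # With enable_pl_optimizer=True, `optimizer` arrives wrapped as a
            # LightningOptimizer, so this single call runs the closure and takes
            # care of zero_grad, gradient accumulation, AMP scaling, and TPU
            # stepping, as the tip describes.
            optimizer.step(closure=optimizer_closure)


    # Enable the optimizer wrapping referenced in the patched docstring.
    trainer = pl.Trainer(enable_pl_optimizer=True)

Without ``enable_pl_optimizer=True``, ``optimizer`` is a plain ``torch.optim.Optimizer`` and an override like this would have to manage zero_grad and gradient accumulation itself.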