diff --git a/pytorch_lightning/core/__init__.py b/pytorch_lightning/core/__init__.py
index c05f27e8d3..c29fea079f 100644
--- a/pytorch_lightning/core/__init__.py
+++ b/pytorch_lightning/core/__init__.py
@@ -94,7 +94,7 @@ Which you can train by doing:

     trainer = pl.Trainer()
     model = LitModel()

-    trainer.fit(model, train_loader)
+    trainer.fit(model, train_loader)
----------
diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py
index 5ff64156e1..36b3ec3229 100644
--- a/pytorch_lightning/core/lightning.py
+++ b/pytorch_lightning/core/lightning.py
@@ -860,9 +860,9 @@ class LightningModule(ABC, DeviceDtypeModuleMixin, GradInformation, ModelIO, Mod
         Override to init DDP in your own way or with your own wrapper.
         The only requirements are that:

-        1. On a validation batch the call goes to ``model.validation_step``.
-        2. On a training batch the call goes to ``model.training_step``.
-        3. On a testing batch, the call goes to ``model.test_step``.
+        1. On a validation batch, the call goes to ``model.validation_step``.
+        2. On a training batch, the call goes to ``model.training_step``.
+        3. On a testing batch, the call goes to ``model.test_step``.

         Args:
             model: the :class:`LightningModule` currently being optimized.
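
The docstring corrected above belongs to the ``configure_ddp`` hook. A minimal sketch of an override satisfying the three listed requirements, modeled on the example shown elsewhere in the same docstring and assuming the same-era API where the hook receives ``(model, device_ids)`` and returns the wrapped model:

    from pytorch_lightning.overrides.data_parallel import LightningDistributedDataParallel

    def configure_ddp(self, model, device_ids):
        # LightningDistributedDataParallel routes forward() to training_step,
        # validation_step, or test_step depending on the current phase,
        # which is exactly what requirements 1-3 demand.
        model = LightningDistributedDataParallel(
            model,
            device_ids=device_ids,
            find_unused_parameters=True,
        )
        return model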