diff --git a/docs/source/starter/lightning_lite.rst b/docs/source/starter/lightning_lite.rst
index 860bd60511..470ebb6a93 100644
--- a/docs/source/starter/lightning_lite.rst
+++ b/docs/source/starter/lightning_lite.rst
@@ -69,7 +69,6 @@ The ``run`` function contains custom training loop used to train ``MyModel`` on
     run(args)
 
-
 ----------
@@ -124,7 +123,7 @@ Here are five required steps to convert to :class:`~pytorch_lightning.lite.Light
     Lite(...).run(args)
 
-That's all. You can now train on any kind of device and scale your training.
+That's all. You can now train on any kind of device and scale your training. Check out `this `_ full MNIST training example with LightningLite.
 
 :class:`~pytorch_lightning.lite.LightningLite` takes care of device management, so you don't have to. You should remove any device-specific logic within your code.