From 064101d04fe40ec91ef6d527eef469447c4dab0b Mon Sep 17 00:00:00 2001
From: William Falcon
Date: Sun, 1 Mar 2020 22:35:56 -0500
Subject: [PATCH] docs (#1001)

---
 docs/source/tutorial_mnist.rst | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/source/tutorial_mnist.rst b/docs/source/tutorial_mnist.rst
index c74c1b27e5..d5a084c599 100644
--- a/docs/source/tutorial_mnist.rst
+++ b/docs/source/tutorial_mnist.rst
@@ -67,7 +67,7 @@ Every research project requires the same core ingredients:
 
 
 The Model
----------
+^^^^^^^^^
 
 The LightningModule provides the structure on how to organize these 5
 ingredients. Let's first start with the model. In this case we'll design
@@ -131,7 +131,7 @@ EXACTLY the same as you would a PyTorch Module.
     torch.Size([1, 10])
 
 Data
-----
+^^^^
 The Lightning Module organizes your dataloaders and data processing as well.
 
 Here's the PyTorch code for loading MNIST
@@ -174,7 +174,8 @@ under the `train_dataloader` method. This is great because if you run into a pro
 to figure out how they prepare their training data you can just look in the `train_dataloader` method.
 
 Optimizer
---------
+^^^^^^^^^
+
 Next we choose what optimizer to use for training our system.
 
 In PyTorch we do it as follows:
@@ -195,7 +196,7 @@ If you don't define this, Lightning will automatically use `Adam(self.parameters
         return Adam(self.parameters(), lr=1e-3)
 
 Training step
--------------
+^^^^^^^^^^^^^
 
 The training step is what happens inside the training loop.
 
@@ -449,6 +450,7 @@ Notice the epoch is MUCH faster!
 
 Validation loop
 ---------------
+
 For most cases, we stop training the model when the performance on a validation
 split of the data reaches a minimum.
 