From 130de22fd75253330320473d8081fd60698c3d64 Mon Sep 17 00:00:00 2001
From: Nathan Raw
Date: Thu, 15 Oct 2020 15:04:54 -0600
Subject: [PATCH] Fix broken trainer flags nb (#4159)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* :pencil: add newline

* Created using Colaboratory

* Edited using Colaboratory

* Apply suggestions from code review

Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>

* typo

* Apply suggestions from code review

Co-authored-by: Adrian Wälchli

Co-authored-by: Jirka Borovec
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: edenlightning <66261195+edenlightning@users.noreply.github.com>
Co-authored-by: Adrian Wälchli
---
 notebooks/05-trainer-flags-overview.ipynb | 9 ++++++---
 notebooks/README.md                       | 2 +-
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/notebooks/05-trainer-flags-overview.ipynb b/notebooks/05-trainer-flags-overview.ipynb
index 4e6c53caf0..ad75bf739e 100644
--- a/notebooks/05-trainer-flags-overview.ipynb
+++ b/notebooks/05-trainer-flags-overview.ipynb
@@ -74,7 +74,7 @@
         "from torchvision.datasets.mnist import MNIST\n",
         "from torchvision import transforms"
       ],
-      "execution_count": 2,
+      "execution_count": null,
       "outputs": []
     },
     {
@@ -1644,7 +1644,9 @@
         "\n",
         "2. Iteratively until convergence or maximum number of tries max_trials (default 25) has been reached:\n",
         "* Call fit() method of trainer. This evaluates steps_per_trial (default 3) number of training steps. Each training step can trigger an OOM error if the tensors (training batch, weights, gradients ect.) allocated during the steps have a too large memory footprint.\n",
-        "* If an OOM error is encountered, decrease the batch size, or else -> increase it. 
How much the batch size is increased/decreased is determined by the chosen strategy.\n",
+        "  * If an OOM error is encountered, decrease the batch size\n",
+        "  * Else increase it.\n",
+        "* How much the batch size is increased/decreased is determined by the chosen strategy.\n",
         "\n",
         "3. The found batch size is saved to model.hparams.batch_size\n",
         "\n",
@@ -2152,6 +2154,7 @@
         "By default Lightning will save a checkpoint in the working directory, which will be updated every epoch.\n",
         "\n",
         "### Automatic saving\n",
+        "By default Lightning will save a checkpoint in the end of the first epoch in the working directory, which will be updated every epoch."
       ]
     },
     {
@@ -2570,7 +2573,7 @@
         "Lightning has built in integration with various loggers such as TensorBoard, wandb, commet, etc.\n",
         "\n",
         "\n",
-        "You can pass any metrics you want to logn during training, like loss, to TrainResult.log, such as loss or image output. Similarly, pass in to EvalReuslt.log anything you want to log during validation step.\n",
+        "You can pass any metrics you want to log during training to `self.log`, such as loss or accuracy. Similarly, pass in to self.log any metric you want to log during validation step.\n",
         "\n",
         "These values will be passed in to the logger of your choise. simply pass in any supported logger to logger trainer flag.\n",
         "\n",
diff --git a/notebooks/README.md b/notebooks/README.md
index 4dc5dc4b37..695e1a038c 100644
--- a/notebooks/README.md
+++ b/notebooks/README.md
@@ -10,4 +10,4 @@ You can easily run any of the official notebooks by clicking the 'Open in Colab'
 | __Datamodules__ | Learn about DataModules and train a dataset-agnostic model on MNIST and CIFAR10.| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/02-datamodules.ipynb)|
 | __GAN__ | Train a GAN on the MNIST Dataset. Learn how to use multiple optimizers in Lightning. 
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/03-basic-gan.ipynb) |
 | __BERT__ | Fine-tune HuggingFace Transformers models on the GLUE Benchmark | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb) |
-| __Trainer Flags__ | Overview of the available Lightning `Trainer` flags | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/05-trainer-flags-overview.ipynb) |
\ No newline at end of file
+| __Trainer Flags__ | Overview of the available Lightning `Trainer` flags | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/05-trainer-flags-overview.ipynb) |
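The batch-size search loop described in the first notebook hunk (run a few training steps; decrease the batch size on OOM, otherwise increase it, up to max_trials) can be sketched in plain Python. This is an illustrative stand-in, not Lightning's actual implementation: `run_steps` plays the role of `trainer.fit()` evaluating `steps_per_trial` training steps, and the doubling rule mimics the "power" scaling strategy.

```python
def scale_batch_size(run_steps, init_bs=2, max_trials=25):
    """Sketch of the batch-size finder loop described in the notebook.

    `run_steps(bs)` stands in for running a few training steps at
    batch size `bs`; it raises RuntimeError on an out-of-memory error.
    Power strategy: double the batch size until OOM, then back off once.
    """
    bs = init_bs
    for _ in range(max_trials):
        try:
            run_steps(bs)        # like steps_per_trial training steps
        except RuntimeError:     # OOM encountered -> decrease and stop
            bs //= 2
            break
        else:                    # no OOM -> increase (power rule: double)
            bs *= 2
    return bs
```

In Lightning itself this search is triggered through the trainer's batch-size scaling flag rather than hand-written, and, as the notebook text notes, the found value is saved to `model.hparams.batch_size`.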