Fix broken trainer flags nb (#4159)
* 📝 add newline
* Created using Colaboratory
* Edited using Colaboratory
* Apply suggestions from code review
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* typo
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: edenlightning <66261195+edenlightning@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
commit 130de22fd7
parent b45b57cc58
notebooks/05-trainer-flags-overview.ipynb

@@ -74,7 +74,7 @@
     "from torchvision.datasets.mnist import MNIST\n",
     "from torchvision import transforms"
    ],
-   "execution_count": 2,
+   "execution_count": null,
    "outputs": []
   },
   {
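The hunk above clears the stale `execution_count` left behind by a Colab run so the committed notebook carries no run state. A scrub like this can also be automated; a minimal sketch using `nbformat` (the script and the in-place rewrite are illustrative, not part of this commit):

```python
# Hypothetical helper, not part of this commit: clears execution counts
# and outputs so the committed .ipynb is free of stale run state.
import nbformat

path = "notebooks/05-trainer-flags-overview.ipynb"
nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell.execution_count = None  # matches the "execution_count": null change
        cell.outputs = []            # matches the "outputs": [] state above
nbformat.write(nb, path)
```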
@@ -1644,7 +1644,9 @@
     "\n",
     "2. Iteratively until convergence or the maximum number of tries max_trials (default 25) has been reached:\n",
     "* Call the fit() method of the trainer. This evaluates steps_per_trial (default 3) training steps. Each training step can trigger an OOM error if the tensors allocated during the step (training batch, weights, gradients, etc.) have too large a memory footprint.\n",
-    "* If an OOM error is encountered, decrease the batch size, or else -> increase it. How much the batch size is increased/decreased is determined by the chosen strategy.\n",
+    " * If an OOM error is encountered, decrease the batch size.\n",
+    " * Else, increase it.\n",
+    "* How much the batch size is increased/decreased is determined by the chosen strategy.\n",
     "\n",
     "3. The found batch size is saved to model.hparams.batch_size\n",
     "\n",
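The search loop this hunk describes is driven by the `auto_scale_batch_size` Trainer flag and run via `trainer.tune()`. A minimal sketch of how it is invoked (the `LitModel` class and its `batch_size` value are placeholders, not from this commit):

```python
import pytorch_lightning as pl

# 'power' doubles the batch size until an OOM error is hit; 'binsearch'
# additionally refines between the last working and the failing size.
trainer = pl.Trainer(auto_scale_batch_size="power")

model = LitModel(batch_size=32)  # placeholder LightningModule with a batch_size hparam
trainer.tune(model)              # runs the fit/adjust loop described above

# As step 3 notes, the found value is written back to the model:
print(model.hparams.batch_size)
```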
@@ -2152,6 +2154,7 @@
     "By default Lightning will save a checkpoint in the working directory, which will be updated every epoch.\n",
     "\n",
     "### Automatic saving\n",
+    "By default Lightning will save a checkpoint at the end of the first epoch in the working directory, which will be updated every epoch."
    ]
   },
   {
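A minimal sketch of the default behavior this cell documents (the directory name and `LitModel` are illustrative; `default_root_dir` is the standard Trainer flag for relocating the output):

```python
import pytorch_lightning as pl

# With no checkpoint callback configured, Lightning saves a checkpoint on its
# own and updates it every epoch; default_root_dir only controls where it
# lands (it defaults to the current working directory).
trainer = pl.Trainer(default_root_dir="lightning_runs")
trainer.fit(LitModel())  # placeholder LightningModule with its own train_dataloader

# The saved file can later be restored (the path pattern is illustrative):
# model = LitModel.load_from_checkpoint("lightning_runs/.../checkpoints/epoch=0.ckpt")
```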
@@ -2570,7 +2573,7 @@
     "Lightning has built-in integration with various loggers such as TensorBoard, wandb, Comet, etc.\n",
     "\n",
     "\n",
-    "You can pass any metrics you want to logn during training, like loss, to TrainResult.log, such as loss or image output. Similarly, pass in to EvalReuslt.log anything you want to log during validation step.\n",
+    "You can pass any metrics you want to log during training to `self.log`, such as loss or accuracy. Similarly, pass any metric you want to log during the validation step to `self.log`.\n",
     "\n",
     "These values will be passed to the logger of your choice. Simply pass any supported logger to the `logger` trainer flag.\n",
     "\n",
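The reworded cell centers on `self.log`. A minimal sketch of the pattern, wired to a TensorBoard logger (the `LitClassifier` model is illustrative; `self.log` and `TensorBoardLogger` are the documented Lightning API):

```python
import torch
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self(x), y)
        self.log("train_loss", loss)  # routed to the attached logger
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self(x), y)
        self.log("val_loss", loss)    # logged during the validation step

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Any supported logger can be handed to the `logger` trainer flag:
trainer = pl.Trainer(logger=TensorBoardLogger("tb_logs", name="demo"))
```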
notebooks/README.md

@@ -10,4 +10,4 @@ You can easily run any of the official notebooks by clicking the 'Open in Colab'
 | __Datamodules__ | Learn about DataModules and train a dataset-agnostic model on MNIST and CIFAR10.| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/02-datamodules.ipynb)|
 | __GAN__ | Train a GAN on the MNIST Dataset. Learn how to use multiple optimizers in Lightning. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/03-basic-gan.ipynb) |
 | __BERT__ | Fine-tune HuggingFace Transformers models on the GLUE Benchmark | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb) |
-| __Trainer Flags__ | Overview of the available Lightning `Trainer` flags | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/05-trainer-flags-overview.ipynb) |
\ No newline at end of file
+| __Trainer Flags__ | Overview of the available Lightning `Trainer` flags | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/PytorchLightning/pytorch-lightning/blob/master/notebooks/05-trainer-flags-overview.ipynb) |