Fix broken trainer flags nb (#4159)

* 📝 add newline

* Created using Colaboratory

* Edited using Colaboratory

* Apply suggestions from code review

Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>

* typo

* Apply suggestions from code review

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: edenlightning <66261195+edenlightning@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Nathan Raw 2020-10-15 15:04:54 -06:00 committed by GitHub
parent b45b57cc58
commit 130de22fd7
2 changed files with 7 additions and 4 deletions

@@ -74,7 +74,7 @@
 "from torchvision.datasets.mnist import MNIST\n",
 "from torchvision import transforms"
 ],
-"execution_count": 2,
+"execution_count": null,
 "outputs": []
 },
 {
@@ -1644,7 +1644,9 @@
 "\n",
 "2. Iteratively until convergence or the maximum number of tries max_trials (default 25) has been reached:\n",
 "* Call the fit() method of the trainer. This evaluates steps_per_trial (default 3) training steps. Each training step can trigger an OOM error if the tensors (training batch, weights, gradients, etc.) allocated during the steps have too large a memory footprint.\n",
-"* If an OOM error is encountered, decrease the batch size, or else -> increase it. How much the batch size is increased/decreased is determined by the chosen strategy.\n",
+" * If an OOM error is encountered, decrease the batch size\n",
+" * Else increase it.\n",
+"* How much the batch size is increased/decreased is determined by the chosen strategy.\n",
 "\n",
 "3. The found batch size is saved to model.hparams.batch_size\n",
 "\n",
@@ -2152,6 +2154,7 @@
 "By default Lightning will save a checkpoint in the working directory, which will be updated every epoch.\n",
 "\n",
 "### Automatic saving\n",
+"By default Lightning will save a checkpoint at the end of the first epoch in the working directory, and will update it every epoch."
 ]
 },
 {
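
For reference, a minimal sketch of automatic versus customized checkpointing as described in the hunk above (the directory path is hypothetical, and this assumes a Lightning ~1.0-era API where ModelCheckpoint can be passed as a callback):

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Automatic saving: no configuration needed; a checkpoint is written under
# default_root_dir at the end of the first epoch and updated every epoch.
trainer = Trainer(default_root_dir='checkpoints/')  # hypothetical path

# Customized saving: keep only the best checkpoint by a monitored metric.
# Assumes 'val_loss' is logged via self.log during validation.
best_ckpt = ModelCheckpoint(monitor='val_loss', mode='min', save_top_k=1)
trainer = Trainer(callbacks=[best_ckpt])
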
@@ -2570,7 +2573,7 @@
 "Lightning has built-in integration with various loggers such as TensorBoard, wandb, Comet, etc.\n",
 "\n",
 "\n",
-"You can pass any metrics you want to logn during training, like loss, to TrainResult.log, such as loss or image output. Similarly, pass in to EvalReuslt.log anything you want to log during validation step.\n",
+"You can pass any metrics you want to log during training to `self.log`, such as loss or accuracy. Similarly, pass to self.log any metric you want to log during the validation step.\n",
 "\n",
 "These values will be passed to the logger of your choice. Simply pass any supported logger to the logger trainer flag.\n",
 "\n",