added module properties docs

William Falcon 2019-06-28 18:42:53 -04:00
parent e00d097c12
commit 0bdb8533c6
3 changed files with 28 additions and 4 deletions


@@ -0,0 +1,25 @@
A LightningModule has the following properties, which you can access at any time. A short usage sketch follows the list.
---
#### current_epoch
The current epoch.
---
#### dtype
The current dtype.
---
#### global_step
Total number of training batches seen across all epochs.
---
#### gradient_clip
The current gradient clip value.
---
#### on_gpu
True if your model is currently running on GPUs. Useful for setting flags in the LightningModule for different CPU vs GPU behavior.
---
#### trainer
Last-resort access to any state the trainer holds. Be aware that changing certain properties here can affect your training run.
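
For example, these properties can be read from any method of the module. Below is a minimal sketch, assuming the `ptl` import alias used in the README examples; `CoolModel`, the noise-annealing logic, and the exact `training_step` signature and return format are illustrative assumptions, not part of this commit:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as ptl


class CoolModel(ptl.LightningModule):  # hypothetical example module

    def training_step(self, batch, batch_nb):
        x, y = batch

        # on_gpu: branch between CPU- and GPU-specific behavior
        if self.on_gpu:
            x = x.half()  # e.g. feed half-precision inputs only on GPUs

        # current_epoch: anneal input noise as training progresses
        noise_scale = 0.1 / (self.current_epoch + 1)
        y_hat = self.forward(x + noise_scale * torch.randn_like(x))
        loss = F.cross_entropy(y_hat, y)

        # global_step / trainer: occasional logging and last-resort state access
        if self.global_step % 100 == 0 and self.trainer is not None:
            print(f'step {self.global_step}, epoch {self.current_epoch}')

        return {'loss': loss}
```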


@@ -11,9 +11,9 @@
 - [GPU cluster Trainer](https://github.com/williamFalcon/pytorch-lightning/blob/master/examples/new_project_templates/trainer_gpu_cluster_template.py)
 ###### Quick start examples
-- [CPU example](https://williamfalcon.github.io/pytorch-lightning/Examples/#CPU-hyperparameter-search)
-- [Hyperparameter search on single GPU](https://williamfalcon.github.io/pytorch-lightning/Examples/#Hyperparameter-search-on-a-single-or-multiple-GPUs)
-- [Hyperparameter search on multiple GPUs on same node](https://williamfalcon.github.io/pytorch-lightning/Examples/#Hyperparameter-search-on-a-single-or-multiple-GPUs)
+- [CPU example](https://williamfalcon.github.io/pytorch-lightning/Examples/#cpu-hyperparameter-search)
+- [Hyperparameter search on single GPU](https://williamfalcon.github.io/pytorch-lightning/Examples/#hyperparameter-search-on-a-single-or-multiple-gpus)
+- [Hyperparameter search on multiple GPUs on same node](https://williamfalcon.github.io/pytorch-lightning/Examples/#hyperparameter-search-on-a-single-or-multiple-gpus)
 - [Hyperparameter search on a SLURM HPC cluster](https://williamfalcon.github.io/pytorch-lightning/Examples/#Hyperparameter search on a SLURM HPC cluster)


@@ -24,7 +24,6 @@ class LightningModule(GradInformation, ModelIO, OptimizerConfig, ModelHooks):
         self.fast_dev_run = hparams.fast_dev_run
         self.overfit = hparams.overfit
         self.gradient_clip = hparams.gradient_clip
-        self.num = 2
         self.trainer = None
         self.from_lightning = True