added module properties docs
parent 801c090376
commit c59853450c

README.md (12 changed lines)
README.md (12 changed lines)
README.md
@@ -92,12 +92,12 @@ def validation_end(self, outputs):
 - [Model saving](https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/#model-saving)
 - [Model loading](https://williamfalcon.github.io/pytorch-lightning/LightningModule/methods/#load-from-metrics)
 
-######Computing cluster (SLURM)
+###### Computing cluster (SLURM)
 
 - [Running grid search on a cluster](https://williamfalcon.github.io/pytorch-lightning/Trainer/SLURM%20Managed%20Cluster#running-grid-search-on-a-cluster)
 - [Walltime auto-resubmit](https://williamfalcon.github.io/pytorch-lightning/Trainer/SLURM%20Managed%20Cluster#walltime-auto-resubmit)
 
-######Debugging
+###### Debugging
 
 - [Fast dev run](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#fast-dev-run)
 - [Inspect gradient norms](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#inspect-gradient-norms)
@@ -107,7 +107,7 @@ def validation_end(self, outputs):
 - [Pring which gradients are nan](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#print-which-gradients-are-nan)
 
 
-######Distributed training
+###### Distributed training
 
 - [16-bit mixed precision](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#16-bit-mixed-precision)
 - [Multi-GPU](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#Multi-GPU)
@@ -116,7 +116,7 @@ def validation_end(self, outputs):
 - [Self-balancing architecture](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#self-balancing-architecture)
 
 
-######Experiment Logging
+###### Experiment Logging
 
 - [Display metrics in progress bar](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#display-metrics-in-progress-bar)
 - Log arbitrary metrics
@@ -126,7 +126,7 @@ def validation_end(self, outputs):
 - [Snapshot code for a training run](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#snapshot-code-for-a-training-run)
 - [Write logs file to csv every k batches](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#write-logs-file-to-csv-every-k-batches)
 
-######Training loop
+###### Training loop
 
 - [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
 - [Anneal Learning rate](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#anneal-learning-rate)
@@ -136,7 +136,7 @@ def validation_end(self, outputs):
 - [Use multiple optimizers (like GANs)](https://williamfalcon.github.io/pytorch-lightning/Pytorch-Lightning/LightningModule/#configure_optimizers)
 - [Set how much of the training set to check (1-100%)](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#set-how-much-of-the-training-set-to-check)
 
-######Validation loop
+###### Validation loop
 
 - [Check validation every n epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Validation%20loop/#check-validation-every-n-epochs)
 - [Set how much of the validation set to check](https://williamfalcon.github.io/pytorch-lightning/Trainer/Validation%20loop/#set-how-much-of-the-validation-set-to-check)
README.md
@@ -24,12 +24,12 @@ To start a new project define these two files.
 - [Model saving](https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/#model-saving)
 - [Model loading](https://williamfalcon.github.io/pytorch-lightning/LightningModule/methods/#load-from-metrics)
 
-######Computing cluster (SLURM)
+###### Computing cluster (SLURM)
 
 - [Running grid search on a cluster](https://williamfalcon.github.io/pytorch-lightning/Trainer/SLURM%20Managed%20Cluster#running-grid-search-on-a-cluster)
 - [Walltime auto-resubmit](https://williamfalcon.github.io/pytorch-lightning/Trainer/SLURM%20Managed%20Cluster#walltime-auto-resubmit)
 
-######Debugging
+###### Debugging
 
 - [Fast dev run](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#fast-dev-run)
 - [Inspect gradient norms](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#inspect-gradient-norms)
@@ -39,7 +39,7 @@ To start a new project define these two files.
 - [Pring which gradients are nan](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#print-which-gradients-are-nan)
 
 
-######Distributed training
+###### Distributed training
 
 - [16-bit mixed precision](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#16-bit-mixed-precision)
 - [Multi-GPU](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#Multi-GPU)
@@ -48,7 +48,7 @@ To start a new project define these two files.
 - [Self-balancing architecture](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#self-balancing-architecture)
 
 
-######Experiment Logging
+###### Experiment Logging
 
 - [Display metrics in progress bar](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#display-metrics-in-progress-bar)
 - Log arbitrary metrics
@@ -58,7 +58,7 @@ To start a new project define these two files.
 - [Snapshot code for a training run](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#snapshot-code-for-a-training-run)
 - [Write logs file to csv every k batches](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#write-logs-file-to-csv-every-k-batches)
 
-######Training loop
+###### Training loop
 
 - [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
 - [Anneal Learning rate](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#anneal-learning-rate)