###### New project Quick Start

To start a new project you define two files: a LightningModule and a Trainer file.
A separate trainer file lets you run many LightningModules. Each LightningModule contains the core
logic for a particular research project.
For example, one LightningModule could be an image classifier and another a seq-2-seq model,
both (optionally) run by the same trainer file. A minimal sketch follows the links below.
2019-06-27 18:29:44 +00:00
1. [MNIST LightningModule](https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/#minimal-example)
2. [Trainer](https://williamfalcon.github.io/pytorch-lightning/Trainer/)
- [Basic CPU Trainer Template](https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/examples/new_project_templates/single_cpu_template.py)
- [Multi-GPU Trainer Template](https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/examples/new_project_templates/single_gpu_node_template.py)
- [GPU cluster Trainer Template](https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/examples/new_project_templates/multi_node_cluster_template.py)
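A minimal sketch of the two-file split, assuming a recent pytorch-lightning release; method names and Trainer flags have changed across versions, so treat the exact signatures as illustrative rather than canonical:

```python
# Model file: a LightningModule holds the research logic.
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
import pytorch_lightning as pl


class MNISTClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Trainer file: the same Trainer can run any LightningModule.
if __name__ == "__main__":
    train_loader = DataLoader(
        MNIST(".", train=True, download=True, transform=transforms.ToTensor()),
        batch_size=32,
    )
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(MNISTClassifier(), train_loader)
```

The trainer file stays the same when you swap in a different LightningModule; only the model class changes.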
###### Docs shortcuts
- [LightningModule](LightningModule/RequiredTrainerInterface/)
- [Trainer](Trainer/)
###### Quick start examples
- [CPU example](examples/Examples/#cpu-hyperparameter-search)
- [Hyperparameter search on single GPU](examples/Examples/#hyperparameter-search-on-a-single-or-multiple-gpus)
- [Hyperparameter search on multiple GPUs on same node](examples/Examples/#hyperparameter-search-on-a-single-or-multiple-gpus)
- [Hyperparameter search on a SLURM HPC cluster](examples/Examples/#hyperparameter-search-on-a-slurm-hpc-cluster)
2019-06-26 23:18:41 +00:00
###### Checkpointing
- [Model saving](https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/#model-saving)
- [Model loading](https://williamfalcon.github.io/pytorch-lightning/LightningModule/methods/#load-from-metrics)
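A sketch of both directions, assuming the `ModelCheckpoint` callback and `load_from_checkpoint` from recent releases (the version these docs describe exposed `load_from_metrics` for restoring instead):

```python
# Sketch: save the best checkpoint during training, then restore it later.
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(monitor="val_loss")  # keep the best val_loss
trainer = pl.Trainer(max_epochs=10, callbacks=[checkpoint_cb])
# trainer.fit(model, train_loader, val_loader)

# Restore weights and hyperparameters later (the path is hypothetical):
# model = MNISTClassifier.load_from_checkpoint("checkpoints/best.ckpt")
```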
###### Computing cluster (SLURM)
- [Running grid search on a cluster](https://williamfalcon.github.io/pytorch-lightning/Trainer/SLURM%20Managed%20Cluster#running-grid-search-on-a-cluster)
- [Walltime auto-resubmit](https://williamfalcon.github.io/pytorch-lightning/Trainer/SLURM%20Managed%20Cluster#walltime-auto-resubmit)
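In rough terms (a sketch, not the exact API of this release): the same training script is submitted via sbatch/srun, and the Trainer is told the shape of the allocation.

```python
# Sketch: a Trainer configured for a SLURM allocation and launched with
# something like `srun python trainer_main.py` from an sbatch script.
# Flag names vary by release.
import pytorch_lightning as pl

trainer = pl.Trainer(
    gpus=4,       # GPUs per node
    num_nodes=2,  # must match the number of nodes SLURM allocated
)
# Near the walltime limit, Lightning's SLURM integration can checkpoint
# and requeue the job (the walltime auto-resubmit linked above).
# trainer.fit(model)
```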
###### Debugging
- [Fast dev run](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#fast-dev-run)
- [Inspect gradient norms](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#inspect-gradient-norms)
- [Log GPU usage](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#log-gpu-usage)
- [Make model overfit on subset of data](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#make-model-overfit-on-subset-of-data)
- [Print the parameter count by layer](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#print-the-parameter-count-by-layer)
- [Print which gradients are NaN](https://williamfalcon.github.io/pytorch-lightning/Trainer/debugging/#print-which-gradients-are-nan)
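Most of these are single Trainer flags. A sketch with recent flag names (e.g. `overfit_batches` replaced the older `overfit_pct`); they are usually enabled one at a time, not together:

```python
import pytorch_lightning as pl

# Each flag maps to one of the debugging features linked above.
trainer = pl.Trainer(
    fast_dev_run=True,     # run a single batch through train/val as a smoke test
    track_grad_norm=2,     # log the 2-norm of the gradients
    overfit_batches=0.01,  # try to overfit 1% of the data to sanity-check the model
)
```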
###### Distributed training
- [16-bit mixed precision](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#16-bit-mixed-precision)
- [Multi-GPU](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#multi-gpu)
- [Multi-node](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#multi-node)
- [Single GPU](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#single-gpu)
- [Self-balancing architecture](https://williamfalcon.github.io/pytorch-lightning/Trainer/Distributed%20training/#self-balancing-architecture)
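A sketch of the common combinations, using recent flag names (`gpus`, `num_nodes`, `precision`; earlier releases used a `distributed_backend` argument to pick the strategy):

```python
import pytorch_lightning as pl

# Single GPU: gpus=1; multi-GPU on one machine: gpus=2; multi-node adds
# num_nodes. precision=16 enables 16-bit mixed precision.
trainer = pl.Trainer(
    gpus=2,
    num_nodes=2,
    precision=16,
)
```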
###### Experiment logging
- [Display metrics in progress bar](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#display-metrics-in-progress-bar)
- [Log metric row every k batches](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#log-metric-row-every-k-batches)
- [Process position](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#process-position)
- [Tensorboard support](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#tensorboard-support)
- [Save a snapshot of all hyperparameters](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#save-a-snapshot-of-all-hyperparameters)
- [Snapshot code for a training run](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#snapshot-code-for-a-training-run)
- [Write log files to CSV every k batches](https://williamfalcon.github.io/pytorch-lightning/Trainer/Logging/#write-logs-file-to-csv-every-k-batches)
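A sketch of the in-module side of this, assuming the `self.log` and `save_hyperparameters` API from later releases (the release these docs describe returned logging dicts from `training_step` instead):

```python
import torch
from torch import nn
from torch.nn import functional as F
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self, lr=1e-3):
        super().__init__()
        self.save_hyperparameters()          # snapshot hyperparameters with the run
        self.layer = nn.Linear(28 * 28, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self.layer(x.view(x.size(0), -1)), y)
        self.log("train_loss", loss, prog_bar=True)  # show in the progress bar
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)


# Write a metrics row every k batches; TensorBoard is the default logger.
trainer = pl.Trainer(log_every_n_steps=50)
```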
###### Training loop
- [Accumulate gradients](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#accumulated-gradients)
- [Force training for min or max epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-training-for-min-or-max-epochs)
- [Force disable early stop](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#force-disable-early-stop)
- [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
- [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)
- [Learning rate scheduling](https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/#configure_optimizers)
- [Use multiple optimizers (like GANs)](https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/#configure_optimizers)
- [Set how much of the training set to check (1-100%)](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#set-how-much-of-the-training-set-to-check)
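Most of these are Trainer arguments as well; a sketch with recent names (`limit_train_batches` was `train_percent_check` in earlier releases), plus the multi-optimizer hook:

```python
import pytorch_lightning as pl

trainer = pl.Trainer(
    accumulate_grad_batches=4,  # step the optimizer every 4 batches
    min_epochs=10,              # force training for at least 10 epochs
    max_epochs=100,             # and stop after at most 100
    gradient_clip_val=0.5,      # clip the gradient norm at 0.5
    limit_train_batches=0.25,   # check only 25% of the training set per epoch
)

# Multiple optimizers (e.g. for GANs) come from configure_optimizers;
# generator/discriminator here are hypothetical submodules:
# def configure_optimizers(self):
#     opt_g = torch.optim.Adam(self.generator.parameters(), lr=2e-4)
#     opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)
#     return [opt_g, opt_d]
```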
###### Validation loop
- [Check validation every n epochs](https://williamfalcon.github.io/pytorch-lightning/Trainer/Validation%20loop/#check-validation-every-n-epochs)
- [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)
- [Set how much of the validation set to check](https://williamfalcon.github.io/pytorch-lightning/Trainer/Validation%20loop/#set-how-much-of-the-validation-set-to-check)
- [Set how much of the test set to check](https://williamfalcon.github.io/pytorch-lightning/Trainer/Validation%20loop/#set-how-much-of-the-test-set-to-check)
- [Set validation check frequency within 1 training epoch](https://williamfalcon.github.io/pytorch-lightning/Trainer/Validation%20loop/#set-validation-check-frequency-within-1-training-epoch)
- [Set the number of validation sanity steps](https://williamfalcon.github.io/pytorch-lightning/Trainer/Validation%20loop/#set-the-number-of-validation-sanity-steps)
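These map onto Trainer arguments too; a sketch with recent names (`limit_val_batches` and `limit_test_batches` were `val_percent_check` and `test_percent_check` in earlier releases):

```python
import pytorch_lightning as pl

trainer = pl.Trainer(
    check_val_every_n_epoch=2,  # run validation every 2 epochs
    val_check_interval=0.25,    # or several times within one training epoch
    limit_val_batches=0.5,      # check 50% of the validation set
    limit_test_batches=0.5,     # check 50% of the test set
    num_sanity_val_steps=5,     # run 5 val batches before training starts
)
```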