added docs page

William Falcon 2019-06-26 19:58:33 -04:00
parent 1f2b9c9222
commit f24bb8deaa
1 changed file with 21 additions and 51 deletions


@@ -33,26 +33,8 @@ Keras is too abstract for researchers. Lightning abstracts the full training loop
 Because you want to use best practices and get gpu training, multi-node training, checkpointing, mixed-precision, etc... for free.
 To use lightning do 2 things:
-1. Define a model with the lightning interface.
-2. Feed this model to the lightning trainer.
+1. [Define a trainer](https://github.com/williamFalcon/pytorch-lightning/blob/master/docs/source/examples/basic_trainer.py) (which will run ALL your models).
+2. [Define a model](https://github.com/williamFalcon/pytorch-lightning/blob/master/docs/source/examples/example_model.py).
-*Example model definition*
-```python
-from pytorch_lightning import Trainer
-from pytorch_lightning.utils.pt_callbacks import EarlyStopping, ModelCheckpoint
-# 1 - look at this page for the interface (https://williamfalcon.github.io/pytorch-lightning/)
-model = MyModel()
-# 2 - feed to trainer
-trainer = Trainer(
-    checkpoint_callback=ModelCheckpoint(...),
-    early_stop_callback=EarlyStopping(...),
-    gpus=[0, 1]
-)
-trainer.fit(model)
-```
 ## What are some key lightning features?
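The removed example above only shows the Trainer side; the model half it feeds in is just linked. A minimal sketch of that half, assuming a LightningModule with `training_step`, `configure_optimizers`, and a train-dataloader hook (hook names follow the current pytorch-lightning docs and may not match this snapshot exactly; `MyModel` and the toy data are placeholders):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    # Assumed interface: training_step + configure_optimizers + a train dataloader.
    # Hook names follow the current pytorch-lightning docs and may differ in this snapshot.
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(16, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        # toy data, placeholder for a real dataset
        x, y = torch.randn(256, 16), torch.randn(256, 1)
        return DataLoader(TensorDataset(x, y), batch_size=32)


if __name__ == "__main__":
    model = MyModel()
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(model)
```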
@@ -91,25 +73,6 @@ Trainer(use_amp=True, amp_level='O2')
 Trainer(gpus=[0, 1, 2, 3])
 ```
-- Run grid-search on cluster
-```python
-from test_tube import Experiment, SlurmCluster, HyperOptArgumentParser
-def training_fx(hparams, cluster, _):
-    # hparams are local params
-    model = MyModel()
-    trainer = Trainer(...)
-    trainer.fit(model)
-# grid search number of layers
-parser = HyperOptArgumentParser(strategy='grid_search')
-parser.opt_list('--layers', default=5, type=int, options=[1, 5, 10, 20, 50])
-hyperparams = parser.parse_args()
-cluster = SlurmCluster(hyperparam_optimizer=hyperparams)
-cluster.optimize_parallel_cluster_gpu(training_fx)
-```
 - Automatic checkpointing
 ```python
 # do 3 things:
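The automatic-checkpointing block is cut off by the hunk above, so here is a hedged sketch of how the pieces quoted in this diff fit together. The `pt_callbacks` import path and the `checkpoint_callback`, `early_stop_callback`, `gpus`, `use_amp`, and `amp_level` Trainer arguments appear verbatim in this diff; the callback constructor arguments are assumptions and should be checked against `pt_callbacks` in this version:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.utils.pt_callbacks import EarlyStopping, ModelCheckpoint

# Trainer arguments and the callback import path are quoted from this diff;
# the callback constructor arguments below are assumptions -- verify against
# pt_callbacks in this version of the library.
checkpoint = ModelCheckpoint(filepath='model_ckpts')        # assumed kwarg
early_stop = EarlyStopping(monitor='val_loss', patience=3)  # assumed kwargs

model = MyModel()  # the model sketched after the first hunk
trainer = Trainer(
    checkpoint_callback=checkpoint,
    early_stop_callback=early_stop,
    gpus=[0, 1, 2, 3],
    use_amp=True,
    amp_level='O2',
)
trainer.fit(model)
```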
@@ -134,18 +97,25 @@ exp = Experiment(...)
 Trainer(experiment=exp)
 ```
-10. Log training details (through test-tube).
-11. Run training on multiple GPUs (through test-tube).
-12. Run training on a GPU cluster managed by SLURM (through test-tube).
-13. Distribute memory-bound models on multiple GPUs.
-14. Give your model hyperparameters parsed from the command line OR a JSON file.
-15. Run your model in a dev environment where nothing logs.
+- Run grid-search on cluster
+```python
+from test_tube import Experiment, SlurmCluster, HyperOptArgumentParser
+def training_fx(hparams, cluster, _):
+    # hparams are local params
+    model = MyModel()
+    trainer = Trainer(...)
+    trainer.fit(model)
+# grid search number of layers
+parser = HyperOptArgumentParser(strategy='grid_search')
+parser.opt_list('--layers', default=5, type=int, options=[1, 5, 10, 20, 50])
+hyperparams = parser.parse_args()
+cluster = SlurmCluster(hyperparam_optimizer=hyperparams)
+cluster.optimize_parallel_cluster_gpu(training_fx)
+```
+## Usage
+To use lightning do 2 things:
+1. [Define a trainer](https://github.com/williamFalcon/pytorch-lightning/blob/master/docs/source/examples/basic_trainer.py) (which will run ALL your models).
+2. [Define a model](https://github.com/williamFalcon/pytorch-lightning/blob/master/docs/source/examples/example_model.py).
 #### Quick demo
 Run the following demo to see how it works: