From db7997c38c38d6f295b84292e65667071208bb94 Mon Sep 17 00:00:00 2001
From: William Falcon
Date: Thu, 8 Oct 2020 07:05:26 -0400
Subject: [PATCH] docs

---
 docs/source/community_examples.rst |  2 --
 docs/source/hyperparameters.rst    | 42 ++++++++++++++++++++++--------
 docs/source/introduction_guide.rst |  4 +--
 docs/source/new-project.rst        |  4 +--
 4 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/docs/source/community_examples.rst b/docs/source/community_examples.rst
index 05db763113..470c8e2dd8 100644
--- a/docs/source/community_examples.rst
+++ b/docs/source/community_examples.rst
@@ -4,8 +4,6 @@ Community Examples
 - `Contextual Emotion Detection (DoubleDistilBert) `_.
 - `Cotatron: Transcription-Guided Speech Encoder `_.
 - `FasterRCNN object detection + Hydra `_.
-- `Hyperparameter optimization with Optuna `_.
-- `Hyperparameter optimization with Ray Tune `_.
 - `Image Inpainting using Partial Convolutions `_.
 - `MNIST on TPU `_.
 - `NER (transformers, TPU) `_.
diff --git a/docs/source/hyperparameters.rst b/docs/source/hyperparameters.rst
index a32efecede..e85a64e58d 100644
--- a/docs/source/hyperparameters.rst
+++ b/docs/source/hyperparameters.rst
@@ -12,6 +12,37 @@ Hyperparameters
 Lightning has utilities to interact seamlessly with the command line ``ArgumentParser``
 and plays well with the hyperparameter optimization framework of your choice.
 
+------------
+
+Lightning-Grid
+--------------
+Lightning has a native solution for running sweeps and training models at scale: Lightning-Grid.
+Grid lets you launch sweeps from your laptop on the cloud provider of your choice. We've designed Grid to
+work for Lightning users without requiring ANY changes to their code.
+
+To use Grid, take your regular command:
+
+.. code-block:: bash
+
+    python my_model.py --learning_rate 1e-6 --layers 2 --gpus 4
+
+And change it to use the ``grid train`` command:
+
+.. code-block:: bash
+
+    grid train --grid_gpus 4 my_model.py --learning_rate 'uniform(1e-6, 1e-1, 20)' --layers '[2, 4, 8, 16]'
+
+The above command will launch 80 experiments (20 learning rate samples * 4 layer values), each running
+on 4 GPUs, for a total of 320 GPUs, with ZERO changes to your code.
+
+The ``uniform`` call is part of our new expressive syntax, which lets you construct hyperparameter
+combinations using more than 20 distributions, lists, and more. Of course, you can also configure all of
+this using YAML files, which can be dynamically assembled at runtime.
+
+Grid is in private early access now, but you can request access at `grid.ai <https://www.grid.ai>`_.
+
+.. hint:: Grid supports the search strategy of your choice (and much more than just sweeps)!
+
 ----------
 
 ArgumentParser
@@ -291,14 +322,3 @@ and now we can train MNIST or the GAN using the command line interface!
 
     $ python main.py --model_name gan --encoder_layers 24
     $ python main.py --model_name mnist --layer_1_dim 128
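+
+For reference, a ``main.py`` along the lines of the one driving these commands is sketched
+below (a minimal, illustrative sketch: ``GoodGAN`` and ``LitMNIST`` stand in for the two
+LightningModules built earlier in this doc, each assumed to expose an
+``add_model_specific_args`` helper):
+
+.. code-block:: python
+
+    from argparse import ArgumentParser
+
+    import pytorch_lightning as pl
+
+
+    def main(args):
+        dict_args = vars(args)
+
+        # pick the model to train based on the CLI flag
+        if args.model_name == "gan":
+            model = GoodGAN(**dict_args)
+        elif args.model_name == "mnist":
+            model = LitMNIST(**dict_args)
+
+        # Trainer flags (--gpus, --max_epochs, ...) come straight from the parser
+        trainer = pl.Trainer.from_argparse_args(args)
+        trainer.fit(model)
+
+
+    if __name__ == "__main__":
+        parser = ArgumentParser()
+        parser = pl.Trainer.add_argparse_args(parser)
+        parser.add_argument("--model_name", type=str, default="gan")
+
+        # parse the model name first so the chosen model can add its own args
+        temp_args, _ = parser.parse_known_args()
+        if temp_args.model_name == "gan":
+            parser = GoodGAN.add_model_specific_args(parser)
+        elif temp_args.model_name == "mnist":
+            parser = LitMNIST.add_model_specific_args(parser)
+
+        args = parser.parse_args()
+        main(args)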
-
-----------
-
-Hyperparameter Optimization
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Lightning is fully compatible with the hyperparameter optimization libraries!
-Here are some useful ones:
-
-- `Hydra `_
-- `Optuna `_
-- `Ray Tune `_
diff --git a/docs/source/introduction_guide.rst b/docs/source/introduction_guide.rst
index 7d2dd5c9c8..055a414f2f 100644
--- a/docs/source/introduction_guide.rst
+++ b/docs/source/introduction_guide.rst
@@ -49,7 +49,7 @@ Or conda.
 
     conda install pytorch-lightning -c conda-forge
 
-
+-------------
 
 The research
 ============
@@ -486,7 +486,7 @@ Once your training starts, you can view the logs by using your favorite logger o
 
     tensorboard --logdir ./lightning_logs
 
-Which will generate automatic tensorboard logs.
+Which will generate automatic TensorBoard logs (or logs for whichever logger you use).
 
 .. figure:: /_images/mnist_imgs/mnist_tb.png
    :alt: mnist CPU bar
diff --git a/docs/source/new-project.rst b/docs/source/new-project.rst
index fa2b89b04d..de570a7676 100644
--- a/docs/source/new-project.rst
+++ b/docs/source/new-project.rst
@@ -645,8 +645,8 @@ Lightning has many tools for debugging. Here is an example of just a few of them
 
 .. code-block:: python
 
-    # run validation every 20% of a training epoch
-    trainer = pl.Trainer(val_check_interval=0.2)
+    # run validation every 25% of a training epoch
+    trainer = pl.Trainer(val_check_interval=0.25)
 
 .. code-block:: python