diff --git a/README.md b/README.md
index 8c4b7ea128..d45bae33ea 100644
--- a/README.md
+++ b/README.md
@@ -87,49 +87,6 @@ Get started with our [3 steps guide](https://pytorch-lightning.readthedocs.io/en
 ---
 
-## [PyTorch Lightning Masterclass](https://www.youtube.com/watch?v=DbESHcCoWbM&list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2)
-### [New lessons weekly!](https://www.youtube.com/watch?v=DbESHcCoWbM&list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2)
-
-<div style="display: flex">
-  <div>
-    <p>From PyTorch to PyTorch Lightning</p>
-    <a href="...">
-      <img alt="From PyTorch to PyTorch Lightning" src="...">
-    </a>
-  </div>
-  <div>
-    <p>Converting a VAE to PyTorch Lightning</p>
-    <a href="...">
-      <img alt="From PyTorch to PyTorch Lightning" src="...">
-    </a>
-  </div>
-</div>
-
----
-
-## Key Features
-
-* Scale your models to run on any hardware (CPU, GPUs, TPUs) without changing your model
-* Making code more readable by decoupling the research code from the engineering
-* Easier to reproduce
-* Less error prone by automating most of the training loop and tricky engineering
-* Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
-* Lightning has out-of-the-box integration with the popular logging/visualizing frameworks ([Tensorboard](https://pytorch.org/docs/stable/tensorboard.html), [MLFlow](https://mlflow.org/), [Neptune.ai](https://neptune.ai/), [Comet.ml](https://www.comet.ml/site/), [Wandb](https://www.wandb.com/)).
-* [Tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). We test every combination of PyTorch and Python supported versions, every OS, multi GPUs and even TPUs.
-* Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).
-
-### Lightning automates 40+ parts of DL/ML research
-- GPU training
-- Distributed GPU (cluster) training
-- TPU training
-- EarlyStopping
-- Logging/Visualizing
-- Checkpointing
-- Experiment management
-- [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases)
-
----
-
 ## How To Use
 
 #### Setup step: Install
@@ -214,6 +171,50 @@ trainer = Trainer(tpu_cores=8)
 trainer = Trainer(tpu_cores=[1])
 ```
 
+----
+
+
+## [PyTorch Lightning Masterclass](https://www.youtube.com/watch?v=DbESHcCoWbM&list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2)
+### [New lessons weekly!](https://www.youtube.com/watch?v=DbESHcCoWbM&list=PLaMu-SDt_RB5NUm67hU2pdE75j6KaIOv2)
+
+<div style="display: flex">
+  <div>
+    <p>From PyTorch to PyTorch Lightning</p>
+    <a href="...">
+      <img alt="From PyTorch to PyTorch Lightning" src="...">
+    </a>
+  </div>
+  <div>
+    <p>Converting a VAE to PyTorch Lightning</p>
+    <a href="...">
+      <img alt="From PyTorch to PyTorch Lightning" src="...">
+    </a>
+  </div>
+</div>
+
+---
+
+## Key Features
+
+* Scale your models to run on any hardware (CPU, GPUs, TPUs) without changing your model
+* Making code more readable by decoupling the research code from the engineering
+* Easier to reproduce
+* Less error prone by automating most of the training loop and tricky engineering
+* Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
+* Lightning has out-of-the-box integration with the popular logging/visualizing frameworks ([Tensorboard](https://pytorch.org/docs/stable/tensorboard.html), [MLFlow](https://mlflow.org/), [Neptune.ai](https://neptune.ai/), [Comet.ml](https://www.comet.ml/site/), [Wandb](https://www.wandb.com/)).
+* [Tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). We test every combination of PyTorch and Python supported versions, every OS, multi GPUs and even TPUs.
+* Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).
+
+### Lightning automates 40+ parts of DL/ML research
+- GPU training
+- Distributed GPU (cluster) training
+- TPU training
+- EarlyStopping
+- Logging/Visualizing
+- Checkpointing
+- Experiment management
+- [Full list here](https://pytorch-lightning.readthedocs.io/en/latest/#common-use-cases)
+
 ---
 
 ### Docs
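
Note on the moved content: the second hunk keeps two Trainer snippets as context; per the Lightning docs of this era, an integer `tpu_cores=8` requests eight TPU cores, while a list such as `tpu_cores=[1]` pins training to one specific core. As a minimal sketch of what the relocated "Key Features" list claims (training loop, early stopping, checkpointing, and logging handled by the Trainer), the following assumes a PyTorch Lightning 1.x API where metrics are logged with `self.log` and callbacks are passed via `callbacks=`; the `LitRegressor` model and synthetic tensors are illustrative only, not part of this diff:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint


class LitRegressor(pl.LightningModule):
    """A LightningModule is still a plain torch.nn.Module underneath."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.mse_loss(self(x), y)
        self.log("train_loss", loss)  # sent to the logger (TensorBoard by default)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", F.mse_loss(self(x), y))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


def loader(n):
    # Synthetic data purely so the sketch runs end to end.
    return DataLoader(TensorDataset(torch.randn(n, 32), torch.randn(n, 1)), batch_size=32)


# Early stopping, checkpointing, and logging live in the Trainer, not the model.
# Scaling out is a flag change only, e.g. Trainer(gpus=2) or Trainer(tpu_cores=8).
trainer = pl.Trainer(
    max_epochs=5,
    callbacks=[EarlyStopping(monitor="val_loss"), ModelCheckpoint(monitor="val_loss")],
)
trainer.fit(LitRegressor(), loader(256), loader(64))
```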