Update README.md

This commit is contained in:
William Falcon 2020-09-21 16:42:38 -04:00 committed by GitHub
parent 6309a0dbb8
commit 6aad8adfa3
1 changed file with 14 additions and 15 deletions


@@ -38,15 +38,23 @@
###### *Codecov is > 90%+ but build delays may show less
## PyTorch Lightning is just organized PyTorch
![PT to PL](/docs/source/_images/general/pl_quick_start_full_compressed.gif)

Lightning disentangles PyTorch code to decouple the science from the engineering by organizing it into 4 categories:
1. Research code (the LightningModule).
2. Engineering code (you delete this; it is handled by the Trainer).
3. Non-essential research code (logging, etc.; this goes in Callbacks).
4. Data (use PyTorch Dataloaders or organize them into a LightningDataModule).
## Lightning Philosophy
Lightning is designed with these principles in mind:
1. Enable maximal flexibility.
2. Abstract away unnecessary boilerplate.
3. Systems should be self-contained (i.e., optimizers, computation code, etc.).
4. Deep learning code should be organized into 4 distinct categories:
- Research code (the LightningModule).
- Engineering code (you delete this; it is handled by the Trainer).
- Non-essential research code (logging, etc.; this goes in Callbacks).
- Data (use PyTorch Dataloaders or organize them into a LightningDataModule).
Once you do this, you can train on multiple GPUs, TPUs, CPUs and even in 16-bit precision without changing your code!
@@ -170,15 +178,6 @@ trainer = Trainer(tpu_cores=8)
trainer = Trainer(tpu_cores=[1])
```
---
## Key Features