
**The lightweight PyTorch wrapper for high-performance AI research.
Scale your models, not the boilerplate.**
---
Website • Key Features • How To Use • Docs • Examples • Community • Grid AI • Licence
---
## NEWS
[Dec 2020 - Read about how Facebook uses Lightning to standardize deep learning across research and production teams](https://ai.facebook.com/blog/reengineering-facebook-ais-deep-learning-platforms-for-interoperability)
---
## PyTorch Lightning is just organized PyTorch
Lightning disentangles PyTorch code to decouple the science from the engineering.
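To make that concrete, here is a minimal sketch (the `LitClassifier` name and the dummy data are just for illustration): the research code lives in a `LightningModule`, and the `Trainer` handles the engineering (training loop, device placement, checkpointing).

```python
# Minimal sketch: research code in a LightningModule, engineering in the Trainer.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Dummy data so the sketch is self-contained.
x, y = torch.randn(64, 28 * 28), torch.randint(0, 10, (64,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=32)

# The Trainer owns the loop, devices, and checkpointing.
trainer = pl.Trainer(max_epochs=1)
trainer.fit(LitClassifier(), train_loader)
```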

---
## Lightning Philosophy
Lightning is designed with these principles in mind:
Principle 1: Enable maximal flexibility.
Principle 2: Abstract away unnecessary boilerplate, but make it accessible when needed.
Principle 3: Systems should be self-contained (i.e. optimizers, computation code, etc.).
Principle 4: Deep learning code should be organized into 4 distinct categories.
- Research code (the LightningModule).
- Engineering code (you delete this; it is handled by the Trainer).
- Non-essential research code (logging, etc.; this goes in Callbacks).
- Data (use PyTorch DataLoaders or organize them into a LightningDataModule).
Once you do this, you can train on multiple GPUs, TPUs, CPUs, and even in 16-bit precision without changing your code! A short sketch of how these pieces fit together follows the guide link below.
Get started with our [2-step guide](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html).
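As a rough sketch of the remaining two categories (the `PrintCallback` and `RandomDataModule` names are illustrative, and `LitClassifier` refers to the sketch above): non-essential research code goes into a `Callback`, and data handling goes into a `LightningDataModule`.

```python
# Hedged sketch: callbacks for non-essential code, a LightningDataModule for data.
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class PrintCallback(pl.Callback):
    """Non-essential research code (extra logging, etc.) lives in callbacks."""

    def on_train_end(self, trainer, pl_module):
        print("training finished")


class RandomDataModule(pl.LightningDataModule):
    """Splits and dataloaders live in a LightningDataModule."""

    def setup(self, stage=None):
        x, y = torch.randn(64, 28 * 28), torch.randint(0, 10, (64,))
        self.train_set = TensorDataset(x, y)

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=32)


# Switching hardware is a Trainer argument, not a code change.
trainer = pl.Trainer(max_epochs=1, callbacks=[PrintCallback()])
trainer.fit(LitClassifier(), datamodule=RandomDataModule())  # LitClassifier from the sketch above
```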
---
## Inference
Lightning is also designed for the fast inference that AI researchers and production teams need to scale up models like BERT and self-supervised learning.
Lightning can automatically export to ONNX or TorchScript for those cases.
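For example, a hedged sketch of both export paths (reusing the `LitClassifier` from the sketch above; `to_onnx` needs a sample input to trace the graph):

```python
# Sketch: exporting a LightningModule for inference via ONNX or TorchScript.
import torch

model = LitClassifier()  # in practice, a trained model

# ONNX export traces the forward pass with a sample input.
model.to_onnx("model.onnx", input_sample=torch.randn(1, 28 * 28))

# TorchScript export returns a scripted module that can be saved and served.
scripted = model.to_torchscript()
torch.jit.save(scripted, "model.pt")
```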
---
## Continuous Integration