From 49b2424e6ef9230a6b16a4b085bec063714ac95c Mon Sep 17 00:00:00 2001
From: William Falcon
Date: Tue, 9 Jun 2020 07:43:33 -0400
Subject: [PATCH] Update README.md

---
 README.md | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 4630848ff4..42eafb88a5 100644
--- a/README.md
+++ b/README.md
@@ -116,9 +116,17 @@ The rest of the code is automated by the [Trainer](https://pytorch-lightning.rea
 ![PT to PL](docs/source/_images/lightning_module/pt_trainer.png)
 
 ## Testing Rigour
-All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). 
+All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests).
 
-In fact, we also train a few models using a vanilla PyTorch loop and compare with the same model trained using the Trainer to make sure we achieve the EXACT same results. [Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks).
+For every PR we test all combinations of:
+- PyTorch 1.3, 1.4, 1.5
+- Python 3.6, 3.7, 3.8
+- Linux, OSX, Windows
+- Multiple GPUs
+
+**How does performance compare with vanilla PyTorch?**
+We have tests to ensure we get the EXACT same results in under 600 ms difference per epoch. In reality, lightning adds about a 300 ms overhead per epoch.
+[Check out the parity tests here](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/benchmarks).
 
 Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts.
 
@@ -329,7 +337,7 @@ Lightning has out-of-the-box integration with the popular logging/visualizing fr
 
 ## Running speed
 
-Migrating to lightning does not mean compromising on speed! You can expect an overhead of about 600 ms per epoch comparing to pure PyTorch.
+Migrating to lightning does not mean compromising on speed! You can expect an overhead of about 300 ms per epoch compared with pure PyTorch.
 
 ## Examples
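The parity tests referenced in this patch compare a model trained with a hand-written PyTorch loop against the same model trained by the Lightning Trainer. The following is a minimal sketch of that idea, not the repository's actual benchmark code under `benchmarks/`; it assumes a recent `pytorch_lightning` (1.x-style) API, and `ParityModel` is an illustrative name, not one from the repo.

```python
# Minimal parity-test sketch: train the same tiny model once with a plain
# PyTorch loop and once with the Lightning Trainer, then compare weights.
# NOTE: illustrative only; assumes a recent pytorch_lightning 1.x-style API.
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class ParityModel(pl.LightningModule):  # hypothetical example model
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


def make_loader():
    # fixed seed so both runs see identical data, in identical order
    torch.manual_seed(0)
    x = torch.randn(256, 32)
    y = torch.randint(0, 2, (256,))
    return DataLoader(TensorDataset(x, y), batch_size=32, shuffle=False)


# 1) vanilla PyTorch training loop
pl.seed_everything(1234)
vanilla = ParityModel()
opt = vanilla.configure_optimizers()
for epoch in range(2):
    for batch_idx, batch in enumerate(make_loader()):
        loss = vanilla.training_step(batch, batch_idx)
        opt.zero_grad()
        loss.backward()
        opt.step()

# 2) same model, same seed, trained by the Lightning Trainer
pl.seed_everything(1234)
lightning = ParityModel()
trainer = pl.Trainer(
    max_epochs=2,
    accelerator="cpu",          # pin to CPU so both runs use the same device
    logger=False,
    enable_checkpointing=False,
)
trainer.fit(lightning, make_loader())

# parity check: learned weights should agree up to numerical noise
for p_vanilla, p_lightning in zip(vanilla.parameters(), lightning.parameters()):
    assert torch.allclose(p_vanilla, p_lightning, atol=1e-6)
```

Seeding both runs identically before model construction is the key design choice: it makes the final weight comparison sensitive only to differences introduced by the training loop itself, which is exactly what the parity benchmarks are meant to catch.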