From 6f634a1f0a74a51127e237496dbce9e2aaf41642 Mon Sep 17 00:00:00 2001
From: nithin varghese <39826756+humandotlearning@users.noreply.github.com>
Date: Tue, 25 Aug 2020 14:52:58 +0530
Subject: [PATCH] README.md typo correction (#3147)

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index a4ec496fb5..1fa5bc541e 100644
--- a/README.md
+++ b/README.md
@@ -112,7 +112,7 @@ Get started with our [3 steps guide](https://pytorch-lightning.readthedocs.io/en
 * Less error prone by automating most of the training loop and tricky engineering
 * Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
 * Lightning has out-of-the-box integration with the popular logging/visualizing frameworks ([Tensorboard](https://pytorch.org/docs/stable/tensorboard.html), [MLFlow](https://mlflow.org/), [Neptune.ai](https://neptune.ai/), [Comet.ml](https://www.comet.ml/site/), [Wandb](https://www.wandb.com/)).
-* [Tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). We test every combination og PyTorch and Python supported versions, every OS, multi GPUs and even TPUs.
+* [Tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests). We test every combination of PyTorch and Python supported versions, every OS, multi GPUs and even TPUs.
 * Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).

 ### Lightning automates 40+ parts of DL/ML research