From 598e1accb59714264ec75672a407e59151e62cba Mon Sep 17 00:00:00 2001
From: William Falcon
Date: Thu, 1 Aug 2019 10:11:26 -0400
Subject: [PATCH] updated docs

---
 README.md                                        | 2 +-
 docs/LightningModule/RequiredTrainerInterface.md | 6 +++---
 docs/LightningModule/methods.md                  | 2 +-
 docs/Trainer/Logging.md                          | 2 +-
 docs/Trainer/hooks.md                            | 2 +-
 docs/Trainer/index.md                            | 2 +-
 docs/index.md                                    | 2 +-
 mkdocs.yml                                       | 4 ++--
 tests/README.md                                  | 2 +-
 9 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index 0196245aa2..b9ecec319b 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
 
-  Pytorch Lightning
+  PyTorch Lightning
 
 The Keras for ML researchers using PyTorch. More control. Less boilerplate.
diff --git a/docs/LightningModule/RequiredTrainerInterface.md b/docs/LightningModule/RequiredTrainerInterface.md
index da230ea611..abc015caf0 100644
--- a/docs/LightningModule/RequiredTrainerInterface.md
+++ b/docs/LightningModule/RequiredTrainerInterface.md
@@ -300,7 +300,7 @@ def tng_dataloader(self)
 Called by lightning during training loop. Make sure to use the @ptl.data_loader decorator, this ensures not calling this function until the data are needed.
 
 ##### Return
-Pytorch DataLoader
+PyTorch DataLoader
 
 **Example**
@@ -327,7 +327,7 @@ def tng_dataloader(self)
 Called by lightning during validation loop. Make sure to use the @ptl.data_loader decorator, this ensures not calling this function until the data are needed.
 
 ##### Return
-Pytorch DataLoader
+PyTorch DataLoader
 
 **Example**
@@ -355,7 +355,7 @@ def test_dataloader(self)
 Called by lightning during test loop. Make sure to use the @ptl.data_loader decorator, this ensures not calling this function until the data are needed.
 
 ##### Return
-Pytorch DataLoader
+PyTorch DataLoader
 
 **Example**
diff --git a/docs/LightningModule/methods.md b/docs/LightningModule/methods.md
index d57c695034..cb96ea7a1d 100644
--- a/docs/LightningModule/methods.md
+++ b/docs/LightningModule/methods.md
@@ -31,7 +31,7 @@ y_hat = pretrained_model(x)
 
 | Param | description |
 |---|---|
-| weights_path | Path to a pytorch checkpoint |
+| weights_path | Path to a PyTorch checkpoint |
 | tags_csv | Path to meta_tags.csv file generated by the test-tube Experiment |
 | on_gpu | if True, puts model on GPU. Make sure to use transforms option if model devices have changed |
 | map_location | A dictionary mapping saved weight GPU devices to new GPU devices |
diff --git a/docs/Trainer/Logging.md b/docs/Trainer/Logging.md
index d3004cb2e3..2edbab16f1 100644
--- a/docs/Trainer/Logging.md
+++ b/docs/Trainer/Logging.md
@@ -52,7 +52,7 @@ Trainer(experiment=exp)
 ---
 ### Tensorboard support
-The experiment object is a strict subclass of Pytorch SummaryWriter. However, this class
+The experiment object is a strict subclass of PyTorch SummaryWriter. However, this class
 also snapshots every detail about the experiment (data folder paths, code, hyperparams),
 and allows you to visualize it using tensorboard.
 ``` {.python}
diff --git a/docs/Trainer/hooks.md b/docs/Trainer/hooks.md
index c7a4bfbea0..dd08b30b45 100644
--- a/docs/Trainer/hooks.md
+++ b/docs/Trainer/hooks.md
@@ -5,7 +5,7 @@ There are cases when you might want to do something different at different parts
 To enable a hook, simply override the method in your LightningModule and the trainer will call it at the correct time.
 
 **Contributing** If there's a hook you'd like to add, simply:
-1. Fork PytorchLightning.
+1. Fork PyTorchLightning.
 2. Add the hook [here](https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/root_module/hooks.py).
 3. Add the correct place in the [Trainer](https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/models/trainer.py) where it should be called.
diff --git a/docs/Trainer/index.md b/docs/Trainer/index.md
index 128f12452d..19c10d4940 100644
--- a/docs/Trainer/index.md
+++ b/docs/Trainer/index.md
@@ -64,7 +64,7 @@ But of course the fun is in all the advanced things it can do:
 - [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
 - [Hooks](hooks)
 - [Learning rate scheduling](https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/#configure_optimizers)
-- [Use multiple optimizers (like GANs)](https://williamfalcon.github.io/pytorch-lightning/Pytorch-Lightning/LightningModule/#configure_optimizers)
+- [Use multiple optimizers (like GANs)](https://williamfalcon.github.io/pytorch-lightning/PyTorch-Lightning/LightningModule/#configure_optimizers)
 - [Set how much of the training set to check (1-100%)](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#set-how-much-of-the-training-set-to-check)
 
 **Validation loop**
diff --git a/docs/index.md b/docs/index.md
index 9c62b00354..973e744cc7 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -71,7 +71,7 @@ one could be a seq-2-seq model, both (optionally) ran by the same trainer file.
 - [Gradient Clipping](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#gradient-clipping)
 - [Hooks](https://williamfalcon.github.io/pytorch-lightning/Trainer/hooks/)
 - [Learning rate scheduling](https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/#configure_optimizers)
-- [Use multiple optimizers (like GANs)](https://williamfalcon.github.io/pytorch-lightning/Pytorch-Lightning/LightningModule/#configure_optimizers)
+- [Use multiple optimizers (like GANs)](https://williamfalcon.github.io/pytorch-lightning/PyTorch-Lightning/LightningModule/#configure_optimizers)
 - [Set how much of the training set to check (1-100%)](https://williamfalcon.github.io/pytorch-lightning/Trainer/Training%20Loop/#set-how-much-of-the-training-set-to-check)
 
 ###### Validation loop
diff --git a/mkdocs.yml b/mkdocs.yml
index 5675a64e01..539714d4b5 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,10 +1,10 @@
-site_name: Pytorch lightning Documentation
+site_name: PyTorch lightning Documentation
 theme:
   name: 'material'
 docs_dir: docs
 repo_url: https://github.com/williamFalcon/pytorch-lightning
 site_dir: 'site'
-site_description: 'Documentation for Pytorch LightningModule, the researcher version of keras.'
+site_description: 'Documentation for PyTorch LightningModule, the researcher version of keras.'
 dev_addr: '0.0.0.0:8000'
 
 #google_analytics: ['UA-aasd', 'sitename']
diff --git a/tests/README.md b/tests/README.md
index 20f783bf7e..f7a85b51a0 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -1,4 +1,4 @@
-# Pytorch-Lightning Tests
+# PyTorch-Lightning Tests
 
 ## Running tests
 The automatic travis tests ONLY run CPU-based tests. Although these cover most of the use cases,