diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index b0c79796d1..effdee498e 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -66,16 +66,17 @@ A lot of good work has already been done in project mechanics (requirements/base
 1. If you find a bug please submit a github issue.
-- Make sure the title explains the issue.
-- Describe your setup, what you are trying to do, expected vs. actual behaviour. Please add configs and code samples.
-- Add details on how to reproduce the issue - a minimal test case is always best, colab is also great.
- Note, that the sample code shall be minimal and if needed with publicly available data.
+ - Make sure the title explains the issue.
+ - Describe your setup, what you are trying to do, expected vs. actual behaviour. Please add configs and code samples.
+ - Add details on how to reproduce the issue - a minimal test case is always best, colab is also great.
+   Note that the sample code should be minimal and, if needed, use publicly available data.
+
+2. Try to fix it or recommend a solution. We highly recommend using a test-driven approach:
-2. Try to fix it or recommend a solution...
- We highly recommend to use test-driven approach:
 - Convert your minimal code example to a unit/integration test with assert on expected results.
 - Start by debugging the issue... You can run just this particular test in your IDE and draft a fix.
 - Verify that your test case fails on the master branch and only passes with the fix applied.
+
 3. Submit a PR!
 _**Note**, even if you do not find the solution, sending a PR with a test covering the issue is a valid contribution and we can help you or finish it with you :]_
@@ -84,15 +85,14 @@ _**Note**, even if you do not find the solution, sending a PR with a test coveri
 1. Submit a github issue - describe the motivation for such a feature (adding the use case or an example is helpful).
 2. Let's discuss to determine the feature scope.
-3. Submit a PR!
- We recommend test driven approach to adding new features as well:
+3. Submit a PR! We recommend a test-driven approach to adding new features as well:
-- Write a test for the functionality you want to add.
-- Write the functional code until the test passes.
+ - Write a test for the functionality you want to add.
+ - Write the functional code until the test passes.
 4. Add/update the relevant tests!
-- [This PR](https://github.com/PyTorchLightning/pytorch-lightning/pull/2671) is a good example for adding a new metric, and [this one for a new logger](https://github.com/PyTorchLightning/pytorch-lightning/pull/2721)
+- [This PR](https://github.com/PyTorchLightning/pytorch-lightning/pull/2671) is a good example for adding a new metric, and [this one for a new logger](https://github.com/PyTorchLightning/pytorch-lightning/pull/2721).
 ### Test cases:
@@ -194,6 +194,7 @@ Note: if your computer does not have multi-GPU nor TPU these tests are skipped.
 This is useful if you do not test against all required dependency versions.
 **Docker:** Another option is to utilize the [pytorch lightning cuda base docker image](https://hub.docker.com/repository/docker/pytorchlightning/pytorch_lightning/tags?page=1&name=cuda). You can then run:
+
 ```bash
 python -m pytest pytorch_lightning tests pl_examples -v --flake8
 ```
@@ -233,13 +234,13 @@ We welcome any useful contribution! For your convenience here's a recommended wo
 3. **How to rebase my PR?**
    We recommend creating a PR in a separate branch other than `master`, especially if you plan to submit several changes and do not want to wait until the first one is resolved (we can work on them in parallel).
-
+
    First, make sure you have set [upstream](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/configuring-a-remote-for-a-fork) by running:
    ```bash
    git remote add upstream https://github.com/PyTorchLightning/pytorch-lightning.git
    ```
-
+
    You'll know it's set up right if you run `git remote -v` and see something similar to this:
    ```bash
@@ -248,8 +249,9 @@ We welcome any useful contribution! For your convenience here's a recommended wo
    upstream https://github.com/PyTorchLightning/pytorch-lightning.git (fetch)
    upstream https://github.com/PyTorchLightning/pytorch-lightning.git (push)
    ```
-
+
    Now you can update your master with upstream's master by running:
+
    ```bash
    git fetch --all --prune
    git checkout master
@@ -257,14 +259,16 @@ We welcome any useful contribution! For your convenience here's a recommended wo
    ```
    Finally, check out your feature branch and rebase it onto master before pushing up your feature branch:
+
    ```bash
    git checkout my-PR-branch
    git rebase master
    # follow git instructions to resolve conflicts
    git push -f
    ```
-
+
    Alternatively, you can perform the rebase directly from upstream after setting it up:
+
    ```bash
    git fetch --all --prune
    git rebase upstream/master
@@ -302,6 +306,7 @@ plrebase (){
 ```
 Now, you can:
- - clone your fork and set up upstream by running `plclone` from your terminal
- - fetch upstream and update your local master branch with it by running `plfetch`
- - rebase your feature branch (after running `plfetch`) by running `plrebase your-branch-name`
+
+- clone your fork and set up upstream by running `plclone` from your terminal
+- fetch upstream and update your local master branch with it by running `plfetch`
+- rebase your feature branch (after running `plfetch`) by running `plrebase your-branch-name`
diff --git a/docs/source/apex.rst b/docs/source/amp.rst
similarity index 99%
rename from docs/source/apex.rst
rename to docs/source/amp.rst
index 2867cdfa98..e65ff94283 100644
--- a/docs/source/apex.rst
+++ b/docs/source/amp.rst
@@ -2,7 +2,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
-.. _16-bit:
+.. _amp:
 16-bit training
 =================
diff --git a/docs/source/datamodules.rst b/docs/source/datamodules.rst
index 7a618b2344..998ccf49c5 100644
--- a/docs/source/datamodules.rst
+++ b/docs/source/datamodules.rst
@@ -1,4 +1,4 @@
-.. _data-modules:
+.. _datamodules:
 LightningDataModule
 ===================
diff --git a/docs/source/early_stopping.rst b/docs/source/early_stopping.rst
index 66b5a9d131..643a2d5160 100644
--- a/docs/source/early_stopping.rst
+++ b/docs/source/early_stopping.rst
@@ -3,14 +3,14 @@
     from pytorch_lightning.trainer.trainer import Trainer
     from pytorch_lightning.callbacks.early_stopping import EarlyStopping
-.. _early-stopping:
+.. _early_stopping:
 Early stopping
 ==============
 Stopping an epoch early
 -----------------------
-You can stop an epoch early by overriding :meth:`~pytorch_lightning.core.lightning.LightningModule.on_batch_start` to return `-1` when some condition is met.
+You can stop an epoch early by overriding :meth:`~pytorch_lightning.core.lightning.LightningModule.on_batch_start` to return ``-1`` when some condition is met.
 If you do this repeatedly, for every epoch you had originally requested, then this will stop your entire run.
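To make the ``on_batch_start`` behavior documented in the hunk above concrete, here is a minimal sketch; the `LitModel` name and the `stop_epoch_now` flag are illustrative assumptions, not part of the patch:

```python
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    # training_step, configure_optimizers, etc. omitted for brevity

    def on_batch_start(self, batch):
        # Returning -1 tells the Trainer to end the current epoch early;
        # any other return value lets the epoch continue as usual.
        # `stop_epoch_now` is a hypothetical flag your training logic would set.
        if getattr(self, "stop_epoch_now", False):
            return -1
```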
diff --git a/docs/source/experiment_logging.rst b/docs/source/experiment_logging.rst
index bf90382298..0229514879 100644
--- a/docs/source/experiment_logging.rst
+++ b/docs/source/experiment_logging.rst
@@ -3,7 +3,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
     from pytorch_lightning.core.lightning import LightningModule
-.. _experiment-logging:
+.. _experiment_logging:
 Experiment Logging
 ==================
diff --git a/docs/source/experiment_reporting.rst b/docs/source/experiment_reporting.rst
index f19e6fd8e4..9b06f6df09 100644
--- a/docs/source/experiment_reporting.rst
+++ b/docs/source/experiment_reporting.rst
@@ -2,7 +2,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
-.. _experiment-reporting:
+.. _experiment_reporting:
 Experiment Reporting
 =====================
diff --git a/docs/source/fast_training.rst b/docs/source/fast_training.rst
index 29f671c8ea..5dc7d6019b 100644
--- a/docs/source/fast_training.rst
+++ b/docs/source/fast_training.rst
@@ -2,7 +2,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
-.. _fast-training:
+.. _fast_training:
 Fast Training
 =============
diff --git a/docs/source/governance.rst b/docs/source/governance.rst
index 769e6f0056..75417bdf6a 100644
--- a/docs/source/governance.rst
+++ b/docs/source/governance.rst
@@ -1,6 +1,6 @@
 .. _governance:
-Pytorch Lightning Governance | Persons of interest
+PyTorch Lightning Governance | Persons of interest
 ==================================================
 Leads
diff --git a/docs/source/hooks.rst b/docs/source/hooks.rst
index 857aa2daa7..d2f86531c1 100644
--- a/docs/source/hooks.rst
+++ b/docs/source/hooks.rst
@@ -22,15 +22,15 @@ Hooks lifecycle
 Training set-up
 ^^^^^^^^^^^^^^^
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.prepare_data`
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.setup`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.prepare_data`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.setup`
 - :meth:`~pytorch_lightning.core.lightning.LightningModule.init_ddp_connection`
 - :meth:`~pytorch_lightning.trainer.optimizers.TrainerOptimizersMixin.init_optimizers`
 - :meth:`~pytorch_lightning.core.lightning.LightningModule.configure_apex`
 - :meth:`~pytorch_lightning.core.lightning.LightningModule.configure_ddp`
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.train_dataloader`
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.test_dataloader`
-- :meth:`~pytorch_lightning.core.lightning.LightningModule.val_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.train_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.test_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.val_dataloader`
 - :meth:`~pytorch_lightning.core.lightning.LightningModule.summarize`
 - :meth:`~pytorch_lightning.trainer.training_io.TrainerIOMixin.restore_weights`
@@ -89,5 +89,8 @@ Test loop
 General hooks
 -------------
-.. automodule:: pytorch_lightning.core.hooks
+.. autoclass:: pytorch_lightning.core.hooks.ModelHooks
+   :noindex:
+
+.. autoclass:: pytorch_lightning.core.hooks.DataHooks
    :noindex:
diff --git a/docs/source/index.rst b/docs/source/index.rst
index d37498e9e9..90a69a447c 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -11,7 +11,7 @@ PyTorch Lightning Documentation
    :name: start
    :caption: Getting started
-   new-project
+   new_project
    converting
.. toctree::
@@ -27,7 +27,7 @@ PyTorch Lightning Documentation
    :name: pl_docs
    :caption: Lightning API
-   lightning-module
+   lightning_module
    trainer
 .. toctree::
@@ -84,7 +84,7 @@ PyTorch Lightning Documentation
    :name: Common Use Cases
    :caption: Common Use Cases
-   apex
+   amp
    slurm
    child_modules
    debugging
diff --git a/docs/source/introduction_guide.rst b/docs/source/introduction_guide.rst
index 32a9861cd7..56dc6b7efe 100644
--- a/docs/source/introduction_guide.rst
+++ b/docs/source/introduction_guide.rst
@@ -4,7 +4,7 @@
     from pytorch_lightning.core.datamodule import LightningDataModule
     from pytorch_lightning.trainer.trainer import Trainer
-.. _introduction-guide:
+.. _introduction_guide:
 #########################
 Step-by-step walk-through
@@ -510,7 +510,7 @@ Which will generate automatic tensorboard logs.
 |
-But you can also use any of the `number of other loggers `_ we support.
+But you can also use any of the :ref:`number of other loggers <loggers>` we support.
 Train on CPU
@@ -562,7 +562,7 @@ Or multiple nodes
     trainer = Trainer(gpus=8, num_nodes=4, distributed_backend='ddp')
     trainer.fit(model, train_loader)
-Refer to the `distributed computing guide for more details `_.
+Refer to the :ref:`distributed computing guide for more details <multi_gpu>`.
 Train on TPUs
 ^^^^^^^^^^^^^
@@ -967,7 +967,7 @@ you could do your own:
         return model
 Every single part of training is configurable this way.
-For a full list look at `LightningModule `_.
+For a full list look at :ref:`LightningModule <lightning_module>`.
 ----------------
@@ -1102,7 +1102,7 @@ would be the particular system and how it's trained (ie: A GAN or VAE or GPT).
     loss = perceptual_loss(x1, x2, x) + CE(out, x)
-In Lightning, this code is organized into a :ref:`lightning-module`.
+In Lightning, this code is organized into a :ref:`lightning_module`.
 Engineering code
 ================
@@ -1167,6 +1167,6 @@ spread all over files.
 This code gets especially complicated once you start doing multi-gpu training or needing info about the data to build your models.
-In Lightning this code is organized inside a :ref:`data-modules`.
+In Lightning this code is organized inside a :ref:`datamodules`.
 .. note:: DataModules are optional but encouraged, otherwise you can use standard DataLoaders
diff --git a/docs/source/lightning-module.rst b/docs/source/lightning_module.rst
similarity index 99%
rename from docs/source/lightning-module.rst
rename to docs/source/lightning_module.rst
index 3c9c5c3d2b..49a1b0ac79 100644
--- a/docs/source/lightning-module.rst
+++ b/docs/source/lightning_module.rst
@@ -1,6 +1,7 @@
 .. role:: hidden
    :class: hidden-section
-.. _lightning-module:
+
+.. _lightning_module:
 LightningModule
 ===============
diff --git a/docs/source/multi_gpu.rst b/docs/source/multi_gpu.rst
index 4e9de9bad4..f7c2208821 100644
--- a/docs/source/multi_gpu.rst
+++ b/docs/source/multi_gpu.rst
@@ -4,7 +4,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
     from pytorch_lightning.core.lightning import LightningModule
-.. _multi-gpu-training:
+.. _multi_gpu:
 Multi-GPU training
 ==================
@@ -213,7 +213,7 @@ Lightning sets these for you automatically, there's NO NEED to do this
 yourself.
 However, when using a cluster, Lightning will NOT set these flags (and you should not either).
 SLURM will set these for you.
-For more details see the `SLURM cluster guide `_.
+For more details see the :ref:`SLURM cluster guide <slurm>`.
----------
diff --git a/docs/source/multiple_loaders.rst b/docs/source/multiple_loaders.rst
index b4c43d73d7..77856f134f 100644
--- a/docs/source/multiple_loaders.rst
+++ b/docs/source/multiple_loaders.rst
@@ -68,8 +68,8 @@ option of passing in multiple dataloaders back from each call.
 See the following for more details:
-- :meth:`~pytorch_lightning.core.LightningModule.val_dataloader`
-- :meth:`~pytorch_lightning.core.LightningModule.test_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.val_dataloader`
+- :meth:`~pytorch_lightning.core.datamodule.LightningDataModule.test_dataloader`
 .. testcode::
diff --git a/docs/source/new-project.rst b/docs/source/new_project.rst
similarity index 85%
rename from docs/source/new-project.rst
rename to docs/source/new_project.rst
index 18c4cc5e28..97f2208f9f 100644
--- a/docs/source/new-project.rst
+++ b/docs/source/new_project.rst
@@ -11,7 +11,7 @@
     import pytorch_lightning as pl
     from torch.utils.data import random_split
-.. _3-steps:
+.. _new_project:
 ####################
 Lightning in 2 steps
@@ -130,7 +130,7 @@ You can use your :class:`~pytorch_lightning.core.LightningModule` just like a Py
     model.anything_you_can_do_with_pytorch()
-More details in :ref:`lightning-module` docs.
+More details in :ref:`lightning_module` docs.
 ----------
@@ -187,7 +187,7 @@ Optional features
 TrainResult/EvalResult
 ======================
 Instead of returning the loss you can also use :class:`~pytorch_lightning.core.step_result.TrainResult` and :class:`~pytorch_lightning.core.step_result.EvalResult`, plain Dict objects that give you options for logging on every step and/or at the end of the epoch.
-It also allows logging to the progress bar (by setting prog_bar=True). Read more in :ref:`result`.
+It also allows logging to the progress bar (by setting prog_bar=True). Read more in :ref:`results`.
 .. code-block::
@@ -333,7 +333,7 @@ the :class:`~pytorch_lightning.trainer.Trainer`:
     # test
     trainer.test(datamodule=dm)
-DataModules are specifically useful for building models based on data. Read more on :ref:`data-modules`.
+DataModules are specifically useful for building models based on data. Read more on :ref:`datamodules`.
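For context on the hunk above, here is a minimal sketch of the DataModule pattern it links to; the MNIST dataset, batch size, and split sizes are illustrative assumptions, not part of the diff:

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms


class MNISTDataModule(pl.LightningDataModule):
    def prepare_data(self):
        # Called once (on a single process): download only, assign no state here.
        datasets.MNIST("./data", train=True, download=True)

    def setup(self, stage=None):
        # Called on every process: build the datasets and splits.
        full = datasets.MNIST("./data", train=True, transform=transforms.ToTensor())
        self.mnist_train, self.mnist_val = random_split(full, [55000, 5000])

    def train_dataloader(self):
        return DataLoader(self.mnist_train, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.mnist_val, batch_size=32)
```

A `Trainer` would then consume it via `trainer.fit(model, datamodule=dm)`, mirroring the `trainer.test(datamodule=dm)` call shown in the hunk.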
----------
@@ -418,21 +418,21 @@ Advanced Lightning Features
 Once you define and train your first Lightning model, you might want to try other cool features like
 - :ref:`loggers`
-- `Automatic checkpointing `_
-- `Automatic early stopping `_
-- `Add custom callbacks `_ (self-contained programs that can be reused across projects)
-- `Dry run mode `_ (Hit every line of your code once to see if you have bugs, instead of waiting hours to crash on validation ;)
-- `Automatically overfit your model for a sanity test `_
-- `Automatic truncated-back-propagation-through-time `_
-- `Automatically scale your batch size `_
-- `Automatically find a good learning rate `_
-- `Load checkpoints directly from S3 `_
-- `Profile your code for speed/memory bottlenecks `_
-- `Scale to massive compute clusters `_
-- `Use multiple dataloaders per train/val/test loop `_
-- `Use multiple optimizers to do Reinforcement learning or even GANs `_
+- :ref:`Automatic checkpointing <weights_loading>`
+- :ref:`Automatic early stopping <early_stopping>`
+- :ref:`Add custom callbacks <callbacks>` (self-contained programs that can be reused across projects)
+- :ref:`Dry run mode <debugging>` (Hit every line of your code once to see if you have bugs, instead of waiting hours to crash on validation :)
+- :ref:`Automatically overfit your model for a sanity test <debugging>`
+- :ref:`Automatic truncated-back-propagation-through-time <sequences>`
+- :ref:`Automatically scale your batch size <training_tricks>`
+- :ref:`Automatically find a good learning rate <lr_finder>`
+- :ref:`Load checkpoints directly from S3 <weights_loading>`
+- :ref:`Profile your code for speed/memory bottlenecks <profiler>`
+- :ref:`Scale to massive compute clusters <slurm>`
+- :ref:`Use multiple dataloaders per train/val/test loop <multiple_loaders>`
+- :ref:`Use multiple optimizers to do Reinforcement learning or even GANs <optimizers>`
-Or read our :ref:`introduction-guide` to learn more!
+Or read our :ref:`introduction_guide` to learn more!
 -------------
diff --git a/docs/source/production_inference.rst b/docs/source/production_inference.rst
index f81ffe5c8b..82962113b3 100644
--- a/docs/source/production_inference.rst
+++ b/docs/source/production_inference.rst
@@ -1,4 +1,4 @@
-.. _production-inference:
+.. _production_inference:
 Inference in Production
 =======================
diff --git a/docs/source/results.rst b/docs/source/results.rst
index d3a372825a..8647d2ec71 100644
--- a/docs/source/results.rst
+++ b/docs/source/results.rst
@@ -1,4 +1,4 @@
-.. _result:
+.. _results:
 Result
 ======
diff --git a/docs/source/single_gpu.rst b/docs/source/single_gpu.rst
index 3bee7a847b..14e0486fa7 100644
--- a/docs/source/single_gpu.rst
+++ b/docs/source/single_gpu.rst
@@ -2,7 +2,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
-.. _single-gpu:
+.. _single_gpu:
 Single GPU Training
 ===================
diff --git a/docs/source/test_set.rst b/docs/source/test_set.rst
index 0c93b1444d..24e1907ac6 100644
--- a/docs/source/test_set.rst
+++ b/docs/source/test_set.rst
@@ -1,4 +1,4 @@
-.. _test-set:
+.. _test_set:
 Test set
 ========
diff --git a/docs/source/training_tricks.rst b/docs/source/training_tricks.rst
index 1e63107169..b0d9d2654c 100644
--- a/docs/source/training_tricks.rst
+++ b/docs/source/training_tricks.rst
@@ -2,7 +2,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
-.. _training-tricks:
+.. _training_tricks:
 Training Tricks
 ================
diff --git a/docs/source/weights_loading.rst b/docs/source/weights_loading.rst
index d14e26b38d..f046892379 100644
--- a/docs/source/weights_loading.rst
+++ b/docs/source/weights_loading.rst
@@ -4,7 +4,7 @@
     from pytorch_lightning.trainer.trainer import Trainer
     from pytorch_lightning.core.lightning import LightningModule
-.. _weights-loading:
+.. _weights_loading:
 Saving and loading weights
 ==========================
diff --git a/pytorch_lightning/callbacks/progress.py b/pytorch_lightning/callbacks/progress.py
index 8b9ab5cd56..9bffc9883a 100644
--- a/pytorch_lightning/callbacks/progress.py
+++ b/pytorch_lightning/callbacks/progress.py
@@ -138,7 +138,7 @@ class ProgressBarBase(Callback):
        """
        You should provide a way to enable the progress bar.
        The :class:`~pytorch_lightning.trainer.trainer.Trainer` will call this in e.g. pre-training
-       routines like the `learning rate finder `_ to temporarily enable and
+       routines like the :ref:`learning rate finder <lr_finder>` to temporarily enable and
        disable the main progress bar.
        """
        raise NotImplementedError
diff --git a/pytorch_lightning/core/lightning.py b/pytorch_lightning/core/lightning.py
index 957f6f68f9..5b37aae978 100644
--- a/pytorch_lightning/core/lightning.py
+++ b/pytorch_lightning/core/lightning.py
@@ -335,7 +335,7 @@ class LightningModule(ABC, DeviceDtypeModuleMixin, GradInformation, ModelIO, Mod
            return result
        See Also:
-           See the :ref:`multi-gpu-training` guide for more details.
+           See the :ref:`multi_gpu` guide for more details.
        """
    def training_epoch_end(
@@ -531,7 +531,7 @@ class LightningModule(ABC, DeviceDtypeModuleMixin, GradInformation, ModelIO, Mod
            return result
        See Also:
-           See the :ref:`multi-gpu-training` guide for more details.
+           See the :ref:`multi_gpu` guide for more details.
        """
    def validation_end(self, outputs):
@@ -732,7 +732,7 @@ class LightningModule(ABC, DeviceDtypeModuleMixin, GradInformation, ModelIO, Mod
            return result
        See Also:
-           See the :ref:`multi-gpu-training` guide for more details.
+           See the :ref:`multi_gpu` guide for more details.
        """
    def test_end(self, outputs):
diff --git a/pytorch_lightning/trainer/__init__.py b/pytorch_lightning/trainer/__init__.py
index ab72446648..f889325826 100644
--- a/pytorch_lightning/trainer/__init__.py
+++ b/pytorch_lightning/trainer/__init__.py
@@ -261,7 +261,7 @@ Example::
    trainer = Trainer(auto_lr_find='my_lr_arg')
 .. note::
-    See the `learning rate finder guide `_
+    See the :ref:`learning rate finder guide <lr_finder>`.
 benchmark
 ^^^^^^^^^
@@ -402,8 +402,8 @@ Example::
 .. note:: this option does not apply to TPU. TPUs use ``ddp`` by default (over each core)
 See Also:
-    - `Multi-GPU training guide `_
-    - `Multi-node (SLURM) guide `_
+    - :ref:`Multi-GPU training guide <multi_gpu>`.
+    - :ref:`Multi-node (SLURM) guide <slurm>`.
 early_stop_callback
 ^^^^^^^^^^^^^^^^^^^
@@ -499,7 +499,7 @@ Example::
    trainer = Trainer(gpus=[1, 4], num_nodes=4)
 See Also:
-    - `Multi-GPU training guide `_
+    - :ref:`Multi-GPU training guide <multi_gpu>`.
 gradient_clip_val
 ^^^^^^^^^^^^^^^^^
@@ -584,7 +584,7 @@ Writes logs to disk this often.
 logger
 ^^^^^^
-`Logger `_ (or iterable collection of loggers) for experiment tracking.
+:ref:`Logger <loggers>` (or iterable collection of loggers) for experiment tracking.
 .. testcode::
@@ -820,7 +820,7 @@
 profiler
 ^^^^^^^^
 To profile individual steps during training and assist in identifying bottlenecks.
-See the `profiler documentation `_. for more details.
+See the :ref:`profiler documentation <profiler>` for more details.
 .. testcode::