diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 1d47028bfe..7bec2d8763 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -225,7 +225,7 @@ git push -f
 
 #### How to add new tests?
 
-We are using [pytest](https://docs.pytest.org/en/stable/) in Pytorch Lightning.
+We are using [pytest](https://docs.pytest.org/en/stable/) in PyTorch Lightning.
 
 Here are tutorials:
 
diff --git a/.github/ISSUE_TEMPLATE/documentation.md b/.github/ISSUE_TEMPLATE/documentation.md
index 9336d4bd35..8f94ee921e 100644
--- a/.github/ISSUE_TEMPLATE/documentation.md
+++ b/.github/ISSUE_TEMPLATE/documentation.md
@@ -30,4 +30,4 @@ ______________________________________________________________________
 
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
 
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
index 77f5bac403..0d506dd923 100644
--- a/.github/ISSUE_TEMPLATE/feature_request.md
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -38,4 +38,4 @@ ______________________________________________________________________
 
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
 
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
diff --git a/.github/ISSUE_TEMPLATE/refactor.md b/.github/ISSUE_TEMPLATE/refactor.md
index 7df1c30026..159a4ce8d6 100644
--- a/.github/ISSUE_TEMPLATE/refactor.md
+++ b/.github/ISSUE_TEMPLATE/refactor.md
@@ -34,4 +34,4 @@ ______________________________________________________________________
 
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
 
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
diff --git a/.github/stale.yml b/.github/stale.yml
index 51b57c0798..a1fb9abfc9 100644
--- a/.github/stale.yml
+++ b/.github/stale.yml
@@ -14,7 +14,7 @@ issues:
   markComment: >
     This issue has been automatically marked as stale because it hasn't had any recent activity.
     This issue will be closed in 7 days if no further activity occurs.
-    Thank you for your contributions, Pytorch Lightning Team!
+    Thank you for your contributions, PyTorch Lightning Team!
 
   # Comment to post when closing a stale issue. Set to `false` to disable
   closeComment: false
diff --git a/dockers/base-xla/Dockerfile b/dockers/base-xla/Dockerfile
index 13da7c2208..977aee878f 100644
--- a/dockers/base-xla/Dockerfile
+++ b/dockers/base-xla/Dockerfile
@@ -77,7 +77,7 @@ ENV \
 RUN pip --version && \
     pip config set global.cache-dir false && \
     conda remove pytorch torchvision && \
-    # Install Pytorch XLA
+    # Install PyTorch XLA
     py_version=${PYTHON_VERSION/./} && \
     gsutil cp "gs://tpu-pytorch/wheels/torch-${XLA_VERSION}-cp${py_version}-cp${py_version}m-linux_x86_64.whl" . && \
     gsutil cp "gs://tpu-pytorch/wheels/torch_xla-${XLA_VERSION}-cp${py_version}-cp${py_version}m-linux_x86_64.whl" . && \
diff --git a/docs/source-app/get_started/training_with_apps.rst b/docs/source-app/get_started/training_with_apps.rst
index a7061cae56..f509ba4cf0 100644
--- a/docs/source-app/get_started/training_with_apps.rst
+++ b/docs/source-app/get_started/training_with_apps.rst
@@ -8,7 +8,7 @@ Evolve a model into an ML system
 
 **Required background:** Basic Python familiarity and complete the :ref:`build_model` guide.
 
-**Goal:** We'll walk you through the two key steps to build your first Lightning App from your existing Pytorch Lightning scripts.
+**Goal:** We'll walk you through the two key steps to build your first Lightning App from your existing PyTorch Lightning scripts.
 
 .. join_slack::
    :align: left
@@ -50,7 +50,7 @@ Inside the ``app.py`` file, add the following code.
 
 .. literalinclude:: ../code_samples/convert_pl_to_app/app.py
 
-This App runs the Pytorch Lightning script contained in the ``train.py`` file using the powerful :class:`~lightning_app.components.python.tracer.TracerPythonScript` component. This is really worth checking out!
+This App runs the PyTorch Lightning script contained in the ``train.py`` file using the powerful :class:`~lightning_app.components.python.tracer.TracerPythonScript` component. This is really worth checking out!
 
 ----
 
diff --git a/docs/source-pytorch/deploy/production_advanced_2.rst b/docs/source-pytorch/deploy/production_advanced_2.rst
index 5f6fe58d6e..ea5ca9fd24 100644
--- a/docs/source-pytorch/deploy/production_advanced_2.rst
+++ b/docs/source-pytorch/deploy/production_advanced_2.rst
@@ -34,7 +34,7 @@ can save or directly use.
 It is recommended that you install the latest supported version of PyTorch
 to use this feature without limitations.
 
-Once you have the exported model, you can run it in Pytorch or C++ runtime:
+Once you have the exported model, you can run it in PyTorch or C++ runtime:
 
 .. code-block:: python
 
diff --git a/docs/source-pytorch/ecosystem/asr_nlp_tts.rst b/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
index b624696886..abec585df6 100644
--- a/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
+++ b/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
@@ -48,7 +48,7 @@ so that each can be configured from .yaml or the Hydra CLI.
 
 .. note:: Every NeMo model has an example configuration file and a corresponding script that contains all configurations needed for training.
 
-The end result of using NeMo, Pytorch Lightning, and Hydra is that
+The end result of using NeMo, PyTorch Lightning, and Hydra is that
 NeMo models all have the same look and feel.
 This makes it easy to do Conversational AI research
 across multiple domains. NeMo models are also fully compatible with the PyTorch ecosystem.
diff --git a/examples/pl_domain_templates/computer_vision_fine_tuning.py b/examples/pl_domain_templates/computer_vision_fine_tuning.py
index fedd837de0..b33d63eb65 100644
--- a/examples/pl_domain_templates/computer_vision_fine_tuning.py
+++ b/examples/pl_domain_templates/computer_vision_fine_tuning.py
@@ -150,7 +150,7 @@ class CatDogImageDataModule(LightningDataModule):
         return self.__dataloader(train=False)
 
 
-# --- Pytorch-lightning module ---
+# --- PyTorch Lightning module ---
 
 
 class TransferLearningModel(LightningModule):
diff --git a/src/lightning_app/components/python/tracer.py b/src/lightning_app/components/python/tracer.py
index 5605eee6b6..ed692c7f3e 100644
--- a/src/lightning_app/components/python/tracer.py
+++ b/src/lightning_app/components/python/tracer.py
@@ -79,7 +79,7 @@ class TracerPythonScript(LightningWork):
     This callback has a reference to the work and on every batch end, we are capturing the
     trainer ``global_step`` and ``best_model_path``.
 
-    Even more interesting, this component works for ANY Pytorch Lightning script and
+    Even more interesting, this component works for ANY PyTorch Lightning script and
     its state can be used in real time in a UI.
 
     .. literalinclude:: ../../../../examples/app_components/python/component_tracer.py
diff --git a/src/pytorch_lightning/CHANGELOG.md b/src/pytorch_lightning/CHANGELOG.md
index b2c7ca54e6..1c3a3b9d5a 100644
--- a/src/pytorch_lightning/CHANGELOG.md
+++ b/src/pytorch_lightning/CHANGELOG.md
@@ -1800,7 +1800,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
 - Deprecated `trainer.running_sanity_check` in favor of `trainer.sanity_checking` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
 - Deprecated `Profiler(output_filename)` in favor of `dirpath` and `filename` ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
-- Deprecated `PytorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
+- Deprecated `PyTorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
 - Deprecated `@auto_move_data` in favor of `trainer.predict` ([#6993](https://github.com/PyTorchLightning/pytorch-lightning/pull/6993))
 - Deprecated `Callback.on_load_checkpoint(checkpoint)` in favor of `Callback.on_load_checkpoint(trainer, pl_module, checkpoint)` ([#7253](https://github.com/PyTorchLightning/pytorch-lightning/pull/7253))
 - Deprecated metrics in favor of `torchmetrics` (
@@ -2386,7 +2386,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
     [#4737](https://github.com/PyTorchLightning/pytorch-lightning/pull/4737),
     [#4773](https://github.com/PyTorchLightning/pytorch-lightning/pull/4773))
 - Added `experiment_id` to the NeptuneLogger ([#3462](https://github.com/PyTorchLightning/pytorch-lightning/pull/3462))
-- Added `Pytorch Geometric` integration example with Lightning ([#4568](https://github.com/PyTorchLightning/pytorch-lightning/pull/4568))
+- Added `PyTorch Geometric` integration example with Lightning ([#4568](https://github.com/PyTorchLightning/pytorch-lightning/pull/4568))
 - Added `all_gather` method to `LightningModule` which allows gradient based tensor synchronizations for use-cases such as negative sampling.
     ([#5012](https://github.com/PyTorchLightning/pytorch-lightning/pull/5012))
 - Enabled `self.log` in most functions ([#4969](https://github.com/PyTorchLightning/pytorch-lightning/pull/4969))
 - Added changeable extension variable for `ModelCheckpoint` ([#4977](https://github.com/PyTorchLightning/pytorch-lightning/pull/4977))
diff --git a/src/pytorch_lightning/overrides/distributed.py b/src/pytorch_lightning/overrides/distributed.py
index 8048d83252..f09a7b9e3a 100644
--- a/src/pytorch_lightning/overrides/distributed.py
+++ b/src/pytorch_lightning/overrides/distributed.py
@@ -41,7 +41,7 @@ def _find_tensors(
 
 
 # In manual_optimization, we need to call reducer prepare_for_backward.
-# Note: Keep track of Pytorch DDP and update if there is a change
+# Note: Keep track of PyTorch DDP and update if there is a change
 # https://github.com/pytorch/pytorch/blob/v1.7.1/torch/nn/parallel/distributed.py#L626-L638
 def prepare_for_backward(model: DistributedDataParallel, output: Any) -> None:
     # `prepare_for_backward` is `DistributedDataParallel` specific.
diff --git a/src/pytorch_lightning/strategies/fully_sharded_native.py b/src/pytorch_lightning/strategies/fully_sharded_native.py
index 6290164a16..553f010763 100644
--- a/src/pytorch_lightning/strategies/fully_sharded_native.py
+++ b/src/pytorch_lightning/strategies/fully_sharded_native.py
@@ -85,7 +85,7 @@ class DDPFullyShardedNativeStrategy(ParallelStrategy):
     `For more information: https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/`.
 
     .. warning:: ``DDPFullyShardedNativeStrategy`` is in beta and subject to change. The interface can
-        bring breaking changes and new features with the next release of Pytorch.
+        bring breaking changes and new features with the next release of PyTorch.
 
     Defaults have been set and options have been exposed, but may require configuration
     based on your level of memory/speed efficiency. We suggest having a look at this tutorial for
diff --git a/src/pytorch_lightning/trainer/trainer.py b/src/pytorch_lightning/trainer/trainer.py
index d10225fea2..561fe799f1 100644
--- a/src/pytorch_lightning/trainer/trainer.py
+++ b/src/pytorch_lightning/trainer/trainer.py
@@ -256,7 +256,7 @@ class Trainer(
 
         deterministic: If ``True``, sets whether PyTorch operations must use deterministic algorithms.
             Set to ``"warn"`` to use deterministic algorithms whenever possible, throwing warnings on operations
-            that don't support deterministic mode (requires Pytorch 1.11+). If not set, defaults to ``False``.
+            that don't support deterministic mode (requires PyTorch 1.11+). If not set, defaults to ``False``.
             Default: ``None``.
 
         devices: Will be mapped to either `gpus`, `tpu_cores`, `num_processes` or `ipus`,
diff --git a/tests/tests_pytorch/helpers/datasets.py b/tests/tests_pytorch/helpers/datasets.py
index 2366145004..3443020d45 100644
--- a/tests/tests_pytorch/helpers/datasets.py
+++ b/tests/tests_pytorch/helpers/datasets.py
@@ -23,7 +23,7 @@ from torch.utils.data import Dataset, IterableDataset
 
 
 class MNIST(Dataset):
-    """Customized `MNIST `_ dataset for testing Pytorch Lightning without the
+    """Customized `MNIST `_ dataset for testing PyTorch Lightning without the
     torchvision dependency.
 
     Part of the code was copied from