Fix PyTorch spelling errors (#13774)

* Fix PyTorch spelling errors

* more
Adrian Wälchli 2022-07-25 18:51:16 +02:00 committed by GitHub
parent 227871982d
commit a8d7b4476c
16 changed files with 18 additions and 18 deletions

@@ -225,7 +225,7 @@ git push -f
 #### How to add new tests?
-We are using [pytest](https://docs.pytest.org/en/stable/) in Pytorch Lightning.
+We are using [pytest](https://docs.pytest.org/en/stable/) in PyTorch Lightning.
 Here are tutorials:

@@ -30,4 +30,4 @@ ______________________________________________________________________
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.

@@ -38,4 +38,4 @@ ______________________________________________________________________
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.

@@ -34,4 +34,4 @@ ______________________________________________________________________
 - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts): Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
-- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.
+- [**Lightning Transformers**](https://github.com/Lightning-AI/lightning-transformers): Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.

.github/stale.yml (vendored)
@@ -14,7 +14,7 @@ issues:
 markComment: >
 This issue has been automatically marked as stale because it hasn't had any recent activity.
 This issue will be closed in 7 days if no further activity occurs.
-Thank you for your contributions, Pytorch Lightning Team!
+Thank you for your contributions, PyTorch Lightning Team!
 # Comment to post when closing a stale issue. Set to `false` to disable
 closeComment: false

@@ -77,7 +77,7 @@ ENV \
 RUN pip --version && \
 pip config set global.cache-dir false && \
 conda remove pytorch torchvision && \
-# Install Pytorch XLA
+# Install PyTorch XLA
 py_version=${PYTHON_VERSION/./} && \
 gsutil cp "gs://tpu-pytorch/wheels/torch-${XLA_VERSION}-cp${py_version}-cp${py_version}m-linux_x86_64.whl" . && \
 gsutil cp "gs://tpu-pytorch/wheels/torch_xla-${XLA_VERSION}-cp${py_version}-cp${py_version}m-linux_x86_64.whl" . && \

@@ -8,7 +8,7 @@ Evolve a model into an ML system
 **Required background:** Basic Python familiarity and complete the :ref:`build_model` guide.
-**Goal:** We'll walk you through the two key steps to build your first Lightning App from your existing Pytorch Lightning scripts.
+**Goal:** We'll walk you through the two key steps to build your first Lightning App from your existing PyTorch Lightning scripts.
 .. join_slack::
    :align: left
@@ -50,7 +50,7 @@ Inside the ``app.py`` file, add the following code.
 .. literalinclude:: ../code_samples/convert_pl_to_app/app.py
-This App runs the Pytorch Lightning script contained in the ``train.py`` file using the powerful :class:`~lightning_app.components.python.tracer.TracerPythonScript` component. This is really worth checking out!
+This App runs the PyTorch Lightning script contained in the ``train.py`` file using the powerful :class:`~lightning_app.components.python.tracer.TracerPythonScript` component. This is really worth checking out!
----

@@ -34,7 +34,7 @@ can save or directly use.
 It is recommended that you install the latest supported version of PyTorch to use this feature without limitations.
-Once you have the exported model, you can run it in Pytorch or C++ runtime:
+Once you have the exported model, you can run it in PyTorch or C++ runtime:
 .. code-block:: python

@@ -48,7 +48,7 @@ so that each can be configured from .yaml or the Hydra CLI.
 .. note:: Every NeMo model has an example configuration file and a corresponding script that contains all configurations needed for training.
-The end result of using NeMo, Pytorch Lightning, and Hydra is that
+The end result of using NeMo, PyTorch Lightning, and Hydra is that
 NeMo models all have the same look and feel. This makes it easy to do Conversational AI research
 across multiple domains. NeMo models are also fully compatible with the PyTorch ecosystem.

@@ -150,7 +150,7 @@ class CatDogImageDataModule(LightningDataModule):
 return self.__dataloader(train=False)
-# --- Pytorch-lightning module ---
+# --- PyTorch Lightning module ---
 class TransferLearningModel(LightningModule):

@@ -79,7 +79,7 @@ class TracerPythonScript(LightningWork):
 This callback has a reference to the work and on every batch end, we are capturing the
 trainer ``global_step`` and ``best_model_path``.
-Even more interesting, this component works for ANY Pytorch Lightning script and
+Even more interesting, this component works for ANY PyTorch Lightning script and
 its state can be used in real time in a UI.
 .. literalinclude:: ../../../../examples/app_components/python/component_tracer.py

@@ -1800,7 +1800,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
 - Deprecated `trainer.running_sanity_check` in favor of `trainer.sanity_checking` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
 - Deprecated `Profiler(output_filename)` in favor of `dirpath` and `filename` ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
-- Deprecated `PytorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
+- Deprecated `PyTorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
 - Deprecated `@auto_move_data` in favor of `trainer.predict` ([#6993](https://github.com/PyTorchLightning/pytorch-lightning/pull/6993))
 - Deprecated `Callback.on_load_checkpoint(checkpoint)` in favor of `Callback.on_load_checkpoint(trainer, pl_module, checkpoint)` ([#7253](https://github.com/PyTorchLightning/pytorch-lightning/pull/7253))
 - Deprecated metrics in favor of `torchmetrics` (
@@ -2386,7 +2386,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 [#4737](https://github.com/PyTorchLightning/pytorch-lightning/pull/4737),
 [#4773](https://github.com/PyTorchLightning/pytorch-lightning/pull/4773))
 - Added `experiment_id` to the NeptuneLogger ([#3462](https://github.com/PyTorchLightning/pytorch-lightning/pull/3462))
-- Added `Pytorch Geometric` integration example with Lightning ([#4568](https://github.com/PyTorchLightning/pytorch-lightning/pull/4568))
+- Added `PyTorch Geometric` integration example with Lightning ([#4568](https://github.com/PyTorchLightning/pytorch-lightning/pull/4568))
 - Added `all_gather` method to `LightningModule` which allows gradient based tensor synchronizations for use-cases such as negative sampling. ([#5012](https://github.com/PyTorchLightning/pytorch-lightning/pull/5012))
 - Enabled `self.log` in most functions ([#4969](https://github.com/PyTorchLightning/pytorch-lightning/pull/4969))
 - Added changeable extension variable for `ModelCheckpoint` ([#4977](https://github.com/PyTorchLightning/pytorch-lightning/pull/4977))

@@ -41,7 +41,7 @@ def _find_tensors(
 # In manual_optimization, we need to call reducer prepare_for_backward.
-# Note: Keep track of Pytorch DDP and update if there is a change
+# Note: Keep track of PyTorch DDP and update if there is a change
 # https://github.com/pytorch/pytorch/blob/v1.7.1/torch/nn/parallel/distributed.py#L626-L638
 def prepare_for_backward(model: DistributedDataParallel, output: Any) -> None:
 # `prepare_for_backward` is `DistributedDataParallel` specific.

@@ -85,7 +85,7 @@ class DDPFullyShardedNativeStrategy(ParallelStrategy):
 `For more information: https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/`.
 .. warning:: ``DDPFullyShardedNativeStrategy`` is in beta and subject to change. The interface can
-bring breaking changes and new features with the next release of Pytorch.
+bring breaking changes and new features with the next release of PyTorch.
 Defaults have been set and options have been exposed, but may require configuration
 based on your level of memory/speed efficiency. We suggest having a look at this tutorial for

@@ -256,7 +256,7 @@ class Trainer(
 deterministic: If ``True``, sets whether PyTorch operations must use deterministic algorithms.
 Set to ``"warn"`` to use deterministic algorithms whenever possible, throwing warnings on operations
-that don't support deterministic mode (requires Pytorch 1.11+). If not set, defaults to ``False``.
+that don't support deterministic mode (requires PyTorch 1.11+). If not set, defaults to ``False``.
 Default: ``None``.
 devices: Will be mapped to either `gpus`, `tpu_cores`, `num_processes` or `ipus`,

@@ -23,7 +23,7 @@ from torch.utils.data import Dataset, IterableDataset
 class MNIST(Dataset):
-"""Customized `MNIST <http://yann.lecun.com/exdb/mnist/>`_ dataset for testing Pytorch Lightning without the
+"""Customized `MNIST <http://yann.lecun.com/exdb/mnist/>`_ dataset for testing PyTorch Lightning without the
 torchvision dependency.
 Part of the code was copied from