diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 811014fcd8..7d60a27225 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -322,7 +322,7 @@ export PACKAGE_LIGHTNING=1 # <- this is the magic to use your version (not main lightning run app app.py --cloud ``` -By seting `PACKAGE_LIGHTNING=1`, lightning packages the lightning source code in your local directory in addition to your app source code and uploads them to the cloud. +By setting `PACKAGE_LIGHTNING=1`, lightning packages the lightning source code in your local directory in addition to your app source code and uploads them to the cloud. ### Bonus Workflow Tip diff --git a/.github/ISSUE_TEMPLATE/4_documentation.yaml b/.github/ISSUE_TEMPLATE/4_documentation.yaml index 0f7d96b823..ab513b9f04 100644 --- a/.github/ISSUE_TEMPLATE/4_documentation.yaml +++ b/.github/ISSUE_TEMPLATE/4_documentation.yaml @@ -9,7 +9,7 @@ body: placeholder: | For typos and doc fixes, please go ahead and: - - For a simlpe typo or fix, please send directly a PR (no ned to create an issue) + - For a simple typo or fix, please send a PR directly (no need to create an issue) - If you are not sure about the proper solution, please describe here your finding... 
validations: diff --git a/.github/actions/pip-wheels/action.yml b/.github/actions/pip-wheels/action.yml index fe399cab40..5a1efce18c 100644 --- a/.github/actions/pip-wheels/action.yml +++ b/.github/actions/pip-wheels/action.yml @@ -32,11 +32,11 @@ runs: run: | import os fp = 'requirements.dump' - with open(fp) as fo: - lines = [ln.strip() for ln in fo.readlines()] + with open(fp) as fopen: + lines = [ln.strip() for ln in fopen.readlines()] lines = [ln.split('+')[0] for ln in lines if '-e ' not in ln] - with open(fp, 'w') as fw: - fw.writelines([ln + os.linesep for ln in lines]) + with open(fp, 'w') as fopen: + fopen.writelines([ln + os.linesep for ln in lines]) shell: python - name: Dump wheels diff --git a/.github/workflows/README.md b/.github/workflows/README.md index 221098490f..8ab76df5d9 100644 --- a/.github/workflows/README.md +++ b/.github/workflows/README.md @@ -36,7 +36,7 @@ Brief description of all our automation tools used for boosting development perf | workflow file | action | | ------------------------------------------------------------------------------- | ------------------------------------------------------------------------ | | .github/workflows/docs-build.yml | Run doctest, linkcheck and full HTML build. | -| .github/workflows/ci-rtfd.yml | Append link to the PR describtion with temporaty ReadTheDocs build docs. | +| .github/workflows/ci-rtfd.yml | Append link to the PR description with temporary ReadTheDocs build docs. | | .github/workflows/ci-check-md-links.yml
.github/markdown.links.config.json | Validate links in markdown files. | ## Code Quality @@ -68,12 +68,12 @@ The published Docker Hub project is https://hub.docker.com/r/pytorchlightning/py ## Bots -| workflow file | action | -| ---------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | -| .github/mergify.yml | Label PRs as conflicts or ready, and request reviews if needed. | -| .github/stale.yml | Close inactive issues/PRs sometimes after adding the "won't fix" label to them. | -| .github/workflows/probot-auto-cc.yml
.github/lightning-probot.yml | Notify maintainers of interest depending on labels added to an issue We utilize lightning-probot forked from PyTorch’s probot. | -| .github/workflows/probot-check-group.yml
.github/checkgroup.yml | Checks whether the relevant jobs were successfully run based on the changed files in the PR | -| .pre-commit-config.yaml | It applies a set of linters and formatters and can be registered with your local dev. If needed [bot](https://pre-commit.ci/) pushc chnges to each PRs. | -| .github/workflows/labeler-pr.yml, .github/labeler.yml | Integration of https://github.com/actions/labeler | -| .github/workflows/labeler-issue.yml | Parse user provided `lightning` version and set it as label. | +| workflow file | action | +| ---------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | +| .github/mergify.yml | Label PRs as conflicts or ready, and request reviews if needed. | +| .github/stale.yml | Close inactive issues/PRs sometimes after adding the "won't fix" label to them. | +| .github/workflows/probot-auto-cc.yml
.github/lightning-probot.yml | Notify maintainers of interest depending on labels added to an issue. We utilize lightning-probot forked from PyTorch’s probot. | +| .github/workflows/probot-check-group.yml
.github/checkgroup.yml | Checks whether the relevant jobs were successfully run based on the changed files in the PR | +| .pre-commit-config.yaml | It applies a set of linters and formatters and can be registered with your local dev. If needed, the [bot](https://pre-commit.ci/) pushes changes to each PR. | +| .github/workflows/labeler-pr.yml, .github/labeler.yml | Integration of https://github.com/actions/labeler | +| .github/workflows/labeler-issue.yml | Parse user provided `lightning` version and set it as label. | diff --git a/.github/workflows/call-clear-cache.yml b/.github/workflows/call-clear-cache.yml index bcb55f1465..4edb9b4b4c 100644 --- a/.github/workflows/call-clear-cache.yml +++ b/.github/workflows/call-clear-cache.yml @@ -4,7 +4,7 @@ on: workflow_dispatch: inputs: pattern: - description: "patter for cleaning cache" + description: "pattern for cleaning cache" default: "pip-|conda" required: false type: string diff --git a/.github/workflows/release-pkg.yml b/.github/workflows/release-pkg.yml index 5ac531455d..f55c359c66 100644 --- a/.github/workflows/release-pkg.yml +++ b/.github/workflows/release-pkg.yml @@ -83,11 +83,11 @@ jobs: run: | import json, os fname = "versions.json" - with open(fname, encoding="utf-8") as fo: - vers = json.load(fo) + with open(fname, encoding="utf-8") as fopen: + vers = json.load(fopen) vers["lightning_version"] = os.getenv('TAG') - with open(fname, "w", encoding="utf-8") as fw: - json.dump(vers, fw, indent=2) + with open(fname, "w", encoding="utf-8") as fopen: + json.dump(vers, fopen, indent=2) shell: python - run: cat versions.json - name: GIT commit & push @@ -136,7 +136,7 @@ jobs: matrix: name: ["APP", "FABRIC", "PYTORCH", "LIGHTNING"] steps: - - uses: actions/checkout@v3 # needed for local action bellow + - uses: actions/checkout@v3 # needed for local action below - uses: actions/download-artifact@v3 with: name: dist-packages-${{ github.sha }} @@ -161,7 +161,7 @@ jobs: matrix: name: ["APP", "FABRIC", "PYTORCH", "LIGHTNING"] 
steps: - - uses: actions/checkout@v3 # needed for local action bellow + - uses: actions/checkout@v3 # needed for local action below - uses: actions/download-artifact@v3 with: name: dist-packages-${{ github.sha }} diff --git a/.github/workflows/tpu-tests.yml b/.github/workflows/tpu-tests.yml index dbde248111..4946bd43b6 100644 --- a/.github/workflows/tpu-tests.yml +++ b/.github/workflows/tpu-tests.yml @@ -102,13 +102,13 @@ jobs: run: | import os fname = f'tests/tests_${{ matrix.pkg-name }}/run_tpu_tests.sh' - with open(fname) as fo: - data = fo.read() + with open(fname) as fopen: + data = fopen.read() data = data.replace('{PYTORCH_VERSION}', os.environ["XLA_VER"]) data = data.replace('{RUNTIME}', '${{ matrix.runtime }}') print(data) - with open(fname, "w") as fw: - fw.write(data) + with open(fname, "w") as fopen: + fopen.write(data) shell: python - name: Create node diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index d3d03b47b7..1177bebbe5 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -29,10 +29,11 @@ repos: - id: trailing-whitespace # ignoring Wills's wild changes exclude: README.md + - id: check-json - id: check-yaml + - id: check-toml - id: check-docstring-first - id: check-executables-have-shebangs - - id: check-toml - id: check-case-conflict - id: check-added-large-files args: ["--maxkb=350", "--enforce-all"] @@ -56,6 +57,13 @@ repos: args: [--py38-plus] name: Upgrade code + - repo: https://github.com/codespell-project/codespell + rev: v2.2.5 + hooks: + - id: codespell + additional_dependencies: [tomli] + #args: ["--write-changes"] # uncomment to apply fixes automatically + - repo: https://github.com/PyCQA/docformatter rev: v1.7.3 hooks: diff --git a/README.md b/README.md index fae2bc459c..1a9010a2bc 100644 --- a/README.md +++ b/README.md @@ -561,7 +561,7 @@ ______________________________________________________________________ ###### Self-supervised Learning - [CPC 
transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#cpc-transforms) -- [Moco v2 tranforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#moco-v2-transforms) +- [Moco v2 transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#moco-v2-transforms) - [SimCLR transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#simclr-transforms) ###### Convolutional Architectures diff --git a/dockers/docs/Dockerfile b/dockers/docs/Dockerfile index 09332da0bb..46882c2dc5 100644 --- a/dockers/docs/Dockerfile +++ b/dockers/docs/Dockerfile @@ -56,7 +56,7 @@ RUN \ COPY requirements/docs.txt docs.txt -# NOTE: exlude specific `botocore` bellow as is seems to be corrupted version +# NOTE: exclude specific `botocore` below as it seems to be a corrupted version RUN \ wget https://bootstrap.pypa.io/get-pip.py --progress=bar:force:noscroll --no-check-certificate && \ python get-pip.py && \ diff --git a/dockers/release/Dockerfile b/dockers/release/Dockerfile index 5e965e9a7f..c17a54630d 100644 --- a/dockers/release/Dockerfile +++ b/dockers/release/Dockerfile @@ -39,7 +39,7 @@ RUN \ mv lightning-*/ lightning ; \ rm *.zip ; \ fi && \ - # otherwise there is collision with folder name ans pkg name on Pypi + # otherwise there is collision with folder name and pkg name on PyPI cd lightning && \ pip install setuptools==59.5.0 && \ pip install '.[extra,loggers,strategies]' --no-cache-dir && \ diff --git a/docs/source-app/examples/model_server_app/load_testing.rst b/docs/source-app/examples/model_server_app/load_testing.rst index 97345dec4c..9ea347bd0d 100644 --- a/docs/source-app/examples/model_server_app/load_testing.rst +++ b/docs/source-app/examples/model_server_app/load_testing.rst @@ -11,7 +11,7 @@ We are going to use a python performance testing tool called `Locust `. 
diff --git a/docs/source-app/levels/intermediate/run_lightning_work_in_parallel.rst b/docs/source-app/levels/intermediate/run_lightning_work_in_parallel.rst index 615ab98059..7a561893b2 100644 --- a/docs/source-app/levels/intermediate/run_lightning_work_in_parallel.rst +++ b/docs/source-app/levels/intermediate/run_lightning_work_in_parallel.rst @@ -1,7 +1,7 @@ ############################################## Level 6: Run a Lightning component in parallel ############################################## -**Audience:** Users who want to run a Lightning Component in parallel (asynchroneously). +**Audience:** Users who want to run a Lightning Component in parallel (asynchronously). **Prereqs:** You must have finished :doc:`Level 5 `. diff --git a/docs/source-app/levels/intermediate/share_variables_between_lightning_components.rst b/docs/source-app/levels/intermediate/share_variables_between_lightning_components.rst index b0ce06dae2..81bbfbf718 100644 --- a/docs/source-app/levels/intermediate/share_variables_between_lightning_components.rst +++ b/docs/source-app/levels/intermediate/share_variables_between_lightning_components.rst @@ -104,10 +104,10 @@ Example Continuous deployment: Every time a model saves a checkpoint, we redeplo Send a large variable between two components ******************************************** For large variables such as arrays, tensors, embeddings and so on, use Payload to enable -transfering them across components. +transferring them across components. .. lit_tabs:: - :descriptions: Let's define a component to simulate generating embeddings (from a DB, feature store, etc...); This component simulates a server that will use the embeddings.; Run the component to generate the embeddings; Simulate embeddings as an array. 
Here you would query a DB, load from a feature store or disk or even use a neural network to extract the embedding.; Allow the embeddings to be transfered efficiently by wrapping them in the Payload object.; Pass the variable to the EmbeddingServer (just the pointer).; The data gets transfered once you use the .value attribute in the other component. + :descriptions: Let's define a component to simulate generating embeddings (from a DB, feature store, etc...); This component simulates a server that will use the embeddings.; Run the component to generate the embeddings; Simulate embeddings as an array. Here you would query a DB, load from a feature store or disk or even use a neural network to extract the embedding.; Allow the embeddings to be transferred efficiently by wrapping them in the Payload object.; Pass the variable to the EmbeddingServer (just the pointer).; The data gets transferred once you use the .value attribute in the other component. :code_files: ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; :highlights: 5-13; 15-19; 28; 12; 13; 29; 18 :enable_run: true diff --git a/docs/source-app/testing.rst b/docs/source-app/testing.rst index ed2d6db92e..348d3c9b3a 100644 --- a/docs/source-app/testing.rst +++ b/docs/source-app/testing.rst @@ -112,7 +112,7 @@ These methods will skew your tests, so use them when needed. The Test ^^^^^^^^ -We provide ``application_testing`` as a helper funtion to get your application up and running for testing. It uses ``click``\ 's invocation tooling underneath. +We provide ``application_testing`` as a helper function to get your application up and running for testing. It uses ``click``\ 's invocation tooling underneath. .. 
code-block:: diff --git a/docs/source-app/workflows/add_web_ui/integrate_any_javascript_framework.rst b/docs/source-app/workflows/add_web_ui/integrate_any_javascript_framework.rst index f1da660b09..66625289a6 100644 --- a/docs/source-app/workflows/add_web_ui/integrate_any_javascript_framework.rst +++ b/docs/source-app/workflows/add_web_ui/integrate_any_javascript_framework.rst @@ -55,7 +55,7 @@ Once it's imported, use it inside your app, this example uses it inside a React ************************ Update the Lightning app ************************ -Use `updateLightningState` to update the lightning app. Here we update a vairable called counter. +Use `updateLightningState` to update the lightning app. Here we update a variable called counter. .. code-block:: :emphasize-lines: 11 diff --git a/docs/source-app/workflows/add_web_ui/react/create_react_template.rst b/docs/source-app/workflows/add_web_ui/react/create_react_template.rst index 4e48f39178..a5626bc32d 100644 --- a/docs/source-app/workflows/add_web_ui/react/create_react_template.rst +++ b/docs/source-app/workflows/add_web_ui/react/create_react_template.rst @@ -32,7 +32,7 @@ Run this command to set up a react-ui template for a component: lightning init react-ui -If everything was succesful, run the example_app.py listed in the output of the command: +If everything was successful, run the example_app.py listed in the output of the command: .. 
code:: bash diff --git a/docs/source-app/workflows/build_lightning_component/from_scratch_component_content.rst b/docs/source-app/workflows/build_lightning_component/from_scratch_component_content.rst index a42be9b739..b5828e6952 100644 --- a/docs/source-app/workflows/build_lightning_component/from_scratch_component_content.rst +++ b/docs/source-app/workflows/build_lightning_component/from_scratch_component_content.rst @@ -119,7 +119,7 @@ To implement a LightningWork, simply subclass ``LightningWork`` and define the r sleep(2.0) print(f"{i} LightningWork: work that is long running or may never end (like a server)") -A LightningWork must always be attached to a LightningFlow and explicitely asked to ``run()``: +A LightningWork must always be attached to a LightningFlow and explicitly asked to ``run()``: .. code:: python :emphasize-lines: 13, 16 diff --git a/docs/source-app/workflows/run_app_on_cloud/cloud_files.rst b/docs/source-app/workflows/run_app_on_cloud/cloud_files.rst index 190255fef8..8e8f01a00f 100644 --- a/docs/source-app/workflows/run_app_on_cloud/cloud_files.rst +++ b/docs/source-app/workflows/run_app_on_cloud/cloud_files.rst @@ -62,8 +62,8 @@ We recommend your app contain the following files: .. code:: bash . 
- ├── .lightning (auto-generated- conatins Lightning configuration) + ├── .lightning (auto-generated- contains Lightning configuration) ├── .lightningignore (contains files not to upload to the cloud) ├── app.py ├── README.md (optional- a markdown description of your app) - └── requirements.txt (optional- conatins all your app dependencies) + └── requirements.txt (optional- contains all your app dependencies) diff --git a/docs/source-app/workflows/run_app_on_cloud/index_content.rst b/docs/source-app/workflows/run_app_on_cloud/index_content.rst index b0f67570d7..737b4365df 100644 --- a/docs/source-app/workflows/run_app_on_cloud/index_content.rst +++ b/docs/source-app/workflows/run_app_on_cloud/index_content.rst @@ -59,7 +59,7 @@ You can run Lightning Apps for free on the Public Lightning cloud with a single :tag: Intermediate .. displayitem:: - :header: Manage App Dependancies + :header: Manage App Dependencies :description: Configure your python requirements or use a custom docker image :col_css: col-md-4 :button_link: ../../glossary/build_config/build_config.html :height: 150 @@ -68,7 +68,7 @@ You can run Lightning Apps for free on the Public Lightning cloud with a single .. displayitem:: :header: Share Files Between Works - :description: Learn more about data transfering + :description: Learn more about transferring data :col_css: col-md-4 :button_link: ../../glossary/storage/storage.html :height: 150 diff --git a/docs/source-app/workflows/run_work_in_parallel.rst b/docs/source-app/workflows/run_work_in_parallel.rst index 4f31583926..b87e653afc 100644 --- a/docs/source-app/workflows/run_work_in_parallel.rst +++ b/docs/source-app/workflows/run_work_in_parallel.rst @@ -1,7 +1,7 @@ ############################# Run LightningWork in parallel ############################# -**Audience:** Users who want to run a LightningWork in parallel (asynchroneously). +**Audience:** Users who want to run a LightningWork in parallel (asynchronously). 
**Prereqs:** You must have finished the :doc:`Basic levels <../levels/basic/index>`. diff --git a/docs/source-app/workflows/run_work_once_content.rst b/docs/source-app/workflows/run_work_once_content.rst index 0b79010321..355c2ed2ba 100644 --- a/docs/source-app/workflows/run_work_once_content.rst +++ b/docs/source-app/workflows/run_work_once_content.rst @@ -13,7 +13,7 @@ You can turn caching on or off: # Run only when the input arguments change (default) work = MyWork(cache_calls=True) - # Run everytime regardless of whether input arguments change or not + # Run every time regardless of whether input arguments change or not work = MyWork(cache_calls=False) To better understand this, imagine that every day you want to sequentially download and process some data and then train a model on that data. diff --git a/docs/source-fabric/guide/callbacks.rst b/docs/source-fabric/guide/callbacks.rst index 87fd58e15b..2a9a0e482d 100644 --- a/docs/source-fabric/guide/callbacks.rst +++ b/docs/source-fabric/guide/callbacks.rst @@ -44,7 +44,7 @@ Here is how that gets done in Fabric: optimizer.step() # Let a callback add some arbitrary processing at the appropriate place - # Give the callback access to some varibles + # Give the callback access to some variables fabric.call("on_train_batch_end", loss=loss, output=...) diff --git a/docs/source-pytorch/benchmarking/benchmarks.rst b/docs/source-pytorch/benchmarking/benchmarks.rst index 3ceb187c22..b3c434ea79 100644 --- a/docs/source-pytorch/benchmarking/benchmarks.rst +++ b/docs/source-pytorch/benchmarking/benchmarks.rst @@ -9,7 +9,7 @@ Time comparison --------------- We have set regular benchmarking against PyTorch vanilla training loop on with RNN and simple MNIST classifier as per of out CI. -In average for simple MNIST CNN classifier we are only about 0.06s slower per epoch, see detail chart bellow. +On average, for a simple MNIST CNN classifier we are only about 0.06s slower per epoch; see the detailed chart below. .. 
figure:: ../_static/images/benchmarks/figure-parity-times.png :alt: Speed parity to vanilla PT, created on 2020-12-16 diff --git a/docs/source-pytorch/cli/lightning_cli_advanced_3.rst b/docs/source-pytorch/cli/lightning_cli_advanced_3.rst index cc6ce6d7b0..e63844ae66 100644 --- a/docs/source-pytorch/cli/lightning_cli_advanced_3.rst +++ b/docs/source-pytorch/cli/lightning_cli_advanced_3.rst @@ -290,7 +290,7 @@ An example of a model that uses two optimizers is the following: cli = MyLightningCLI(MyModel, auto_configure_optimizers=False) -Note the type ``Callable[[Iterable], Optimizer]``, which denotes a function that receives a singe argument, some +Note the type ``Callable[[Iterable], Optimizer]``, which denotes a function that receives a single argument, some learnable parameters, and returns an optimizer instance. With this, from the command line it is possible to select the class and init arguments for each of the optimizers, as follows: diff --git a/docs/source-pytorch/common/hyperparameters.rst b/docs/source-pytorch/common/hyperparameters.rst index ce356277bd..096bba8ccb 100644 --- a/docs/source-pytorch/common/hyperparameters.rst +++ b/docs/source-pytorch/common/hyperparameters.rst @@ -4,7 +4,7 @@ Configure hyperparameters from the CLI -------------------------------------- You can use any CLI tool you want with Lightning. -For beginners, we recommand using Python's built-in argument parser. +For beginners, we recommend using Python's built-in argument parser. ---- diff --git a/docs/source-pytorch/common/index.rst b/docs/source-pytorch/common/index.rst index 03647c70b9..48f1946ac6 100644 --- a/docs/source-pytorch/common/index.rst +++ b/docs/source-pytorch/common/index.rst @@ -198,7 +198,7 @@ How-to Guides .. 
displayitem:: :header: Use a pretrained model - :description: Improve results with transer learning on pretrained models + :description: Improve results with transfer learning on pretrained models :button_link: ../advanced/pretrained.html :col_css: col-md-4 :height: 180 diff --git a/docs/source-pytorch/debug/debugging_intermediate.rst b/docs/source-pytorch/debug/debugging_intermediate.rst index 9895223372..b8e188ed19 100644 --- a/docs/source-pytorch/debug/debugging_intermediate.rst +++ b/docs/source-pytorch/debug/debugging_intermediate.rst @@ -62,7 +62,7 @@ You can keep an eye on the gradient norm by logging it in your LightningModule: This will plot the 2-norm of each layer to your experiment manager. If you notice the norm is going up, there's a good chance your gradients will explode. -One technique to stop exploding gradients is to clip the gradient when the norm is above a certain threashold: +One technique to stop exploding gradients is to clip the gradient when the norm is above a certain threshold: .. testcode:: diff --git a/docs/source-pytorch/ecosystem/metrics.rst b/docs/source-pytorch/ecosystem/metrics.rst index 8ec155f7f1..c44651f76d 100644 --- a/docs/source-pytorch/ecosystem/metrics.rst +++ b/docs/source-pytorch/ecosystem/metrics.rst @@ -66,7 +66,7 @@ The example below shows how to use the class-based interface: acc = metric.compute() print(f"Accuracy on all data: {acc}") - # Reseting internal state such that metric ready for new data + # Resetting internal state so that the metric is ready for new data metric.reset() ------------ diff --git a/docs/source-pytorch/extensions/strategy.rst b/docs/source-pytorch/extensions/strategy.rst index 624954aad6..e8cde20528 100644 --- a/docs/source-pytorch/extensions/strategy.rst +++ b/docs/source-pytorch/extensions/strategy.rst @@ -34,7 +34,7 @@ Built-in strategies can be selected in two ways. 1. Pass the shorthand name to the ``strategy`` Trainer argument 2. 
Import a Strategy from :mod:`lightning.pytorch.strategies`, instantiate it and pass it to the ``strategy`` Trainer argument -The latter allows you to configure further options on the specifc strategy. +The latter allows you to configure further options on the specific strategy. Here are some examples: .. code-block:: python diff --git a/docs/source-pytorch/glossary/index.rst b/docs/source-pytorch/glossary/index.rst index d632ecca93..e98bb050e9 100644 --- a/docs/source-pytorch/glossary/index.rst +++ b/docs/source-pytorch/glossary/index.rst @@ -147,7 +147,7 @@ Glossary .. displayitem:: :header: Half precision - :description: Using different numerical formats to save memory and run fatser + :description: Using different numerical formats to save memory and run faster :col_css: col-md-12 :button_link: ../common/precision.html :height: 100 @@ -196,7 +196,7 @@ Glossary .. displayitem:: :header: Log - :description: Outpus or results used for visualization and tracking + :description: Outputs or results used for visualization and tracking :col_css: col-md-12 :button_link: ../visualize/loggers.html :height: 100 @@ -301,7 +301,7 @@ Glossary .. displayitem:: :header: Style guide - :description: Best practices to improve readability and reproducability + :description: Best practices to improve readability and reproducibility :col_css: col-md-12 :button_link: ../starter/style_guide.html :height: 100 diff --git a/docs/source-pytorch/integrations/strategies/colossalai.rst b/docs/source-pytorch/integrations/strategies/colossalai.rst index 18005a12a9..92fd6ad4d3 100644 --- a/docs/source-pytorch/integrations/strategies/colossalai.rst +++ b/docs/source-pytorch/integrations/strategies/colossalai.rst @@ -54,7 +54,7 @@ See a full example of a benchmark with the a `GPT-2 model