enable codespell for docs & fixing +TPU (#18629)

* precommit/codespell

* run

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* disable

* more fixing

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Apply suggestions from code review

* more fixing

* json

* note

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Jirka Borovec 2023-09-26 17:54:44 +02:00 committed by GitHub
parent 894952d33e
commit 358336268f
47 changed files with 95 additions and 77 deletions


@@ -322,7 +322,7 @@ export PACKAGE_LIGHTNING=1 # <- this is the magic to use your version (not main
lightning run app app.py --cloud
```
By seting `PACKAGE_LIGHTNING=1`, lightning packages the lightning source code in your local directory in addition to your app source code and uploads them to the cloud.
By setting `PACKAGE_LIGHTNING=1`, lightning packages the lightning source code in your local directory in addition to your app source code and uploads them to the cloud.
### Bonus Workflow Tip


@@ -9,7 +9,7 @@ body:
placeholder: |
For typos and doc fixes, please go ahead and:
- For a simlpe typo or fix, please send directly a PR (no ned to create an issue)
- For a simlpe typo or fix, please send directly a PR (no need to create an issue)
- If you are not sure about the proper solution, please describe here your finding...
validations:


@@ -32,11 +32,11 @@ runs:
run: |
import os
fp = 'requirements.dump'
with open(fp) as fo:
lines = [ln.strip() for ln in fo.readlines()]
with open(fp) as fopen:
lines = [ln.strip() for ln in fopen.readlines()]
lines = [ln.split('+')[0] for ln in lines if '-e ' not in ln]
with open(fp, 'w') as fw:
fw.writelines([ln + os.linesep for ln in lines])
with open(fp, 'w') as fopen:
fopen.writelines([ln + os.linesep for ln in lines])
shell: python
- name: Dump wheels
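The inline step above rewrites `requirements.dump` in place, dropping editable installs (`-e ...`) and stripping `+local` version suffixes. A standalone sketch of the same filtering, using a throwaway temp file rather than the workflow's actual dump:

```python
import os
import tempfile

def freeze_cleanup(path):
    """Drop editable installs ('-e ...') and strip '+local' version suffixes,
    mirroring the inline Python step in the workflow above."""
    with open(path) as fopen:
        lines = [ln.strip() for ln in fopen.readlines()]
    lines = [ln.split('+')[0] for ln in lines if '-e ' not in ln]
    with open(path, 'w') as fopen:
        fopen.writelines([ln + os.linesep for ln in lines])
    return lines

# usage sketch with made-up pinned requirements
with tempfile.NamedTemporaryFile('w', suffix='.dump', delete=False) as tmp:
    tmp.write('torch==2.0.1+cu118\n-e ./lightning\nnumpy==1.24.0\n')
    fp = tmp.name
print(freeze_cleanup(fp))  # ['torch==2.0.1', 'numpy==1.24.0']
```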


@@ -36,7 +36,7 @@ Brief description of all our automation tools used for boosting development perf
| workflow file | action |
| ------------------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| .github/workflows/docs-build.yml | Run doctest, linkcheck and full HTML build. |
| .github/workflows/ci-rtfd.yml | Append link to the PR describtion with temporaty ReadTheDocs build docs. |
| .github/workflows/ci-rtfd.yml | Append link to the PR description with temporaty ReadTheDocs build docs. |
| .github/workflows/ci-check-md-links.yml <br> .github/markdown.links.config.json | Validate links in markdown files. |
## Code Quality
@@ -68,12 +68,12 @@ The published Docker Hub project is https://hub.docker.com/r/pytorchlightning/py
## Bots
| workflow file | action |
| ---------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| .github/mergify.yml | Label PRs as conflicts or ready, and request reviews if needed. |
| .github/stale.yml | Close inactive issues/PRs sometimes after adding the "won't fix" label to them. |
| .github/workflows/probot-auto-cc.yml <br> .github/lightning-probot.yml | Notify maintainers of interest depending on labels added to an issue We utilize lightning-probot forked from PyTorchs probot. |
| .github/workflows/probot-check-group.yml <br> .github/checkgroup.yml | Checks whether the relevant jobs were successfully run based on the changed files in the PR |
| .pre-commit-config.yaml | It applies a set of linters and formatters and can be registered with your local dev. If needed [bot](https://pre-commit.ci/) pushc chnges to each PRs. |
| .github/workflows/labeler-pr.yml, .github/labeler.yml | Integration of https://github.com/actions/labeler |
| .github/workflows/labeler-issue.yml | Parse user provided `lightning` version and set it as label. |
| workflow file | action |
| ---------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| .github/mergify.yml | Label PRs as conflicts or ready, and request reviews if needed. |
| .github/stale.yml | Close inactive issues/PRs sometimes after adding the "won't fix" label to them. |
| .github/workflows/probot-auto-cc.yml <br> .github/lightning-probot.yml | Notify maintainers of interest depending on labels added to an issue We utilize lightning-probot forked from PyTorchs probot. |
| .github/workflows/probot-check-group.yml <br> .github/checkgroup.yml | Checks whether the relevant jobs were successfully run based on the changed files in the PR |
| .pre-commit-config.yaml | It applies a set of linters and formatters and can be registered with your local dev. If needed [bot](https://pre-commit.ci/) pushc changes to each PRs. |
| .github/workflows/labeler-pr.yml, .github/labeler.yml | Integration of https://github.com/actions/labeler |
| .github/workflows/labeler-issue.yml | Parse user provided `lightning` version and set it as label. |


@@ -4,7 +4,7 @@ on:
workflow_dispatch:
inputs:
pattern:
description: "patter for cleaning cache"
description: "pattern for cleaning cache"
default: "pip-|conda"
required: false
type: string


@@ -83,11 +83,11 @@ jobs:
run: |
import json, os
fname = "versions.json"
with open(fname, encoding="utf-8") as fo:
vers = json.load(fo)
with open(fname, encoding="utf-8") as fopen:
vers = json.load(fopen)
vers["lightning_version"] = os.getenv('TAG')
with open(fname, "w", encoding="utf-8") as fw:
json.dump(vers, fw, indent=2)
with open(fname, "w", encoding="utf-8") as fopen:
json.dump(vers, fopen, indent=2)
shell: python
- run: cat versions.json
- name: GIT commit & push
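The step above patches `versions.json` with the release tag taken from the `TAG` environment variable. A minimal standalone sketch of that update, using a temp file and a made-up tag value:

```python
import json
import tempfile

def set_lightning_version(fname, tag):
    """Load versions.json, overwrite 'lightning_version', and write it back,
    as the inline workflow step above does with os.getenv('TAG')."""
    with open(fname, encoding="utf-8") as fopen:
        vers = json.load(fopen)
    vers["lightning_version"] = tag
    with open(fname, "w", encoding="utf-8") as fopen:
        json.dump(vers, fopen, indent=2)
    return vers

# usage sketch: a throwaway versions.json and a hypothetical tag
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"lightning_version": "0.0.0", "other": "unchanged"}, tmp)
    fname = tmp.name
print(set_lightning_version(fname, "2.0.9"))
```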
@@ -136,7 +136,7 @@ jobs:
matrix:
name: ["APP", "FABRIC", "PYTORCH", "LIGHTNING"]
steps:
- uses: actions/checkout@v3 # needed for local action bellow
- uses: actions/checkout@v3 # needed for local action below
- uses: actions/download-artifact@v3
with:
name: dist-packages-${{ github.sha }}
@@ -161,7 +161,7 @@ jobs:
matrix:
name: ["APP", "FABRIC", "PYTORCH", "LIGHTNING"]
steps:
- uses: actions/checkout@v3 # needed for local action bellow
- uses: actions/checkout@v3 # needed for local action below
- uses: actions/download-artifact@v3
with:
name: dist-packages-${{ github.sha }}


@@ -102,13 +102,13 @@ jobs:
run: |
import os
fname = f'tests/tests_${{ matrix.pkg-name }}/run_tpu_tests.sh'
with open(fname) as fo:
data = fo.read()
with open(fname) as fopen:
data = fopen.read()
data = data.replace('{PYTORCH_VERSION}', os.environ["XLA_VER"])
data = data.replace('{RUNTIME}', '${{ matrix.runtime }}')
print(data)
with open(fname, "w") as fw:
fw.write(data)
with open(fname, "w") as fopen:
fopen.write(data)
shell: python
- name: Create node
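The TPU step above templates `run_tpu_tests.sh` by substituting `{PYTORCH_VERSION}` and `{RUNTIME}` placeholders. A sketch of that substitution with a made-up two-line template; note it uses `str.replace` rather than `str.format`, so any other braces in the shell script survive untouched:

```python
def render_tpu_script(template, pytorch_version, runtime):
    """Fill the {PYTORCH_VERSION} and {RUNTIME} placeholders, as the
    workflow step above does on the test script before running it."""
    data = template.replace('{PYTORCH_VERSION}', pytorch_version)
    data = data.replace('{RUNTIME}', runtime)
    return data

# usage sketch with hypothetical template contents
template = "export XLA_VER={PYTORCH_VERSION}\nexport RUNTIME={RUNTIME}\n"
print(render_tpu_script(template, "2.0", "xrt"))
```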


@@ -29,10 +29,11 @@ repos:
- id: trailing-whitespace
# ignoring Wills's wild changes
exclude: README.md
- id: check-json
- id: check-yaml
- id: check-toml
- id: check-docstring-first
- id: check-executables-have-shebangs
- id: check-toml
- id: check-case-conflict
- id: check-added-large-files
args: ["--maxkb=350", "--enforce-all"]
@@ -56,6 +57,13 @@ repos:
args: [--py38-plus]
name: Upgrade code
- repo: https://github.com/codespell-project/codespell
rev: v2.2.5
hooks:
- id: codespell
additional_dependencies: [tomli]
#args: ["--write-changes"] # uncomment if you want to get automatic fixing
- repo: https://github.com/PyCQA/docformatter
rev: v1.7.3
hooks:
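The codespell hook added above works from a dictionary that maps common misspellings to corrections and flags matches in the scanned text. A toy illustration of that idea (a tiny hand-rolled dictionary, not codespell's real one or its API):

```python
import re

# A tiny hand-rolled misspelling dictionary -- codespell ships a much larger one.
TYPOS = {"seting": "setting", "bellow": "below", "succesful": "successful"}

def spellcheck(text):
    """Return (word, suggestion) pairs for known misspellings, case-insensitively."""
    hits = []
    for word in re.findall(r"[A-Za-z]+", text):
        fix = TYPOS.get(word.lower())
        if fix:
            hits.append((word, fix))
    return hits

print(spellcheck("By seting this flag, see the chart bellow."))
# -> [('seting', 'setting'), ('bellow', 'below')]
```

The `--write-changes` flag mentioned in the commented-out args applies such suggestions in place instead of only reporting them.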


@@ -561,7 +561,7 @@ ______________________________________________________________________
###### Self-supervised Learning
- [CPC transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#cpc-transforms)
- [Moco v2 tranforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#moco-v2-transforms)
- [Moco v2 transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#moco-v2-transforms)
- [SimCLR transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#simclr-transforms)
###### Convolutional Architectures


@@ -56,7 +56,7 @@ RUN \
COPY requirements/docs.txt docs.txt
# NOTE: exlude specific `botocore` bellow as is seems to be corrupted version
# NOTE: exclude specific `botocore` below as is seems to be corrupted version
RUN \
wget https://bootstrap.pypa.io/get-pip.py --progress=bar:force:noscroll --no-check-certificate && \
python get-pip.py && \


@@ -39,7 +39,7 @@ RUN \
mv lightning-*/ lightning ; \
rm *.zip ; \
fi && \
# otherwise there is collision with folder name ans pkg name on Pypi
# otherwise there is collision with folder name and pkg name on Pypi
cd lightning && \
pip install setuptools==59.5.0 && \
pip install '.[extra,loggers,strategies]' --no-cache-dir && \


@@ -11,7 +11,7 @@ We are going to use a python performance testing tool called `Locust <https://gi
.. literalinclude:: ./locust_component.py
Finally, once the component is done, we need to crate a ``locustfile.py`` file which defines the format of the request to send to your model server.
Finally, once the component is done, we need to create a ``locustfile.py`` file which defines the format of the request to send to your model server.
The endpoint to hit has the following format: ``/v2/models/{MODEL_NAME}/versions/{VERSION}/infer``.


@@ -58,7 +58,7 @@ Install this App by typing:
lightning install app lightning/quick-start
Verify the App was succesfully installed:
Verify the App was successfully installed:
.. code-block:: bash


@@ -29,7 +29,7 @@ for more granular control.
├── requirements.txt # Requirements specific to the 'train' work
└── train.py # Source file for the LightningWork
The requirements.txt file must be located in the same directry as the source file of the LightningWork.
The requirements.txt file must be located in the same directory as the source file of the LightningWork.
When the LightningWork starts up, it will pick up the requirements file if present and install all listed packages.
.. note::


@@ -9,7 +9,7 @@ Build Configuration (Intermediate)
**Level:** Intermediate
When a LightningWork machine starts up in the cloud, it uses a lightweight operating system with essential packages pre-installed.
If you need to install additional system packages or run other configuration steps before your code executes on that machine, it is possible to do so by createing a custom
If you need to install additional system packages or run other configuration steps before your code executes on that machine, it is possible to do so by creating a custom
:class:`~lightning.app.utilities.packaging.build_config.BuildConfig`:
1. Subclass :class:`~lightning.app.utilities.packaging.build_config.BuildConfig`:


@@ -285,7 +285,7 @@ Frequently Asked Questions
- **Q: Is it possible to reference any file using the Lightning lit:// path notation?**
Yes, but only files for which the app has write permissions can be copied from Work to Work (apps don't run with root priviliges).
Yes, but only files for which the app has write permissions can be copied from Work to Work (apps don't run with root privileges).
- **Q: Can I access the Lightning Storage in my UI (StreamLit, Web, ...)?**


@@ -12,4 +12,4 @@ export PACKAGE_LIGHTNING=1 # <- this is the magic to use your version (not main
lightning run app app.py --cloud
```
By seting `PACKAGE_LIGHTNING=1`, lightning packages the lightning source code in your local directory in addition to your app source code and uploads them to the cloud.
By setting `PACKAGE_LIGHTNING=1`, lightning packages the lightning source code in your local directory in addition to your app source code and uploads them to the cloud.


@@ -1,7 +1,7 @@
###############################
Level: Start dynamic components
###############################
**Audience:** Users who want to run a Lightning Component in parallel (asynchroneously).
**Audience:** Users who want to run a Lightning Component in parallel (asynchronously).
**Prereqs:** You must have finished the :doc:`Basic levels <../basic/index>`.


@@ -1,7 +1,7 @@
##############################################
Level 6: Run a Lightning component in parallel
##############################################
**Audience:** Users who want to run a Lightning Component in parallel (asynchroneously).
**Audience:** Users who want to run a Lightning Component in parallel (asynchronously).
**Prereqs:** You must have finished :doc:`Level 5 <debug_a_lightning_app>`.


@@ -104,10 +104,10 @@ Example Continuous deployment: Every time a model saves a checkpoint, we redeplo
Send a large variable between two components
********************************************
For large variables such as arrays, tensors, embeddings and so on, use Payload to enable
transfering them across components.
transferring them across components.
.. lit_tabs::
:descriptions: Let's define a component to simulate generating embeddings (from a DB, feature store, etc...); This component simulates a server that will use the embeddings.; Run the component to generate the embeddings; Simulate embeddings as an array. Here you would query a DB, load from a feature store or disk or even use a neural network to extract the embedding.; Allow the embeddings to be transfered efficiently by wrapping them in the Payload object.; Pass the variable to the EmbeddingServer (just the pointer).; The data gets transfered once you use the .value attribute in the other component.
:descriptions: Let's define a component to simulate generating embeddings (from a DB, feature store, etc...); This component simulates a server that will use the embeddings.; Run the component to generate the embeddings; Simulate embeddings as an array. Here you would query a DB, load from a feature store or disk or even use a neural network to extract the embedding.; Allow the embeddings to be transferred efficiently by wrapping them in the Payload object.; Pass the variable to the EmbeddingServer (just the pointer).; The data gets transferred once you use the .value attribute in the other component.
:code_files: ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py; ./scripts/toy_payload.py;
:highlights: 5-13; 15-19; 28; 12; 13; 29; 18
:enable_run: true


@@ -112,7 +112,7 @@ These methods will skew your tests, so use them when needed.
The Test
^^^^^^^^
We provide ``application_testing`` as a helper funtion to get your application up and running for testing. It uses ``click``\ 's invocation tooling underneath.
We provide ``application_testing`` as a helper function to get your application up and running for testing. It uses ``click``\ 's invocation tooling underneath.
.. code-block::


@@ -55,7 +55,7 @@ Once it's imported, use it inside your app, this example uses it inside a React
************************
Update the Lightning app
************************
Use `updateLightningState` to update the lightning app. Here we update a vairable called counter.
Use `updateLightningState` to update the lightning app. Here we update a variable called counter.
.. code-block::
:emphasize-lines: 11


@@ -32,7 +32,7 @@ Run this command to set up a react-ui template for a component:
lightning init react-ui
If everything was succesful, run the example_app.py listed in the output of the command:
If everything was successful, run the example_app.py listed in the output of the command:
.. code:: bash


@@ -119,7 +119,7 @@ To implement a LightningWork, simply subclass ``LightningWork`` and define the r
sleep(2.0)
print(f"{i} LightningWork: work that is long running or may never end (like a server)")
A LightningWork must always be attached to a LightningFlow and explicitely asked to ``run()``:
A LightningWork must always be attached to a LightningFlow and explicitly asked to ``run()``:
.. code:: python
:emphasize-lines: 13, 16


@@ -62,8 +62,8 @@ We recommend your app contain the following files:
.. code:: bash
.
├── .lightning (auto-generated- conatins Lightning configuration)
├── .lightning (auto-generated- contains Lightning configuration)
├── .lightningignore (contains files not to upload to the cloud)
├── app.py
├── README.md (optional- a markdown description of your app)
└── requirements.txt (optional- conatins all your app dependencies)
└── requirements.txt (optional- contains all your app dependencies)


@@ -59,7 +59,7 @@ You can run Lightning Apps for free on the Public Lightning cloud with a single
:tag: Intermediate
.. displayitem::
:header: Manage App Dependancies
:header: Manage App Dependencies
:description: Configure your python requirements or use a custom docker image
:col_css: col-md-4
:button_link: ../../glossary/build_config/build_config.html
@@ -68,7 +68,7 @@ You can run Lightning Apps for free on the Public Lightning cloud with a single
.. displayitem::
:header: Share Files Between Works
:description: Learn more about data transfering
:description: Learn more about data transferring
:col_css: col-md-4
:button_link: ../../glossary/storage/storage.html
:height: 150


@@ -1,7 +1,7 @@
#############################
Run LightningWork in parallel
#############################
**Audience:** Users who want to run a LightningWork in parallel (asynchroneously).
**Audience:** Users who want to run a LightningWork in parallel (asynchronously).
**Prereqs:** You must have finished the :doc:`Basic levels <../levels/basic/index>`.


@@ -13,7 +13,7 @@ You can turn caching on or off:
# Run only when the input arguments change (default)
work = MyWork(cache_calls=True)
# Run everytime regardless of whether input arguments change or not
# Run every time regardless of whether input arguments change or not
work = MyWork(cache_calls=False)
To better understand this, imagine that every day you want to sequentially download and process some data and then train a model on that data.
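The doc above says `cache_calls=True` re-runs the work only when its input arguments change. A toy sketch of that caching rule in plain Python (an illustrative stand-in, not the actual LightningWork implementation):

```python
class CachedWork:
    """Toy stand-in: with cache_calls=True, `run` executes only when its
    arguments differ from every previously seen call."""
    def __init__(self, cache_calls=True):
        self.cache_calls = cache_calls
        self.seen = set()
        self.run_count = 0

    def run(self, *args):
        if self.cache_calls and args in self.seen:
            return  # same inputs as a previous call: skip the work
        self.seen.add(args)
        self.run_count += 1  # placeholder for the real work body

work = CachedWork(cache_calls=True)
work.run("data.csv")
work.run("data.csv")   # skipped, args unchanged
work.run("other.csv")  # runs, args changed
print(work.run_count)  # 2
```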


@@ -44,7 +44,7 @@ Here is how that gets done in Fabric:
optimizer.step()
# Let a callback add some arbitrary processing at the appropriate place
# Give the callback access to some varibles
# Give the callback access to some variables
fabric.call("on_train_batch_end", loss=loss, output=...)


@@ -9,7 +9,7 @@ Time comparison
---------------
We have set regular benchmarking against PyTorch vanilla training loop on with RNN and simple MNIST classifier as per of out CI.
In average for simple MNIST CNN classifier we are only about 0.06s slower per epoch, see detail chart bellow.
In average for simple MNIST CNN classifier we are only about 0.06s slower per epoch, see detail chart below.
.. figure:: ../_static/images/benchmarks/figure-parity-times.png
:alt: Speed parity to vanilla PT, created on 2020-12-16


@@ -290,7 +290,7 @@ An example of a model that uses two optimizers is the following:
cli = MyLightningCLI(MyModel, auto_configure_optimizers=False)
Note the type ``Callable[[Iterable], Optimizer]``, which denotes a function that receives a singe argument, some
Note the type ``Callable[[Iterable], Optimizer]``, which denotes a function that receives a single argument, some
learnable parameters, and returns an optimizer instance. With this, from the command line it is possible to select the
class and init arguments for each of the optimizers, as follows:


@@ -4,7 +4,7 @@ Configure hyperparameters from the CLI
--------------------------------------
You can use any CLI tool you want with Lightning.
For beginners, we recommand using Python's built-in argument parser.
For beginners, we recommend using Python's built-in argument parser.
----


@@ -198,7 +198,7 @@ How-to Guides
.. displayitem::
:header: Use a pretrained model
:description: Improve results with transer learning on pretrained models
:description: Improve results with transfer learning on pretrained models
:button_link: ../advanced/pretrained.html
:col_css: col-md-4
:height: 180


@@ -62,7 +62,7 @@ You can keep an eye on the gradient norm by logging it in your LightningModule:
This will plot the 2-norm of each layer to your experiment manager.
If you notice the norm is going up, there's a good chance your gradients will explode.
One technique to stop exploding gradients is to clip the gradient when the norm is above a certain threashold:
One technique to stop exploding gradients is to clip the gradient when the norm is above a certain threshold:
.. testcode::


@@ -66,7 +66,7 @@ The example below shows how to use the class-based interface:
acc = metric.compute()
print(f"Accuracy on all data: {acc}")
# Reseting internal state such that metric ready for new data
# Resetting internal state such that metric ready for new data
metric.reset()
------------


@@ -34,7 +34,7 @@ Built-in strategies can be selected in two ways.
1. Pass the shorthand name to the ``strategy`` Trainer argument
2. Import a Strategy from :mod:`lightning.pytorch.strategies`, instantiate it and pass it to the ``strategy`` Trainer argument
The latter allows you to configure further options on the specifc strategy.
The latter allows you to configure further options on the specific strategy.
Here are some examples:
.. code-block:: python


@@ -147,7 +147,7 @@ Glossary
.. displayitem::
:header: Half precision
:description: Using different numerical formats to save memory and run fatser
:description: Using different numerical formats to save memory and run faster
:col_css: col-md-12
:button_link: ../common/precision.html
:height: 100
@@ -196,7 +196,7 @@ Glossary
.. displayitem::
:header: Log
:description: Outpus or results used for visualization and tracking
:description: Outputs or results used for visualization and tracking
:col_css: col-md-12
:button_link: ../visualize/loggers.html
:height: 100
@@ -301,7 +301,7 @@ Glossary
.. displayitem::
:header: Style guide
:description: Best practices to improve readability and reproducability
:description: Best practices to improve readability and reproducibility
:col_css: col-md-12
:button_link: ../starter/style_guide.html
:height: 100


@@ -54,7 +54,7 @@ See a full example of a benchmark with the a `GPT-2 model <https://github.com/hp
* It only supports a single optimizer, which must be ``colossalai.nn.optimizer.CPUAdam`` or ``colossalai.nn.optimizer.
HybridAdam`` now. You can set ``adamw_mode`` to False to use normal Adam. Noticing that ``HybridAdam`` is highly optimized, it uses fused CUDA kernel and parallel CPU kernel.
It is recomended to use ``HybridAdam``, since it updates parameters in GPU and CPU both.
It is recommended to use ``HybridAdam``, since it updates parameters in GPU and CPU both.
* Your model must be created using the :meth:`~lightning.pytorch.core.LightningModule.configure_model` method.


@@ -11,7 +11,7 @@
- `PR7339`_
* - accessed ``Trainer.truncated_bptt_steps``
- swicth to manual optimization
- switch to manual optimization
- `PR7323`_
* - called ``LightningModule.write_predictions`` and ``LightningModule.write_predictions_dict``


@@ -12,7 +12,7 @@ sdiff train_torch.py train_fabric.py
#### 1. Image Classifier with Vanilla PyTorch
Trains a simple CNN over MNIST using vanilla PyTorch. It only supports singe GPU training.
Trains a simple CNN over MNIST using vanilla PyTorch. It only supports single GPU training.
```bash
# CPU


@@ -57,6 +57,16 @@ wrap-summaries = 119
wrap-descriptions = 120
blank = true
[tool.codespell]
# Todo: enable also python files in a next step
skip = '*.py'
quiet-level = 3
# comma separated list of words; waiting for:
# https://github.com/codespell-project/codespell/issues/2839#issuecomment-1731601603
# also adding links until they ignored by its: nature
# https://github.com/codespell-project/codespell/issues/2243#issuecomment-1732019960
ignore-words-list = "te, compiletime"
[tool.ruff]
line-length = 120


@@ -257,7 +257,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Added
- Added `Lightning{Flow,Work}.lightningignores` attributes to programmatically ignore files before uploading to the cloud ([#15818](https://github.com/Lightning-AI/lightning/pull/15818))
- Added a progres bar while connecting to an app through the CLI ([#16035](https://github.com/Lightning-AI/lightning/pull/16035))
- Added a progress bar while connecting to an app through the CLI ([#16035](https://github.com/Lightning-AI/lightning/pull/16035))
- Support running on multiple clusters ([#16016](https://github.com/Lightning-AI/lightning/pull/16016))
- Added guards to cluster deletion from cli ([#16053](https://github.com/Lightning-AI/lightning/pull/16053))
@@ -384,13 +384,13 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Added support for running Lightning App with VSCode IDE debugger ([#15590](https://github.com/Lightning-AI/lightning/pull/15590))
- Added `bi-directional` delta updates between the flow and the works ([#15582](https://github.com/Lightning-AI/lightning/pull/15582))
- Added `--setup` flag to `lightning run app` CLI command allowing for dependency installation via app comments ([#15577](https://github.com/Lightning-AI/lightning/pull/15577))
- Auto-upgrade / detect environment mis-match from the CLI ([#15434](https://github.com/Lightning-AI/lightning/pull/15434))
- Auto-upgrade / detect environment mismatch from the CLI ([#15434](https://github.com/Lightning-AI/lightning/pull/15434))
- Added Serve component ([#15609](https://github.com/Lightning-AI/lightning/pull/15609))
### Changed
- Changed the `flow.flows` to be recursive wont to align the behavior with the `flow.works` ([#15466](https://github.com/Lightning-AI/lightning/pull/15466))
- Changed the `flow.flows` to be recursive won't to align the behavior with the `flow.works` ([#15466](https://github.com/Lightning-AI/lightning/pull/15466))
- The `params` argument in `TracerPythonScript.run` no longer prepends `--` automatically to parameters ([#15518](https://github.com/Lightning-AI/lightning/pull/15518))
- Only check versions / env when not in the cloud ([#15504](https://github.com/Lightning-AI/lightning/pull/15504))
- Periodically sync database to the drive ([#15441](https://github.com/Lightning-AI/lightning/pull/15441))


@@ -455,7 +455,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Renamed `TQDMProgressBar.main_progress_bar` to `TQDMProgressBar.train_progress_bar` ([#16695](https://github.com/Lightning-AI/lightning/pull/16695))
- Marked the progress tracking classes as protected ([#17009](https://github.com/Lightning-AI/lightning/pull/17009))
- Marked the `lightning.pytorch.trainer.configuration_validator.verify_loop_configurations` function as protected ([#17009](https://github.com/Lightning-AI/lightning/pull/17009))
- Marked the `lightning.pytorch.utiltiies.distirbuted.register_ddp_comm_hook` function as protected ([#17009](https://github.com/Lightning-AI/lightning/pull/17009))
- Marked the `lightning.pytorch.utiltiies.distributed.register_ddp_comm_hook` function as protected ([#17009](https://github.com/Lightning-AI/lightning/pull/17009))
- Marked `lightning.pytorch.utilities.supporters.CombinedDataset` as protected ([#16714](https://github.com/Lightning-AI/lightning/pull/16714))
- Marked the `{Accelerator,Signal,Callback,Checkpoint,Data,Logger}Connector` classes as protected ([#17008](https://github.com/Lightning-AI/lightning/pull/17008))
- Marked the `lightning.pytorch.trainer.connectors.signal_connector.HandlersCompose` class as protected ([#17008](https://github.com/Lightning-AI/lightning/pull/17008))
@@ -864,7 +864,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Changed
- The `Trainer.{fit,validate,test,predict,tune}` methods now raise a useful error message if the input is not a `LightningModule` ([#13892](https://github.com/Lightning-AI/lightning/pull/13892))
- Raised a `MisconfigurationException` if batch transfer hooks are overriden with `IPUAccelerator` ([#13961](https://github.com/Lightning-AI/lightning/pull/13961))
- Raised a `MisconfigurationException` if batch transfer hooks are overridden with `IPUAccelerator` ([#13961](https://github.com/Lightning-AI/lightning/pull/13961))
- Replaced the unwrapping logic in strategies with direct access to unwrapped `LightningModule` ([#13738](https://github.com/Lightning-AI/lightning/pull/13738))
- Enabled `on_before_batch_transfer` for `DPStrategy` and `IPUAccelerator` ([#14023](https://github.com/Lightning-AI/lightning/pull/14023))
- When resuming training with Apex enabled, the `Trainer` will now raise an error ([#14341](https://github.com/Lightning-AI/lightning/pull/14341))
@@ -891,7 +891,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Deprecated
- Deprecated `LightningDeepSpeedModule` ([#14000](https://github.com/Lightning-AI/lightning/pull/14000))
- Deprecated `amp_level` from `Trainer` in favour of passing it explictly via precision plugin ([#13898](https://github.com/Lightning-AI/lightning/pull/13898))
- Deprecated `amp_level` from `Trainer` in favour of passing it explicitly via precision plugin ([#13898](https://github.com/Lightning-AI/lightning/pull/13898))
- Deprecated the calls to `pl.utiltiies.meta` functions in favor of built-in https://github.com/pytorch/torchdistx support ([#13868](https://github.com/Lightning-AI/lightning/pull/13868))
- Deprecated the `unwrap_lightning_module` and `unwrap_lightning_module_sharded` utility functions in favor of accessing the unwrapped `LightningModule` on the strategy directly ([#13738](https://github.com/Lightning-AI/lightning/pull/13738))
- Deprecated the `pl_module` argument in `LightningParallelModule`, `LightningDistributedModule`, `LightningShardedDataParallel`, `LightningBaguaModule` and `LightningDeepSpeedModule` wrapper classes ([#13738](https://github.com/Lightning-AI/lightning/pull/13738))
@@ -4619,7 +4619,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Added
- Added supoort for multiple validation dataloaders
- Added support for multiple validation dataloaders
- Added support for latest test-tube logger (optimised for `torch==1.2.0`)
### Changed


@@ -268,8 +268,8 @@ ______________________________________________________________________
## When to use Fabric?
- **Minimum code changes**- You want to scale your PyTorch model to use multi-GPU or use advanced strategies like DeepSpeed without having to refactor. You dont care about structuring your code- you just want to scale it as fast as possible.
- **Maxmium control**- Write your own training and/or inference logic down to the individual optimizer calls. You arent forced to conform to a standardized epoch-based training loop like the one in Lightning Trainer. You can do flexible iteration based training, meta-learning, cross-validation and other types of optimization algorithms without digging into framework internals. This also makes it super easy to adopt Fabric in existing PyTorch projects to speed-up and scale your models without the compromise on large refactors. Just remember: With great power comes a great responsibility.
- **Maxmium flexibility**- You want to have full control over your entire training- in Fabric all features are opt-in, and it provides you with a tool box of primitives so you can build your own Trainer.
- **Maximum control**- Write your own training and/or inference logic down to the individual optimizer calls. You arent forced to conform to a standardized epoch-based training loop like the one in Lightning Trainer. You can do flexible iteration based training, meta-learning, cross-validation and other types of optimization algorithms without digging into framework internals. This also makes it super easy to adopt Fabric in existing PyTorch projects to speed-up and scale your models without the compromise on large refactors. Just remember: With great power comes a great responsibility.
- **Maximum flexibility**- You want to have full control over your entire training- in Fabric all features are opt-in, and it provides you with a tool box of primitives so you can build your own Trainer.
## When to use the [Lightning Trainer](https://lightning.ai/docs/pytorch/stable/common/trainer.html)?


@@ -339,7 +339,7 @@ ______________________________________________________________________
###### Self-supervised Learning
- [CPC transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#cpc-transforms)
- [Moco v2 tranforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#moco-v2-transforms)
- [Moco v2 transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#moco-v2-transforms)
- [SimCLR transforms](https://lightning-bolts.readthedocs.io/en/stable/transforms/self_supervised.html#simclr-transforms)
###### Convolutional Architectures


@@ -3,7 +3,7 @@ set -e # exit on error
echo "--- Install packages ---"
# show what's already installed
pip3 list
# typing-extensions==4.5.0 comes pre-installed in the environment, and pydantic doesnt support that, however,
# typing-extensions==4.5.0 comes pre-installed in the environment, and pydantic doesn't support that, however,
# pip cannot upgrade it because it's in the system folder: needs sudo
sudo pip3 install -U typing-extensions
# set particular PyTorch version


@@ -3,7 +3,7 @@ set -e # exit on error
echo "--- Install packages ---"
# show what's already installed
pip3 list
# typing-extensions==4.5.0 comes pre-installed in the environment, and pydantic doesnt support that, however,
# typing-extensions==4.5.0 comes pre-installed in the environment, and pydantic doesn't support that, however,
# pip cannot upgrade it because it's in the system folder: needs sudo
sudo pip3 install -U typing-extensions
# set particular PyTorch version