docs: enable Sphinx linter & fixing (#19515)

* docs: enable Sphinx linter
* fixes
Jirka Borovec 2024-02-26 16:20:33 +01:00 committed by GitHub
parent e43820a4be
commit cf3553cdb5
15 changed files with 55 additions and 50 deletions


@@ -71,6 +71,11 @@ repos:

        additional_dependencies: [tomli]
        args: ["--in-place"]
  - repo: https://github.com/sphinx-contrib/sphinx-lint
    rev: v0.9.1
    hooks:
      - id: sphinx-lint
  - repo: https://github.com/asottile/yesqa
    rev: v1.5.0
    hooks:

@@ -86,10 +91,10 @@ repos:

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: "v0.2.0"
    hooks:
      - id: ruff-format
        args: ["--preview"]
      - id: ruff
        args: ["--fix", "--preview"]
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.17


@@ -87,13 +87,13 @@ And here's the output you get when running the App using the **Lightning CLI**:

.. code-block:: console

    INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
    State: {'works': {'w': {'vars': {'counter': 1}}}}
    State: {'works': {'w': {'vars': {'counter': 2}}}}
    State: {'works': {'w': {'vars': {'counter': 3}}}}
    State: {'works': {'w': {'vars': {'counter': 3}}}}
    State: {'works': {'w': {'vars': {'counter': 4}}}}
    ...
----


@@ -41,9 +41,9 @@ There are a couple of ways you can add a dynamic Work:

    def run(self):
        if not hasattr(self, "work"):
            # The `Work` component is created and attached here.
            setattr(self, "work", Work())
        # Run the `Work` component.
        getattr(self, "work").run()

**OPTION 2:** Use the built-in Lightning classes :class:`~lightning.app.structures.Dict` or :class:`~lightning.app.structures.List`

@@ -60,7 +60,7 @@ There are a couple of ways you can add a dynamic Work:

    def run(self):
        if "work" not in self.dict:
            # The `Work` component is attached here.
            self.dict["work"] = Work()
        self.dict["work"].run()
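For context on the snippets above: the lazy-attachment pattern they document can be sketched with plain Python stand-ins (the ``Work`` and ``Flow`` classes below are hypothetical placeholders, not the Lightning API):

```python
class Work:
    """Stand-in for a Lightning Work: does some computation in run()."""

    def __init__(self):
        self.counter = 0

    def run(self):
        self.counter += 1


class Flow:
    """Stand-in flow that attaches a child Work lazily on first run."""

    def run(self):
        if not hasattr(self, "work"):
            # The child is created and attached only once.
            setattr(self, "work", Work())
        # Every later call reuses the same attached child.
        getattr(self, "work").run()


flow = Flow()
for _ in range(3):
    flow.run()
print(flow.work.counter)  # the one attached Work instance ran three times
```

The point of the guard is that repeated ``run()`` calls keep mutating the same child rather than creating a fresh one each time.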


@ -24,4 +24,4 @@ Environment variables are available in all Flows and Works, and can be accessed
print(os.environ["BAZ"]) # FAZ
.. note::

    Environment variables are not encrypted. For sensitive values, we recommend using :ref:`Encrypted Secrets <secrets>`.
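A minimal runnable sketch of the access pattern described above (``BAZ``/``FAZ`` mirror the illustrative names in the docs, and ``MISSING_VAR`` is hypothetical):

```python
import os

# Hypothetical configuration value; in a real deployment the shell or the
# platform would set this before the app process starts.
os.environ["BAZ"] = "FAZ"

# Values in os.environ are plain text: any code in the process can read them.
print(os.environ["BAZ"])  # FAZ

# .get avoids a KeyError when a variable was never set.
print(os.environ.get("MISSING_VAR"))  # None
```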


@ -8,7 +8,7 @@ Encrypted Secrets allow you to pass private data to your apps, like API keys, ac
Secrets provide you with a secure way to store this data in a way that is accessible to Apps so that they can authenticate third-party services/solutions.
.. tip::

    For non-sensitive configuration values, we recommend using :ref:`plain-text Environment Variables <environment_variables>`.
************
Add a secret


@@ -34,7 +34,7 @@ Now, imagine you have implemented a **KerasScriptRunner** component for training

Here are the best practices steps before sharing the component:

* **Testing**: Ensure your component is well tested by following the :doc:`../testing` guide.
* **Documented**: Ensure your component has a docstring and comes with some usage explanations.

.. note:: As a Lightning user, it helps to implement your components thinking someone else is going to use them.


@@ -50,10 +50,10 @@ And here's the output you get when running the App using **Lightning CLI**:

.. code-block:: console

    INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
    State: {'works': {'w': {'vars': {'counter': 1}}}}
    State: {'works': {'w': {'vars': {'counter': 2}}}}
    State: {'works': {'w': {'vars': {'counter': 3}}}}
    State: {'works': {'w': {'vars': {'counter': 3}}}}
    State: {'works': {'w': {'vars': {'counter': 4}}}}
    ...


@@ -47,7 +47,7 @@ Update React <-- Lightning app
******************************

To change the React app from the Lightning app, use the values from the ``lightningState``.
In this example, when the ``react_ui.counter`` increases in the Lightning app:
.. literalinclude:: ../../../../../src/lightning/app/cli/react-ui-template/example_app.py
:emphasize-lines: 18, 24


@@ -55,8 +55,8 @@ Usage

Minor code changes are required for the user to get started with the Intel® Neural Compressor quantization API. To construct the quantization process, users can specify the below settings via Python code:

1. Calibration Dataloader (needed for post-training static quantization)
2. Evaluation Dataloader and Metric

The code changes that are required for Intel® Neural Compressor are highlighted with comments in the line above.


@@ -32,8 +32,8 @@ As datasets grow in size and the number of nodes scales, loading training data c

The `StreamingDataset <https://github.com/mosaicml/streaming>`__ can make training on large datasets from cloud storage
as fast, cheap, and scalable as possible.

This library uses a custom-built :class:`~torch.utils.data.IterableDataset`. The library recommends iterating through it
via a regular :class:`~torch.utils.data.DataLoader`. This means that support in the ``Trainer`` is seamless:
.. code-block:: python
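The cross-references fixed above point at torch's iterable-dataset protocol. As a rough sketch of why a streaming dataset composes naturally with a loader loop (plain Python stand-ins, not the actual torch or StreamingDataset classes):

```python
class ShardStream:
    """Stand-in for an iterable-style dataset: yields samples one at a time
    instead of supporting random access by index."""

    def __init__(self, shards):
        self.shards = shards  # e.g. lists of samples fetched shard-by-shard

    def __iter__(self):
        for shard in self.shards:
            # Samples are produced lazily, so the full dataset never has
            # to sit in memory (or on local disk) at once.
            yield from shard


def simple_loader(dataset, batch_size):
    """Stand-in for a DataLoader loop: groups streamed samples into batches."""
    batch = []
    for sample in dataset:
        batch.append(sample)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch


stream = ShardStream([[1, 2, 3], [4, 5]])
print(list(simple_loader(stream, batch_size=2)))  # [[1, 2], [3, 4], [5]]
```

Because the loader only ever asks for "the next sample", any iterable source — local lists here, cloud-backed shards in StreamingDataset — plugs in without changes to the training loop.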


@@ -660,7 +660,7 @@ Hydra makes every aspect of the NeMo model, including the PyTorch Lightning Trai

Using State-Of-The-Art Pre-trained TTS Model
--------------------------------------------

Generate speech using models trained on `LJSpeech <https://keithito.com/LJ-Speech-Dataset/>`_,
around 24 hours of single speaker data.
See this `TTS notebook <https://github.com/NVIDIA/NeMo/blob/v1.0.0b1/tutorials/tts/1_TTS_inference.ipynb>`_


@@ -31,22 +31,22 @@ Once the **.fit()** function has completed, you'll see an output like this:

FIT Profiler Report

------------------------------------------------------------------------------------
| Action                                       | Mean duration (s) | Total time (s) |
------------------------------------------------------------------------------------
| [LightningModule]BoringModel.prepare_data    | 10.0001           | 20.00          |
| run_training_epoch                           | 6.1558            | 6.1558         |
| run_training_batch                           | 0.0022506         | 0.015754       |
| [LightningModule]BoringModel.optimizer_step  | 0.0017477         | 0.012234       |
| [LightningModule]BoringModel.val_dataloader  | 0.00024388        | 0.00024388     |
| on_train_batch_start                         | 0.00014637        | 0.0010246      |
| [LightningModule]BoringModel.teardown        | 2.15e-06          | 2.15e-06       |
| [LightningModule]BoringModel.on_train_start  | 1.644e-06         | 1.644e-06      |
| [LightningModule]BoringModel.on_train_end    | 1.516e-06         | 1.516e-06      |
| [LightningModule]BoringModel.on_fit_end      | 1.426e-06         | 1.426e-06      |
| [LightningModule]BoringModel.setup           | 1.403e-06         | 1.403e-06      |
| [LightningModule]BoringModel.on_fit_start    | 1.226e-06         | 1.226e-06      |
------------------------------------------------------------------------------------
In this report we can see that the slowest function is **prepare_data**. Now you can figure out why data preparation is slowing down your training.
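The report above comes from Lightning's profiler support (e.g. ``Trainer(profiler="simple")``). As a toy sketch of how such a per-action summary can be assembled (pure Python, not the Lightning implementation):

```python
from collections import defaultdict


class SimpleActionProfiler:
    """Toy profiler in the spirit of the report above: record how long each
    named action took, then summarize mean and total duration per action."""

    def __init__(self):
        self.durations = defaultdict(list)

    def record(self, action, seconds):
        self.durations[action].append(seconds)

    def summary(self):
        # One (action, mean, total) row per action, slowest total first,
        # matching the ordering of the report above.
        rows = [
            (action, sum(times) / len(times), sum(times))
            for action, times in self.durations.items()
        ]
        rows.sort(key=lambda row: row[2], reverse=True)
        return rows


prof = SimpleActionProfiler()
prof.record("prepare_data", 10.0)
prof.record("prepare_data", 10.0)
prof.record("run_training_batch", 0.002)
for action, mean, total in prof.summary():
    print(f"{action:20s} mean={mean:.4f}s total={total:.4f}s")
```

Sorting by total time rather than mean is what surfaces ``prepare_data`` at the top: it is both slow per call and called more than once.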


@@ -103,15 +103,15 @@

     - `PR11871`_
   * - used ``Trainer.validated_ckpt_path`` attribute
     - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.validate(ckpt_path=...)``
     - `PR11696`_
   * - used ``Trainer.tested_ckpt_path`` attribute
     - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.test(ckpt_path=...)``
     - `PR11696`_
   * - used ``Trainer.predicted_ckpt_path`` attribute
     - rely on generic read-only property ``Trainer.ckpt_path``, which is set when checkpoints are loaded via ``Trainer.predict(ckpt_path=...)``
     - `PR11696`_
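The replacement rows above all point at the same read-only ``Trainer.ckpt_path`` property. The general pattern can be sketched with a hypothetical ``MiniTrainer`` stand-in (not the real Trainer):

```python
class MiniTrainer:
    """Stand-in illustrating the read-only-property pattern: the checkpoint
    path is stored privately when a checkpoint is loaded, and exposed only
    through a getter, so callers cannot assign to it directly."""

    def __init__(self):
        self._ckpt_path = None

    @property
    def ckpt_path(self):
        return self._ckpt_path

    def validate(self, ckpt_path=None):
        # Loading a checkpoint records the path internally; this is the only
        # place the value is ever written.
        self._ckpt_path = ckpt_path


trainer = MiniTrainer()
trainer.validate(ckpt_path="best.ckpt")
print(trainer.ckpt_path)  # best.ckpt
# trainer.ckpt_path = "other.ckpt" would raise AttributeError: no setter
```

A property with no setter is what makes the attribute "generic read-only": one name serves ``validate``, ``test``, and ``predict`` alike, while staying immutable from the outside.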
* - rely on the returned dictionary from ``Callback.on_save_checkpoint``


@@ -26,7 +26,7 @@

     - use DDP instead
     - `PR16386`_ :doc:`DDP <../../accelerators/gpu_expert>`
   * - used the ``pl.plugins.ApexMixedPrecisionPlugin`` plugin
     - use PyTorch native mixed precision
     - `PR16039`_


@@ -39,11 +39,11 @@

     - `PR16184`_
   * - called the ``pl.tuner.auto_gpu_select.pick_single_gpu`` function
     - use the Trainer flag ``devices="auto"``
     - `PR16184`_
   * - called the ``pl.tuner.auto_gpu_select.pick_multiple_gpus`` function
     - use the Trainer flag ``devices="auto"``
     - `PR16184`_
   * - used the Trainer flag ``accumulate_grad_batches`` with a scheduling dictionary value