docs: enable Sphinx linter & fixing (#19515)

* docs: enable Sphinx linter
* fixes
Jirka Borovec 2024-02-26 16:20:33 +01:00 committed by GitHub
parent e43820a4be
commit cf3553cdb5
15 changed files with 55 additions and 50 deletions

@@ -71,6 +71,11 @@ repos:
additional_dependencies: [tomli]
args: ["--in-place"]
- repo: https://github.com/sphinx-contrib/sphinx-lint
rev: v0.9.1
hooks:
- id: sphinx-lint
- repo: https://github.com/asottile/yesqa
rev: v1.5.0
hooks:
@@ -86,10 +91,10 @@ repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: "v0.2.0"
hooks:
- id: ruff
args: ["--fix", "--preview"]
- id: ruff-format
args: ["--preview"]
- id: ruff
args: ["--fix", "--preview"]
- repo: https://github.com/executablebooks/mdformat
rev: 0.7.17
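
With the ``sphinx-lint`` hook above in place, the docs sources can be checked locally with ``pre-commit run sphinx-lint --all-files`` (assuming ``pre-commit`` itself is installed); the changes in the documentation files below are examples of the issues it flags.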

@@ -34,7 +34,7 @@ Now, imagine you have implemented a **KerasScriptRunner** component for training
Here are the best practices steps before sharing the component:
* **Testing**: Ensure your component is well tested by following the ref:`../testing` guide.
* **Testing**: Ensure your component is well tested by following the :doc:`../testing` guide.
* **Documented**: Ensure your component has a docstring and comes with some usage explications.
.. Note:: As a Lightning user, it helps to implement your components thinking someone else is going to use them.

@@ -47,7 +47,7 @@ Update React <-- Lightning app
******************************
To change the React app from the Lightning app, use the values from the `lightningState`.
In this example, when the `react_ui.counter`` increaes in the Lightning app:
In this example, when the ``react_ui.counter`` increaes in the Lightning app:
.. literalinclude:: ../../../../../src/lightning/app/cli/react-ui-template/example_app.py
:emphasize-lines: 18, 24

@@ -32,8 +32,8 @@ As datasets grow in size and the number of nodes scales, loading training data c
The `StreamingDataset <https://github.com/mosaicml/streaming>`__ can make training on large datasets from cloud storage
as fast, cheap, and scalable as possible.
This library uses a custom built class:`~torch.utils.data.IterableDataset`. The library recommends iterating through it
via a regular class:`~torch.utils.data.DataLoader`. This means that support in the ``Trainer`` is seamless:
This library uses a custom built :class:`~torch.utils.data.IterableDataset`. The library recommends iterating through it
via a regular :class:`~torch.utils.data.DataLoader`. This means that support in the ``Trainer`` is seamless:
.. code-block:: python

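As a rough sketch of the pattern described above (not the snippet from the docs page itself): the dataset paths and ``model`` are placeholders, and constructor arguments may vary between ``streaming`` versions.

.. code-block:: python

    from torch.utils.data import DataLoader
    from streaming import StreamingDataset  # pip install mosaicml-streaming

    import lightning as L

    # Placeholder remote/local locations for the streamed shards.
    dataset = StreamingDataset(remote="s3://my-bucket/my-dataset", local="/tmp/my-dataset")
    dataloader = DataLoader(dataset, batch_size=32)

    # `model` is assumed to be a LightningModule defined elsewhere; the Trainer
    # consumes the streaming DataLoader like any other iterable dataloader.
    trainer = L.Trainer(max_epochs=1)
    trainer.fit(model, train_dataloaders=dataloader)
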
@@ -660,7 +660,7 @@ Hydra makes every aspect of the NeMo model, including the PyTorch Lightning Trai
Using State-Of-The-Art Pre-trained TTS Model
--------------------------------------------
Generate speech using models trained on `LJSpeech <https://keithito.com/LJ-Speech-Dataset/>`,
Generate speech using models trained on `LJSpeech <https://keithito.com/LJ-Speech-Dataset/>`_,
around 24 hours of single speaker data.
See this `TTS notebook <https://github.com/NVIDIA/NeMo/blob/v1.0.0b1/tutorials/tts/1_TTS_inference.ipynb>`_

@@ -31,9 +31,9 @@ Once the **.fit()** function has completed, you'll see an output like this:
FIT Profiler Report
-----------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------
| Action | Mean duration (s) | Total time (s) |
-----------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------
| [LightningModule]BoringModel.prepare_data | 10.0001 | 20.00 |
| run_training_epoch | 6.1558 | 6.1558 |
| run_training_batch | 0.0022506 | 0.015754 |
@@ -46,7 +46,7 @@ Once the **.fit()** function has completed, you'll see an output like this:
| [LightningModule]BoringModel.on_fit_end | 1.426e-06 | 1.426e-06 |
| [LightningModule]BoringModel.setup | 1.403e-06 | 1.403e-06 |
| [LightningModule]BoringModel.on_fit_start | 1.226e-06 | 1.226e-06 |
-----------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------
In this report we can see that the slowest function is **prepare_data**. Now you can figure out why data preparation is slowing down your training.
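
A report in this format comes from enabling one of the built-in profilers on the ``Trainer``; as a minimal sketch (the string value is just one option, and ``model`` is assumed to be a ``LightningModule``):

.. code-block:: python

    import lightning as L

    # The simple profiler times each action and prints a summary once `.fit()` finishes.
    trainer = L.Trainer(profiler="simple")
    trainer.fit(model)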

@@ -103,15 +103,15 @@
- `PR11871`_
* - used ``Trainer.validated_ckpt_path`` attribute
- rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.validate(````ckpt_path=...)``
- rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.validate(ckpt_path=...)``
- `PR11696`_
* - used ``Trainer.tested_ckpt_path`` attribute
- rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.test(````ckpt_path=...)``
- rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.test(ckpt_path=...)``
- `PR11696`_
* - used ``Trainer.predicted_ckpt_path`` attribute
- rely on generic read-only property ``Trainer.ckpt_path``, which is set when checkpoints are loaded via ``Trainer.predict(````ckpt_path=...)``
- rely on generic read-only property ``Trainer.ckpt_path``, which is set when checkpoints are loaded via ``Trainer.predict(ckpt_path=...)``
- `PR11696`_
* - rely on the returned dictionary from ``Callback.on_save_checkpoint``

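The replacement described in these rows, sketched briefly (the checkpoint path and ``model`` are placeholders):

.. code-block:: python

    import lightning as L

    trainer = L.Trainer()
    # Loading a checkpoint via `ckpt_path` sets the generic read-only property
    # `Trainer.ckpt_path`, which takes the place of the `validated_ckpt_path`,
    # `tested_ckpt_path`, and `predicted_ckpt_path` attributes listed above.
    trainer.validate(model, ckpt_path="path/to/checkpoint.ckpt")
    print(trainer.ckpt_path)
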
@@ -26,7 +26,7 @@
- use DDP instead
- `PR16386`_ :doc:`DDP <../../accelerators/gpu_expert>`
* - used the pl.plugins.ApexMixedPrecisionPlugin`` plugin
* - used the ``pl.plugins.ApexMixedPrecisionPlugin`` plugin
- use PyTorch native mixed precision
- `PR16039`_

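For the mixed-precision row above, PyTorch-native AMP is selected through the Trainer's precision flag; a minimal sketch (the exact value depends on the Lightning version, ``"16-mixed"`` being the 2.x spelling):

.. code-block:: python

    import lightning as L

    # Native automatic mixed precision instead of the Apex plugin.
    trainer = L.Trainer(precision="16-mixed")
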
@@ -39,11 +39,11 @@
- `PR16184`_
* - called the ``pl.tuner.auto_gpu_select.pick_single_gpu`` function
- use Trainers flag``devices="auto"``
- use Trainers flag ``devices="auto"``
- `PR16184`_
* - called the ``pl.tuner.auto_gpu_select.pick_multiple_gpus`` functions
- use Trainers flag``devices="auto"``
- use Trainers flag ``devices="auto"``
- `PR16184`_
* - used Trainers flag ``accumulate_grad_batches`` with a scheduling dictionary value