diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 0142779a2f..9e835b1824 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -71,6 +71,11 @@ repos:
         additional_dependencies: [tomli]
         args: ["--in-place"]
 
+  - repo: https://github.com/sphinx-contrib/sphinx-lint
+    rev: v0.9.1
+    hooks:
+      - id: sphinx-lint
+
   - repo: https://github.com/asottile/yesqa
     rev: v1.5.0
     hooks:
@@ -86,10 +91,10 @@ repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
     rev: "v0.2.0"
     hooks:
-      - id: ruff
-        args: ["--fix", "--preview"]
       - id: ruff-format
         args: ["--preview"]
+      - id: ruff
+        args: ["--fix", "--preview"]
 
   - repo: https://github.com/executablebooks/mdformat
     rev: 0.7.17
diff --git a/docs/source-app/core_api/lightning_app/communication_content.rst b/docs/source-app/core_api/lightning_app/communication_content.rst
index ea39749018..4b373dbb07 100644
--- a/docs/source-app/core_api/lightning_app/communication_content.rst
+++ b/docs/source-app/core_api/lightning_app/communication_content.rst
@@ -87,13 +87,13 @@ And here's the output you get when running the App using the **Lightning CLI**:
 
 .. code-block:: console
 
-    INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
-    State: {'works': {'w': {'vars': {'counter': 1}}}}
-    State: {'works': {'w': {'vars': {'counter': 2}}}}
-    State: {'works': {'w': {'vars': {'counter': 3}}}}
-    State: {'works': {'w': {'vars': {'counter': 3}}}}
-    State: {'works': {'w': {'vars': {'counter': 4}}}}
-    ...
+    INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
+    State: {'works': {'w': {'vars': {'counter': 1}}}}
+    State: {'works': {'w': {'vars': {'counter': 2}}}}
+    State: {'works': {'w': {'vars': {'counter': 3}}}}
+    State: {'works': {'w': {'vars': {'counter': 3}}}}
+    State: {'works': {'w': {'vars': {'counter': 4}}}}
+    ...
 
 ----
 
diff --git a/docs/source-app/core_api/lightning_app/dynamic_work_content.rst b/docs/source-app/core_api/lightning_app/dynamic_work_content.rst
index 6b8bc06cb6..31616a42a1 100644
--- a/docs/source-app/core_api/lightning_app/dynamic_work_content.rst
+++ b/docs/source-app/core_api/lightning_app/dynamic_work_content.rst
@@ -41,9 +41,9 @@ There are a couple of ways you can add a dynamic Work:
 
         def run(self):
             if not hasattr(self, "work"):
-                # The `Work` component is created and attached here.
+                # The `Work` component is created and attached here.
                 setattr(self, "work", Work())
-            # Run the `Work` component.
+            # Run the `Work` component.
             getattr(self, "work").run()
 
 **OPTION 2:** Use the built-in Lightning classes :class:`~lightning.app.structures.Dict` or :class:`~lightning.app.structures.List`
@@ -60,7 +60,7 @@ There are a couple of ways you can add a dynamic Work:
 
         def run(self):
             if "work" not in self.dict:
-                # The `Work` component is attached here.
+                # The `Work` component is attached here.
                 self.dict["work"] = Work()
                 self.dict["work"].run()
 
diff --git a/docs/source-app/glossary/environment_variables.rst b/docs/source-app/glossary/environment_variables.rst
index c561a5dee4..dcbcd8c6f4 100644
--- a/docs/source-app/glossary/environment_variables.rst
+++ b/docs/source-app/glossary/environment_variables.rst
@@ -24,4 +24,4 @@ Environment variables are available in all Flows and Works, and can be accessed
     print(os.environ["BAZ"])  # FAZ
 
 .. note::
-    Environment variables are not encrypted. For sensitive values, we recommend using :ref:`Encrypted Secrets `.
+    Environment variables are not encrypted. For sensitive values, we recommend using :ref:`Encrypted Secrets `.
diff --git a/docs/source-app/glossary/secrets.rst b/docs/source-app/glossary/secrets.rst
index 0f1b226739..95a0d564c6 100644
--- a/docs/source-app/glossary/secrets.rst
+++ b/docs/source-app/glossary/secrets.rst
@@ -8,7 +8,7 @@ Encrypted Secrets allow you to pass private data to your apps, like API keys, ac
 Secrets provide you with a secure way to store this data in a way that is accessible to Apps so that they can authenticate third-party services/solutions.
 
 .. tip::
-    For non-sensitive configuration values, we recommend using :ref:`plain-text Environment Variables `.
+    For non-sensitive configuration values, we recommend using :ref:`plain-text Environment Variables `.
 
 ************
 Add a secret
diff --git a/docs/source-app/glossary/sharing_components.rst b/docs/source-app/glossary/sharing_components.rst
index f9cc48c49b..2426bb43d4 100644
--- a/docs/source-app/glossary/sharing_components.rst
+++ b/docs/source-app/glossary/sharing_components.rst
@@ -34,7 +34,7 @@ Now, imagine you have implemented a **KerasScriptRunner** component for training
 
 Here are the best practices steps before sharing the component:
 
-* **Testing**: Ensure your component is well tested by following the ref:`../testing` guide.
+* **Testing**: Ensure your component is well tested by following the :doc:`../testing` guide.
 * **Documented**: Ensure your component has a docstring and comes with some usage explications.
 
 .. Note:: As a Lightning user, it helps to implement your components thinking someone else is going to use them.
diff --git a/docs/source-app/workflows/access_app_state.rst b/docs/source-app/workflows/access_app_state.rst
index 1f7e1258a4..8f99534bd2 100644
--- a/docs/source-app/workflows/access_app_state.rst
+++ b/docs/source-app/workflows/access_app_state.rst
@@ -50,10 +50,10 @@ And here's the output you get when running the App using **Lightning CLI**:
 
 .. code-block:: console
 
-    INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
-    State: {'works': {'w': {'vars': {'counter': 1}}}}
-    State: {'works': {'w': {'vars': {'counter': 2}}}}
-    State: {'works': {'w': {'vars': {'counter': 3}}}}
-    State: {'works': {'w': {'vars': {'counter': 3}}}}
-    State: {'works': {'w': {'vars': {'counter': 4}}}}
-    ...
+    INFO: Your app has started. View it in your browser: http://127.0.0.1:7501/view
+    State: {'works': {'w': {'vars': {'counter': 1}}}}
+    State: {'works': {'w': {'vars': {'counter': 2}}}}
+    State: {'works': {'w': {'vars': {'counter': 3}}}}
+    State: {'works': {'w': {'vars': {'counter': 3}}}}
+    State: {'works': {'w': {'vars': {'counter': 4}}}}
+    ...
diff --git a/docs/source-app/workflows/add_web_ui/react/communicate_between_react_and_lightning.rst b/docs/source-app/workflows/add_web_ui/react/communicate_between_react_and_lightning.rst
index 63b92beea6..ef836e7e4b 100644
--- a/docs/source-app/workflows/add_web_ui/react/communicate_between_react_and_lightning.rst
+++ b/docs/source-app/workflows/add_web_ui/react/communicate_between_react_and_lightning.rst
@@ -47,7 +47,7 @@ Update React <-- Lightning app
 ******************************
 To change the React app from the Lightning app, use the values from the `lightningState`.
 
-In this example, when the `react_ui.counter`` increaes in the Lightning app:
+In this example, when the ``react_ui.counter`` increases in the Lightning app:
 
 .. literalinclude:: ../../../../../src/lightning/app/cli/react-ui-template/example_app.py
     :emphasize-lines: 18, 24
diff --git a/docs/source-pytorch/advanced/post_training_quantization.rst b/docs/source-pytorch/advanced/post_training_quantization.rst
index 12139d2cea..504a57a419 100644
--- a/docs/source-pytorch/advanced/post_training_quantization.rst
+++ b/docs/source-pytorch/advanced/post_training_quantization.rst
@@ -55,8 +55,8 @@ Usage
 Minor code changes are required for the user to get started with Intel® Neural Compressor quantization API.
 To construct the quantization process, users can specify the below settings via the Python code:
 
-1. Calibration Dataloader (Needed for post-training static quantization)
-2. Evaluation Dataloader and Metric
+1. Calibration Dataloader (Needed for post-training static quantization)
+2. Evaluation Dataloader and Metric
 
 The code changes that are required for Intel® Neural Compressor are highlighted with comments in the line above.
 
diff --git a/docs/source-pytorch/data/alternatives.rst b/docs/source-pytorch/data/alternatives.rst
index 79454b6078..976f6f9de7 100644
--- a/docs/source-pytorch/data/alternatives.rst
+++ b/docs/source-pytorch/data/alternatives.rst
@@ -32,8 +32,8 @@ As datasets grow in size and the number of nodes scales, loading training data c
 
 The `StreamingDataset `__ can make training on large datasets from cloud storage as fast, cheap, and scalable as possible.
 
-This library uses a custom built class:`~torch.utils.data.IterableDataset`. The library recommends iterating through it
-via a regular class:`~torch.utils.data.DataLoader`. This means that support in the ``Trainer`` is seamless:
+This library uses a custom built :class:`~torch.utils.data.IterableDataset`. The library recommends iterating through it
+via a regular :class:`~torch.utils.data.DataLoader`. This means that support in the ``Trainer`` is seamless:
 
 .. code-block:: python
 
diff --git a/docs/source-pytorch/ecosystem/asr_nlp_tts.rst b/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
index 5989f11e60..af7fd05af7 100644
--- a/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
+++ b/docs/source-pytorch/ecosystem/asr_nlp_tts.rst
@@ -660,7 +660,7 @@ Hydra makes every aspect of the NeMo model, including the PyTorch Lightning Trai
 Using State-Of-The-Art Pre-trained TTS Model
 --------------------------------------------
 
-Generate speech using models trained on `LJSpeech `,
+Generate speech using models trained on `LJSpeech `_,
 around 24 hours of single speaker data.
 
 See this `TTS notebook `_
diff --git a/docs/source-pytorch/tuning/profiler_basic.rst b/docs/source-pytorch/tuning/profiler_basic.rst
index cf1ae93e97..880381268f 100644
--- a/docs/source-pytorch/tuning/profiler_basic.rst
+++ b/docs/source-pytorch/tuning/profiler_basic.rst
@@ -31,22 +31,22 @@ Once the **.fit()** function has completed, you'll see an output like this:
 
     FIT Profiler Report
 
-    -----------------------------------------------------------------------------------------------
-    |  Action                                           |  Mean duration (s)  |  Total time (s)  |
-    -----------------------------------------------------------------------------------------------
-    |  [LightningModule]BoringModel.prepare_data        |  10.0001            |  20.00           |
-    |  run_training_epoch                               |  6.1558             |  6.1558          |
-    |  run_training_batch                               |  0.0022506          |  0.015754        |
-    |  [LightningModule]BoringModel.optimizer_step      |  0.0017477          |  0.012234        |
-    |  [LightningModule]BoringModel.val_dataloader      |  0.00024388         |  0.00024388      |
-    |  on_train_batch_start                             |  0.00014637         |  0.0010246       |
-    |  [LightningModule]BoringModel.teardown            |  2.15e-06           |  2.15e-06        |
-    |  [LightningModule]BoringModel.on_train_start      |  1.644e-06          |  1.644e-06       |
-    |  [LightningModule]BoringModel.on_train_end        |  1.516e-06          |  1.516e-06       |
-    |  [LightningModule]BoringModel.on_fit_end          |  1.426e-06          |  1.426e-06       |
-    |  [LightningModule]BoringModel.setup               |  1.403e-06          |  1.403e-06       |
-    |  [LightningModule]BoringModel.on_fit_start        |  1.226e-06          |  1.226e-06       |
-    -----------------------------------------------------------------------------------------------
+    -------------------------------------------------------------------------------------------
+    |  Action                                       |  Mean duration (s)  |  Total time (s)  |
+    -------------------------------------------------------------------------------------------
+    |  [LightningModule]BoringModel.prepare_data    |  10.0001            |  20.00           |
+    |  run_training_epoch                           |  6.1558             |  6.1558          |
+    |  run_training_batch                           |  0.0022506          |  0.015754        |
+    |  [LightningModule]BoringModel.optimizer_step  |  0.0017477          |  0.012234        |
+    |  [LightningModule]BoringModel.val_dataloader  |  0.00024388         |  0.00024388      |
+    |  on_train_batch_start                         |  0.00014637         |  0.0010246       |
+    |  [LightningModule]BoringModel.teardown        |  2.15e-06           |  2.15e-06        |
+    |  [LightningModule]BoringModel.on_train_start  |  1.644e-06          |  1.644e-06       |
+    |  [LightningModule]BoringModel.on_train_end    |  1.516e-06          |  1.516e-06       |
+    |  [LightningModule]BoringModel.on_fit_end      |  1.426e-06          |  1.426e-06       |
+    |  [LightningModule]BoringModel.setup           |  1.403e-06          |  1.403e-06       |
+    |  [LightningModule]BoringModel.on_fit_start    |  1.226e-06          |  1.226e-06       |
+    -------------------------------------------------------------------------------------------
 
 In this report we can see that the slowest function is **prepare_data**.
 Now you can figure out why data preparation is slowing down your training.
diff --git a/docs/source-pytorch/upgrade/sections/1_7_advanced.rst b/docs/source-pytorch/upgrade/sections/1_7_advanced.rst
index 8b92044546..21461c0700 100644
--- a/docs/source-pytorch/upgrade/sections/1_7_advanced.rst
+++ b/docs/source-pytorch/upgrade/sections/1_7_advanced.rst
@@ -103,15 +103,15 @@
      - `PR11871`_
 
    * - used ``Trainer.validated_ckpt_path`` attribute
-     - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.validate(````ckpt_path=...)``
+     - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.validate(ckpt_path=...)``
      - `PR11696`_
 
    * - used ``Trainer.tested_ckpt_path`` attribute
-     - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.test(````ckpt_path=...)``
+     - rely on generic read-only property ``Trainer.ckpt_path`` which is set when checkpoints are loaded via ``Trainer.test(ckpt_path=...)``
      - `PR11696`_
 
    * - used ``Trainer.predicted_ckpt_path`` attribute
-     - rely on generic read-only property ``Trainer.ckpt_path``, which is set when checkpoints are loaded via ``Trainer.predict(````ckpt_path=...)``
+     - rely on generic read-only property ``Trainer.ckpt_path``, which is set when checkpoints are loaded via ``Trainer.predict(ckpt_path=...)``
      - `PR11696`_
 
    * - rely on the returned dictionary from ``Callback.on_save_checkpoint``
diff --git a/docs/source-pytorch/upgrade/sections/1_9_devel.rst b/docs/source-pytorch/upgrade/sections/1_9_devel.rst
index cf1493b905..440b21dfc1 100644
--- a/docs/source-pytorch/upgrade/sections/1_9_devel.rst
+++ b/docs/source-pytorch/upgrade/sections/1_9_devel.rst
@@ -26,7 +26,7 @@
      - use DDP instead
      - `PR16386`_ :doc:`DDP <../../accelerators/gpu_expert>`
 
-   * - used the pl.plugins.ApexMixedPrecisionPlugin`` plugin
+   * - used the ``pl.plugins.ApexMixedPrecisionPlugin`` plugin
      - use PyTorch native mixed precision
      - `PR16039`_
 
diff --git a/docs/source-pytorch/upgrade/sections/1_9_regular.rst b/docs/source-pytorch/upgrade/sections/1_9_regular.rst
index 0aa0108e48..2f35957bcd 100644
--- a/docs/source-pytorch/upgrade/sections/1_9_regular.rst
+++ b/docs/source-pytorch/upgrade/sections/1_9_regular.rst
@@ -39,11 +39,11 @@
      - `PR16184`_
 
    * - called the ``pl.tuner.auto_gpu_select.pick_single_gpu`` function
-     - use Trainer’s flag``devices="auto"``
+     - use Trainer’s flag ``devices="auto"``
      - `PR16184`_
 
    * - called the ``pl.tuner.auto_gpu_select.pick_multiple_gpus`` functions
-     - use Trainer’s flag``devices="auto"``
+     - use Trainer’s flag ``devices="auto"``
      - `PR16184`_
 
    * - used Trainer’s flag ``accumulate_grad_batches`` with a scheduling dictionary value