From 695c6f3063251074f9ea89f5de680c000d9990fc Mon Sep 17 00:00:00 2001
From: Jirka B
Date: Tue, 14 Nov 2023 20:47:56 +0100
Subject: [PATCH] docs: pin some linked ipynb to 2.1.0

---
 docs/source-pytorch/advanced/post_training_quantization.rst | 2 +-
 docs/source-pytorch/starter/style_guide.rst                 | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source-pytorch/advanced/post_training_quantization.rst b/docs/source-pytorch/advanced/post_training_quantization.rst
index 611014e74b..12139d2cea 100644
--- a/docs/source-pytorch/advanced/post_training_quantization.rst
+++ b/docs/source-pytorch/advanced/post_training_quantization.rst
@@ -126,7 +126,7 @@ At last, the quantized model can be saved by:
 Hands-on Examples
 *****************
 
-Based on the `given example code `_, we show how Intel Neural Compressor conduct model quantization on PyTorch Lightning. We first define the basic config of the quantization process.
+Based on the `given example code `_, we show how Intel Neural Compressor conduct model quantization on PyTorch Lightning. We first define the basic config of the quantization process.
 
 .. code-block:: python
 
diff --git a/docs/source-pytorch/starter/style_guide.rst b/docs/source-pytorch/starter/style_guide.rst
index 0dac0416c5..3ea1f621c7 100644
--- a/docs/source-pytorch/starter/style_guide.rst
+++ b/docs/source-pytorch/starter/style_guide.rst
@@ -219,5 +219,5 @@ This is true for both academic and corporate settings where data cleaning and ad
 of iterating through ideas.
 
 - Check out the live examples to get your hands dirty:
-- `Introduction to PyTorch Lightning `_
+- `Introduction to PyTorch Lightning `_
 - `Introduction to DataModules `_