From 2d0dd1c4453ad5877f33b91347be0629e5c807f9 Mon Sep 17 00:00:00 2001
From: Rohit Gupta
Date: Wed, 12 Jan 2022 19:21:24 +0530
Subject: [PATCH] Refine remote fs doc (#11393)

---
 docs/source/common/checkpointing.rst | 10 ++++++++++
 docs/source/common/remote_fs.rst     | 17 ++++++++++-------
 docs/source/extensions/logging.rst   | 15 ++++++++++++++-
 3 files changed, 34 insertions(+), 8 deletions(-)

diff --git a/docs/source/common/checkpointing.rst b/docs/source/common/checkpointing.rst
index fdc49fa59c..c55130d49b 100644
--- a/docs/source/common/checkpointing.rst
+++ b/docs/source/common/checkpointing.rst
@@ -385,3 +385,13 @@ Custom Checkpoint IO Plugin
 .. note::
 
     Some ``TrainingTypePlugins`` like ``DeepSpeedStrategy`` do not support custom ``CheckpointIO`` as checkpointing logic is not modifiable.
+
+-----------
+
+***************************
+Managing Remote Filesystems
+***************************
+
+Lightning supports saving and loading checkpoints from a variety of filesystems, including local filesystems and several cloud storage providers.
+
+Check out the :ref:`Remote Filesystems <remote_fs>` document for more info.
diff --git a/docs/source/common/remote_fs.rst b/docs/source/common/remote_fs.rst
index df81828d38..cf85c0fcf8 100644
--- a/docs/source/common/remote_fs.rst
+++ b/docs/source/common/remote_fs.rst
@@ -1,16 +1,19 @@
-Remote filesystems
-==================
+.. _remote_fs:
 
-PyTorch Lightning enables working with data from a variety of filesystems, including local filesystems and several cloud storage providers
-such as ``s3`` on AWS, ``gcs`` on Google Cloud, or ``adl`` on Azure.
+##################
+Remote Filesystems
+##################
+
+PyTorch Lightning enables working with data from a variety of filesystems, including local filesystems and several cloud storage providers such as
+`S3 `_ on `AWS `_, `GCS `_ on `Google Cloud `_,
+or `ADL `_ on `Azure `_.
 This applies to saving and writing checkpoints, as well as for logging.
 Working with different filesystems can be accomplished by appending a protocol like "s3:/" to file paths for writing and reading data.
 
-
 .. code-block:: python
 
-    # `default_root_dir` is the default path used for logs and weights
+    # `default_root_dir` is the default path used for logs and checkpoints
     trainer = Trainer(default_root_dir="s3://my_bucket/data/")
     trainer.fit(model)
@@ -32,7 +35,7 @@ Additionally, you could also resume training with a checkpoint stored at a remote
     trainer = Trainer(default_root_dir=tmpdir, max_steps=3)
     trainer.fit(model, ckpt_path="s3://my_bucket/ckpts/classifier.ckpt")
 
-PyTorch Lightning uses `fsspec `__ internally to handle all filesystem operations.
+PyTorch Lightning uses `fsspec `_ internally to handle all filesystem operations.
 
 The most common filesystems supported by Lightning are:
diff --git a/docs/source/extensions/logging.rst b/docs/source/extensions/logging.rst
index 153ef22003..8369b4451c 100644
--- a/docs/source/extensions/logging.rst
+++ b/docs/source/extensions/logging.rst
@@ -14,8 +14,9 @@
 Logging
 #######
 
+*****************
 Supported Loggers
-=================
+*****************
 
 The following are loggers we support:
@@ -101,6 +102,7 @@ Lightning offers automatic log functionalities for logging scalars, or manual logging
 
 Automatic Logging
 =================
+
 Use the :meth:`~pytorch_lightning.core.lightning.LightningModule.log` method to log from anywhere in a :doc:`lightning module <../common/lightning_module>` and :doc:`callbacks <../extensions/callbacks>`.
@@ -182,6 +184,7 @@ If your work requires to log in an unsupported method, please open an issue with
 
 Manual Logging Non-Scalar Artifacts
 ===================================
+
 If you want to log anything that is not a scalar, like histograms, text, images, etc., you may need to use the logger object directly.
 
 .. code-block:: python
@@ -388,3 +391,13 @@ in the `hparams tab
+
+-----------
+
+***************************
+Managing Remote Filesystems
+***************************
+
+Lightning supports saving logs to a variety of filesystems, including local filesystems and several cloud storage providers.
+
+Check out the :ref:`Remote Filesystems <remote_fs>` document for more info.
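
Taken together, the refined docs describe one workflow: point Lightning at a remote path and let ``fsspec`` handle the I/O. Below is a minimal sketch of that workflow, assuming a hypothetical bucket ``my_bucket``, an installed ``fsspec`` backend for the chosen protocol (for example ``s3fs`` plus valid credentials), and an illustrative ``LitRegressor`` module that is not part of the Lightning API.

.. code-block:: python

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import ModelCheckpoint


    class LitRegressor(pl.LightningModule):
        """Tiny illustrative module, used only to demonstrate remote paths."""

        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(8, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = nn.functional.mse_loss(self.layer(x), y)
            self.log("train_loss", loss)  # scalars go through self.log()
            # non-scalar artifacts use the logger object directly
            self.logger.experiment.add_histogram("weights", self.layer.weight, self.global_step)
            return loss

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.01)


    train_loader = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)

    # `default_root_dir` routes both logs and checkpoints to the bucket; fsspec
    # resolves the "s3://" protocol. A ModelCheckpoint callback may also point
    # at its own remote directory.
    checkpoint_cb = ModelCheckpoint(dirpath="s3://my_bucket/ckpts/")
    trainer = pl.Trainer(
        default_root_dir="s3://my_bucket/data/",
        callbacks=[checkpoint_cb],
        max_epochs=1,
    )
    trainer.fit(LitRegressor(), train_loader)

    # Resuming from a remote checkpoint works the same way (placeholder filename):
    # trainer.fit(LitRegressor(), train_loader, ckpt_path="s3://my_bucket/ckpts/<name>.ckpt")

Because the protocol prefix is all that selects the filesystem, swapping ``s3://`` for another supported protocol, or for a plain local path, only changes the strings above.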