From 20303e47362bbc9c8953545d58c759301bb9f13c Mon Sep 17 00:00:00 2001
From: Adrian Wälchli
Date: Mon, 24 Apr 2023 10:28:52 +0200
Subject: [PATCH] Add Fabric.launch to Fabric methods section (#17437)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
---
 docs/source-fabric/api/fabric_methods.rst | 27 +++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/docs/source-fabric/api/fabric_methods.rst b/docs/source-fabric/api/fabric_methods.rst
index 1df5aa5391..2a1a8cca8f 100644
--- a/docs/source-fabric/api/fabric_methods.rst
+++ b/docs/source-fabric/api/fabric_methods.rst
@@ -2,6 +2,33 @@ Fabric Methods
 ##############

+launch
+======
+
+With :meth:`~lightning.fabric.fabric.Fabric.launch` you can conveniently launch your script or a specific function
+across multiple processes for distributed training on a single machine.
+
+.. code-block:: python
+
+    # Launch the script on 2 devices and init distributed backend
+    fabric = Fabric(devices=2)
+    fabric.launch()
+
+The same can be done with the code wrapped in a function:
+
+.. code-block:: python
+
+    def run(fabric):
+        # Your distributed code here
+        ...
+
+
+    # Launch a function on 2 devices and init distributed backend
+    fabric = Fabric(devices=2)
+    fabric.launch(run)
+
+For example, you can use the latter for multi-GPU training inside a :doc:`Jupyter notebook <../fundamentals/notebooks>`.
+For launching distributed training with the CLI, multi-node cluster, or cloud, see :doc:`../fundamentals/launch`.

 setup
 =====
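Below is a runnable sketch of the function-based launch added by this patch, assuming ``lightning>=2.0`` is installed. The body of ``run``, the CPU accelerator choice, and the rank printing are illustrative only, not part of the documented API.

.. code-block:: python

    import lightning as L


    def run(fabric):
        # Each of the 2 launched processes executes this function and
        # receives the Fabric object as its first argument.
        print(f"Hello from process {fabric.global_rank} of {fabric.world_size}")
        fabric.barrier()  # wait until all processes reach this point
        fabric.print("All processes arrived")  # printed on rank 0 only


    if __name__ == "__main__":
        fabric = L.Fabric(accelerator="cpu", devices=2)
        fabric.launch(run)

The ``if __name__ == "__main__":`` guard keeps the launch code from re-executing when spawned worker processes re-import the script; note that ``fabric.print`` deduplicates output to the first process, while the built-in ``print`` runs on every rank.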