From ed863dd3e7db3e10886c9d1347528ac6bf832ff6 Mon Sep 17 00:00:00 2001
From: Kaushik B <45285388+kaushikb11@users.noreply.github.com>
Date: Tue, 22 Feb 2022 01:21:58 +0530
Subject: [PATCH] Update Bagua section example (#11899)

---
 docs/source/accelerators/gpu.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/accelerators/gpu.rst b/docs/source/accelerators/gpu.rst
index 813e243a86..0a20d885db 100644
--- a/docs/source/accelerators/gpu.rst
+++ b/docs/source/accelerators/gpu.rst
@@ -506,7 +506,7 @@ but Bagua can usually produce a higher training throughput due to its backend wr
 
 .. code-block:: python
 
-    # train on 2 GPUs (using Bagua mode)
+    # train on 4 GPUs (using Bagua mode)
     trainer = Trainer(strategy="bagua", accelerator="gpu", devices=4)