diff --git a/docs/source/tpu.rst b/docs/source/tpu.rst
index e4dd0c330b..ca8ad8fee7 100644
--- a/docs/source/tpu.rst
+++ b/docs/source/tpu.rst
@@ -147,6 +147,7 @@ To train on more than 8 cores, your code actually doesn't change!
 All you need to do is submit the following command:
 
 .. code-block:: bash
 
+    $ python -m torch_xla.distributed.xla_dist --tpu=$TPU_POD_NAME --conda-env=torch-xla-nightly