From f9a9a5f2980a67d59d7c200a8caf8eac99b095c4 Mon Sep 17 00:00:00 2001
From: irene50288 <2590780+irene50288@users.noreply.github.com>
Date: Wed, 7 Oct 2020 13:11:03 -0700
Subject: [PATCH] [CI SKIP] Sequential data & TPU support docs fix (#3956)

* sequential data docs fix

* TPU support docs fix

Co-authored-by: Iryna Koroliuk
---
 docs/source/sequences.rst | 2 +-
 docs/source/tpu.rst       | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/sequences.rst b/docs/source/sequences.rst
index b00e3c896a..93fefad0d0 100644
--- a/docs/source/sequences.rst
+++ b/docs/source/sequences.rst
@@ -15,7 +15,7 @@ Packed sequences as inputs
 --------------------------
 When using PackedSequence, do 2 things:
 
-1. return either a padded tensor in dataset or a list of variable length tensors in the dataloader collate_fn (example above shows the list implementation).
+1. Return either a padded tensor in dataset or a list of variable length tensors in the dataloader collate_fn (example shows the list implementation).
 2. Pack the sequence in forward or training and validation steps depending on use case.
 
 .. testcode::

diff --git a/docs/source/tpu.rst b/docs/source/tpu.rst
index 0a05c1be12..3a74d8bf4f 100644
--- a/docs/source/tpu.rst
+++ b/docs/source/tpu.rst
@@ -163,7 +163,7 @@ on how to set up the instance groups and VMs needed to run TPU Pods.
 
 16 bit precision
 ----------------
 Lightning also supports training in 16-bit precision with TPUs.
-By default, TPU training will use 32-bit precision. To enable 16-bit, also
+By default, TPU training will use 32-bit precision. To enable 16-bit, set
 the 16-bit flag.
 
 .. code-block:: python
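For context, the two steps listed in the sequences.rst hunk can be sketched as follows. This is a minimal illustration (not the example elided behind ``.. testcode::`` in the patch): a hypothetical ``collate_fn`` returning a list of variable-length tensors, and a ``training_step`` that packs them.

```python
import torch
from torch.nn.utils.rnn import pack_sequence

# Step 1: the DataLoader collate_fn returns a list of
# variable-length tensors (the "list implementation").
def collate_fn(batch):
    # Sort longest-first, since pack_sequence defaults to enforce_sorted=True.
    return sorted(batch, key=len, reverse=True)

# Step 2: pack the sequences inside the forward / training step.
def training_step(batch):
    packed = pack_sequence(batch)  # returns a PackedSequence
    return packed

batch = collate_fn([torch.ones(2), torch.ones(5), torch.ones(3)])
packed = training_step(batch)
# batch_sizes counts how many sequences are still active at each time step.
print(packed.batch_sizes.tolist())
```

The names ``collate_fn`` and ``training_step`` are illustrative; a real LightningModule would define ``training_step`` as a method and pass ``collate_fn`` to its DataLoader.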
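The "16-bit flag" referenced in the tpu.rst hunk is, assuming the Lightning Trainer API of this era (the elided ``.. code-block:: python`` is not reproduced here), the ``precision`` argument:

```python
import pytorch_lightning as pl

# Sketch, assuming the Trainer API circa Lightning 1.0:
# precision=16 enables 16-bit training; without it, TPU runs use 32-bit.
trainer = pl.Trainer(tpu_cores=8, precision=16)
```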