[CI SKIP] Sequential data & TPU support docs fix ()

* sequential data docs fix

* TPU support docs fix

Co-authored-by: Iryna Koroliuk <irynakoroliuk@Irynas-MacBook-Pro.local>
This commit is contained in:
irene50288 2020-10-07 13:11:03 -07:00 committed by GitHub
parent 3aef67c8dc
commit f9a9a5f298
2 changed files with 2 additions and 2 deletions


@@ -15,7 +15,7 @@ Packed sequences as inputs
--------------------------
When using PackedSequence, do 2 things:
-1. return either a padded tensor in dataset or a list of variable length tensors in the dataloader collate_fn (example above shows the list implementation).
+1. Return either a padded tensor in dataset or a list of variable length tensors in the dataloader collate_fn (example shows the list implementation).
2. Pack the sequence in forward or training and validation steps depending on use case.
.. testcode::

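The body of the ``testcode`` block is truncated in this diff; as a rough sketch of the list-based approach the changed line describes (the ``collate_fn`` below is illustrative, not taken from the original docs):

```python
import torch
from torch.nn.utils.rnn import pack_sequence

# Illustrative collate_fn: keep sequences unpadded and return them
# as a list of variable-length tensors, as the docs describe.
def collate_fn(batch):
    xs = [seq for seq, _ in batch]
    ys = torch.tensor([label for _, label in batch])
    return xs, ys

# In forward / training_step, pack the list before feeding an RNN.
xs, ys = collate_fn([(torch.randn(5, 8), 0), (torch.randn(3, 8), 1)])
packed = pack_sequence(xs, enforce_sorted=False)  # a PackedSequence
```

``pack_sequence`` concatenates the timesteps of all sequences, so the packed data here holds 5 + 3 = 8 rows.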

@@ -163,7 +163,7 @@ on how to set up the instance groups and VMs needed to run TPU Pods.
16 bit precision
-----------------
Lightning also supports training in 16-bit precision with TPUs.
-By default, TPU training will use 32-bit precision. To enable 16-bit, also
+By default, TPU training will use 32-bit precision. To enable 16-bit,
set the 16-bit flag.
.. code-block:: python
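The body of this ``code-block`` is likewise truncated in the diff; a minimal sketch of the "16-bit flag" the hunk refers to, assuming the ``tpu_cores`` and ``precision`` Trainer arguments used in Lightning docs of this period:

```python
from pytorch_lightning import Trainer

# Sketch assuming the Trainer flags of this era:
# tpu_cores selects the TPU hardware; precision=16 enables 16-bit training.
trainer = Trainer(tpu_cores=8, precision=16)
```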