[CI SKIP] Sequential data & TPU support docs fix (#3956)
* sequential data docs fix

* TPU support docs fix

Co-authored-by: Iryna Koroliuk <irynakoroliuk@Irynas-MacBook-Pro.local>
parent 3aef67c8dc
commit f9a9a5f298
docs/source
@@ -15,7 +15,7 @@ Packed sequences as inputs
 --------------------------
 When using PackedSequence, do 2 things:
 
-1. return either a padded tensor in dataset or a list of variable length tensors in the dataloader collate_fn (example above shows the list implementation).
+1. Return either a padded tensor in dataset or a list of variable length tensors in the dataloader collate_fn (example shows the list implementation).
 2. Pack the sequence in forward or training and validation steps depending on use case.
 
 .. testcode::
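For context, the `.. testcode::` body is elided by the diff viewer; a minimal sketch of the two-step pattern the list describes, assuming a hypothetical `pad_collate` helper and toy tensor shapes, might look like:

.. code-block:: python

    import torch
    from torch.nn.utils.rnn import pack_sequence

    # Step 1 (hypothetical collate_fn): return the variable-length
    # sequences as a plain list instead of padding them here.
    def pad_collate(batch):
        xs = [x for x, _ in batch]               # list of [T_i, features] tensors
        ys = torch.tensor([y for _, y in batch])
        return xs, ys

    # Step 2: pack later, e.g. inside forward or training_step.
    xs = [torch.randn(5, 8), torch.randn(3, 8)]       # lengths 5 and 3
    packed = pack_sequence(xs, enforce_sorted=False)  # PackedSequence for an RNN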
@@ -163,7 +163,7 @@ on how to set up the instance groups and VMs needed to run TPU Pods.
 16 bit precision
 -----------------
 Lightning also supports training in 16-bit precision with TPUs.
-By default, TPU training will use 32-bit precision. To enable 16-bit, also
+By default, TPU training will use 32-bit precision. To enable 16-bit,
 set the 16-bit flag.
 
 .. code-block:: python
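The `.. code-block:: python` body is likewise elided by the viewer; assuming the Lightning Trainer API of this era (not shown in this diff), the "16-bit flag" amounts to passing `precision=16`:

.. code-block:: python

    from pytorch_lightning import Trainer

    # precision=16 is the 16-bit flag; tpu_cores=8 requests a full TPU
    # device (actually running this requires a TPU runtime).
    trainer = Trainer(tpu_cores=8, precision=16)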