.. testsetup:: *

    from pytorch_lightning.trainer.trainer import Trainer

.. _debugging:

Debugging
=========

The following are flags that make debugging much easier.

----------------

fast_dev_run
------------

This flag runs a "unit test" of your pipeline by executing 1 training batch and 1 validation batch.
The point is to catch any bugs in the training/validation loop without having to wait for a full
epoch to crash.

(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.fast_dev_run`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. testcode::

    trainer = Trainer(fast_dev_run=True)

----------------

Inspect gradient norms
----------------------

Logs (to a logger) the gradient norm of each weight matrix.

(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.track_grad_norm`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. testcode::

    # the 2-norm
    trainer = Trainer(track_grad_norm=2)
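
    # any integer p-norm works; the default, -1, disables tracking
    trainer = Trainer(track_grad_norm=1)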

----------------

Log GPU usage
-------------

Logs (to a logger) the GPU memory usage for each GPU on the master machine.

(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.log_gpu_memory`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. testcode::

    # log memory for all GPUs (the argument expects a string mode, not a bool)
    trainer = Trainer(log_gpu_memory='all')
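
    # or log only the min/max memory utilization across GPUs
    trainer = Trainer(log_gpu_memory='min_max')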

----------------

Make model overfit on subset of data
------------------------------------

A good debugging technique is to take a tiny portion of your data (say 2 samples per class)
and try to get your model to overfit. If it can't, it's a sign your model won't work with
larger datasets.

(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.overfit_batches`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. testcode::

    # use only 1% of the training data (the same train dataloader, with shuffle off, is reused for val and test)
    trainer = Trainer(overfit_batches=0.01)

    # or overfit on a fixed number of batches
    trainer = Trainer(overfit_batches=10)

With this flag, the train, val, and test sets will all be the same train set. We will also replace the sampler
in the training dataloader to turn off shuffling for you.

----------------

Print a summary of your LightningModule
---------------------------------------

Whenever the ``.fit()`` function gets called, the Trainer will print the weights summary for the LightningModule.
By default it only prints the top-level modules. If you want to show all submodules in your network, use the
``'full'`` option:

.. testcode::

    trainer = Trainer(weights_summary='full')
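
    # the default ('top') prints only the top-level modules
    trainer = Trainer(weights_summary='top')

    # disable the summary entirely with None
    trainer = Trainer(weights_summary=None)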

You can also display the intermediate input and output sizes of all your layers by setting the
``example_input_array`` attribute in your LightningModule. It will print a table like this:

.. code-block:: text

      | Name  | Type        | Params | In sizes  | Out sizes
    --------------------------------------------------------------
    0 | net   | Sequential  | 132 K  | [10, 256] | [10, 512]
    1 | net.0 | Linear      | 131 K  | [10, 256] | [10, 512]
    2 | net.1 | BatchNorm1d | 1 K    | [10, 512] | [10, 512]

when you call ``.fit()`` on the Trainer. This can help you find bugs in the composition of your layers.
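
For example, here is a minimal sketch (``LitModel`` and its layer sizes are made up here to match
the table above):

.. code-block:: python

    import torch
    from torch import nn
    from pytorch_lightning import LightningModule


    class LitModel(LightningModule):

        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(256, 512), nn.BatchNorm1d(512))
            # a dummy batch shaped like your real inputs; the summary runs it
            # through the model to record the in/out sizes of each layer
            self.example_input_array = torch.rand(10, 256)

        def forward(self, x):
            return self.net(x)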

See Also:
    - :paramref:`~pytorch_lightning.trainer.trainer.Trainer.weights_summary` Trainer argument
    - :class:`~pytorch_lightning.core.memory.ModelSummary`

----------------

Shorten epochs
--------------

Sometimes it's helpful to use only a percentage of your training, val, or test data (or a fixed number of batches).
For example, you can use 20% of the training set and 1% of the validation set.

On larger datasets like ImageNet, this can help you debug or test a few things faster than waiting for a full epoch.

.. testcode::

    # use only 10% of training data and 1% of val data
    trainer = Trainer(limit_train_batches=0.1, limit_val_batches=0.01)

    # use 10 batches of train and 5 batches of val
    trainer = Trainer(limit_train_batches=10, limit_val_batches=5)
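
    # the test set can be limited the same way
    trainer = Trainer(limit_test_batches=0.05)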

----------------

Set the number of validation sanity steps
-----------------------------------------

Lightning runs a few steps of validation at the beginning of training.
This avoids crashing in the validation loop somewhere deep into a lengthy training loop.

(See: :paramref:`~pytorch_lightning.trainer.trainer.Trainer.num_sanity_val_steps`
argument of :class:`~pytorch_lightning.trainer.trainer.Trainer`)

.. testcode::

    # DEFAULT
    trainer = Trainer(num_sanity_val_steps=2)
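
    # turn the sanity check off
    trainer = Trainer(num_sanity_val_steps=0)

    # or run the full validation set as the sanity check (-1, assuming your version supports it)
    trainer = Trainer(num_sanity_val_steps=-1)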