Document behaviour when setting both on_step=True and on_epoch=True in self.log (#4327)

* update logging.rst

* logger of choice

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>

* add metrics reference

* trigger ci

* Revert "trigger ci"

This reverts commit 97bf461cf9.

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Roger Shieh <sh.rog@protonmail.ch>
Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
Adrian Wälchli 2020-11-24 10:41:31 +01:00 committed by GitHub
parent 5b74effb1a
commit e971437551
1 changed file with 8 additions and 1 deletion


@@ -85,7 +85,14 @@ The :func:`~pytorch_lightning.core.lightning.LightningModule.log` method has a
 * `logger`: Logs to the logger like Tensorboard, or any other custom logger passed to the :class:`~pytorch_lightning.trainer.trainer.Trainer`.

-.. note:: Setting `on_epoch=True` will accumulate your logged values over the full training epoch.
+.. note::
+
+    -   Setting ``on_epoch=True`` will cache all your logged values during the full training epoch and perform a
+        reduction in ``on_epoch_end``. We recommend using the :ref:`metrics` API when working with custom reduction.
+
+    -   Setting both ``on_step=True`` and ``on_epoch=True`` will create two keys per metric you log with
+        suffix ``_step`` and ``_epoch``, respectively. You can refer to these keys e.g. in the `monitor`
+        argument of :class:`~pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint` or in the graphs plotted to the logger of your choice.

 Manual logging
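
A minimal sketch (not part of the commit) of the behaviour the new note documents: logging a value with both flags yields a ``_step`` and an ``_epoch`` key, and the epoch-level key can be referenced in the `monitor` argument of ModelCheckpoint. The model, dataset, and metric name below are hypothetical.

    # Sketch only: LitRegressor and "train_loss" are illustrative, not from the commit.
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import ModelCheckpoint


    class LitRegressor(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(4, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.mse_loss(self.layer(x), y)
            # on_step logs the per-step value; on_epoch caches values over the
            # epoch and reduces them at epoch end. Together they produce the
            # keys "train_loss_step" and "train_loss_epoch".
            self.log("train_loss", loss, on_step=True, on_epoch=True)
            return loss

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)


    dataset = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
    # The epoch-level key is the one to reference in ModelCheckpoint's `monitor`.
    checkpoint = ModelCheckpoint(monitor="train_loss_epoch", mode="min")
    trainer = pl.Trainer(max_epochs=2, callbacks=[checkpoint])
    trainer.fit(LitRegressor(), DataLoader(dataset, batch_size=16))

The same two suffixed keys also appear as separate curves in the graphs plotted to the logger of your choice.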