From e9714375517893d5886228900344596e57e652ee Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Adrian=20W=C3=A4lchli?=
Date: Tue, 24 Nov 2020 10:41:31 +0100
Subject: [PATCH] Document behaviour when setting both on_step=True and
 on_epoch=True in self.log (#4327)

* update logging.rst

* logger of choice

Co-authored-by: Nicki Skafte

* add metrics reference

* trigger ci

* Revert "trigger ci"

This reverts commit 97bf461cf9c00d182b0cc841c6b966a0ca9e85a4.

Co-authored-by: Nicki Skafte
Co-authored-by: chaton
Co-authored-by: Roger Shieh
Co-authored-by: Ananya Harsh Jha
---
 docs/source/logging.rst | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/docs/source/logging.rst b/docs/source/logging.rst
index 30eb2c11ff..3d6ae87d01 100644
--- a/docs/source/logging.rst
+++ b/docs/source/logging.rst
@@ -85,7 +85,14 @@ The :func:`~pytorch_lightning.core.lightning.LightningModule.log` method has a
 
 * `logger`: Logs to the logger like Tensorboard, or any other custom logger passed to the :class:`~pytorch_lightning.trainer.trainer.Trainer`.
 
-.. note:: Setting `on_epoch=True` will accumulate your logged values over the full training epoch.
+.. note::
+
+    - Setting ``on_epoch=True`` will cache all your logged values during the full training epoch and perform a
+      reduction in ``on_epoch_end``. We recommend using the :ref:`metrics` API when working with custom reduction.
+
+    - Setting both ``on_step=True`` and ``on_epoch=True`` will create two keys per metric you log with
+      suffix ``_step`` and ``_epoch``, respectively. You can refer to these keys e.g. in the ``monitor``
+      argument of :class:`~pytorch_lightning.callbacks.model_checkpoint.ModelCheckpoint` or in the graphs plotted to the logger of your choice.
 
 Manual logging
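
The note added by this patch can be illustrated with a minimal, self-contained sketch. This is plain Python, not the actual PyTorch Lightning implementation; the `SketchLogger` class and its method names are hypothetical and exist only to mirror the documented behaviour: logging with both flags writes the latest value under a ``_step``-suffixed key, caches all values over the epoch, and writes a mean reduction under a ``_epoch``-suffixed key.

```python
# Illustrative sketch only -- NOT the pytorch_lightning implementation.
# Mimics the documented behaviour of self.log(..., on_step=True, on_epoch=True):
# each call records the latest per-step value under "<name>_step", caches the
# values for the whole epoch, and a mean reduction is written to "<name>_epoch".
from collections import defaultdict


class SketchLogger:
    def __init__(self):
        self.logged = {}                      # key -> last value written
        self._epoch_cache = defaultdict(list)  # name -> values cached this epoch

    def log(self, name, value, on_step=True, on_epoch=True):
        if on_step:
            self.logged[f"{name}_step"] = value
        if on_epoch:
            self._epoch_cache[name].append(value)

    def on_epoch_end(self):
        # default reduction: mean over the cached step values
        for name, values in self._epoch_cache.items():
            self.logged[f"{name}_epoch"] = sum(values) / len(values)
        self._epoch_cache.clear()


logger = SketchLogger()
for loss in [4.0, 2.0]:
    logger.log("train_loss", loss, on_step=True, on_epoch=True)
logger.on_epoch_end()
print(logger.logged)  # {'train_loss_step': 2.0, 'train_loss_epoch': 3.0}
```

Either resulting key (e.g. ``train_loss_epoch``) is what you would pass as the ``monitor`` argument of a checkpoint callback, per the note above.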