* `on_step`: Logs the metric at the current step. Defaults to `True` in :func:`~pytorch_lightning.core.lightning.LightningModule.training_step`, and :func:`~pytorch_lightning.core.lightning.LightningModule.training_step_end`.
* `on_epoch`: Automatically accumulates and logs at the end of the epoch. Defaults to `True` anywhere in validation or test loops, and in :func:`~pytorch_lightning.core.lightning.LightningModule.training_epoch_end`; see the example below.
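
For example, to log a metric on every step and also have it accumulated and logged at the end of the epoch, both flags can be set explicitly when calling :func:`~pytorch_lightning.core.lightning.LightningModule.log`. A minimal sketch, where the metric name `train_loss` and the loss computation are placeholders:

.. code-block:: python

    def training_step(self, batch, batch_idx):
        loss = ...  # compute the loss for this batch
        # logged at every step and also accumulated and logged at epoch end
        self.log('train_loss', loss, on_step=True, on_epoch=True)
        return loss
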
You can implement your own logger by writing a class that inherits from :class:`~pytorch_lightning.loggers.base.LightningLoggerBase`.
Use the :func:`~pytorch_lightning.loggers.base.rank_zero_experiment` and :func:`~pytorch_lightning.utilities.distributed.rank_zero_only` decorators to make sure that only the first process in DDP training creates the experiment and logs the data, respectively.
.. code-block:: python

    class MyLogger(LightningLoggerBase):
        # `name`, `version`, `experiment`, `log_hyperparams` and `log_metrics`
        # must also be implemented; they are omitted here for brevity.

        def save(self):
            # Optional. Any code necessary to save logger data goes here
            # If you implement this, remember to call `super().save()`
            # at the start of the method (important for aggregation of metrics)
            super().save()

        @rank_zero_only
        def finalize(self, status):
            # Optional. Any code that needs to be run after training
            # finishes goes here
            pass
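
Once implemented, the custom logger can be passed to the :class:`~pytorch_lightning.trainer.trainer.Trainer` like any built-in logger. A minimal sketch, assuming the `MyLogger` class from the snippet above and a `model` that is a regular :class:`~pytorch_lightning.core.lightning.LightningModule`:

.. code-block:: python

    logger = MyLogger()
    trainer = Trainer(logger=logger)
    trainer.fit(model)
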
If you write a logger that may be useful to others, please send
a pull request to add it to Lightning!
----------
.. _logging_frequency:
*************************
Control logging frequency
*************************
Logging frequency
=================
Logging every single batch can slow training down. By default, Lightning logs every 50 rows, i.e. every 50 training steps.
To change this behaviour, set the `log_every_n_steps` :class:`~pytorch_lightning.trainer.trainer.Trainer` flag.
.. testcode::

    k = 10
    trainer = Trainer(log_every_n_steps=k)
Log writing frequency
=====================
Writing to a logger can be expensive, so by default Lightning writes logs to disk or to the given logger every 100 training steps.
To change this behaviour, set the interval at which you wish to flush logs to the filesystem using the `flush_logs_every_n_steps` :class:`~pytorch_lightning.trainer.trainer.Trainer` flag.
.. testcode::

    k = 100
    trainer = Trainer(flush_logs_every_n_steps=k)
Unlike `log_every_n_steps`, this argument does not apply to all loggers.
The example shown here works with :class:`~pytorch_lightning.loggers.tensorboard.TensorBoardLogger`,
which is the default logger in Lightning.
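
For instance, to keep the default TensorBoard logging but flush to disk at a chosen interval, the logger can be constructed explicitly and combined with the flag. A minimal sketch, where the `save_dir` path is a placeholder:

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import TensorBoardLogger

    # write TensorBoard event files under `logs/` and flush every 100 training steps
    tb_logger = TensorBoardLogger(save_dir='logs/')
    trainer = Trainer(logger=tb_logger, flush_logs_every_n_steps=100)
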
----------
************
Progress Bar
************
You can add any metric to the progress bar using the :func:`~pytorch_lightning.core.lightning.LightningModule.log`
method, setting `prog_bar=True`.
.. code-block:: python

    def training_step(self, batch, batch_idx):
        loss = ...  # compute your loss here
        self.log('my_loss', loss, prog_bar=True)
Modifying the progress bar
==========================
The progress bar by default already includes the training loss and version number of the experiment
if you are using a logger. These defaults can be customized by overriding the
:func:`~pytorch_lightning.core.lightning.LightningModule.get_progress_bar_dict` hook in your module.
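
For example, to remove the version number from the progress bar, the hook can return the default dictionary with the `v_num` entry dropped. A minimal sketch:

.. code-block:: python

    def get_progress_bar_dict(self):
        # call the base implementation to get the default items
        items = super().get_progress_bar_dict()
        # remove the version number
        items.pop('v_num', None)
        return items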