**Audience:** Users who want to visualize and monitor their model development
----
*******************************
Why do I need to track metrics?
*******************************
In model development, we track values of interest such as the *validation_loss* to visualize the learning process for our models. Model development is like driving a car without windows; charts and logs provide the *windows* that tell us where to drive.
With Lightning, you can visualize virtually anything you can think of: numbers, text, images, audio. Your creativity and imagination are the only limiting factors.
----
*************
Track metrics
*************
Metric visualization is the most basic yet powerful way of understanding how your model is doing throughout model development.
To track a metric, simply use the *self.log* method available inside the *LightningModule*, as shown in the sketch below.
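A minimal sketch, assuming a small regression model (the module, layer sizes, and metric name here are illustrative):

.. code-block:: python

    import torch
    from torch import nn
    from pytorch_lightning import LightningModule


    class LitModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = nn.functional.mse_loss(self.layer(x), y)
            # logs the scalar under the key "train_loss" at every training step
            self.log("train_loss", loss)
            return loss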
To launch the TensorBoard dashboard, run the following command on the command line:
.. code-block:: bash

    tensorboard --logdir=lightning_logs/
If you're using a notebook environment such as *colab*, *kaggle*, or *jupyter*, launch TensorBoard with this command:
.. code-block:: bash

    %reload_ext tensorboard
    %tensorboard --logdir=lightning_logs/
----
Accumulate a metric
===================
When *self.log* is called inside the *training_step*, it generates a time series showing how the metric behaves over time.
However, for the validation and test sets we are generally not interested in plotting the metric values per batch of data. Instead, we want to compute a summary statistic (such as the average, min, or max) across the full split of data.
When you call *self.log* inside the *validation_step* and *test_step*, Lightning automatically accumulates the metric and averages it once it's gone through the whole split (*epoch*).
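A minimal sketch of epoch-level accumulation, continuing the illustrative model above (names are placeholders):

.. code-block:: python

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        # Lightning accumulates "val_loss" across batches and logs the average at the end of the epoch
        self.log("val_loss", loss)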
By default, anything that is logged is saved to the current working directory. To use a different directory, set the *default_root_dir* argument in the Trainer.
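For example (the directory name below is just a placeholder):

.. code-block:: python

    from pytorch_lightning import Trainer

    # save logs and checkpoints under ./my_logs instead of the current working directory
    trainer = Trainer(default_root_dir="my_logs/")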