lightning/docs/LightningModule/properties.md

A LightningModule has the following properties, which you can access at any time.
---
#### current_epoch
The current epoch
---
#### dtype
Current dtype
---
#### logger
A reference to the logger you passed into the Trainer.
```python
Trainer(logger=your_logger)
```
Call it from anywhere in your LightningModule to log metrics, images, or anything else your logger supports.
Here is an example using the Test-tube logger (a wrapper around the [PyTorch SummaryWriter](https://pytorch.org/docs/stable/tensorboard.html) with a versioned folder structure).
```python
# if logger is a tensorboard logger or test-tube experiment
self.logger.add_embedding(...)
self.logger.log({'val_loss': 0.9})
self.logger.add_scalars(...)
```
---
#### global_step
The total number of training batches seen across all epochs.
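As a rough sketch (a hypothetical stand-in, not Lightning's actual implementation), `global_step` keeps counting across epoch boundaries while `current_epoch` advances once per epoch:

```python
class StepCounter:
    """Hypothetical stand-in showing how global_step relates to current_epoch."""

    def __init__(self):
        self.current_epoch = 0
        self.global_step = 0

    def run_epoch(self, num_batches):
        for _ in range(num_batches):
            # one step per training batch in the simplest case
            self.global_step += 1
        self.current_epoch += 1

counter = StepCounter()
counter.run_epoch(100)
counter.run_epoch(100)
print(counter.global_step)  # 200: batches accumulate across epochs
```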
---
#### gradient_clip_val
The current gradient clipping value.
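To make the value concrete, here is a minimal sketch of clipping a gradient vector by its global L2 norm, the kind of operation this value controls. `clip_by_norm` is a hypothetical helper for illustration, not part of the Lightning API.

```python
import math

def clip_by_norm(grads, clip_val):
    # hypothetical helper: rescale gradients so their global L2 norm
    # is at most clip_val; leave them unchanged otherwise
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > clip_val:
        scale = clip_val / total_norm
        return [g * scale for g in grads]
    return list(grads)

clipped = clip_by_norm([3.0, 4.0], clip_val=1.0)  # norm 5.0 is scaled down to 1.0
```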
---
#### on_gpu
True if your model is currently running on GPUs. Useful for setting flags in the LightningModule for different CPU vs GPU behavior.
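For example (a minimal pure-Python sketch with a hypothetical `DataConfig` class, not part of the Lightning API), you might pick device-dependent settings off a flag like this:

```python
class DataConfig:
    """Hypothetical sketch: branch on an on_gpu-style flag."""

    def __init__(self, on_gpu):
        self.on_gpu = on_gpu

    def num_workers(self):
        # GPU machines usually benefit from parallel data loading;
        # CPU-only runs can keep loading in the main process
        return 8 if self.on_gpu else 0

    def pin_memory(self):
        # pinning host memory only helps when copying batches to a GPU
        return self.on_gpu
```

In a real LightningModule you would branch on `self.on_gpu` the same way.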
---
#### trainer
Last-resort access to any state the trainer holds. Be careful: changing certain properties here can affect your training run.
```python
self.trainer.optimizers
self.trainer.current_epoch
...
```
## Debugging
The LightningModule also offers these tricks to help with debugging.
---
#### example_input_array
In the LightningModule init, you can set a dummy tensor for this property
to get a printout of the sizes coming into and out of every layer.
```python
def __init__(self):
    super().__init__()
    # put the dimensions of the first input to your system
    self.example_input_array = torch.rand(5, 28 * 28)
```