Fix training resuming docs (#1265)

Paweł Rzepiński 2020-03-29 21:29:48 +02:00 committed by GitHub
parent fb42872259
commit b74a3c5106
2 changed files with 7 additions and 11 deletions


@@ -84,9 +84,7 @@ To save your own checkpoint call:
Checkpoint Loading
------------------
You might want to not only load a model but also continue training it. Use this method to
restore the trainer state as well. This will continue from the epoch and global step where you last left off.
However, the dataloaders will start from the first batch again (if you shuffled, it shouldn't matter).
To load a model along with its weights, biases and hyperparameters, use the following method:
.. code-block:: python
@@ -95,4 +93,8 @@ However, the dataloaders will start from the first batch again (if you shuffled
y_hat = model(x)
A LightningModule is no different from an nn.Module. This means you can load it and use it for
predictions as you would any nn.Module.
.. note:: To restore the trainer state as well, use
:meth:`pytorch_lightning.trainer.trainer.Trainer.resume_from_checkpoint`.
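For illustration, a minimal sketch of the loading-for-prediction flow described above; ``LitModel``, the checkpoint path, and the input shape are hypothetical placeholders, not part of the original docs:

.. code-block:: python

    import torch

    # hypothetical LightningModule subclass and checkpoint path
    model = LitModel.load_from_checkpoint('path/to/checkpoint.ckpt')
    model.eval()

    # predictions work the same as with any nn.Module
    x = torch.randn(1, 28 * 28)
    y_hat = model(x)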


@@ -39,15 +39,9 @@ Lightning will restore the session if you pass a logger with the same version an
.. code-block:: python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TestTubeLogger
logger = TestTubeLogger(
save_dir='./savepath',
version=1 # An existing version with a saved checkpoint
)
trainer = Trainer(
logger=logger,
default_save_path='./savepath',
resume_from_checkpoint=PATH
)
# this fit call loads model weights and trainer state
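A self-contained sketch of the resume flow above, assuming ``resume_from_checkpoint`` is passed to the ``Trainer`` as shown; ``LitModel`` and the checkpoint path are hypothetical placeholders:

.. code-block:: python

    from pytorch_lightning import Trainer

    # hypothetical LightningModule subclass
    model = LitModel()

    # point the trainer at an existing checkpoint so training continues
    # from the epoch and global step that were saved
    trainer = Trainer(resume_from_checkpoint='path/to/checkpoint.ckpt')

    # this fit call loads model weights and trainer state
    trainer.fit(model)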