From f6b8b175bbf480e74586bf0fc5378e9efd221995 Mon Sep 17 00:00:00 2001
From: Joel Wong
Date: Thu, 31 Oct 2019 21:40:32 +1100
Subject: [PATCH] Update Docs for current checkpointing behaviour (#445)

Related issue #432

The old documentation suggested that the way to restore a training session is
to use a test_tube Experiment. Trainer no longer takes an experiment as a
parameter, so it seems the current way to restore a training session is to
pass an experiment via a TestTubeLogger. Even if this is not the most elegant
solution, updating the docs will at least point new users in the right
direction.
---
 docs/Trainer/Checkpointing.md | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/docs/Trainer/Checkpointing.md b/docs/Trainer/Checkpointing.md
index 5d3b257b05..791e72ebd5 100644
--- a/docs/Trainer/Checkpointing.md
+++ b/docs/Trainer/Checkpointing.md
@@ -32,12 +32,19 @@ You might want to not only load a model but also continue training it. Use this
 restore the trainer state as well. This will continue from the epoch and global step you last left off.
 However, the dataloaders will start from the first batch again (if you shuffled it shouldn't matter).
 
-Lightning will restore the session if you pass an experiment with the same version and there's a saved checkpoint.
+Lightning will restore the session if you pass a logger with the same version and there's a saved checkpoint.
 ``` {.python}
-from test_tube import Experiment
+from pytorch_lightning import Trainer
+from pytorch_lightning.logging import TestTubeLogger
 
-exp = Experiment(version=a_previous_version_with_a_saved_checkpoint)
-trainer = Trainer(experiment=exp)
+logger = TestTubeLogger(
+    save_dir='./savepath',
+    version=1  # An existing version with a saved checkpoint
+)
+trainer = Trainer(
+    logger=logger,
+    default_save_path='./savepath'
+)
 
 # this fit call loads model weights and trainer state
 # the trainer continues seamlessly from where you left off
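
For context, a minimal sketch of how the snippet added in this patch would typically be driven end to end. The `CoolModel` class and its import path are hypothetical stand-ins for your own LightningModule; the logger and trainer arguments follow the patched docs above.

``` {.python}
from pytorch_lightning import Trainer
from pytorch_lightning.logging import TestTubeLogger

from my_project import CoolModel  # hypothetical LightningModule subclass

# Point the logger at the same save_dir and version that produced the
# checkpoint, so Lightning can locate the previous session to restore.
logger = TestTubeLogger(
    save_dir='./savepath',
    version=1  # an existing version with a saved checkpoint
)

trainer = Trainer(
    logger=logger,
    default_save_path='./savepath'
)

model = CoolModel()

# fit() restores the model weights and trainer state from the checkpoint,
# then continues training from the epoch and global step it left off at
trainer.fit(model)
```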