From 9e6ce3b0d6e018f78d1bf785ca2c888aaf2f9fbc Mon Sep 17 00:00:00 2001
From: William Falcon
Date: Sat, 31 Aug 2019 03:19:08 -0400
Subject: [PATCH] testing loop docs

---
 docs/Trainer/Testing loop.md | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)
 create mode 100644 docs/Trainer/Testing loop.md

diff --git a/docs/Trainer/Testing loop.md b/docs/Trainer/Testing loop.md
new file mode 100644
index 0000000000..8424be20cb
--- /dev/null
+++ b/docs/Trainer/Testing loop.md
@@ -0,0 +1,31 @@
+To ensure you don't accidentally use test data to guide training decisions, Lightning makes running the test set a deliberate step.
+
+---
+#### test
+You have two options for running the test set.
+The first case is testing right after a full training routine.
+```{.python}
+# run full training
+trainer.fit(model)
+
+# run test set
+trainer.test()
+```
+
+The second case is loading a model from a checkpoint and running the test set.
+```{.python}
+model = MyLightningModule.load_from_metrics(
+    weights_path='/path/to/pytorch_checkpoint.ckpt',
+    tags_csv='/path/to/test_tube/experiment/version/meta_tags.csv',
+    on_gpu=True,
+    map_location=None
+)
+
+# init trainer with whatever options
+trainer = Trainer(...)
+
+# test (pass in the model)
+trainer.test(model)
+```
+In this second case, the options you pass to the Trainer are the ones used when running the test set (e.g. 16-bit, dp, ddp, etc.).
+
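+Whichever way you launch it, the test loop itself is driven by the test hooks on your `LightningModule`. Below is a minimal sketch of what those hooks might look like; the hook names (`test_dataloader`, `test_step`, `test_end`), the toy dataset, and the model internals are illustrative assumptions and are not defined in this patch.
+```{.python}
+import torch
+from torch import nn
+from torch.nn import functional as F
+from torch.utils.data import DataLoader, TensorDataset
+
+import pytorch_lightning as pl
+
+
+class MyLightningModule(pl.LightningModule):
+    def __init__(self):
+        super().__init__()
+        self.l1 = nn.Linear(28 * 28, 10)
+
+    def forward(self, x):
+        return self.l1(x.view(x.size(0), -1))
+
+    def test_dataloader(self):
+        # hypothetical stand-in data; swap in your real test set
+        data = TensorDataset(torch.randn(64, 1, 28, 28),
+                             torch.randint(0, 10, (64,)))
+        return DataLoader(data, batch_size=32)
+
+    def test_step(self, batch, batch_nb):
+        # called once per batch of the test set
+        x, y = batch
+        y_hat = self.forward(x)
+        return {'test_loss': F.cross_entropy(y_hat, y)}
+
+    def test_end(self, outputs):
+        # aggregates the dicts returned by every test_step call
+        avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
+        return {'avg_test_loss': avg_loss}
+```
+`trainer.test()` then runs `test_step` over every batch from `test_dataloader` and collects the metrics that `test_end` returns.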