From 051d9b1d940f6c90aec128215ac142a35dbe02cf Mon Sep 17 00:00:00 2001
From: Paul
Date: Wed, 14 Oct 2020 19:14:40 +0200
Subject: [PATCH] Add trainer flag step [ci skip] (#4147)

* Add trainer flag step

Add step to disable automatic optimization in the trainer @justusschock

* Apply suggestions from code review

Co-authored-by: Nicki Skafte
Co-authored-by: Jirka Borovec
Co-authored-by: Nicki Skafte
---
 docs/source/optimizers.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/source/optimizers.rst b/docs/source/optimizers.rst
index a8fbd202be..397effa300 100644
--- a/docs/source/optimizers.rst
+++ b/docs/source/optimizers.rst
@@ -20,9 +20,9 @@ Manual optimization
 ===================
 For advanced research topics like reinforcement learning, sparse coding, or GAN research, it may be desirable
 to manually manage the optimization process. To do so, do the following:
-
-* Ignore the optimizer_idx argument
-* So we can scale the loss automatically for you use self.manual_backward(loss) instead of loss.backward()
+* Disable automatic optimization in Trainer: Trainer(automatic_optimization=False)
+* Drop or ignore the optimizer_idx argument
+* Use `self.manual_backward(loss)` instead of `loss.backward()` to automatically scale your loss
 
 .. code-block:: python
 
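For context, the workflow the new bullets describe would look roughly like the sketch below. This is an illustrative example, not the code block from the patched file: the module name ManualOptimModule, the toy linear task, and the single-optimizer setup are invented for demonstration, and it assumes the Lightning 1.0-era API in which manual_backward is passed the optimizer explicitly and the configured optimizers are reachable via self.trainer.optimizers.

.. code-block:: python

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl


    class ManualOptimModule(pl.LightningModule):
        # Hypothetical module, invented for illustration.

        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            # Single optimizer, so the optimizer_idx argument is dropped;
            # with multiple optimizers it would be accepted and ignored.
            opt = self.trainer.optimizers[0]

            x, y = batch
            loss = nn.functional.cross_entropy(self.layer(x), y)

            # manual_backward instead of loss.backward(), so Lightning can
            # still scale the loss for you (e.g. under AMP).
            self.manual_backward(loss, opt)
            opt.step()
            opt.zero_grad()

        def configure_optimizers(self):
            return torch.optim.SGD(self.layer.parameters(), lr=0.1)


    # The step this patch documents: disable automatic optimization.
    trainer = pl.Trainer(automatic_optimization=False, max_epochs=1)
    train_data = DataLoader(
        TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))),
        batch_size=16,
    )
    trainer.fit(ManualOptimModule(), train_data)

With several optimizers (e.g. a GAN), training_step would keep its optimizer_idx parameter but ignore it, fetching and stepping each optimizer by hand instead.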