* warnings.warn doesn't accept tuples, which raises "TypeError: expected string or bytes-like object" when execution reaches the warning. Fixed that; a minimal sketch follows this block.
* Try adding a mock test
* Try adding a mock test
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
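A minimal sketch of the fix described above (the call shown is illustrative, not the exact Lightning code):

```python
import warnings

# Passing a tuple such as ("message", UserWarning) as the first argument makes
# the warnings machinery fail with "TypeError: expected string or bytes-like
# object". The message must be a string (or a Warning instance), with the
# category passed separately:
warnings.warn("Detected a potential misconfiguration; please check your settings.", UserWarning)
```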
* Use high refresh rate on Google Colab (#3786)
Automatically override progress_bar_refresh_rate when running on Google Colab. Also added a constant IS_COLAB in utilities to check whether the code is running in Colab (a sketch follows this block).
* Show a warning instead of overriding when rate is low on colab
* Change warning to suggestion and move it
Moved warning to configure_progress_bar instead of on_trainer_init
* Apply suggestions from code review
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* add a mock test
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
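A hedged sketch of the idea (the Colab detection heuristic and function name below are assumptions for illustration; only the IS_COLAB constant name comes from the commits above):

```python
import sys
import warnings

# Heuristic: the `google.colab` module is only present when running in Colab.
IS_COLAB = "google.colab" in sys.modules


def suggest_refresh_rate(progress_bar_refresh_rate: int) -> None:
    """Suggest (rather than silently override) a higher refresh rate on Colab."""
    if IS_COLAB and 0 < progress_bar_refresh_rate < 20:
        warnings.warn(
            "A low `progress_bar_refresh_rate` can flood the Colab output; "
            "consider using a value of 20 or higher.",
            UserWarning,
        )
```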
* Encapsulate extracting the reference model within the plugin, so that custom wrapper logic can live within the plugin/accelerators (a sketch follows this block)
* Add missing new lines
* Fix call to accelerator
* Removed double blank
* Use accelerator backend
* Handle case where wrapper has not been initialized within the plugin
* Added basic get model tests, added better typing
* Change model name
* Split GPU/DDP test
* Add stronger typing, skip ddp test on windows
* Fix import
* Fix import in dp
* Fixed PEP8 definition
* Add ddp launcher for ddp testing
* Modify accelerator reference model to property, change name to reflect func
* Revert property as this is incorrect.
* Revert across accelerators
* Modified name to get_model_from_plugin
* Code review changes, fix issue with dp
* Add verb to function getter
Co-authored-by: chaton <thomas@grid.ai>
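The gist of encapsulating model extraction in the plugin, as a simplified sketch (the class and method names here are illustrative, and the final getter name in the PR differs):

```python
from typing import Union

import torch
from torch.nn.parallel import DistributedDataParallel


class LightningDistributedDataParallel(DistributedDataParallel):
    """Stand-in for Lightning's DDP wrapper (illustrative)."""


class DDPPlugin:
    def get_model_from_plugin(
        self, model: Union[LightningDistributedDataParallel, torch.nn.Module]
    ) -> torch.nn.Module:
        # If the wrapper has been initialized, return the wrapped module;
        # otherwise the model is still the plain LightningModule.
        if isinstance(model, LightningDistributedDataParallel):
            return model.module
        return model
```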
* Implemented total parameter counts in ModelSummary (a sketch follows this block)
Signed-off-by: George Corrêa de Araújo <george.gcac@gmail.com>
* Fixed documentation; handle modules that are containers for other modules when calculating total params
Signed-off-by: gca <george.gcac@gmail.com>
* Reduced max line length, updated total number of params layout
Signed-off-by: gca <george.gcac@gmail.com>
* Now using only top-level modules of main module to calculate total params
Signed-off-by: gca <george.gcac@gmail.com>
* Added default value for named_modules param in summarize function
Signed-off-by: gca <george.gcac@gmail.com>
* Removed summary function params, removed unused properties
Signed-off-by: gca <george.gcac@gmail.com>
* Changed from np.prod(shape) to numel
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* changelog
* Update pytorch_lightning/core/memory.py
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
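The total-parameter computation boils down to summing numel() over the parameters of the top-level submodules so that container modules are not double-counted. A rough sketch (not the actual ModelSummary code):

```python
import torch.nn as nn


def total_parameters(model: nn.Module) -> int:
    # Iterate only over the top-level children so that parameters owned by
    # nested containers are counted once, and use numel() rather than
    # np.prod(shape) for the per-tensor count.
    return sum(p.numel() for child in model.children() for p in child.parameters())
```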
* Add prefix parameter in loggers
* chlog
* pep
* patch test
* remove args, access via self
* try fix the test
* try fix the test
* try fix the test
* prefix test
* fix assert_has_calls
fix assert call
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
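A hedged usage sketch of the prefix parameter (the separator character shown in the comment is illustrative):

```python
from pytorch_lightning.loggers import TensorBoardLogger

# Every metric logged through this logger gets the prefix prepended to its
# name, e.g. "loss" becomes something like "tb-loss".
logger = TensorBoardLogger(save_dir="lightning_logs", prefix="tb")
```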
Show 1999 parameters as 1.9 K and 1000 parameters as 1.0 K, rather than both as 1 K.
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
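The display change amounts to keeping one decimal place and truncating rather than rounding. A sketch of a formatter with that behaviour (an assumption about the implementation, but it matches the examples above):

```python
import math

PARAMETER_NUM_UNITS = ["", "K", "M", "B", "T"]


def get_human_readable_count(number: int) -> str:
    """Format 1999 as '1.9 K' and 1000 as '1.0 K' instead of both as '1 K'."""
    assert number >= 0
    if number < 1000:
        return str(number)
    # Pick the largest unit that keeps the leading value >= 1.
    index = min(int(math.log10(number) // 3), len(PARAMETER_NUM_UNITS) - 1)
    scaled = number / 1000 ** index
    # Truncate (not round) to one decimal so 1999 -> '1.9 K', not '2.0 K'.
    truncated = math.floor(scaled * 10) / 10
    return f"{truncated:.1f} {PARAMETER_NUM_UNITS[index]}"
```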
* Allow ddp plugin to modify optimizer state saving
* Rely on the accelerator for optimizer states
* Ensure we init the accelerator for the saving function
* Better comment for optim state dump
* Revert "Ensure we init the accelerator for the saving function"
This reverts commit af65effa
* Added accelerator check to initialize tuner before saving model checkpoint
* Simplify comment
* Revert "Added accelerator check to initialize tuner before saving model checkpoint"
This reverts commit f9929c0c
* Return single optimizer state to reduce duplication
* Fixed docstring
* Fixed typing
* Fixed comment
* Added CHANGELOG.md
Co-authored-by: chaton <thomas@grid.ai>
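In outline, checkpoint saving asks the accelerator/plugin for each optimizer's state instead of calling state_dict() on the optimizer directly, so a plugin that wraps or shards the optimizer can override what gets dumped. A simplified sketch with assumed names:

```python
from typing import List

from torch.optim import Optimizer


class DDPPlugin:
    def optimizer_state(self, optimizer: Optimizer) -> dict:
        # Default: a plain PyTorch state dict. Plugins that shard or wrap the
        # optimizer can override this to return a consolidated state instead.
        return optimizer.state_dict()


def dump_optimizer_states(plugin: DDPPlugin, optimizers: List[Optimizer]) -> List[dict]:
    # The checkpoint logic collects one state per optimizer via the plugin hook.
    return [plugin.optimizer_state(opt) for opt in optimizers]
```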
* Allow ddp plugin to move the input to a different device if needed
* Swapped name to on_before_forward to align with hooks in the future
* Update pytorch_lightning/plugins/ddp_plugin.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Pass variable arg type to hook, add example
* Remove blank space (pep check)
* Added blank line
Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
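A hedged sketch of the hook with variable arguments (the override shown is an example of what a custom plugin could do, not code from the PR):

```python
import torch


class DDPPlugin:
    def on_before_forward(self, model: torch.nn.Module, *args):
        """Allow the plugin to move the input to a different device if needed.

        By default the arguments pass through unchanged.
        """
        return args


class MyCustomPlugin(DDPPlugin):
    def on_before_forward(self, model: torch.nn.Module, *args):
        device = next(model.parameters()).device
        # Move tensor arguments onto the model's device before forward.
        return tuple(a.to(device) if isinstance(a, torch.Tensor) else a for a in args)
```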
Adam's beta2 parameter was mistakenly described as the first-order momentum of the gradient, when it is in fact the second-order momentum. This has no effect on the example's correctness.
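For context, the standard Adam moment updates (not code from this repository) make the roles explicit:

```python
def adam_moments(m_prev: float, v_prev: float, grad: float,
                 beta1: float = 0.9, beta2: float = 0.999):
    m_t = beta1 * m_prev + (1 - beta1) * grad       # first-order momentum (mean of gradients)
    v_t = beta2 * v_prev + (1 - beta2) * grad ** 2  # second-order momentum (mean of squared gradients)
    return m_t, v_t
```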
* 1) Added experiment_id to the NeptuneLogger initialization arguments (a usage sketch follows this block).
2) The _create_or_get_experiment() function now overrides "experiment_name", "params", "properties", and "tags".
* Added test case for existing experiment.
* Revert "Added test case for existing experiment."
This reverts commit 9f3ba2e37b.
* Added test case for existing experiment.
* Fix merging issue.
* Moved experiment_id assignment directly to the part with experiment initialization.
* Update pytorch_lightning/loggers/neptune.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
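A hedged usage sketch (the api_key, project_name, and experiment_id values are placeholders): passing experiment_id attaches the logger to an existing Neptune experiment instead of creating a new one.

```python
from pytorch_lightning.loggers import NeptuneLogger

neptune_logger = NeptuneLogger(
    api_key="ANONYMOUS",
    project_name="shared/pytorch-lightning-integration",
    experiment_id="SHAR-123",  # resume logging to this existing experiment
)
```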