Commit Graph

3746 Commits

Author SHA1 Message Date
edenlightning 78076ea0d9
Replace readme DQN link with bolts implementation (#4841) 2020-11-24 23:45:25 +01:00
Adrian Wälchli fb0278a457
Update test for logging a metric object and state reset (#4825)
* update test

* docstring

Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
2020-11-24 11:28:02 +01:00
Adrian Wälchli e971437551
Document behaviour when setting both on_step=True and on_epoch=True in self.log (#4327)
* update logging.rst

* logger of choice

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>

* add metrics reference

* trigger ci

* Revert "trigger ci"

This reverts commit 97bf461cf9.

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Roger Shieh <sh.rog@protonmail.ch>
Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
2020-11-24 10:41:31 +01:00
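
As a quick illustration of the behaviour documented in the entry above (a minimal sketch; `compute_loss` is a hypothetical helper, and the `_step`/`_epoch` suffixes are the usual convention rather than a guarantee for every logger):

```python
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # hypothetical helper
        # With both flags set, the value is sent to the logger of choice at
        # every step *and* aggregated once per epoch, typically surfacing as
        # "train_loss_step" and "train_loss_epoch".
        self.log("train_loss", loss, on_step=True, on_epoch=True)
        return loss
```
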
Teddy Koker 5b74effb1a
Update lr_monitor.py (#4826) 2020-11-24 02:07:33 -05:00
Peter Gagarinov 70361ebb6d
Fixed a crash bug in MLFlow logger (#4716)
* warnings.warn doesn't accept tuples, which causes "TypeError: expected string or bytes-like object" when the execution flow gets to this warning. Fixed that.

* Try adding a mock test

* Try adding a mock test

Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
2020-11-24 00:50:34 -05:00
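
To make the failure mode described in the entry above concrete, a minimal reproduction (the filter and message below are illustrative, not the MLFlow logger's actual code):

```python
import warnings

# Any warning filter that carries a message pattern makes Python run
# re.match(pattern, message); a tuple message then raises
# "TypeError: expected string or bytes-like object".
warnings.filterwarnings("ignore", message="some unrelated pattern")
try:
    warnings.warn(("unsupported metric", 42))
except TypeError as err:
    print(err)  # expected string or bytes-like object

# The fix is simply to format everything into a single string first.
warnings.warn(f"Discarding metric with unsupported value: {('unsupported metric', 42)}")
```
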
Jungwhan 471ca375ba
Fix moving torchtext data to GPU (#4785)
Co-authored-by: chaton <thomas@grid.ai>
2020-11-24 00:27:14 -05:00
Jeff Yang 7d96fd1168
[tests/checkpointing] refactor with BoringModel (#4661)
* [tests/checkpointing] refactor with BoringModel

* [tests/checkpointing] refactor with BoringModel

* [tests/checkpointing] refactor with BoringModel

* LessBoringModel -> LogInTwoMethods

* LessBoringModel -> LogInTwoMethods

* LessBoringModel -> TrainingStepCalled

Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
2020-11-24 01:23:12 +01:00
Adrian Wälchli 89e8796e2a
fix incomplete progress bar when refresh_rate > num batches (#4577)
* fix progress bar overshoot

* fix updates for partially incomplete main progress bar when val loop starts

* add tests

* chlog
2020-11-24 00:01:33 +01:00
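
The symptom fixed above can be sketched with a bare tqdm bar (illustrative only; the actual callback logic differs):

```python
from tqdm import tqdm

num_batches, refresh_rate = 5, 10
bar = tqdm(total=num_batches)
for i in range(num_batches):
    # Updating only every `refresh_rate` batches never fires here, because
    # refresh_rate > num_batches, so the bar would end stuck at 0/5 ...
    if (i + 1) % refresh_rate == 0:
        bar.update(refresh_rate)
# ... unless the final partial chunk is flushed explicitly, which is what
# the fix ensures.
bar.update(num_batches - bar.n)
bar.close()
```
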
Sean Naren 9186abe73c
[docs] Add step to ensure sync_dist is added to logging when multi-gpu is enabled (#4817)
* Add additional check to ensure validation/test steps are updated accordingly

* Update docs/source/multi_gpu.rst

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>

* Update docs/source/multi_gpu.rst

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>

* Update docs/source/multi_gpu.rst

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>

* Update docs/source/multi_gpu.rst

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-11-23 22:08:13 +00:00
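
The step the docs add boils down to passing `sync_dist=True` when logging inside validation/test steps (a sketch; `compute_loss` is a hypothetical helper):

```python
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # hypothetical helper
        # On multi-GPU runs, sync_dist=True reduces the value across
        # processes before it is logged, instead of logging rank-local values.
        self.log("val_loss", loss, on_epoch=True, sync_dist=True)
```
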
Samyak S Sarnayak ccf38ced2e
Use high progress_bar_refresh_rate on Google Colab (#4654)
* Use high refresh rate on Google Colab (#3786)

Automatically override progress_bar_refresh_rate when on Google
Colab. Also added a constant IS_COLAB in utilities to check
whether it is being run in Colab or not.
(#3786)

* Show a warning instead of overriding when rate is low on colab

* Change warning to suggestion and move it

Moved warning to configure_progress_bar instead of on_trainer_init

* Apply suggestions from code review

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* add a mock test

Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
2020-11-24 02:13:33 +05:30
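
A sketch of the kind of check involved (the exact detection behind the `IS_COLAB` constant may differ, and the threshold below is illustrative):

```python
import importlib.util

# Colab environments ship the `google.colab` package, so its availability
# is a reasonable proxy for "running inside Colab".
IS_COLAB = importlib.util.find_spec("google.colab") is not None

progress_bar_refresh_rate = 1
if IS_COLAB and progress_bar_refresh_rate < 20:
    print("Suggestion: raise progress_bar_refresh_rate (e.g. to 20) on Colab "
          "to avoid flooding the output widget.")
```
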
Boris Dayma c586e5db77
feat(wandb): let wandb cli handle runs (#4648)
* feat(wandb): reinit handled by CLI

* fix: typo

* docs(wandb): improve formatting

* test(wandb): set wandb.run to None

* test(wandb): fix tests

* style: fix formatting

* docs(wandb): fix documentation

* Update code markup

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* docs(wandb): update CHANGELOG

* test(wandb): init called only when needed

* Update CHANGELOG.md

* try fix the test

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: edenlightning <66261195+edenlightning@users.noreply.github.com>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
2020-11-24 01:31:28 +05:30
Sean Naren 404af43cde
5/n: Extract reference model call to plugins/accelerators (#4773)
* Encapsulate extracting reference model within the plugin to allow custom wrapper logic to live within the plugin/accelerators

* Add missing new lines

* Fix call to accelerator

* Removed double blank

* Use accelerator backend

* Handle case where wrapper has not been initialized within the plugin

* Added basic get model tests, add better typing

* Change model name

* Split GPU/DDP test

* Add stronger typing, skip ddp test on windows

* Fix import

* Fix import in dp

* Fixed PEP8 definition

* Add ddp launcher for ddp testing

* Modify accelerator reference model to property, change name to reflect func

* Revert property as this is incorrect.

* Revert across accelerators

* Modified name to get_model_from_plugin

* Code review changes, fix issue with dp

* Add verb to function getter

Co-authored-by: chaton <thomas@grid.ai>
2020-11-23 17:21:47 +00:00
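
The core idea, as a minimal sketch (class and method names here are illustrative of the intent, not the final accelerator/plugin API):

```python
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel

class ReferenceModelPluginSketch:
    def get_model_from_plugin(self, model: nn.Module) -> nn.Module:
        # The plugin owns the wrapper, so it is the right place to unwrap it;
        # if the wrapper has not been initialized yet, return the model as-is.
        if isinstance(model, DistributedDataParallel):
            return model.module
        return model
```
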
Nicki Skafte 6831ba9aa0
[Metrics] Unification of FBeta (#4656)
* implementation

* init files

* more stable reduction

* add tests

* docs

* remove old implementation

* pep8

* changelog

* Apply suggestions from code review

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

Co-authored-by: Nicki Skafte <nugginea@gmail.com>
Co-authored-by: Teddy Koker <teddy.koker@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-11-23 09:44:35 +01:00
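
For reference, the metric being unified is the general F-beta score (beta = 1 recovers F1):

```latex
F_\beta = (1 + \beta^2)\,
          \frac{\mathrm{precision} \cdot \mathrm{recall}}
               {\beta^2 \cdot \mathrm{precision} + \mathrm{recall}}
```
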
Mohamed Al Salti cd90dd429b
Fix batch_arg_name bug (#4812)
Add `batch_arg_name` to all calls to `_adjust_batch_size`
2020-11-23 11:34:11 +05:30
Simon-Martin Schröder 8601268c70
Fix #4375: Always use trainer.global_step for step (#4376)
* Fix #4375: Always use trainer.global_step for step

* Changelog

* Remove superfluous use of "epoch"

* Update Changelog

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
2020-11-22 13:02:06 +01:00
Teddy Koker 299de5dc62
don't override PYTHONWARNINGS (#4700)
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-11-22 11:25:24 +01:00
edenlightning a716ea60e1
Clarify checkpoint deprecation message (#4640)
* Clarify checkpoint deprecation message

* Update pytorch_lightning/trainer/connectors/callback_connector.py

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* Apply suggestions from code review

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jeff Yang <ydcjeff@outlook.com>
Co-authored-by: Roger Shieh <sh.rog@protonmail.ch>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-11-22 07:35:54 +01:00
George b29757da90
Implemented ModelSummary total params values (#4521)
* Implemented ModelSummary total params values

Signed-off-by: George Corrêa de Araújo <george.gcac@gmail.com>

* Fixed documentation, handling modules that are containers for other modules when calculating total params

Signed-off-by: gca <george.gcac@gmail.com>

* Reduced max line length, updated total number of params layout

Signed-off-by: gca <george.gcac@gmail.com>

* Now using only top-level modules of main module to calculate total params

Signed-off-by: gca <george.gcac@gmail.com>

* Added default value for named_modules param in summarize function

Signed-off-by: gca <george.gcac@gmail.com>

* Removed summary function params, removed unused properties

Signed-off-by: gca <george.gcac@gmail.com>

* Changed from np.prod(shape) to numel

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* changelog

* Update pytorch_lightning/core/memory.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-11-22 07:07:52 +01:00
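
The core counting change can be sketched on a toy model (the real code lives in `pytorch_lightning/core/memory.py`):

```python
import numpy as np
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))

# p.numel() replaces np.prod(p.shape); iterating the parameters of the main
# module once (rather than summing per-submodule counts) avoids counting
# parameters of nested containers twice.
total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
assert total == sum(int(np.prod(p.shape)) for p in model.parameters())
print(f"{total:,} total / {trainable:,} trainable params")
```
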
Rohit Gupta 2d9d7e4daa
Add prefix argument in loggers (#4557)
* Add prefix parameter in loggers

* chlog

* pep

* patch test

* remove args, access via self

* try fix the test

* try fix the test

* try fix the test

* prefix test

* fix assert has calls


fix assert call

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-11-22 06:38:58 +01:00
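
Usage sketch of the new `prefix` argument (the exact separator placed between the prefix and each metric name is an implementation detail):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Every metric name sent through this logger gets "pretrain" prepended,
# which helps keep several stages or runs apart in one dashboard.
logger = TensorBoardLogger(save_dir="logs", prefix="pretrain")
trainer = Trainer(logger=logger)
```
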
Rohit Gupta db69d169e8
Deprecate prefix argument in ModelCheckpoint (#4765)
* Deprecate prefix in ModelCheckpoint

* chlog

Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-11-21 18:08:42 +05:30
YI-LIN SUNG 69b9949192
[docs] Remove the redundant indents in trainer.py (#4720)
* Remove the redundant indents in trainer.py

* Update pytorch_lightning/trainer/trainer.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Roger Shieh <sh.rog@protonmail.ch>
2020-11-21 08:15:09 +06:30
Jonas Haag 8dfbf6371b
Model summary: add 1 decimal place (#4745)
Show 1999 parameters as 1.9 K and 1000 parameters as 1.0 K, rather than both as 1 K.

Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
2020-11-20 23:22:21 +01:00
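
A sketch of the formatting behaviour described above (the real helper may handle rounding and larger units slightly differently):

```python
import math

def human_readable_count(number: int) -> str:
    # Keep one (truncated) decimal so 1999 -> "1.9 K" and 1000 -> "1.0 K",
    # instead of both collapsing to "1 K".
    units = ["", " K", " M", " B", " T"]
    idx = 0
    value = float(number)
    while value >= 1000 and idx < len(units) - 1:
        value /= 1000.0
        idx += 1
    if idx == 0:
        return str(int(value))
    return f"{math.floor(value * 10) / 10:.1f}{units[idx]}"

print(human_readable_count(1999), "/", human_readable_count(1000))  # 1.9 K / 1.0 K
```
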
Jirka Borovec b10b11deb0
add version for CPU users (#4794) 2020-11-20 21:32:13 +01:00
Jirka Borovec 500e2853f3
increase Parity threshold (#4795)
* increase Parity threshold

* typos

* increase

* increase
2020-11-20 19:58:45 +00:00
Jirka Borovec 94a9d3d283
Update examples - use DataModule (#4740)
* rename

* add mnist_datamodule.py

* dm

* fix

* imports

* clean

* imports

* transforms

* skip
2020-11-20 23:40:40 +05:30
Roger Shieh 42e59c6add
Cast hparams to dict when not using omegaconf (#4770)
* init fix

* init test

* more specific dict assert

* update changelog

* Update tests/checkpointing/test_model_checkpoint.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-11-20 19:53:05 +08:00
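
The gist of the fix, as a hedged sketch (the helper name is illustrative; this is not the actual connector code):

```python
from argparse import Namespace

def hparams_for_checkpoint(hparams):
    # omegaconf configs are stored unchanged; anything else is coerced to a
    # plain dict so loading the checkpoint needs no custom container classes.
    try:
        from omegaconf import Container
        if isinstance(hparams, Container):
            return hparams
    except ImportError:
        pass
    if isinstance(hparams, Namespace):
        return vars(hparams)
    return dict(hparams)
```
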
chaton 4803f681b0
[FEAT] DDP: Create DDPLauncher (#4515)
* test

* poc

* add simpler test for ddp

* typo

* resolve pep8

* try coverage testing

* trying to add coverage inside ddp

* resolve flake8

* update

* forgot coverage

* move .coveragerc

* update rcfile path

* update

* test

* update

* adding description

* add DDPLauncher decorator

* add undecorated

* push update

* update ddp testing

* Update tests/backends/launcher.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update tests/backends/launcher.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* update on comments

* update on comments

* resolve comments

* resolve isort

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-11-20 10:17:46 +00:00
chaton 6e788d2dc6
Add lightning-geometric (#4771)
* add link

* typo

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
2020-11-20 08:45:04 +01:00
Jirka Borovec e752348e94
update stale bot (#4769)
Co-authored-by: chaton <thomas@grid.ai>
2020-11-19 21:45:44 +01:00
chaton 8e8263ab6c
update docs (#4739) 2020-11-19 22:23:10 +05:30
Yang Zhang 18730e307f
Add masked language modeling to community examples (#4744)
Add masked language modeling (based on Transformers) to community examples

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-11-19 08:35:25 +00:00
Roger Shieh cc8359be26
Proper casting for np scalars in hparams logging (#4647)
* first implementation

* add test and changelog

* Update tests/loggers/test_base.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* pep8

* rounding

* increase casting specificity to bool + number

* bugfix

* changelog formatting

* single loop

* Update CHANGELOG.md

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: chaton <thomas@grid.ai>
2020-11-19 15:52:48 +08:00
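
The casting rule being introduced, sketched (function name is illustrative):

```python
import numpy as np

def sanitize_param(value):
    # Only numpy bools and numbers are cast (hence the "bool + number"
    # specificity); .item() yields the native Python equivalent.
    if isinstance(value, (np.bool_, np.integer, np.floating)):
        return value.item()
    return value

print(type(sanitize_param(np.float32(0.25))))  # <class 'float'>
print(type(sanitize_param(np.bool_(True))))    # <class 'bool'>
```
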
Jeff Yang c36121326d
[metrics] Update SSIM (#4566)
* [metrics] Update SSIM

* [metrics] Update SSIM

* [metrics] Update SSIM

* [metrics] Update SSIM

* [metrics] update ssim

* dist_sync_on_step True

* [metrics] update ssim

* Update tests/metrics/regression/test_ssim.py

Co-authored-by: chaton <thomas@grid.ai>

* Update pytorch_lightning/metrics/functional/ssim.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* ddp=True

* Update test_ssim.py

Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
2020-11-19 11:51:18 +06:30
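
For reference, the per-window quantity this metric computes is the standard structural similarity index:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}
       {(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}
```
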
Tadej Svetina 7383a2c912
Change version in __init__.py 1.0.4 -> 1.1-dev (#4760)
* Change version in __init__.py 1.0.4 -> 1.0.7

* fix ver

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-11-19 07:56:30 +06:30
Sean Naren f0ab74dc2f
Expose scaler in amp plugin (#4737) 2020-11-18 22:30:47 +00:00
ananthsub 45c57600af
Move init_ddp_connection to DDP Plugin (#4407)
* Move init_ddp_connection to DDP Plugin

* cluster-env

* trainer?

* imports

* Update ddp_plugin.py

Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
2020-11-18 15:49:22 -05:00
chaton b7601e9deb
[Example] Add Pytorch Geometric Example (#4568)
* add example for Pytorch Geometric

* remove hydra

* add docstring

* remove description

* rename folder

* update script to not break test

* remove .lock

* add Pytorch Geometric to doc

* add docstring at the beginning

* add comments

* Update pl_examples/pytorch_ecosystem/pytorch_geometric/README.md

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update pl_examples/pytorch_ecosystem/pytorch_geometric/README.md

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update pl_examples/pytorch_ecosystem/pytorch_geometric/cora_dna.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* add toml

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
2020-11-18 20:03:55 +00:00
Sean Naren e7134a9135
Sharded Plugin 2/n: Allow ddp plugin to modify optimizer state saving (#4675)
* Allow ddp plugin to modify optimizer state saving

* Rely on the accelerator for optimizer states

* Ensure we init the accelerator for the saving function

* Better comment for optim state dump

* Revert "Ensure we init the accelerator for the saving function"

This reverts commit af65effa

* Added accelerator check to initialize tuner before saving model checkpoint

* Simplify comment

* Revert "Added accelerator check to initialize tuner before saving model checkpoint"

This reverts commit f9929c0c

* Return single optimizer state to reduce duplication

* Fixed docstring

* Fixed typing

* Fixed comment

* Added CHANGELOG.md

Co-authored-by: chaton <thomas@grid.ai>
2020-11-18 16:38:35 +00:00
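
The hook's intent, as a minimal sketch (class and method names are illustrative of the idea, not a guaranteed API):

```python
from typing import Any, Dict
from torch.optim import Optimizer

class DDPPluginStateSketch:
    def optimizer_state(self, optimizer: Optimizer) -> Dict[str, Any]:
        # Default behaviour: hand back the regular state dict. A sharded
        # plugin can override this to consolidate partitioned optimizer
        # state across ranks before it is written to the checkpoint.
        return optimizer.state_dict()
```
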
Sean Naren 8283680aa0
Sharded Plugin 3/n: Expose step input to DDP plugin (#4686)
* Allow ddp plugin to move the input to a different device if needed

* Swapped name to on_before_forward to align with hooks in the future

* Update pytorch_lightning/plugins/ddp_plugin.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Pass variable arg type to hook, add example

* Remove blank space (pep check)

* Added blank line

Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-11-18 15:45:30 +00:00
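
The intent of the new hook, sketched (the signature shown is illustrative):

```python
import torch
import torch.nn as nn

class DDPInputPluginSketch:
    def on_before_forward(self, model: nn.Module, *args):
        # Called with the step input before it reaches the wrapped model's
        # forward; a plugin can move tensors to another device here.
        device = next(model.parameters()).device
        return tuple(a.to(device) if torch.is_tensor(a) else a for a in args)
```
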
Jirka Borovec 5fd1afb38a
Delay PyPI releasing (#4730)
* Delay PyPI releasing

* Delay PyPI releasing

Co-authored-by: chaton <thomas@grid.ai>
2020-11-18 15:15:41 +01:00
Jirka Borovec 5ea383332d
update chlog after 1.0.7 release (#4735) 2020-11-18 12:26:41 +00:00
Jirka Borovec bddc6cd77a
pytest default color (#4703)
* pytest default color

* time

Co-authored-by: chaton <thomas@grid.ai>
2020-11-18 10:53:44 +00:00
Carlos Mocholí 396a46f55f
Add current_score to ModelCheckpoint.on_save_checkpoint (#4721)
* Add current_score to ModelCheckpoint.on_save_checkpoint

* Update CHANGELOG

[ci skip]

* fix

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* fix2

* Add test for NaN

* Fix failing tests

* Simplify line

* Add test docstrings

Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
2020-11-18 08:09:44 +00:00
Akihiro Nitta ece09f0c5f
Apply import formatting to files in the 2nd top level (#4717)
* Update pyproject.toml

* Apply isort to files in second level

Co-authored-by: chaton <thomas@grid.ai>
2020-11-18 00:29:09 +01:00
Jirka Borovec 9a5d40aff4
test PL examples (#4551)
* test PL examples

* minor formatting

* skip failing

* skip failing

* args

* mnist datamodule

* refactor tests

* refactor tests

* skip

* skip

* drop DM

* drop DM

Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
2020-11-17 19:35:17 +01:00
Jay Mody b8a1916453
Update trainer.rst (#4722)
small spelling fix
2020-11-17 23:07:15 +05:30
Ludger Paehler 7c4356464c
Minor typo in the description of Adam's beta 2 (#4715)
Adam's beta 2 parameter was mistakenly referred to as the first order momentum of the gradient, whereas it should be the second order momentum. This has no effect on the correct working of the example.
2020-11-17 17:00:36 +01:00
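
For reference, Adam's two moment estimates, which the docstring mixed up:

```latex
m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t      % first moment (mean of gradients)
v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2    % second moment (uncentered variance)
```
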
Maxim Ostroukhov c208ac68c8
Added experiment_id to NeptuneLogger (#3462)
* 1) Added experiment_id to NeptuneLogger initialization input arguments.
2) Now function _create_or_get_experiment() overrides "experiment_name", "params", "properties", "tags".

* Added test case for existing experiment.

* Revert "Added test case for existing experiment."

This reverts commit 9f3ba2e37b.

* Added test case for existing experiment.

* Fix merging issue.

* Moved experiment_id assignment directly to the part with experiment initialization.

* Update pytorch_lightning/loggers/neptune.py

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-11-16 23:50:23 +05:30
chaton 96769a7184
quick fix (#4697) 2020-11-16 16:20:35 +00:00
Nicki Skafte 51097669b9
[metrics] change default behaviour of state dict (#4685)
* fix state dict

* Update docs/source/metrics.rst

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* changelog

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
2020-11-16 12:33:45 +00:00