Commit Graph

979 Commits

Author SHA1 Message Date
William Falcon 21a1972921
fixed default sampler (#1425) 2020-04-09 08:52:15 -04:00
William Falcon b5c6d0e393
Update __init__.py 2020-04-08 14:44:14 -04:00
William Falcon 764e7e12a7
Update __init__.py 2020-04-08 11:53:23 -04:00
Martin.B fb8d085b5f
Fix TrainsLogger doctest failing (switch to bypass mode in GitHub CI) (#1379)
* Fix TrainsLogger doctest failing (switch to bypass mode in GitHub CI)

* fix

* test ci

* debug

* debug CI

* Fix CircleCI

* Fix Any CI environment switch to bypass mode

* Removed debug prints

* Improve code coverage

* Improve code coverage

* Reverted

* Improve code coverage

* Test CI

* test codecov

* Codecov fix

* remove pragma

Co-authored-by: bmartinn <>
2020-04-08 11:52:52 -04:00
vguizilini 2ae2bd2b46
Print test results only if prog_bar_metrics is not empty (#1411)
* Print test results only if prog_bar_metrics is not empty

* Update evaluation_loop.py

Co-authored-by: vitor-guizilini <vitor.guizilini@tri.global>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-08 11:51:52 -04:00
William Falcon 7d0c2c7db8
Loader docs (#1416)
* added multiple loader docs

* added multiple loader docs

* added multiple loader docs

* added multiple loader docs

* added multiple loader docs

* Apply suggestions from code review

* added multiple loader docs

* added build docs script

* typo

* added build docs script

* added build docs script

* added build docs script

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
2020-04-08 11:38:12 -04:00
david-alexander-white d8cbf8d60c
updated early stopping docs (#1410)
* remove incorrect comment in training_step

* added comment for on_batch_start in hooks.py

* update early stopping docs

* typo fix

* whitespace fix

* Apply suggestions from code review

* Update docs/source/early_stopping.rst

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-08 08:38:53 -04:00
Alexey Karnachev ddbf7de6dc
Added accumulation of loggers' metrics for the same steps (#1278)
* `add_argparse_args` method fixed (argument types added)

* autopep8 fixes

* --gpus=0 removed from test (for ci tests)

* Update pytorch_lightning/trainer/trainer.py

Co-Authored-By: Joe Davison <joe@huggingface.co>

* test_with_accumulate_grad_batches added

* agg_and_log_metrics logic added to the base logger class

* small format fix

* agg metrics strategies removed (not to complicate stuff)

* agg metrics: handle zero step

* autopep8

* changelog upd

* flake fix

* metrics aggregators factored out, metrics_agg.py added + tests

* metrics agg default value added

* Update pytorch_lightning/loggers/metrics_agg.py

Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>

* metrics aggregators factored out, metrics_agg.py added + tests

* metrics agg default value added

* Update pytorch_lightning/loggers/metrics_agg.py

Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>

* remove .item which causes sync issues (#1254)

* remove .item which causes sync issues

* fixed gradient acc sched

* fixed gradient acc sched

* test_metrics_agg.py removed (all tested in docstrings), agg metrics refactored

* test_metrics_agg.py removed (all tested in docstrings), agg metrics refactored

* autopep8

* loggers base.py types fixed

* test

* test

* metrics aggregation for loggers: each key now has a specific function (or default one)

* metrics aggregation for loggers: each key now has a specific function (or default one)

* docstrings upd

* manual typehints removed from docstrings

* batch_size decreased for test `test_with_accumulate_grad_batches`

* extend running accum

* refactor

* fix tests

* fix tests

* allowed_types generator scoped

* trainer.py distutils was imported twice, fixed

* TensorRunningAccum refactored

* TensorRunningAccum added to change log (Changed)

* change log pull link added

Co-authored-by: Joe Davison <joe@huggingface.co>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
2020-04-08 08:35:47 -04:00
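
PR #1278 above adds aggregation of logger metrics reported for the same step: values are buffered per step and merged (averaged by default) before being handed to the logger's log_metrics. The snippet below is a minimal, library-free sketch of that idea; the class and method names are illustrative and not the actual pytorch-lightning internals.

```python
from collections import defaultdict
from statistics import mean


class StepAggregatingLogger:
    """Illustrative sketch: buffer metrics per step and flush an aggregate
    (mean by default) once a newer step arrives."""

    def __init__(self, agg_fn=mean):
        self._agg_fn = agg_fn
        self._step = None
        self._buffer = defaultdict(list)

    def agg_and_log_metrics(self, metrics, step):
        if self._step is not None and step != self._step:
            self._flush()
        self._step = step
        for key, value in metrics.items():
            self._buffer[key].append(value)

    def _flush(self):
        aggregated = {k: self._agg_fn(v) for k, v in self._buffer.items()}
        print(f"step {self._step}: {aggregated}")  # stand-in for the real log_metrics()
        self._buffer.clear()


logger = StepAggregatingLogger()
logger.agg_and_log_metrics({"loss": 0.9}, step=0)
logger.agg_and_log_metrics({"loss": 0.7}, step=0)  # same step -> buffered
logger.agg_and_log_metrics({"loss": 0.5}, step=1)  # new step -> step 0 flushed as the mean
```
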
Jirka Borovec 62822b6f73
fix missing images on pypi (#1407)
* formatting

* fix missing image on pypi

* fix pypi push
2020-04-07 15:42:19 -04:00
William Falcon fdb61cb854
release v0.7.2rc5 2020-04-07 14:41:00 -04:00
Jirka Borovec b780807e73
release 0.7.2rc4 (#1402)
* instructions for changelog

* instructions for changelog

* on
2020-04-07 11:55:27 -04:00
William Falcon 0d2eb95530
Update __init__.py 2020-04-07 09:55:56 -04:00
William Falcon 5ace7d455d
Update __init__.py 2020-04-07 09:50:15 -04:00
William Falcon 91a4ea9b38
Update __init__.py 2020-04-07 09:16:02 -04:00
Asaf Manor 09668df726
Update optimizers.py (#1383) 2020-04-07 09:09:23 -04:00
areshytko 495ffbd028
Tensorboard logger check if lightning_logs directory exists (#1377)
* tensorboard logger version if root_dir does not exist

* update changelog

* resolve comments

Co-authored-by: Alexander Reshytko <areshytko@Alexanders-MacBook-Pro.local>
Co-authored-by: William Falcon <waf2107@columbia.edu>
2020-04-07 06:39:54 -04:00
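
PR #1377 above makes the TensorBoard logger tolerate a missing root directory when deriving the next version number. A hedged usage sketch; the directory and experiment names are placeholders:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# If "lightning_logs/" does not exist yet, the logger should now start at version_0
# instead of failing while scanning for existing versions.
logger = TensorBoardLogger(save_dir="lightning_logs", name="my_experiment")
trainer = Trainer(logger=logger)
```
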
Paweł Rzepiński b8ff9bc1d2
Fix unimplemented type() on TPU (#1396)
* Fix unimplemented type() on TPU

* Add changelog entry

* Add quotation marks
2020-04-06 20:29:55 -04:00
areshytko 9754c5da55
load_spawn_weights only in proc rank 0 (#1385)
Co-authored-by: Alexander Reshytko <areshytko@Alexanders-MacBook-Pro.local>
2020-04-06 10:17:16 -04:00
Roshan Rao 4ed3027309
Set precision=16 when use_amp is passed as True (#1145)
* Set precision=16 when use_amp is passed as True

* Update CHANGELOG.md

* add use_amp to deprecated API

* Update trainer.py

* Update trainer.py

* move the use_amp attribute to deprecated API

* move use_amp deprecation back to Trainer's __init__

* drop unused

* drop deprecated

* reorder imports

* typing

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
2020-04-06 08:13:24 -04:00
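
PR #1145 above deprecates the use_amp flag in favour of precision; passing use_amp=True now maps to precision=16 internally. A hedged example of the preferred call, assuming an AMP-capable GPU/apex setup:

```python
from pytorch_lightning import Trainer

# Old, deprecated style (still works through the deprecation shim):
# trainer = Trainer(use_amp=True)

# Preferred style after this change:
trainer = Trainer(gpus=1, precision=16)
```
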
Adrian Wälchli 26cb5f6817
Improved docs for LightningModule (#1389)
* improve docs for LightningModule

* fix typos

* revert a doctest

* more fixes
2020-04-06 08:12:44 -04:00
Jeremy Jordan 91c9b29d47
add trainer attribute to denote if interrupted (#1368)
* add trainer attribute to denote if interrupted

* bugfix and formatting
2020-04-05 11:12:41 -04:00
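
PR #1368 above adds a Trainer attribute recording whether training was interrupted (e.g. by Ctrl-C). A hedged sketch of consulting that flag after a run; the model is left out here:

```python
from pytorch_lightning import Trainer

trainer = Trainer(max_epochs=10)
# trainer.fit(model)  # `model` would be your LightningModule

# After fit() returns, the attribute tells you whether a KeyboardInterrupt cut the
# run short, so downstream steps (testing, export) can be skipped.
if trainer.interrupted:
    print("Run was interrupted; skipping the test stage.")
```
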
Ethan Harris b18accc64c
Add warning for few workers (#1378)
* Add warning for few workers

* Fix style issue

* Update CHANGELOG.md

* Update test

* formatting

* formatting

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-05 11:07:16 -04:00
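
PR #1378 above introduces a warning when a DataLoader is created with very few workers; raising num_workers is usually the fix. A minimal torch-only example with synthetic data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))

# num_workers=0 keeps data loading on the main process and can bottleneck training;
# the new warning nudges users toward a larger value such as the one below.
loader = DataLoader(dataset, batch_size=32, num_workers=4, shuffle=True)
```
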
William Falcon f1e11d8b38
model_checkpoint to save all models (#1359)
* model_checkpoint to save all models

* changelog

* raise if

Co-authored-by: jamesjjcondon <jamesjjcondon@gmail.com>
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
2020-04-05 15:56:26 +02:00
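
Per #1359 above, ModelCheckpoint can keep every checkpoint rather than only the best ones. A hedged sketch using the 0.7-era filepath argument; the path is a placeholder:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# save_top_k=-1 keeps a checkpoint for every epoch instead of only the top-k
# by the monitored metric (the directory name here is illustrative).
checkpoint_cb = ModelCheckpoint(filepath="checkpoints/", save_top_k=-1)
trainer = Trainer(checkpoint_callback=checkpoint_cb)
```
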
Adrian Wälchli 1f2da71069
Improved docs for callbacks (#1370)
* improved docs for callbacks

* class references

* make doctest pass

* doctests

* fix lines too long

* fix line too long

* fix permission error in doctest

* Apply suggestions from code review

Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>

* fix doctest

* fix default

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-05 09:38:52 +00:00
William Falcon 16f4cc9ff0
Shubhamagarwal92 master (#1349)
* SA: for #958: set torch cuda device when finding root

* SA: for #958: removing root gpu hack in trainer/evaluation_loop

* SA: setting torch cuda device

* comment line too long

* check if root gpu exists or available

* Incorporating suggestions on #1094

* since root GPU returns None instead of -1 for CPU

* undo changes

* fixed dp memory thing

Co-authored-by: Shubham Agarwal <shubhamagarwal92@gmail.com>
2020-04-03 17:56:19 -04:00
Justus Schock f6a86e8551
generalize reinstantiation of dataloader (#1346)
* generalize reinstantiation of dataloader

* fix condition

* add test

* update changelog

* fix changelog

Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
2020-04-03 17:55:08 -04:00
William Falcon e68ba1c836
added warnings to unimplemented methods (#1317)
* added warnings and removed default optimizer

* opt

* Apply suggestions from code review

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-03 15:06:51 -04:00
William Falcon 3c5530c29d
Wandb bug/wandb multi (#1360)
* Allow reinits in sub procs

* Dont create an experiment on pickle, name, or project

* Comments consistency

* Fix test

* Apply suggestions from code review

Co-authored-by: Chris Van Pelt <vanpelt@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-03 15:03:00 -04:00
William Falcon dd5a05926c
Borisdayma: fix(wandb) - fix watch method (#1361)
* fix(wandb): fix watch method

* rebased

* Apply suggestions from code review

Co-authored-by: Boris Dayma <boris.dayma@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-03 15:02:38 -04:00
Jean-Baptiste SCHIRATTI e570d2e1ca
Doc fixes (#1362)
* Doc fixes from #1357 (awaelchli's comments) + changelog.

* Fix indentation.

* Add blank line to fix doc build?
2020-04-03 15:02:20 -04:00
Tullie Murrell 38e89dd890
Fix fast_dev_run running validation twice (#1365) 2020-04-03 15:00:26 -04:00
Tullie Murrell b31edf37bf
Change min max gpu memory to be on their own plots (#1358) 2020-04-03 14:59:27 -04:00
Adrian Wälchli ebd9fc9530
Fix for incorrect run on the validation set with overwritten validation_epoch_end and test_end (#1353)
* reorder if clauses

* fix wrong method overload in test

* fix formatting

* update change_log

* fix line too long
2020-04-03 09:25:32 -04:00
Jean-Baptiste SCHIRATTI 868b172f05
Make training_epoch_end behave like validation_epoch_end (#1357)
* Make training_epoch_end behave like validation_epoch_end + minor fixes in docstrings.

* Minor fixes (Borda's comments).

* Detach tensors in batch_output (to avoid possible memory leak) + doc fix.

Co-authored-by: Jean-Baptiste SCHIRATTI <jean-baptisteschiratti@MacBook-Pro-de-Jean-Baptiste.local>
2020-04-03 14:43:26 +02:00
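
PR #1357 above gives training_epoch_end the same contract as validation_epoch_end: it receives the collected outputs of training_step and can return a dict of logs. A minimal sketch of the hook pair; compute_loss and the rest of the model are placeholders:

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # placeholder for the real loss computation
        return {"loss": loss}

    def training_epoch_end(self, outputs):
        # `outputs` is the list of dicts returned by training_step over the epoch.
        avg_loss = torch.stack([o["loss"] for o in outputs]).mean()
        return {"log": {"train_loss_epoch": avg_loss}}
```
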
William Falcon 2eca8a9ef2
quick patch __code__ (#1352)
* quick patch

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix

* testing fix
2020-04-03 08:40:02 -04:00
Santiago Castro 1576ad9963
Fix docs typo (#1355)
* Fix typo

* Fix typo
2020-04-03 07:35:09 +02:00
Gerard Bentley f33b5a8d99
Simplify progress bar args (#1108)
* show progress bar dependent on refresh_rate

* test progress_bar_refresh control show bar

* remove show_progress_bar from other tests

* borda fixes

* flake8 fix

* changelog update prog bar refresh rate

* move show_progress_bar to deprecated 0.9 api

* rm show_progress_bar references, test deprecated

* Update pytorch_lightning/trainer/__init__.py

* fix test

* changelog

* minor CHANGELOG.md format

* Update pytorch_lightning/trainer/__init__.py

* Update pytorch_lightning/trainer/trainer.py

Co-authored-by: Gerard Bentley <gbkh2015@mymail.pomona.edu>
Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
2020-04-03 00:53:00 +02:00
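
PR #1108 above folds show_progress_bar into a single progress_bar_refresh_rate argument, where a rate of 0 disables the bar. A hedged example:

```python
from pytorch_lightning import Trainer

trainer = Trainer(progress_bar_refresh_rate=20)      # update the bar every 20 batches
quiet_trainer = Trainer(progress_bar_refresh_rate=0)  # disable the progress bar entirely
```
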
Nicki Skafte 2912239fe6
Add useful errors when model is not configured correctly (#1199)
* add check_model_configuration method

* trying to fix errors

* trying to fix tests

* added test_epoch_end to lightning template

* fix tests

* fix new test after rebase

* fix spelling

* added more checks

* updated formatting

* added tests

* fixed CHANGELOG

* Apply suggestions from code review

* move test to new module

* change check on configure_optimizers

Co-authored-by: Nicki Skafte <nugginea@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-02 11:53:37 -04:00
Ethan Harris 28242f02d1
Remove default optimizer, add None optimizer option (#1279)
* Add warning when using default optimizer

* Refactor optimizer tests to test_optimizers

* Remove default optimizer, add option to use no optimizer

* Update CHANGELOG.md

* Update pytorch_lightning/trainer/optimizers.py

Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>

* Fix style

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-02 11:48:53 -04:00
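
PR #1279 above removes the silently-added default optimizer: configure_optimizers must now be explicit, and returning None opts out of automatic optimization. A hedged fragment showing both options:

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def configure_optimizers(self):
        # Explicit optimizer -- there is no silently-added default any more.
        return torch.optim.Adam(self.parameters(), lr=1e-3)


class OptimizerFreeModel(pl.LightningModule):
    def configure_optimizers(self):
        # Returning None is now a supported way to run without an optimizer
        # (useful for custom update rules applied inside training_step).
        return None
```
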
William Falcon 3cb149f4f4
Removes need to unsqueeze from dp (#1319)
* removes need to unsqueeze from dp

* removes need to unsqueeze from dp

* fixed examples

* added auto unsqueeze

* added auto unsqueeze

* added auto unsqueeze

* added auto unsqueeze

* Update pytorch_lightning/overrides/data_parallel.py

Co-Authored-By: Adrian Wälchli <adrian.waelchli@students.unibe.ch>

* fixed dp parse

* fixed dp parse

Co-authored-by: Adrian Wälchli <adrian.waelchli@students.unibe.ch>
2020-04-02 11:46:20 -04:00
Boris Dayma 6b41b5c589
feat(wandb): save models on wandb (#1339)
* feat(wandb): save models on wandb

* docs(changelog): allow to upload models on W&B
2020-04-02 08:55:34 -04:00
Teven 04935ea718
fixed extra dataloader bug (#1196)
* fixed extra dataloader bug

* Update pytorch_lightning/trainer/training_loop.py

Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>

* updated CHANGELOG

* Small non-repetition change

self.get_model() => model as it was already defined

* Update CHANGELOG.md

* changed argument name to reload_train_dataloader_every_epoch

* fixed doc underline too short

* reverted to `reload_dataloaders_every_epoch`

* fixed val and test reloading

* fixed val and test reloading

Co-authored-by: TevenLeScao <teven.lescao@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-04-02 11:41:56 +02:00
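
PR #1196 above fixes dataloader reloading and settles on the reload_dataloaders_every_epoch Trainer flag. A hedged example:

```python
from pytorch_lightning import Trainer

# With this flag, train_dataloader() (and, per the fix, the val/test loaders) are
# rebuilt at every epoch instead of once at the start of training.
trainer = Trainer(reload_dataloaders_every_epoch=True)
```
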
William Falcon e48422df38
Sampler (#1328)
* sampler

* check for dataloader type

* check for dataloader type

* fixed sampler cases
2020-04-01 12:57:36 -04:00
William Falcon 7de51f78ac
Sampler (#1318)
* sampler

* sampler

* sampler

* check for dataloader type

* check for dataloader type
2020-03-31 18:22:45 -04:00
Asaf Manor aca8c7e6f3
Optimizer Frequencies logic, and new configure_optimizers (#1269)
* init_optimizers accepts Dict, Sequence[Dict]
and returns optimizer_frequencies.
optimizer_frequencies was added as a member of Trainer.

* Optimizer frequencies logic implemented in training_loop.
Description added to configure_optimizers in LightningModule

* optimizer frequencies tests added to test_gpu

* Fixed formatting for merging PR #1269

* Apply suggestions from code review

* Apply suggestions from code review

Co-Authored-By: Asaf Manor <32155911+asafmanor@users.noreply.github.com>

* Update trainer.py

* Moving get_optimizers_iterable() outside.

* Update note

* Apply suggestions from code review

* formatting

* formatting

* Update CHANGELOG.md

* formatting

* Update CHANGELOG.md

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-03-31 16:41:24 +00:00
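
PR #1269 above lets configure_optimizers return a sequence of dicts, each carrying an optimizer and an optional frequency that controls how many consecutive batches it is used for. A hedged GAN-style fragment; the generator/discriminator attributes are placeholders:

```python
import torch
import pytorch_lightning as pl


class LitGAN(pl.LightningModule):
    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=2e-4)      # placeholder attribute
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)  # placeholder attribute
        # Alternate: 1 generator step, then 5 discriminator steps, and repeat.
        return (
            {"optimizer": opt_g, "frequency": 1},
            {"optimizer": opt_d, "frequency": 5},
        )
```
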
Jirka Borovec ee68d5ba8e
reorder deprecated args (#1230) 2020-03-31 09:01:47 -04:00
Jirka Borovec 6ddb03922a
Profiler summary (#1259)
* refactor and add types

* add Profiler summary

* fix imports

* Revert "refactor and add types"

This reverts commit b4c552fa

* changelog

* revert rename

* fix test

* mute verbose
2020-03-31 08:57:48 -04:00
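
PR #1259 above improves the profiler's summary report. Enabling the built-in profiler is a one-line Trainer option; a hedged example:

```python
from pytorch_lightning import Trainer

# profiler=True attaches the built-in profiler, which prints a summary of the time
# spent in the main training-loop events when training finishes.
trainer = Trainer(profiler=True)
```
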
Adrian Wälchli 4dcb9d3e30
fixed type hint for weights_summary arg (#1313)
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2020-03-31 13:42:29 +02:00
Adrian Wälchli 1aba411da9
Early stopping when validation is disabled (#1235)
* early stop fallback to train epoch

* added test

* fix imports

* update docs

* update changelog

* fix typo
2020-03-31 06:24:26 +00:00
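
PR #1235 above lets early stopping fall back to metrics produced by the training loop when no validation loop is defined. A hedged example using the 0.7-era early_stop_callback argument; the monitored key must match one your training loop actually logs:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# With validation disabled, monitor a metric reported by the training loop instead.
early_stop = EarlyStopping(monitor="loss", patience=3, mode="min")
trainer = Trainer(early_stop_callback=early_stop)
```
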
Bilal Khan a707d4bea1
Replace Wandb callback's finalize with no-op (#1193)
* Replace Wandb callback's finalize with no-op

* Update pytorch_lightning/loggers/wandb.py

* Update wandb.py

* remove wandb logger's finalize and update tests

* update changelog

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
2020-03-30 18:45:06 -04:00