Commit Graph

306 Commits

Author SHA1 Message Date
Jirka Borovec 2232d37225
new log section [skip ci] (#4121)
* new logs

* formatting

* 1.0.1
2020-10-14 19:13:37 +02:00
Jirka Borovec 19df091a3b
chlogs for 1.0 [skip ci] (#3978)
* chlogs

* logs

* space

* date

* logs

* logs

* Apply suggestions from code review

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* logs

* logs

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-10-13 06:52:11 -04:00
Rohit Gupta bdbf846029
Fix to print scalar value in progress bar (#4053)
* Fix to print scalar value in progress bar

* chlog

* Fix to print scalar value in progress bar

* Fix to print scalar value in progress bar
2020-10-10 12:20:11 -04:00
William Falcon 05e0b4e5a1
Revert "Remove limitation of batch scaler (#4006)" (#4040)
This reverts commit 7e756ca11f.
2020-10-09 21:03:23 -04:00
Nicki Skafte 7e756ca11f
Remove limitation of batch scaler (#4006)
* working code

* add tests

* fix scaling

* move patch dataloader to utils

* renaming

* fix tests

* add changelog

* update docs

* pep8
2020-10-09 14:53:01 -04:00
Nrupatunga fcfa587492
Bugfix/update trainer properties (#3975)
* make current_epoch and global_step the same as the trainer's after model restore.

* remove assignment here

* test

* minor modification

* merge with parent's master

* [bug-fix]: update trainer properties

* minor comment fix

* minor comment fix

* reset train loader in `on_train_epoch_start` hook

* makes sure the changes work

* minor change

* update changelog

* adding unit test for reload_dataloaders_every_epoch arg

* modified changelog, to add PR number

* revert imports

* changes to unit test

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-10-08 10:20:55 -04:00
Jirka Borovec 2bd045bb4a
new chlog template (#3963) 2020-10-07 20:42:24 -04:00
Jirka Borovec fef9b7777f
missing logs [to be merged before release][skip ci] (#3131)
* missing logs

* chlog

* add chlo to docs

* Apply suggestions from code review

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* chlog

* docs

* log

* metrics

* logs

* log

* format

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-10-07 14:51:35 -04:00
ananthsub d3f40d6a9e
Update to_disk to use fsspec for remote file support (#3930)
* Update supporters.py

* Update CHANGELOG.md

* Update supporters.py

* Update supporters.py

* Update supporters.py

* Update supporters.py

* Update supporters.py

* Update supporters.py

* Update CHANGELOG.md

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-10-07 07:28:23 -04:00
Ananya Harsh Jha f76bc5254e
revamp entire metrics (#3868)
* removed metric

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* added new metrics

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* pep8

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* pep8

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* win ddp tests skip

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* win ddp tests skip

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* win ddp tests skip

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* win ddp tests skip

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* reset in compute, cache compute

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* reduce_ops handling

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* sync -> sync_dist, type annotations

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* wip docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* mean squared error

* docstring

* added mean ___ error metrics

* added mean ___ error metrics

* separated files

* accuracy doctest

* gpu fix

* remove unnecessary mixin

* metric and accuracy docstring

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* metric docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* pep8, changelog

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* refactor dist utils, pep8

* refactor dist utils, pep8

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>
2020-10-06 17:03:24 -04:00
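
The revamp replaces the old metric classes with a stateful `Metric` base class whose registered states are synced across processes. A minimal sketch of a custom metric under the new API (the metric itself is illustrative, not taken from the PR):

```python
import torch
from pytorch_lightning.metrics import Metric

class MyAccuracy(Metric):
    def __init__(self):
        super().__init__()
        # add_state registers state tensors that are reset automatically
        # and reduced across processes with dist_reduce_fx
        self.add_state("correct", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")

    def update(self, preds: torch.Tensor, target: torch.Tensor):
        self.correct += (preds.argmax(dim=-1) == target).sum()
        self.total += target.numel()

    def compute(self):
        # compute() resets state and caches its result until the next update
        return self.correct.float() / self.total
```
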
maxjeblick 39b3704285
add current_epoch to dumped_params (#3261)
* add current epoch to __dumped_params

* log

* reset

* add to test

* Update CHANGELOG.md

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-10-06 19:54:48 +02:00
Lezwon Castelino 69833dad5b
Added check to verify xla device is TPU (#3274)
* tpu device check

* replaced with xmp spawn

* Revert "replaced with xmp spawn"

This reverts commit 6835380f

* replaced all instances of XLA_AVAILABLE

* moved inner_f to global scope

* made refactors

* added changelog

* added TPU_AVAILABLE variable

* fix codefactor issues

* removed from trainer and early stopping

* add TORCHXLA_AVAILABLE check

* added tests

* refactoring

* Update pytorch_lightning/utilities/xla_device_utils.py

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* updated function names

* fixed bug

* updated CHANGELOG.md

* added todo

* added type hints

* isort and black

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
2020-10-06 19:54:37 +02:00
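
The check added here distinguishes a real TPU from a machine that merely has torch_xla installed. A hedged sketch of querying the utility (class and method names assumed from the PR's `xla_device_utils.py`):

```python
from pytorch_lightning.utilities.xla_device_utils import XLADeviceUtils

# returns True only when an actual TPU device is reachable,
# not just when the torch_xla package is importable
if XLADeviceUtils.tpu_device_exists():
    print("TPU available")
```
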
Teddy Koker 9600926619
Rename log_save_interval, row_log_interval (#3748)
* Rename row_log_interval -> log_every_n_steps
log_save_interval -> flush_logs_every_n_steps

* Changelog

* fixed title underline length

* typo

* Update pytorch_lightning/trainer/trainer.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update pytorch_lightning/trainer/trainer.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* pep8 + deprecation test

* 'todo: remove in 1.1 comment'

* 1.1 -> 0.11

* log

* docs

* depr API

* add depr tests

* note

* miss

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
2020-10-06 10:27:06 -04:00
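
In code, the rename maps directly onto the `Trainer` arguments; per the commit, the old names keep working behind a deprecation warning until 0.11. Minimal sketch:

```python
from pytorch_lightning import Trainer

# before: Trainer(row_log_interval=50, log_save_interval=100)
trainer = Trainer(log_every_n_steps=50, flush_logs_every_n_steps=100)
```
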
Nicki Skafte 3ab43dd779
Fix lr finder for optimizers with states (#3897)
* fix lr finder

* changelog

* add test
2020-10-06 09:12:29 -04:00
Nrupatunga 7d47ed178b
[Bug-Fix]: keep properties `current_epoch` and `global_step` in sync between model and trainer (#3785)
* make current_epoch and global_step the same as the trainer's after model restore.

* remove assignment here

* test

* minor modification

* Update pytorch_lightning/core/lightning.py

type check, better clarity

Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>

* Update pytorch_lightning/core/lightning.py

type check, better clarity

Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>

* comments for current_epoch and global_step properties

* Update tests/models/test_restore.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* update comments according to the changes made

* Update tests/models/test_restore.py

* add current_epoch, global_step to jit ignore list

* Add comments to CHANGELOG

* Update CHANGELOG.md

* Update tests/models/test_restore.py

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-10-05 11:10:40 -04:00
Rohit Gupta a628d181ee
Fix val_progress_bar total with num_sanity_val_steps (#3751)
* Fix val_progress_bar total with num_sanity_val_steps

* chlog

* Fix val_progress_bar total with num_sanity_val_steps

* move test

* replaced with sanity flag and suggestions
2020-10-04 08:32:18 -04:00
Lezwon Castelino 4da240ea1b
added broadcast option to tpu (#3814)
* added broadcast option to tpu

* add device

* moved tpu broadcast to tpu_backend

* removed Lightning dist

* decode bytes

* pep8 fix

* fix bug

* test for broadcast

* updated changelog
2020-10-04 07:47:33 -04:00
ananthsub 8dd37e7c4a
Use fsspec in load to resolve more paths/URLs from storage backends (#3692)
* special case http for torch hub load

* Update CHANGELOG.md

* Update test.txt
2020-10-03 13:29:03 -04:00
Jeff Yang 9942f3ebdf
Fix `on_train_batch_start` hook to end epoch early (#3700)
* init

* add test

* changelog and docs

* fix test

* Apply suggestion from code review

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-10-02 21:46:46 +02:00
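
The hook ends the epoch through its return value. A minimal sketch (the budget attribute is illustrative; hook signature as of this release):

```python
from pytorch_lightning import LightningModule

class MyModel(LightningModule):
    max_batches_per_epoch = 100  # illustrative budget, not a Lightning field

    def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
        # returning -1 skips the remaining batches of the current epoch
        if batch_idx >= self.max_batches_per_epoch:
            return -1
```
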
ananthsub 3ab730e316
Swap torch.load for fsspec load in ddp spawn backend (#3787)
* Update ddp_spawn_backend.py

* Update ddp_cpu_spawn_backend.py

* log

Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
2020-10-02 21:00:01 +02:00
Nicki Skafte fe290280be
Metric aggregation testing (#3517)
* aggregation testing

* add more tests

* mse

* more tests

* fix tests

* fix doctest

* fix codefactor

* fix import error

* fix doctest

* revert docfix

* test for model integration

* fix integration test

* added test cases

* fix rmsle

* aggregation testing

* add more tests

* mse

* more tests

* fix tests

* fix doctest

* fix codefactor

* fix import error

* fix doctest

* revert docfix

* test for model integration

* fix integration test

* fix psnr

* add warning/valueerror to embedding similarity

* fixed f scores

* disable some test

* fix tests

* fixing codefactor

* fix pep8

* changelog

* fix doctest

* cleaning test

* fix pickle error

* pickle fix

* fix pickle error

* Apply suggestions from code review

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* code cleanup + changes based on suggestions

* update based on suggestion

* update based on suggestions

* Apply suggestions from code review

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

Co-authored-by: Nicki Skafte <nugginea@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-10-01 15:37:51 +02:00
Nicki Skafte 9a7d1a1876
[metrics] Accuracy num_classes error fix (#3764)
* change accuracy error to warning

* changelog
2020-10-01 13:00:42 +02:00
GimmickNG e4e60e9b82
Add datamodule parameter to lr_find() (#3425)
* Add datamodule parameter to lr_find()

* Fixed missing import

* Move datamodule parameter to end

* Add datamodule parameter test with auto_lr_find

* Change test for datamodule parameter

* Apply suggestions from code review

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>

* Fix lr_find documentation

Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>

* formatting

* Add description to datamodule param in lr_find

* pep8: remove trailing whitespace on line 105

* added changelog

Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Nicki Skafte <nugginea@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-10-01 10:33:12 +02:00
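
Usage sketch for the new parameter (assuming the tuning entry points of the time; `MyModel` and `MyDataModule` are placeholders):

```python
from pytorch_lightning import Trainer

model = MyModel()
dm = MyDataModule()

# auto_lr_find triggers lr_find during tuning, now with datamodule support
trainer = Trainer(auto_lr_find=True)
trainer.tune(model, datamodule=dm)
```
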
Adrian Wälchli c73032e39d
Make ModelCheckpoint(save_top_k=-1) track the best models (#3735)
* fix topk=-1 tracking best

* update test

* clean up

* add changelog

* enable loading best topk in trainer.test()

* make trivial

* return right away

* make windows test path happy
2020-09-30 08:34:02 -04:00
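
With the fix, `save_top_k=-1` keeps every checkpoint while still ranking them, so `best_model_path` stays meaningful and `trainer.test()` can load the best one. A minimal sketch (Trainer wiring per the API of the time):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# save_top_k=-1: save all checkpoints, but keep tracking the best by the monitor
checkpoint_cb = ModelCheckpoint(monitor="val_loss", save_top_k=-1)
trainer = Trainer(checkpoint_callback=checkpoint_cb)
```
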
Adrian Wälchli 9405c880af
log/save_interval based on global step (#3667)
* log interval based on global step

* test

* test

* test

* test

* pep

* pep

* added changelog

* pep

* merge

* remove unused arg
2020-09-30 12:26:27 +02:00
ananthsub 3dcf7130c5
Support checkpoint hooks on data module (#3563)
* Split out changes from #3563 to make that PR easier to review. This formats the file according to the Black formatter

* Store a reference to the trainer on the datamodule

Fixes #3682

* Update data_connector.py

* Update data_connector.py

* Update test_datamodules.py

* Split out changes from #3563 to make that PR easier to review. This formats the file according to the Black formatter

* support checkpoint hooks for datamodule

refactor on_{save/load}_checkpoint to a separate hook class that both the lightning module and data module inherit
add spots in callback connector to call new datamodule hooks if available

* hooks formatting

* Update hooks.py

* Update checkpoint_connector.py

* Update lightning.py

* update based on upstream/master

checkout upstream/master

* Update checkpoint_connector.py

* add tests

* undo format revert

* Updated CHANGELOG.md

* add checkpoint hooks

* add Dict type

* import CheckpointHooks
2020-09-29 19:51:44 +02:00
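
After this change a `LightningDataModule` participates in checkpointing through the same hook pair as the LightningModule. A hedged sketch (the stored key and state are illustrative):

```python
from pytorch_lightning import LightningDataModule

class MyDataModule(LightningDataModule):
    def on_save_checkpoint(self, checkpoint):
        # stash datamodule state inside the checkpoint dict
        checkpoint["my_datamodule_state"] = {"vocab_size": 10000}

    def on_load_checkpoint(self, checkpoint):
        # restore it when the checkpoint is loaded
        state = checkpoint["my_datamodule_state"]
```
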
Carlos Mocholí 3b2efe5b2a
Fix ModelCheckpoint period (#3630)
* Fix ModelCheckpoint period

* Remove comma

* Minor changes

* skip check

* Revert "skip check"

Already pushed to master

This reverts commit 00d9e77b81.

Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
2020-09-29 15:36:45 +02:00
Rohit Gupta 783750547d
disable optimizers setup during testing (#3059)
* disable configure_optimizers during testing

* minor changes

* hvd and ddp

* fix precision during testing

* fix ddp

* fix amp

* fix cpu

* update dp

* simplify optimizers

* add test

* codefactor

* ref optimizer setup

* chlog

* suggestions

* isort

* rebased with master
2020-09-29 01:09:04 +02:00
Adrian Wälchli d15fd751c7
change default save_top_k, save_last to None (#3680)
* topk default

* fix test that doesn't have best available

* remove print

* #3680 changes

* fix backward

* temp revert

* add warning by carmocca

* format docstring for test

* specify monitor in ES test with top k

* improve docstring for save_last

* remove commented lines

* revert passing model to test

* undo regex mistake

* changelog

* fix test covering case monitor=None and savetopk=-1

* docstring

* fix test for saving all checkpoints

* don't save checkpoints for save_top_k=0

* add test for savetopk=0

Co-authored-by: @carmocca

Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2020-09-27 20:05:02 -04:00
Jeff Yang 05e5f03fd7
Enable PyTorch 1.7 in conda CI (#3541)
* enable pt 1.7

* readme

* nightly diff version testing, will delete later

* nightly diff version testing, will delete later

* back to normal [ci skip]

* use __ignored_properties__

* define __ignored_properties__ in respective modules

* change log

* formatting

Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-09-25 16:20:15 +02:00
Carlos Mocholí e70aea7642
Allow ModelCheckpoint monitor to be None (#3633)
* Fix ModelCheckpoint period

* Test for less epochs
2020-09-25 15:54:04 +02:00
Nicki Skafte b1347c956a
[Metrics] AUROC error on multilabel + improved testing (#3350)
* error on multilabel

* fix tests

* fix pep8

* changelog

* update doc test

* fix doctest

* fix doctest

* update from suggestion

* Apply suggestions from code review

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* Update test_classification.py

* Update test_classification.py

* retrigger test

* pep8

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-09-21 11:46:48 +02:00
Adrian Wälchli a71d62d840
Fix deterministic behavior in ddp_spawn (#3573)
* docs

* set env variable

* fix

* changelog
2020-09-20 19:42:58 -04:00
Adrian Wälchli e6c7548b30
Fix overfit_batches > 0 on distributed_backend = "ddp" (#3534)
* example

* ex

* example

* sampler

* fix

* fix

* remove example

* changelog
2020-09-19 19:00:58 -04:00
Carlos Mocholí 580b04b490
Fix ModelCheckpoints name formatting (#3163)
* Fix ModelCheckpoint's name formatting

* Fix failing tests

* Add dot to CHECKPOINT_SUFFIX

* Set variables to their default values at the end of tests

* Fix logic for filepath='' and filename=None. Add test

* Fix Windows tests

* Fix typo. Remove leading line break and zeroes

* Remove CHECKPOINT_SUFFIX

* Fix typos. Use appropriate f-string format

* Apply suggestions from code review

* Fix broken tests after #3320

* Finish changes suggested by Borda

* Use explicit test var names

* Apply suggestions

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* Apply suggestions

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* Update CHANGELOG

* Apply suggestions from code review

* for

* prepend whitespace in warn msg

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-09-18 23:09:11 +02:00
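
For context, the formatting being fixed is the metric-based checkpoint naming. Sketch using the `filepath` format string of the time (later split into `dirpath`/`filename`):

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# placeholders are filled from logged metrics, e.g. checkpoints/05-0.32.ckpt
checkpoint_cb = ModelCheckpoint(
    filepath="checkpoints/{epoch:02d}-{val_loss:.2f}",
    monitor="val_loss",
)
```
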
Abe Botros 76c4afb840
Fix IoU score for classes not present in target or pred (#3098)
* Fix IoU score for classes not present in target or pred

Fixes #3097

- Allow configurable not_present_score for IoU for classes
  not present in target or pred. Defaults to 1.0.
- Also allow passing `num_classes` parameter through from iou
  metric class down to its underlying functional iou
  call.

* Changelog: move IoU not-present score fix to [unreleased]

* IoU: avoid recomputing class presence in target and pred

Use already-computed support, true positives, and false positives to
determine if a class is not present in either target or pred.

* Test IoU against sklearn jaccard_score

Also add TODO to test our IoU's not_present_score against sklearn's
jaccard_score's zero_division when it becomes available.

* IoU: remove_bg -> ignore_index

Fixes #2736

- Rename IoU metric argument from `remove_bg` -> `ignore_index`.
- Accept an optional int class index to ignore, instead of a bool and
  instead of always assuming the background class has index 0.
- If given, ignore the class index when computing the IoU output,
  regardless of reduction method.

* Improve documentation for IoU not_present_score

* Update default IoU not_present_score to 0.0

* Add note about IoU division by zero

* Rename IoU not_present_score -> absent_score

* Update IoU absent score changelog wording

* Condense IoU absent_score argument docstring

* Remove unnecessary IoU ignore_index comment

* docstrings

* isort

* flake8

* Fix test of IoU against sklearn jaccard

Use macro instead of micro averaging in sklearn's jaccard score, to
match multi-class IoU, which conventionally takes per-class scores
before averaging.

Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
2020-09-17 10:37:49 +02:00
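
A hedged sketch of the resulting functional interface (argument names per the PR description; exact defaults may differ):

```python
import torch
from pytorch_lightning.metrics.functional import iou

pred = torch.tensor([0, 1, 2, 2])
target = torch.tensor([0, 1, 2, 1])

# ignore_index drops a class from the result; absent_score is the value used
# for classes present in neither pred nor target (instead of a degenerate 1.0)
score = iou(pred, target, ignore_index=0, absent_score=0.0, num_classes=4)
```
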
Adrian Wälchli 4ed96b2eb4
fix gradient norm tracking for row_log_interval > 1 (#3489)
* fix + test

* changelog

* Apply suggestions from code review

Co-authored-by: Tim Chard <timchard@hotmail.com>

* improve test

Co-authored-by: Tim Chard <timchard@hotmail.com>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
2020-09-15 18:41:27 +02:00
Nicki Skafte 28af34bc51
[Metrics] Class reduction similar to sklearn (#3322)
* new class reduce interface

* update docs

* pep8

* update_class_metrics

* fix doctest

* changelog

* fix docs

* fix codefactor

* fix codefactor

* formatting

* fix typo

* fix typo

* typo pr -> per

* update from suggestion

* fix error

* Apply suggestions from code review

* Update CHANGELOG.md

* formatting

* timeouts

* docstring formatting for reg metrics

* pep

* flake8

* revert workflow changes

* suggestions

Co-authored-by: Nicki Skafte <nugginea@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
2020-09-15 14:36:14 +02:00
Phil b5dc6998ae
Disable train dataloader shuffle when overfit_batches is active. (#3501)
* Disable train dataloader shuffle when overfit_batches is active.

* pep8

Co-authored-by: William Falcon <waf2107@columbia.edu>
2020-09-15 05:07:27 -04:00
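
A minimal sketch of the affected setting; with this change the sampler for the overfit subset is no longer shuffled:

```python
from pytorch_lightning import Trainer

# train on a fixed subset of 10 batches; shuffle is now disabled automatically
trainer = Trainer(overfit_batches=10)
```
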
Nicki Skafte 93cf6d0054
[Metrics] class based embedding similarity + tests (#3358)
* embedding similarity class + test

* fix tests

* fix pep8

* add docs

* noindex

* Update docs/source/metrics.rst

* Update pytorch_lightning/metrics/self_supervised.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* Update pytorch_lightning/metrics/self_supervised.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* suggestions

* changes to init

* move __all__

* fix imports

* Apply suggestions from code review

* assert typo

* change import

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Nicki Skafte <nugginea@gmail.com>
2020-09-11 12:11:50 +02:00
Rohit Gupta a1ea681c47
Fix batch_outputs with optimizer frequencies (#3229)
* Fix batch_outputs with optimizers frequencies

* optimizers

* fix batch_outputs with optimizer frequencies

* clean test

* suggestion

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* chlog

* failing doctest

* failing doctest

* update doctest

* chlog

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-09-10 23:01:20 +02:00
Patrick Orlando 656c1af0df
Get experiment_id from MLFlow only once instead of each training loop (#3394)
* Get experiment_id from MLFlow only once instead of each training loop.

* Apply suggestions from code review

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* add test that asserts mlflow client is called to retrieve experiment id only once

* make pep8 happy

* logs

Co-authored-by: Patrick Orlando <patrick.orlando@rea-group.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
2020-09-09 11:38:26 +02:00
Adrian Wälchli e245065fbc
limit auto scaling batch size to the size of the training dataset (#3271)
* fix

* fix and test

* fix merge error

* test for max dataset size

* changelog

* update docs

* fix merge

* unused imports

* imports
2020-09-09 10:51:43 +02:00
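
Sketch of the capped search (entry point per the API of the time; older versions ran the scaling inside `fit`):

```python
from pytorch_lightning import Trainer

# doubles batch_size until OOM, now also capped at the training dataset size
trainer = Trainer(auto_scale_batch_size="power")
trainer.tune(model)
```
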
Travis Addair 091d37f968
Added check for apex AMP and unit tests for Horovod + AMP (#3404)
* Added check for apex AMP and unit tests for Horovod + AMP

* Changelog

* Fixed order of Horovod and Apex optimizer wrapping
2020-09-08 20:30:57 -04:00
Rohit Gupta 24809b0b26
Refactor GPUStatsMonitor to improve training speed (#3257)
* Refactor GPUMonitor to improve training speed

* added gpu ids to monitor

* update tests

* added deprecation warning

* pep

* fix test

* fix docs

* fix log_gpu_memory

* move deprecation check

* chlog

* Update CHANGELOG.md

* suggestions and fix

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-09-04 06:02:16 -04:00
Adrian Wälchli 48c22c8bad
update batch size in DataModule when auto scaling batch size (#3266)
* fix datamodule hasattr

* fix patch check

* fix setattr

* update docs

* revert patch fix

* changelog

* fix datamodule passed in as fit arg

* docs

* set datamodule batch size in lightning_setattr

* fix merge

* check with has_attr

* access datamodule via trainer

* pass fit args down to tuner

* docs

* fix typos in docs

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
2020-09-03 22:07:49 +02:00
Adrian Wälchli 4ad5a78dce
to_torchscript method for LightningModule (#3258)
* script

* docs

* simple test

* move test

* fix doctest

* no grad context

* extend tests

* datamodule test

* clean up test

* docs

* name

* fix import

* update changelog

* fix import

* skip pytorch 1.3 in test

* update codeblock

* skip bugged 1.4

* typehints

* doctest not working on all pytorch versions

* rename TestGAN to prevent pytest interference

* add note about pytorch version

* fix torchscript version inconsistency in tests

* reset training state + tests

* update docstring

* Apply suggestions from code review

Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>

* update docstring, dict return

* add docs to index

* add link

* doc eval mode

* forward

* optional save to file path

* optional

* test torchscript device

* test save load with file path

* pep

* str

* Commit typing suggestion

Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>

* skip test if cuda not available

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
2020-09-03 20:24:44 +02:00
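
Usage sketch of the new method (per the PR: scripting runs in eval mode under no-grad, training state is restored afterwards, and saving to a path is optional):

```python
import torch

model = MyLightningModule()  # placeholder LightningModule

# returns a torch.jit.ScriptModule that can be saved with torch.jit.save
script = model.to_torchscript()
torch.jit.save(script, "model.pt")

# or script and save in one call
model.to_torchscript(file_path="model_direct.pt")
```
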
Rohit Gupta 4a22fca524
Changed LearningRateLogger to LearningRateMonitor (#3251)
* Change LearningRateLogger to LearningRateMonitor

* file rename

* docs

* add LearningRateLogger with deprecation warning

* deprecated LearningRateLogger

* move deprecation check

* chlog

Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
2020-09-03 18:17:15 +00:00
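
The rename keeps the behavior; only the class name changes, with the old name deprecated. Minimal sketch:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import LearningRateMonitor

# formerly LearningRateLogger; logs the learning rate of each scheduler
lr_monitor = LearningRateMonitor(logging_interval="step")
trainer = Trainer(callbacks=[lr_monitor])
```
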
Brendan Fahy 2d8c1b7c54
use fsspec instead of gfile for all IO (#3320)
* use fsspec instead of gfile for all IO

This better supports remote (and local) file operations with a dedicated package

* Apply suggestions from code review

Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>

* chlog

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
2020-09-03 14:19:20 +02:00
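
The point of the switch is that `fsspec` resolves local paths and remote URLs through one interface. Illustrative sketch (the bucket URL is a placeholder):

```python
import fsspec
import torch

# the same call handles local paths as well as s3://, gs://, hdfs://, ...
with fsspec.open("s3://my-bucket/checkpoints/last.ckpt", "rb") as f:
    checkpoint = torch.load(f, map_location="cpu")
```
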
Rohit Gupta f03943ee94
Fix GpuUsageLogger to work on different platforms (#3008)
* Fix GpuUsageLogger

* docstrings

* misconfigexception

* add basic tests

* skip doctest

* fix parameter and docstring

* rm cl

* skip doctest

* cleanup

* chlog

* add suggestions from review

* add test from suggestions

* fix import

* fix test

* fix test

* fix test

* fix test

* rename GpuUsageLogger to GPUStatsMonitor

* doc fix

* Apply suggestions from code review

* update docs format

* update docs

* miss

* merge

* fix title formatting

* unindent

* punctuation

* simplify if statements

* fix test

* suggestions

* pep

* Update CHANGELOG.md

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* fix on_train_batch_*

* use AttributeDict

* usage

* rank zero

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* import

* minor changes

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Adrian Wälchli <adrian.waelchli@inf.unibe.ch>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-08-27 19:50:32 +02:00
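
Usage sketch of the renamed callback (constructor flags as documented in the PR):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import GPUStatsMonitor

# formerly GpuUsageLogger; queries nvidia-smi and logs per-GPU stats
gpu_stats = GPUStatsMonitor(memory_utilization=True, gpu_utilization=True)
trainer = Trainer(callbacks=[gpu_stats])
```
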