* topk default
* fix test that doesn't have best available
* remove print
* #3680 changes
* fix backward
* temp revert
* add warning by carmocca
* format docstring for test
* specify monitor in ES test with top k
* improve docstring for save_last
* remove commented lines
* revert passing model to test
* undo regex mistake
* changelog
* fix test covering case monitor=None and save_top_k=-1
* docstring
* fix test for saving all checkpoints
* don't save checkpoints for save_top_k=0
* add test for save_top_k=0
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
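A hedged usage sketch of the `save_top_k` semantics covered by the tests above (the monitor name is an example, not a default):

```python
from pytorch_lightning.callbacks import ModelCheckpoint

# save_top_k=0: no checkpoints are written at all
no_ckpt = ModelCheckpoint(save_top_k=0)

# save_top_k=-1: keep every checkpoint
all_ckpts = ModelCheckpoint(save_top_k=-1)

# save_top_k=2 with an explicit monitor: keep the two best models,
# plus the most recent one via save_last
best_two = ModelCheckpoint(monitor="val_loss", save_top_k=2, save_last=True)
```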
* Split out changes from #3563 to make that PR easier to review. This formats the file according to the Black formatter.
* Store a reference to the trainer on the datamodule
Fixes #3682
* Update data_connector.py
* Update test_datamodules.py
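An illustrative sketch, with simplified names rather than the actual `data_connector.py` code, of the back-reference introduced above:

```python
# When the datamodule is attached, store a reference to the trainer on it
# so datamodule hooks can reach trainer state.
def attach_datamodule(trainer, model, datamodule=None):
    if datamodule is None:
        return
    model.datamodule = datamodule
    datamodule.trainer = trainer  # the reference this change introduces
```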
* enable any logged or written metric to be accessible in callbacks
* clarify forward
* ref: result 1/n (make monitor default to checkpoint_on to simplify result syntax)
* Update pytorch_lightning/callbacks/model_checkpoint.py
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
* force crash when max_epochs < epochs in a checkpoint
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
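A hypothetical sketch of the forced crash described above (the helper name and message wording are invented for illustration):

```python
from pytorch_lightning.utilities.exceptions import MisconfigurationException

def check_resume_epochs(checkpoint_epoch: int, max_epochs: int) -> None:
    # Restoring a checkpoint that already trained past max_epochs should
    # fail loudly instead of silently training for zero epochs.
    if checkpoint_epoch > max_epochs:
        raise MisconfigurationException(
            f"The checkpoint was saved at epoch {checkpoint_epoch}, "
            f"but Trainer(max_epochs={max_epochs}) is lower."
        )
```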
* Allow kwargs in WandbLogger
* isort
* kwargs docstring
* typo
* kwargs for other loggers
* pep and isort
* formatting
* fix failing test
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
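A hedged sketch of the kwargs pass-through added above; the extra keyword arguments shown are examples forwarded to `wandb.init`, not defaults:

```python
from pytorch_lightning.loggers import WandbLogger

logger = WandbLogger(
    name="baseline-run",
    project="my-project",
    tags=["baseline"],  # extra kwargs like this are forwarded to wandb.init
)
```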
* Improve Comet Logger pickling behavior
* Delay the creation of the actual experiment object for as long as we can.
* Save the experiment id in case an Experiment object is created so we can
continue the same experiment in the sub-processes.
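A simplified illustration of that pattern; this is not the actual `CometLogger` code, and `_create_or_resume` and the `id` attribute are stand-ins:

```python
class LazyExperimentSketch:
    def __init__(self):
        self._experiment = None
        self._experiment_key = None  # plain string, survives pickling

    @property
    def experiment(self):
        # Create the Experiment object only on first access; remember its
        # id so a sub-process can resume the same experiment.
        if self._experiment is None:
            self._experiment = self._create_or_resume(self._experiment_key)
            self._experiment_key = self._experiment.id
        return self._experiment

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_experiment"] = None  # drop the unpicklable object
        return state
```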
* Run pre-commit on the comet file.
* Handle review comment
Make most Comet Logger attributes protected as they might not reflect the final
Experiment attributes. Also fix the typo in the test name.
* Ensure that CometLogger.name and CometLogger.version always return str
* Add new test for CometLogger.version behavior
* Add new tests for CometLogger.name and CometLogger.version
* Apply review suggestions
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Remove extraneous comments in Comet logger tests
* Fix lint issues
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Fix ModelCheckpoint's name formatting
* Fix failing tests
* Add dot to CHECKPOINT_SUFFIX
* Set variables to their default values at the end of tests
* Fix logic for filepath='' and filename=None. Add test
* Fix Windows tests
* Fix typo. Remove leading line break and zeroes
* Remove CHECKPOINT_SUFFIX
* Fix typos. Use appropriate f-string format
* Apply suggestions from code review
* Fix broken tests after #3320
* Finish changes suggested by Borda
* Use explicit test var names
* Apply suggestions
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* Apply suggestions
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* Update CHANGELOG
* Apply suggestions from code review
* for
* prepend whitespace in warn msg
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
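A hedged illustration of the name formatting behavior targeted by the commits above (path, template, and metric values are invented):

```python
from pytorch_lightning.callbacks import ModelCheckpoint

ckpt = ModelCheckpoint(filepath="ckpts/{epoch}-{val_loss:.2f}")
# With epoch=2 and val_loss=0.1234, the checkpoint name interpolates to
# something like "ckpts/epoch=2-val_loss=0.12.ckpt"
```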
* Fixes the test for early stopping without val step.
The expression that checked whether early stopping was triggered had an off-by-one error and hence was true even if early stopping was not triggered.
Furthermore, set patience to 0 and max epochs to 10 to ensure the loss has enough time to flatten.
* Fixes early stopping without val step.
The issue was that only the `early_stop_on` key was checked and not an arbitrary monitor key.
* Fixes branch, which checks whether early stopping is done during validation.
Before, only `val_early_stop_on` was checked. Since arbitrary keys can be used, the set of possible validation keys cannot be exhaustive. Hence this disables early stopping in `on_train_epoch_end` via an instance attribute if early stopping was already executed in `on_validation_epoch_end`.
Furthermore, adds a test which ensures arbitrary keys work.
* Improve check whether eval results are used.
Only disable the early stopping check on train results if eval results are actually used. Before, it was always disabled in ``on_validation_epoch_end``.
Rename and document the instance variable to make it clearer.
* Remove wrong documentation on behaviour of early stopping with the train results dict.
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
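A simplified sketch of the early-stopping control flow described above; the attribute and helper names are illustrative:

```python
class EarlyStoppingSketch:
    def __init__(self):
        self.based_on_eval_results = False

    def on_validation_epoch_end(self, trainer, pl_module):
        if self._validation_produced_monitor(trainer):
            # remember that eval results are actually used ...
            self.based_on_eval_results = True
            self._run_early_stopping_check(trainer, pl_module)

    def on_train_epoch_end(self, trainer, pl_module):
        # ... so the train-epoch check can be skipped, avoiding a double check
        if self.based_on_eval_results:
            return
        self._run_early_stopping_check(trainer, pl_module)
```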
* Fix IoU score for classes not present in target or pred
Fixes #3097
- Allow configurable not_present_score for IoU for classes
not present in target or pred. Defaults to 1.0.
- Also allow passing `num_classes` parameter through from iou
metric class down to its underlying functional iou
call.
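A minimal functional sketch of that behavior, not the PL implementation; note that a later commit below changes the default absent score to 0.0:

```python
import torch

def iou_per_class(pred, target, num_classes, absent_score=1.0):
    scores = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        if not p.any() and not t.any():
            # class appears in neither target nor pred: use absent_score
            # instead of dividing zero intersection by zero union
            scores.append(torch.tensor(absent_score))
            continue
        intersection = (p & t).sum().float()
        union = (p | t).sum().float()
        scores.append(intersection / union)
    return torch.stack(scores)
```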
* Changelog: move IoU not-present score fix to [unreleased]
* IoU: avoid recomputing class presence in target and pred
Use already-computed support, true positives, and false positives to
determine if a class is not present in either target or pred.
* Test IoU against sklearn jaccard_score
Also add TODO to test our IoU's not_present_score against sklearn's
jaccard_score's zero_division when it becomes available.
* IoU: remove_bg -> ignore_index
Fixes #2736
- Rename IoU metric argument from `remove_bg` -> `ignore_index`.
- Accept an optional int class index to ignore, instead of a bool and
instead of always assuming the background class has index 0.
- If given, ignore the class index when computing the IoU output,
regardless of reduction method.
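A sketch of the renamed argument, building on the `iou_per_class` sketch above, with reduction simplified to a mean:

```python
def iou_with_ignore(pred, target, num_classes, ignore_index=None, absent_score=0.0):
    scores = iou_per_class(pred, target, num_classes, absent_score)
    if ignore_index is not None and 0 <= ignore_index < num_classes:
        # drop the ignored class from the per-class scores before reducing
        keep = torch.arange(num_classes) != ignore_index
        scores = scores[keep]
    return scores.mean()
```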
* Improve documentation for IoU not_present_score
* Update default IoU not_present_score to 0.0
* Add note about IoU division by zero
* Rename IoU not_present_score -> absent_score
* Update IoU absent score changelog wording
* Condense IoU absent_score argument docstring
* Remove unnecessary IoU ignore_index comment
* docstrings
* isort
* flake8
* Fix test of IoU against sklearn jaccard
Use macro instead of micro averaging in sklearn's jaccard score, to
match multi-class IoU, which conventionally takes per-class scores
before averaging.
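A brief illustration of the comparison the test relies on, with toy labels:

```python
import numpy as np
from sklearn.metrics import jaccard_score

y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 2, 2, 2, 1, 0])
# macro averaging takes per-class scores before averaging, matching
# the conventional multi-class IoU
macro = jaccard_score(y_true, y_pred, average="macro")
```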
Co-authored-by: rohitgr7 <rohitgr1998@gmail.com>
* cleaning up stale logger tests
* Get experiment_id from MLflow only once instead of on each training loop.
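A simplified memoization sketch, not the actual `MLFlowLogger` code:

```python
from mlflow.tracking import MlflowClient

class MLFlowLoggerSketch:
    def __init__(self, experiment_name: str):
        self._experiment_name = experiment_name
        self._client = MlflowClient()
        self._experiment_id = None

    @property
    def experiment_id(self):
        # Resolve the experiment id once and cache it, instead of asking
        # the MLflow client on every access.
        if self._experiment_id is None:
            expt = self._client.get_experiment_by_name(self._experiment_name)
            if expt is not None:
                self._experiment_id = expt.experiment_id
            else:
                self._experiment_id = self._client.create_experiment(self._experiment_name)
        return self._experiment_id
```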
* Apply suggestions from code review
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* add test that asserts mlflow client is called to retrieve experiment id only once
* make pep8 happy
* logs
Co-authored-by: Patrick Orlando <patrick.orlando@rea-group.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
* script
* docs
* simple test
* move test
* fix doctest
* no grad context
* extend tests
* datamodule test
* clean up test
* docs
* name
* fix import
* update changelog
* fix import
* skip pytorch 1.3 in test
* update codeblock
* skip bugged 1.4
* typehints
* doctest not working on all pytorch versions
* rename TestGAN to prevent pytest interference
* add note about pytorch version
* fix torchscript version inconsistency in tests
* reset training state + tests
* update docstring
* Apply suggestions from code review
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
* update docstring, dict return
* add docs to index
* add link
* doc eval mode
* forward
* optional save to file path
* optional
* test torchscript device
* test save load with file path
* pep
* str
* Commit typing suggestion
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
* skip test if cuda not available
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Justus Schock <12886177+justusschock@users.noreply.github.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
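A usage sketch of the `to_torchscript` feature built up in the commits above; `TinyModel` is a stand-in module:

```python
import torch
import pytorch_lightning as pl

class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.layer(x)

model = TinyModel()
script = model.to_torchscript()             # scripted module, in eval mode
model.to_torchscript(file_path="model.pt")  # optionally save straight to disk
loaded = torch.jit.load("model.pt")
```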
* change t() to transpose() as XLA devices do not support .t() on a 1-dim tensor
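The equivalence behind this change, as a small check: on CPU, `.t()` on a 1-dim tensor is a no-op, but XLA rejects it, while for 2-dim tensors `.t()` equals `transpose(0, 1)`:

```python
import torch

x = torch.randn(3, 4)
assert torch.equal(x.t(), x.transpose(0, 1))
```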
* detach tensor before copying
* Revert "detach tensor before copying"
This reverts commit 37cc7bbe
* changed dims
* added test_result_obj_on_tpu
* detach before copying
* replace torch.cat with sum
* tests to ensure correct dataloading interval and sequence
* fix rmsle
* Updated test to match rmsle fix
* Updated RMSLE example result to match functional
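A hedged sketch of the standard RMSLE definition the functional metric is aligned with (not the PL source):

```python
import torch

def rmsle(pred, target):
    # RMSLE = sqrt(mean((log(pred + 1) - log(target + 1)) ** 2))
    return torch.sqrt(torch.mean((torch.log1p(pred) - torch.log1p(target)) ** 2))
```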
* chlog
* add randomized test
* fix pep8
Co-authored-by: Jirka Borovec <jirka@pytorchlightning.ai>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
* Fix num_sanity_val_steps according to limit_val_batches
* fix test
* add num_sanity_batches
* pep
* update docstring in test
* add more test
* chlog
* update comments and docstring in test
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Adrian Wälchli <adrian.waelchli@inf.unibe.ch>
Co-authored-by: Ananya Harsh Jha <ananya@pytorchlightning.ai>
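A hedged illustration of the interaction fixed above; the exact capping logic is simplified:

```python
from pytorch_lightning import Trainer

# The sanity check now respects limit_val_batches instead of ignoring it:
# roughly min(num_sanity_val_steps, limit_val_batches) batches per dataloader.
trainer = Trainer(num_sanity_val_steps=5, limit_val_batches=2)
```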
When a LightningModule inherits from a class that implements `__new__()` such as `typing.Generic`, `inspect.signature(cls)` short-circuits and returns the signature of `__new__()` instead of `__init__()`. So, we need to be more specific and call inspection directly on the init function.
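A small demonstration of the pitfall; the printed output varies with the Python version, hence the hedged comments:

```python
import inspect
from typing import Generic, TypeVar

T = TypeVar("T")

class Model(Generic[T]):
    def __init__(self, lr: float = 0.1):
        self.lr = lr

# signature(cls) can short-circuit through Generic.__new__ and lose the
# init parameters; inspecting __init__ directly recovers them.
print(inspect.signature(Model))           # may show (*args, **kwds)
print(inspect.signature(Model.__init__))  # (self, lr: float = 0.1)
```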
* re-enabled naming metrics in ckpt name