* Rebase onto master
* indent fix
* Remove duplicated logic
* Use single return
* Remove extra else
* add `__contains__` to TestHparamsNamespace to fix tests
* Fix lightning_setattr to set all valid attributes (sketched after this block)
* update doc
* better names
* fix holder order preference
* tests for new behavior
* Comment about using the last holder
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
(cherry picked from commit eee3b1a284)
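The commits above boil down to two behaviors: the setter writes to every holder of the attribute, and the getter prefers the last holder. A minimal sketch, assuming helper names modeled on `pytorch_lightning.utilities.parsing` (the exact signatures may differ):

```python
def _lightning_get_all_attr_holders(model, attribute):
    """Collect every object on which `attribute` lives (sketch)."""
    holders = []
    # the model namespace itself
    if hasattr(model, attribute):
        holders.append(model)
    # the `hparams` namespace/dict -- this membership test is why the test
    # namespace above needed a `__contains__`
    if hasattr(model, "hparams") and attribute in model.hparams:
        holders.append(model.hparams)
    return holders


def _lightning_get_first_attr_holder(model, attribute):
    holders = _lightning_get_all_attr_holders(model, attribute)
    # using the last holder to preserve backwards compatibility
    return holders[-1] if holders else None


def lightning_setattr(model, attribute, value):
    """Set `attribute` on *all* valid holders, not just the first match."""
    holders = _lightning_get_all_attr_holders(model, attribute)
    if not holders:
        raise AttributeError(
            f"{attribute!r} is neither stored in the model namespace "
            "nor in the `hparams` namespace/dict."
        )
    for holder in holders:
        if isinstance(holder, dict):
            holder[attribute] = value
        else:
            setattr(holder, attribute, value)
```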
* Check the environment variable independently of selecting a seed, to prevent an unnecessary warning message (see the sketch after this block)
* Add if statement to check if PL_GLOBAL_SEED has been set
* Added seed test to ensure that the seed stays the same, in case `PL_GLOBAL_SEED` has already been set
* if
* Delete global seed after test has finished
* Fix code, add tests
* Ensure seed does not exist before tests start
* Refactor test based on review, add log call
* Ensure we clear the os environ in patched dict
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
(cherry picked from commit 635df27880)
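A sketch of the seeding logic these commits converge on, assuming the `seed_everything` signature of that era (the fallback range and log message are illustrative):

```python
import logging
import os
import random

import numpy as np
import torch

log = logging.getLogger(__name__)


def seed_everything(seed=None):
    # Consult PL_GLOBAL_SEED only when no explicit seed was passed, so an
    # unset variable never triggers a spurious "no seed found" warning.
    if seed is None:
        env_seed = os.environ.get("PL_GLOBAL_SEED")
        if env_seed is None:
            seed = random.randint(0, 2 ** 32 - 1)  # illustrative fallback
            log.warning(f"No seed found, seed set to {seed}")
        else:
            seed = int(env_seed)
    os.environ["PL_GLOBAL_SEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    return seed
```

The tests then guard the environment the same way: assert `PL_GLOBAL_SEED` is absent before the test runs and restore `os.environ` afterwards, e.g. inside `unittest.mock.patch.dict(os.environ)`.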
* make device property always return a device with an index (sketched below)
* pep8
* Update test_dtype_device_mixin.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
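What "always return a device with index" plausibly looks like on the mixin (a sketch; resolving the index via `torch.cuda.current_device()` is an assumption):

```python
import torch


class DeviceDtypeModuleMixin:
    _device: torch.device = torch.device("cpu")

    @property
    def device(self) -> torch.device:
        device = self._device
        # A bare "cuda" device has index None; resolve it so comparisons
        # like `module.device == torch.device("cuda", 0)` behave predictably.
        if device.type == "cuda" and device.index is None:
            return torch.device("cuda", torch.cuda.current_device())
        return device
```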
* search for attribute in datamodule if not found elsewhere (see the sketch after this block)
* add test for datamodule
* add lightning_getattr test for datamodule
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Update CHANGELOG.md
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
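Extending the holder lookup sketched earlier with the datamodule fallback these commits add (the `model.trainer.datamodule` path is an assumption):

```python
def _lightning_get_all_attr_holders(model, attribute):
    """As before, but the datamodule is searched when nothing else matches."""
    holders = []
    if hasattr(model, attribute):
        holders.append(model)
    if hasattr(model, "hparams") and attribute in model.hparams:
        holders.append(model.hparams)
    # fall back to the datamodule only if the attribute was not found elsewhere
    trainer = getattr(model, "trainer", None)
    datamodule = getattr(trainer, "datamodule", None)
    if not holders and datamodule is not None and hasattr(datamodule, attribute):
        holders.append(datamodule)
    return holders


def lightning_getattr(model, attribute):
    holders = _lightning_get_all_attr_holders(model, attribute)
    if not holders:
        raise AttributeError(
            f"{attribute!r} is not in the model, the `hparams` "
            "namespace/dict, or the datamodule."
        )
    holder = holders[-1]  # last holder, for backwards compatibility
    return holder[attribute] if isinstance(holder, dict) else getattr(holder, attribute)
```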
* tpu device check
* replaced with xmp spawn
* Revert "replaced with xmp spawn"
This reverts commit 6835380f
* replaced all instances of XLA_AVAILABLE
* moved inner_f to global scope
* made refactors
* added changelog
* added TPU_AVAILABLE variable (sketched after this block)
* fix codefactor issues
* removed from trainer and early stopping
* add TORCHXLA_AVAILABLE check
* added tests
* refactoring
* Update pytorch_lightning/utilities/xla_device_utils.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* updated function names
* fixed bug
* updated CHANGELOG.md
* added todo
* added type hints
* isort and black
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
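A condensed sketch of the `xla_device_utils` pattern these commits build up: check for the `torch_xla` package cheaply, probe the actual device in a child process (with `inner_f` moved to module scope so multiprocessing can pickle it), and cache the result in a `TPU_AVAILABLE` flag. The timeout and queue plumbing are assumptions, and the real code wraps the probe in an `XLADeviceUtils` class:

```python
import functools
import importlib
from multiprocessing import Process, Queue

TORCHXLA_AVAILABLE = importlib.util.find_spec("torch_xla") is not None


def inner_f(queue, func, *args, **kwargs):
    # Module-level (not nested) so multiprocessing can pickle it.
    try:
        queue.put(func(*args, **kwargs))
    except Exception:
        queue.put(None)


def pl_multi_process(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Probe in a child process so a hanging XLA call cannot block startup.
        # Assumes the "fork" start method, as on Linux CI.
        queue = Queue()
        proc = Process(target=inner_f, args=(queue, func) + args, kwargs=kwargs)
        proc.start()
        proc.join(10)  # assumed timeout, in seconds
        return queue.get() if not queue.empty() else None
    return wrapper


@pl_multi_process
def _is_device_tpu() -> bool:
    import torch_xla.core.xla_model as xm
    return xm.xla_device_hw(xm.xla_device()) == "TPU"


def tpu_device_exists() -> bool:
    # Guard on TORCHXLA_AVAILABLE first so the probe never runs
    # (and never warns) when torch_xla is simply not installed.
    return bool(TORCHXLA_AVAILABLE and _is_device_tpu())


TPU_AVAILABLE = tpu_device_exists()
```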
* ref: make each backend independent, for easier and more isolated debugging
* ref: test val epoch end
* Test using torchtext.data.Field with include_lengths=True/False
* Fix issue where Tensors in a Batch generated by torchtext with torchtext.data.Field configured with include_lengths=True were not moved to the target device (sketched after this block)
* Add description for fix of issue #2688
* changes to accommodate CodeFactor issues
* Another attempt to make the last CodeFactor issue pass (it's a false alarm)
* temporarily disable test_grad_tracking to check if testing will pass
* reenable test in test_grad_norm
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Renamed get_torchtext_data_iterator to _get_torchtext_data_iterator as suggested by @borda
* Update pytorch_lightning/utilities/apply_func.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* adding tests more specific to batch_move_data_to_device with torchtext Batch
* added check that Tensors were moved to target device
* removed tests using RNN models to be moved into a separate PR
* fixing FLAKE8 errors that showed up after merging from the master branch
  modified: tests/base/datamodules.py
  modified: tests/callbacks/test_model_checkpoint.py
* parameterized test to reduce code duplication
* Added a check only if the length tensor exists. Removed leftover comments.
* rearranged device parameterization and added pytest.param
* Try to figure out why only one device is tested on Linux machines
* Testing on CPU and GPU devices (GPU test is skipped if no CUDA device is available)
* added test for TPU device (experimental)
* Adding test parameterization for TPU test (experimental)
* change import statement to limit what is imported for a TPU environment
* made test work with TPU
* Change to trigger CI
* uncommented TPU test to check CI
* reenabling TPU test
* small change to trigger CI build
* adding tests/utilities/test_apply_func_torchtext.py to CI TPU test
* try to keep the test from being skipped on CI with TPU
* remove testing on TPU
* undo an accidental change to test_tpu.py (file should not have been touched)
* small change to trigger CI build
* Update tests/utilities/test_apply_func_torchtext.py
* Revert to previous version
* Apply suggestions from code review
* Change to trigger CI
Co-authored-by: Thomas Schaaf <tschaaf@mmm.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Thomas Schaaf <tschaaf@cs.cmu.edu>
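The substance of the fix, sketched against the legacy `torchtext.data` API (the real change routes through `apply_to_collection` in `pytorch_lightning.utilities.apply_func`; the standalone helper below is illustrative). A torchtext `Batch` is neither a tensor nor a plain collection, so each field must be moved explicitly, including the `(padded_sequences, lengths)` tuple produced by `include_lengths=True`:

```python
import torch
from torchtext.data import Batch  # legacy torchtext API (<= 0.8)


def batch_move_data_to_device(batch, device):
    """Move a torchtext Batch, or any .to()-capable object, to `device`."""
    if isinstance(batch, Batch):
        for field in batch.fields:
            value = getattr(batch, field, None)
            if isinstance(value, tuple):
                # include_lengths=True yields (padded_sequences, lengths);
                # move both tensors, not just the first
                setattr(batch, field, tuple(v.to(device) for v in value))
            elif isinstance(value, torch.Tensor):
                setattr(batch, field, value.to(device))
        return batch
    return batch.to(device) if hasattr(batch, "to") else batch
```

The parameterized test then asserts `tensor.device == device` for every field, with the GPU case wrapped in `pytest.param` and skipped when CUDA is unavailable.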
* recursive dtype/device apply (sketched after this block)
* simplify
* simple test
* submodule test
* rename
* explicit
* type hints
* test for dp backend
* fix test skip
* rename
* add ddp_spawn test
* fix None index in test
* try fix ddp_spawn test
* changelog
* move _dtype and _device to mixin
* additional doctest
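A sketch of the recursive bookkeeping these final commits describe: `_device` and `_dtype` live on the mixin, and a mutating call such as `.to()` walks the module tree so every nested mixin instance stays in sync. The `_parse_to` call mirrors what `torch.nn.Module.to` does internally (a private API, so treat it as an assumption):

```python
import torch
from torch.nn import Module


class DeviceDtypeModuleMixin(Module):
    _device: torch.device = torch.device("cpu")
    _dtype: torch.dtype = torch.get_default_dtype()

    @property
    def device(self) -> torch.device:
        return self._device

    @property
    def dtype(self) -> torch.dtype:
        return self._dtype

    def to(self, *args, **kwargs):
        # same argument parsing torch.nn.Module.to performs internally
        device, dtype = torch._C._nn._parse_to(*args, **kwargs)[:2]
        self.__update_properties(device=device, dtype=dtype)
        return super().to(*args, **kwargs)

    def __update_properties(self, device=None, dtype=None):
        def apply_fn(module):
            # update every submodule that also carries the mixin,
            # e.g. a nested LightningModule or a metric
            if not isinstance(module, DeviceDtypeModuleMixin):
                return
            if device is not None:
                module._device = device
            if dtype is not None:
                module._dtype = dtype

        self.apply(apply_fn)
```

With this in place, moving a parent module also updates the reported `device`/`dtype` of every mixin submodule, which is what the dp and ddp_spawn tests above exercise.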