lightning/pytorch_lightning/utilities
Thomas Schaaf a6719f09f0
Bugfix/torchtext include lengths (#2689)
* Test using torchtext.data.Field with include_lengths=True/False

* Fix issue where Tensors in a Batch generated by torchtext, with torchtext.data.Field configured as include_lengths=True, were not moved to the target device
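
A minimal sketch of the kind of handling this fix adds, assuming the legacy (pre-0.9) torchtext.data.Batch API; the helper name is illustrative, not the actual apply_func.py code:

    import copy
    import torch
    from torchtext.data import Batch

    def _move_torchtext_batch(batch: Batch, device: torch.device) -> Batch:
        # Shallow-copy so the original Batch (which references the whole
        # Dataset) is left untouched.
        moved = copy.copy(batch)
        for field in batch.fields:
            value = getattr(batch, field, None)
            if isinstance(value, tuple):
                # include_lengths=True yields a (data, lengths) tuple;
                # both tensors must be moved.
                setattr(moved, field, tuple(t.to(device) for t in value))
            elif isinstance(value, torch.Tensor):
                setattr(moved, field, value.to(device))
        return moved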

* Add description for fix of issue #2688

* changes to accommodate CodeFactor issues

* Another attempt to make the last CodeFactor issue pass (it's a false alarm)

* temporarily disable a test in test_grad_tracking to check whether testing will pass

* re-enable test in test_grad_norm

* Update CHANGELOG.md

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Renamed get_torchtext_data_iterator to _get_torchtext_data_iterator as suggested by @borda
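
For context, a sketch of what such a helper could look like with the legacy torchtext.data API (the example strings and field names are made up for illustration, not the exact test code):

    import torchtext

    def _get_torchtext_data_iterator(include_lengths=False):
        # With include_lengths=True each batch attribute becomes a
        # (data, lengths) tuple instead of a single Tensor.
        text_field = torchtext.data.Field(
            sequential=True,
            include_lengths=include_lengths,
            init_token="<sos>",
            eos_token="<eos>",
        )
        fields = [("text", text_field)]
        examples = [
            torchtext.data.Example.fromlist([text], fields)
            for text in ("a b c a c", "b c a a", "c b a")
        ]
        dataset = torchtext.data.Dataset(examples, fields)
        text_field.build_vocab(dataset)
        iterator = torchtext.data.Iterator(
            dataset, batch_size=3, train=False, sort=False,
        )
        return iterator, text_field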

* Update pytorch_lightning/utilities/apply_func.py

Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>

* adding tests more specific to batch_move_data_to_device with torchtext Batch

* added check that Tensors were moved to target device
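
A hedged sketch of such a check, built on pytorch_lightning.utilities.apply_func.move_data_to_device (the utility touched by this PR) and the iterator helper sketched above:

    import torch
    from pytorch_lightning.utilities.apply_func import move_data_to_device

    def test_batch_move_data_to_device_torchtext(device=torch.device("cpu")):
        data_iterator, _ = _get_torchtext_data_iterator(include_lengths=True)
        batch = next(iter(data_iterator))
        batch_on_device = move_data_to_device(batch, device)
        text, lengths = batch_on_device.text  # (data, lengths) tuple
        # Every tensor, including the lengths tensor, should now live
        # on the target device.
        assert text.device == device
        assert lengths.device == device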

* removed tests using RNN models to be moved into a separate PR

* fixing flake8 errors that showed up after merging from the master branch
	modified:   tests/base/datamodules.py
	modified:   tests/callbacks/test_model_checkpoint.py

* parameterized test to reduce code duplication

* Added check only if the length tensor exists. Removed left-over comments.

* rearranged device parameterization and added pytest.param

* Try to figure out why only one device is tested on Linux machines

* Testing on CPU and GPU devices (the GPU test is skipped if no CUDA device is available)
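
A sketch of that parameterization; the skip mark is standard pytest, and the test name and body are assumed from the bullets above:

    import pytest
    import torch

    @pytest.mark.parametrize("device", [
        pytest.param(torch.device("cpu")),
        pytest.param(
            torch.device("cuda", 0),
            marks=pytest.mark.skipif(
                not torch.cuda.is_available(),
                reason="requires a CUDA device",
            ),
        ),
    ])
    @pytest.mark.parametrize("include_lengths", [False, True])
    def test_batch_move_data_to_device_torchtext(include_lengths, device):
        ...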

* added test for TPU device (experimental)

* Adding test parameterization for TPU test (experimental)

* change import statement to limit what is imported for a TPU environment
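
One common pattern for such a guarded import, as a sketch (the flag name is illustrative):

    import importlib.util

    TORCHTEXT_AVAILABLE = importlib.util.find_spec("torchtext") is not None
    if TORCHTEXT_AVAILABLE:
        from torchtext.data import Batch
    else:
        # Fallback so isinstance(x, Batch) checks stay valid (and are
        # False for real batches) when torchtext is not installed.
        Batch = type(None)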

* made test work with TPU

* Change to trigger CI

* Change to trigger CI

* uncommented TPU test to check CI

* re-enabling TPU test

* small change to trigger CI build

* small change to trigger CI build

* small change to trigger CI build

* adding tests/utilities/test_apply_func_torchtext.py to CI TPU test

* try to prevent the test from being skipped on CI with TPU

* remove testing on TPU

* undo an accidental change to test_tpu.py (file should not have been touched)

* small change to trigger CI build

* small change to trigger CI build

* Update tests/utilities/test_apply_func_torchtext.py

* Revert to previous version

* Apply suggestions from code review

* Change to trigger CI

Co-authored-by: Thomas Schaaf <tschaaf@mmm.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Thomas Schaaf <tschaaf@cs.cmu.edu>
2020-07-31 07:53:08 -04:00
__init__.py EvalResult support for val loop (PR 3/5) (#2651) 2020-07-22 13:53:10 -04:00
apply_func.py Bugfix/torchtext include lengths (#2689) 2020-07-31 07:53:08 -04:00
cloud_io.py repair CI for Win (#2358) 2020-06-26 21:38:25 -04:00
debugging.py Fix max_batches with fast_dev_run. (#2581) 2020-07-27 17:56:55 -04:00
device_dtype_mixin.py fix dtype/device property not getting updated in submodules (#2657) 2020-07-21 15:18:57 -04:00
distributed.py EvalResult support for val loop (PR 3/5) (#2651) 2020-07-22 13:53:10 -04:00
exceptions.py Profiler summary (#1259) 2020-03-31 08:57:48 -04:00
memory.py Add toma comments to auto_scale_batch_size (#1994) 2020-05-29 05:57:50 +00:00
parsing.py EvalResult support for val loop (PR 3/5) (#2651) 2020-07-22 13:53:10 -04:00
seed.py replace train_percent_check with limit_train_batches (#2220) 2020-06-17 13:42:28 -04:00