* add support for wrong dtype in apply_func (see the sketch after this block)
* apply loader resetting to a possible collection of loaders
* add combined loader iter class
* integrate combined loader iter to training loop
* fix imports
* fix imports
* finish supporters
* add tests for supporters
* add test for model with multiple loaders
* fix trainer integration
* fix instance check
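For context on the wrong-dtype support above: a minimal usage sketch of `apply_to_collection` from `pytorch_lightning/utilities/apply_func.py`, assuming the signature `apply_to_collection(data, dtype, function, *args, wrong_dtype=None, **kwargs)`; the example data is made up for illustration.

```python
from collections.abc import Sequence

from pytorch_lightning.utilities.apply_func import apply_to_collection

data = {"a": [1, 2, 3], "b": "hello"}

# Apply len() to every Sequence in the collection, but skip strings:
# wrong_dtype excludes matching elements from having `function` applied,
# so "hello" passes through unchanged.
result = apply_to_collection(data, dtype=Sequence, function=len, wrong_dtype=str)
print(result)  # {'a': 3, 'b': 'hello'}
```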
* Train loaders (#4032)
* patch for issues discussed in #1959, encapsulating underlying data structures returned from train_dataloader
* update data_loading.py so it uses the patch discussed in #1959
* rename class
* Separate CombinedLoaderIterator into two classes, and update related tests. (#4606)
* Fix the bugs after rebasing.
* Add custom get_len for apply_to_collection
* Refactor MultiIterator to be as CombinedLoaderIterator
* Call the wrapper for multiple train loaders in data_loading.py instead of training_loop.py, to get the right num_training_batches
* Reload _loader_iters when calling __iter__
* Don't transform a DataLoader into a CombinedLoaderIterator when it's the only loader
* Update test_fit_multiple_train_loaders to test num_training_batches
* Separate CombinedLoaderIterator into CombinedLoaderIterator and CombinedDataLoader. Add CombinedDataset for a unified DataLoader format.
* Initialize CombinedDataLoader before calculating num_training_batches. Also update self._worker_check for multiple loaders
* Update tests for supporters
* Update tests for multiple trainloaders. Add few_workers tests for multiple loaders.
* Fix pep8 issues
* Add tests for train_loader_patch.py
* Add descriptions to multiple_trainloader_mode (see the combined-loader sketch after this block)
* Remove unused variables
* Add docstrings and typing
* Add more tests for better coverage
* Remove unused commented-out code
* Add sampler property
* Remove extract_dataset
* Update typing
* pep8
* Update train_loader_patch.py
* Apply suggestions from code review
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/trainer/supporters.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* reviewer comments
* fix stupid import
* add docs
* add back line separator
* fix line sep
* pep8
* Apply suggestions from code review
* fix
* fix
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Apply suggestions from code review
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
* flake8
Co-authored-by: Justus Schock <justusschock@justuss-mbp.fritz.box>
Co-authored-by: Christofer Fransson <christofer_fransson@yahoo.com>
Co-authored-by: YI-LIN SUNG <r06942076@ntu.edu.tw>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Jirka Borovec <jirka.borovec@seznam.cz>
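The commits above build CombinedDataset, CombinedDataLoader, and CombinedLoaderIterator in pytorch_lightning/trainer/supporters.py. As a rough sketch of the multiple_trainloader_mode semantics only (a simplified stand-in class and made-up data, not the actual Lightning implementation):

```python
from itertools import cycle

class CombinedLoaderSketch:
    """Iterate a dict of loaders as a single loader.

    'min_size' stops with the shortest loader, while 'max_size_cycle'
    cycles the shorter loaders until the longest one is exhausted.
    """

    def __init__(self, loaders, mode="max_size_cycle"):
        assert mode in ("max_size_cycle", "min_size")
        self.loaders = loaders
        self.mode = mode

    def __len__(self):
        # num_training_batches is derived from this combined length.
        lengths = [len(dl) for dl in self.loaders.values()]
        return max(lengths) if self.mode == "max_size_cycle" else min(lengths)

    def __iter__(self):
        # Build fresh iterators on every __iter__, mirroring the commit
        # "Reload _loader_iters when calling __iter__".
        if self.mode == "max_size_cycle":
            iters = {k: cycle(dl) for k, dl in self.loaders.items()}
            for _ in range(len(self)):
                yield {k: next(it) for k, it in iters.items()}
        else:
            for batches in zip(*self.loaders.values()):
                yield dict(zip(self.loaders.keys(), batches))

# Plain lists stand in for DataLoaders here:
combined = CombinedLoaderSketch({"a": [1, 2, 3, 4], "b": ["x", "y"]})
assert len(combined) == 4
assert list(combined)[2] == {"a": 3, "b": "x"}
```

The 'max_size_cycle' behavior is why the combined length must be known up front, hence initializing CombinedDataLoader before calculating num_training_batches above.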
* Test using torchtext.data.Field with include_lengths=True/False
* Fix issue where Tensors in a Batch generated by torchtext, with torchtext.data.Field configured as include_lengths=True, were not moved to the target device (see the sketch after this block)
* Add description for fix of issue #2688
* changes to accommodate CodeFactor issues
* Another attempt to make the last CodeFactor issue pass (it's a false alarm)
* temporarily disable the test_grad_tracking test to check if testing will pass
* reenable test in test_grad_norm
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Renamed get_torchtext_data_iterator to _get_torchtext_data_iterator as suggested by @borda
* Update pytorch_lightning/utilities/apply_func.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* adding tests more specific to batch_move_data_to_device with torchtext Batch
* added check that Tensors were moved to target device
* removed tests using RNN models to be moved into a separate PR
* fixing FLAKE8 errors that showed up after merge from master branch
modified: tests/base/datamodules.py
modified: tests/callbacks/test_model_checkpoint.py
* parameterized test to reduce code duplication
* Added check only if the length tensor exists. Removed leftover comments.
* rearranged device parameterization and added pytest.param
* Try to figure out why only one device is tested on Linux machines
* Testing on CPU and GPU devices (the GPU test is skipped if no CUDA device is available)
* added test for TPU device (experimental)
* Adding test parameterization for TPU test (experimental)
* change import statement to limit what is imported for a TPU environment
* made test work with TPU
* Change to trigger CI
* Change to trigger CI
* uncommented TPU test to check CI
* reenabling TPU test
* small change to trigger CI build
* small change to trigger CI build
* small change to trigger CI build
* adding tests/utilities/test_apply_func_torchtext.py to CI TPU test
* try to keep the test from being skipped on CI with TPU
* remove testing on TPU
* undo an accidental change to test_tpu.py (file should not have been touched)
* small change to trigger CI build
* small change to trigger CI build
* Update tests/utilities/test_apply_func_torchtext.py
* Revert to previous version
* Apply suggestions from code review
* Change to trigger CI
Co-authored-by: Thomas Schaaf <tschaaf@mmm.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Thomas Schaaf <tschaaf@cs.cmu.edu>
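For context on the fix above (#2688): with torchtext.data.Field(include_lengths=True), each affected attribute of a torchtext Batch is a (data, lengths) tuple rather than a bare Tensor, so moving only the first Tensor misses the lengths. A simplified stand-in for the recursive move Lightning performs through apply_to_collection (move_to_device is a made-up helper name here, not the library function):

```python
import torch

def move_to_device(obj, device):
    # Recurse into lists/tuples so that a (data, lengths) pair produced
    # by include_lengths=True has both tensors moved. The real utility
    # also handles Mappings, namedtuples, and torchtext Batch attributes.
    if isinstance(obj, torch.Tensor):
        return obj.to(device)
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_device(o, device) for o in obj)
    return obj

# e.g. batch.text == (token_ids, lengths) when include_lengths=True
text = (torch.zeros(4, 8, dtype=torch.long), torch.tensor([8, 6, 5, 3]))
moved = move_to_device(text, torch.device("cpu"))
```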
* refactor and add hook
variant a
variant b
add test
revert rename
add changelog
docs
* resolve merge duplication
* fix "overridden" typo
* fix test
* tpu id
* raise if TPU not available
* re-use apply_to_collection function for parsing collections
* comment
* make utility function available to user
* documentation
* move changelog entry to top
* fix tpu transfer call
* fix call
* remove hardcoded string
* improve test
* call model hook by default (see the hook sketch at the end)
* Apply suggestions from code review
* rename utility function
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
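For context, these last commits correspond to the batch-transfer hook and its user-facing utility. A minimal sketch, assuming the hook name transfer_batch_to_device and the utility move_data_to_device from the Lightning docs of this era (CustomBatch is an invented example type, and later releases extend the hook signature):

```python
import torch
from pytorch_lightning import LightningModule
from pytorch_lightning.utilities import move_data_to_device

class CustomBatch:
    # A batch object that generic .to(device) logic doesn't know about.
    def __init__(self, samples, targets):
        self.samples = samples
        self.targets = targets

class MyModel(LightningModule):
    def transfer_batch_to_device(self, batch, device):
        # Called by the Trainer by default ("call model hook by default").
        if isinstance(batch, CustomBatch):
            batch.samples = batch.samples.to(device)
            batch.targets = batch.targets.to(device)
            return batch
        # Fall back to the user-facing utility for ordinary collections.
        return move_data_to_device(batch, device)
```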