* Fix num_classes warning
Put to_categorical before get_num_classes in metrics/functional/classification.py
* Update classification.py
Remove whitespace from blank line.
* Fix shuffle for distributed sampler
* add test
* test
* chlog
* update test
* update test
* update test
* assertions via callback
* define callback outside for pickling
* skip ddp test on windows
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Test using torchtext.data.Field with include_lengths=True/False
* Fix issue where Tensors in a Batch generated by torchtext with torchtext.data.Field configured with include_lengths=True were not moved to the target device (see the sketch after this commit group)
* Add description for fix of issue #2688
* changes to accommodate CodeFactor issues
* Another attempt to make the last CodeFactor issue pass (it's a false alarm)
* temporarily disable the test_grad_tracking test to check if testing will pass
* reenable test in test_grad_norm
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Renamed get_torchtext_data_iterator to _get_torchtext_data_iterator as suggested by @borda
* Update pytorch_lightning/utilities/apply_func.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* adding tests more specific to batch_move_data_to_device with torchtext Batch
* added check that Tensors were moved to target device
* removed tests using RNN models to be moved into a separate PR
* fixing FLAKE8 errors that showed up after merge from master branch
modified: tests/base/datamodules.py
modified: tests/callbacks/test_model_checkpoint.py
* parameterized test to reduce code duplication
* Added check only if the length tensor exists. Removed leftover comments.
* rearranged device parameterization and added pytest.param
* Try to figure out why only one device is tested on Linux machines
* Testing on CPU and GPU devices (GPU test is skipped if no CUDA device is available).
* added test for TPU device (experimental)
* Adding test parameterization for TPU test (experimental)
* change import statement to limit what is imported for a TPU environment
* made test work with TPU
* Change to trigger CI
* Change to trigger CI
* uncommented TPU test to check CI
* reenabling TPU test
* small change to trigger CI build
* small change to trigger CI build
* small change to trigger CI build
* adding tests/utilities/test_apply_func_torchtext.py to CI TPU test
* try to keep the test from being skipped on CI with TPU
* remove testing on TPU
* undo an accidental change to test_tpu.py (file should not have been touched)
* small change to trigger CI build
* small change to trigger CI build
* Update tests/utilities/test_apply_func_torchtext.py
* Revert to previous version
* Apply suggestions from code review
* Change to trigger CI
Co-authored-by: Thomas Schaaf <tschaaf@mmm.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Thomas Schaaf <tschaaf@cs.cmu.edu>
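For context, here is a hedged, self-contained sketch of what the fix above guarantees. A plain (data, lengths) tuple stands in for a real torchtext Batch built from a Field with include_lengths=True; this is not the test code from this PR.

```python
import torch
from pytorch_lightning.utilities.apply_func import move_data_to_device

# With include_lengths=True a torchtext Field yields a (data, lengths) tuple;
# the fix ensures *both* tensors are moved to the target device.
# The tuple below is a stand-in for a real torchtext Batch attribute.
text = (torch.randint(0, 100, (5, 3)), torch.tensor([5, 4, 3]))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

moved = move_data_to_device(text, device)
assert all(t.device.type == device.type for t in moved)
```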
* export model to ONNX (usage sketch follows this commit group)
* prepare data before exporting
* support for dataloaders and tensors
* added tests
* use example_input_array
add to changelog
* updated docstring
* added onnx inference tests
* temp commit
* removed schema validation test
* add onnxruntime to environment.yml
* moved onnxruntime to environment.yml pip
* add example in doc
* add lines between code block
* added PR to changelog
* is file check
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* remove *
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* infer example outputs
* added doctest for onnx
* fix windows tests
* moved eval within condition block
* self.forward to self
* added docs
* fixed docs error
* added to toctree
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
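A minimal usage sketch of the ONNX export this group adds. The to_onnx name and its fallback to example_input_array follow the commit descriptions above, but the exact signature is assumed rather than quoted.

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)
        # example_input_array is used to trace the model when no input sample is given
        self.example_input_array = torch.randn(1, 28 * 28)

    def forward(self, x):
        return self.layer(x)


model = LitModel()
# Export to ONNX using the stored example_input_array; an explicit
# input_sample could be passed instead.
model.to_onnx("model.onnx")
```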
* pt 1.6
* don't use the new zipfile serialization for now
* quick flake8 fixes
* remove unnecessary f
* coalesce strings
* remove comma
* remove extra commas
* Apply suggestions from code review
Co-authored-by: Peter Yu <2057325+yukw777@users.noreply.github.com>
* set _use_new_zipfile_serialization to False only for PyTorch 1.6.0 (see the sketch after this commit group)
* remove unnecessary comments
* flake8 fixes
* use pkg_resources instead of packaging
* readme
* format
* version
* chlog
Co-authored-by: Peter Yu <peter@asapp.com>
Co-authored-by: Peter Yu <2057325+yukw777@users.noreply.github.com>
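A hedged sketch of the guard described above, illustrative only and not the library's exact code; the version comparison via pkg_resources follows the "use pkg_resources instead of packaging" commit.

```python
import torch
from pkg_resources import parse_version

# PyTorch 1.6 made the zipfile-based format the default for torch.save,
# which older PyTorch versions cannot load; keep the legacy format there.
model = torch.nn.Linear(4, 2)

save_kwargs = {}
if parse_version(torch.__version__) >= parse_version("1.6.0"):
    # the commit above applies this only for PyTorch 1.6.0; the >= check here is illustrative
    save_kwargs["_use_new_zipfile_serialization"] = False

torch.save(model.state_dict(), "checkpoint.ckpt", **save_kwargs)
```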
The speed-up is achieved by:
- Moving the "where" out of the loop (and replacing it with min for simplicity).
- Replacing the manual sum and pow with torch.norm. Even though this results
in unnecessary computation (computing pow(root)), this is still a lot
faster.
- Preallocating the output gives a slight speed-up.
Note that calling .to for all parameters results in a small speed
penalty (~4 ms in my case) but allows parameters on different devices.
Overall this reduces the time used for gradient clipping from 206 ms to
74 ms for my model (ResNet50 + a few additional variables, all on GPU).
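A minimal, hypothetical sketch of the optimization described above: per-parameter torch.norm, a preallocated buffer, a single clamp in place of the in-loop "where", and .to() so gradients may live on different devices. The function name and eps value are illustrative; this is not the library's exact implementation.

```python
import torch


def clip_grad_norm_fast(parameters, max_norm, norm_type=2.0, eps=1e-6):
    params = [p for p in parameters if p.grad is not None]
    if not params:
        return torch.tensor(0.0)
    device = params[0].grad.device

    # Preallocated output; torch.norm replaces the manual sum/pow,
    # and .to(device) allows gradients stored on different devices.
    norms = torch.empty(len(params), device=device)
    for i, p in enumerate(params):
        norms[i] = torch.norm(p.grad.detach(), norm_type).to(device)
    total_norm = torch.norm(norms, norm_type)

    # clamp(max=1.0) plays the role of the "min" that replaced torch.where.
    clip_coef = (max_norm / (total_norm + eps)).clamp(max=1.0)
    for p in params:
        p.grad.detach().mul_(clip_coef.to(p.grad.device))
    return total_norm


# usage sketch
model = torch.nn.Linear(10, 10)
model(torch.randn(4, 10)).sum().backward()
clip_grad_norm_fast(model.parameters(), max_norm=1.0)
```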
* truncate version number
* add docs and example
* extend docs
* docs
* docs
* changelog
* show last
* Update pytorch_lightning/core/lightning.py
* Update pytorch_lightning/core/lightning.py
Co-authored-by: William Falcon <waf2107@columbia.edu>
* Fix fast_dev_run to run for all val_dataloaders
* fast_dev_run check
* changelog
* explicit
* limit_batches with fast_dev_run in init
* add test
* whitespace and comment fix
* comment and assertion
* added tests
* added tests
* added tests
* added tests
* update rtol
* Revert "update rtol"
This reverts commit 4320329540.
* added tests
Co-authored-by: William Falcon <waf2107@columbia.edu>
* fix weights_save_path and drop ckpt_path
* add tests
* unused import
* update docs
* changelog
* pep8
* fix horovod test
* make backward compatible
* perform same test for all loggers
* fix for when logger=False and weights_save_path is set
* update changelog
* update docs
* update tests
* do not set save dir dynamically
* remove duplicate test
* remove duplicated tests
* update tests
* update tests
* remove remaining ckpt_path references
* move defaults to init as suggested by @Borda
* test deprecation