# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This provides coverage of most combinations of important settings. The tests expect the model to reach a reasonable test accuracy in order to pass.
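For orientation, the shape of such a test is roughly as follows. This is a hedged sketch, not code copied from this repository: the module name, the chosen Trainer flags, and the metric check are placeholders for whatever combination a particular test exercises.

```python
# Illustrative sketch only -- not an actual test from this suite.
import pytorch_lightning as pl


def test_trainer_combination(tmpdir):
    model = MNISTModel()  # placeholder for a small LightningModule defined by the suite
    trainer = pl.Trainer(
        default_root_dir=tmpdir,    # keep logs/checkpoints inside the pytest tmpdir
        max_epochs=1,
        gpus=2,                     # the "trainer condition" under test, e.g. ddp (+ amp)
        distributed_backend="ddp",
        precision=16,
    )
    trainer.fit(model)
    trainer.test(model)
    # a real test then asserts that the logged test metric (e.g. accuracy) clears a
    # threshold -- the "reasonable degree of accuracy" mentioned above
```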
## Running tests

The automatic Travis CI tests run CPU-based tests ONLY. Although these cover most of the use cases, run the full test suite on a 2-GPU machine to validate everything.

To run all tests, do the following:

Install Open MPI or another MPI implementation; see the Open MPI documentation for installation instructions.
```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install AMP support
bash tests/install_AMP.sh

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```
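Because the suite is plain pytest, the usual selection flags also work when you only need a subset; the file path and keyword below are just examples:

```bash
# run a single test file
py.test -v tests/test_profiler.py

# run only tests whose names match a keyword expression
py.test -v -k "profiler"
```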
To test models that require a GPU, make sure to run the above command on a GPU machine. The GPU machine must have:

- At least 2 GPUs.
- NVIDIA Apex installed.
- Horovod with NCCL support:

```bash
HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_GPU_BROADCAST=NCCL pip install horovod
```
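A quick, informal way to check these prerequisites from a Python shell (purely a sanity check; the imports below are the usual module names for Apex and Horovod):

```python
# rough environment check before launching the GPU tests (illustrative)
import torch

assert torch.cuda.device_count() >= 2, "the GPU tests expect at least 2 GPUs"

import apex            # NVIDIA Apex must be importable
import horovod.torch   # Horovod built against this PyTorch install (with NCCL)
```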
## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA Apex installed.
```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```
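If you prefer to browse the results locally, coverage.py can also produce an HTML report (optional; this is a standard `coverage` subcommand, not something specific to this repo):

```bash
# optional: write a browsable HTML report to htmlcov/
coverage html
```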
## Building test image

You can build the test image on your own; note that it takes a long time, so be prepared.
```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-pt_1_4 -f tests/Dockerfile --build-arg TORCH_VERSION=1.4 .
```
To build other versions, select a different Dockerfile.
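For example, assuming the Dockerfile is parameterized through the same `TORCH_VERSION` build argument used above, another PyTorch version could be built like this (the version value is purely illustrative):

```bash
# illustrative: same Dockerfile, different PyTorch version via the build argument
docker image build -t pytorch_lightning:devel-pt_1_5 -f tests/Dockerfile --build-arg TORCH_VERSION=1.5 .
```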
```bash
# list the available images
docker image list

# start an interactive shell in the container
docker run --rm -it pytorch_lightning:devel-pt_1_4 bash

# remove the image once you no longer need it
docker image rm pytorch_lightning:devel-pt_1_4
```
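To actually run the GPU tests inside the container, the host GPUs have to be exposed to it; with a recent Docker and the NVIDIA container toolkit installed, that typically looks like the following (the `--gpus` flag is standard Docker, not specific to this image):

```bash
# expose the host GPUs to the container (requires the NVIDIA container toolkit)
docker run --rm -it --gpus all pytorch_lightning:devel-pt_1_4 bash
```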