lightning/tests
Ananya Harsh Jha 6f1a2ce517
integrate metrics API with self.log (#3961)
* metrics integration into self.log

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* ddp and regular test for self.log + metrics

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* pep8

* fix log tests

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

* docs

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>

Co-authored-by: Teddy Koker <teddy.koker@gmail.com>
2020-10-07 22:54:32 -04:00
backends Deprecate early_stop_callback Trainer argument (part 2) (#3845) 2020-10-04 17:36:47 -04:00
base removed support for EvalResult and TrainResult (#3968) 2020-10-07 22:39:16 -04:00
callbacks added tests for the training epoch end (#3967) 2020-10-07 22:27:36 -04:00
checkpointing Test to ensure ckpt filepath contains correct val score (#3933) 2020-10-07 07:43:17 -04:00
core integrate metrics API with self.log (#3961) 2020-10-07 22:54:32 -04:00
loggers Mocking Loggers Part 5/5 (final) (#3926) 2020-10-06 23:49:06 -04:00
metrics integrate metrics API with self.log (#3961) 2020-10-07 22:54:32 -04:00
models removed support for EvalResult and TrainResult (#3968) 2020-10-07 22:39:16 -04:00
trainer removed support for EvalResult and TrainResult (#3968) 2020-10-07 22:39:16 -04:00
utilities Added check to verify xla device is TPU (#3274) 2020-10-06 19:54:37 +02:00
README.md fix path in CI for release & python version in all dockers & duplicated badges (#3765) 2020-10-02 05:26:21 -04:00
__init__.py changelogs clean (#3082) 2020-08-20 22:58:53 +00:00
collect_env_details.py fix tensorboard version (#3132) 2020-09-15 23:48:48 +02:00
conftest.py repair CI for Win (#2358) 2020-06-26 21:38:25 -04:00
test_deprecated.py Rename log_save_interval, row_log_interval (#3748) 2020-10-06 10:27:06 -04:00
test_profiler.py RC & Docs/changelog (#1776) 2020-05-11 21:57:53 -04:00

README.md

PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This provides coverage for most combinations of the important settings. To pass, the tests expect the model to reach a reasonable test accuracy.
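
For illustration, here is a minimal sketch of the shape such a test takes. It is not taken from the test suite: a tiny random dataset stands in for MNIST, the class and test names are made up, and the Trainer arguments follow the API of this release.

import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyClassifier(pl.LightningModule):
    """Placeholder model; the real tests use full MNIST-based templates."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


def test_model_trains_on_cpu(tmpdir):
    # random data instead of MNIST, just to keep the sketch self-contained
    dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
    model = TinyClassifier()
    trainer = pl.Trainer(
        default_root_dir=tmpdir,
        max_epochs=1,
        limit_train_batches=4,
        logger=False,
        checkpoint_callback=False,
    )
    trainer.fit(model, DataLoader(dataset, batch_size=16))
    # the real tests additionally assert that accuracy/loss reach a reasonable threshold
    assert "train_loss" in trainer.callback_metrics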

Running tests

The automatic Travis tests ONLY run CPU-based tests. Although these cover most of the use cases, you should also run the suite on a 2-GPU machine to validate the full test suite.

To run all tests do the following:

Install Open MPI or another MPI implementation; see the Open MPI documentation for installation instructions.

git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install AMP support
bash requirements/install_AMP.sh

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v

To test models that require GPUs, make sure to run the above command on a GPU machine. The GPU machine must have the following (a quick environment check is sketched after the list):

  1. At least 2 GPUs.
  2. NVIDIA-apex installed.
  3. Horovod with NCCL support: HOROVOD_GPU_ALLREDUCE=NCCL HOROVOD_GPU_BROADCAST=NCCL pip install horovod
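
Before launching the GPU suite, a short sanity check along these lines can confirm the environment is set up. This is only a sketch: it checks that at least 2 GPUs are visible and that apex and Horovod import; hvd.nccl_built() is available in recent Horovod releases.

import torch

# the multi-GPU / ddp tests need at least 2 visible GPUs
print("visible GPUs:", torch.cuda.device_count())

try:
    import apex  # noqa: F401  # required for the amp tests
    print("apex: installed")
except ImportError:
    print("apex: NOT installed")

try:
    import horovod.torch as hvd  # required for the horovod tests
    print("horovod: installed, NCCL built:", bool(hvd.nccl_built()))
except ImportError:
    print("horovod: NOT installed")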

Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# exporting results
coverage xml

Building test image

You can build the image on your own; note that it takes a long time, so be prepared.

git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.4 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.4 .

To build other versions, select a different Dockerfile.

docker image list
docker run --rm -it pytorch_lightning:devel-torch1.4 bash
docker image rm pytorch_lightning:devel-torch1.4