PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This provides coverage for most combinations of important settings. The tests expect the model to reach a reasonable test accuracy in order to pass.

Running tests

The automatic Travis runs ONLY execute CPU-based tests. Although these cover most use cases, you should also run the suite on a machine with at least 2 GPUs to validate the full test suite.
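
Before launching the GPU runs, it can help to verify that PyTorch actually sees both devices. A minimal sanity check (not part of the test suite, just an example):

# confirm that at least 2 GPUs are visible to PyTorch
python -c "import torch; assert torch.cuda.device_count() >= 2, 'need at least 2 GPUs'"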

To run all tests, do the following:

Install Open MPI or another MPI implementation. See the Open MPI documentation for installation instructions.
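
As an example, on Debian/Ubuntu machines one way to get Open MPI is through the distribution packages (package names may differ on other systems):

# example only: install Open MPI from the system package manager (Debian/Ubuntu)
sudo apt-get update
sudo apt-get install -y libopenmpi-dev openmpi-bin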

git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install AMP support
bash requirements/install_AMP.sh

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
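
While developing, you rarely need the whole suite. pytest's standard selection options let you run a subset; the path and keyword below are only examples:

# run a single test package, e.g. the trainer tests
py.test tests/trainer -v

# run only tests whose names match a keyword expression
py.test -v -k "checkpoint"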

To test models that require a GPU, make sure to run the above command on a GPU machine. The GPU machine must have:

  1. At least 2 GPUs.
  2. NVIDIA Apex installed.
  3. Horovod with NCCL support: HOROVOD_GPU_OPERATIONS=NCCL pip install horovod (a quick sanity check for these requirements is sketched below).
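
To double-check that Apex is importable and that Horovod was built with NCCL support, two optional one-liners (assuming both packages are already installed):

# check that Apex can be imported
python -c "import apex"

# list the frameworks and GPU operations Horovod was built with
horovodrun --check-build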

Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA Apex installed.

cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# exporting results
coverage xml
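
Besides the XML export, coverage can also render a browsable HTML report (written to htmlcov/ by default):

# generate an HTML report in htmlcov/
coverage html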

Building test image

You can build it on your own; note that it takes a long time to build, so be prepared.

git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.4 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.4 .

To build other versions, select a different Dockerfile.

# list the available images
docker image list
# start an interactive shell inside the container
docker run --rm -it pytorch_lightning:devel-torch1.4 bash
# remove the image once you are done
docker image rm pytorch_lightning:devel-torch1.4
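
Once an image is built, the test suite can also be run inside the container. A rough sketch, reusing the tag from the build above (the mount path and flags are examples; add --gpus all or use nvidia-docker to expose GPUs):

# mount the local checkout and run the tests inside the container
docker run --rm -it -v $(pwd):/workspace -w /workspace pytorch_lightning:devel-torch1.4 \
    bash -c "pip install -r requirements/devel.txt && py.test -v"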