| Name | Last commit | Date |
| ---- | ----------- | ---- |
| accelerators | Automatically find and run special tests (#6669) | 2021-03-26 17:04:59 +00:00 |
| base | Do not add return dict items to callback_metrics (#6682) | 2021-03-26 14:05:20 +01:00 |
| callbacks | Do not add return dict items to callback_metrics (#6682) | 2021-03-26 14:05:20 +01:00 |
| checkpointing | Do not add return dict items to callback_metrics (#6682) | 2021-03-26 14:05:20 +01:00 |
| core | Support teardown hook on DataModule (#4673) | 2021-03-25 07:51:55 -05:00 |
| deprecated_api | Simplify deprecations (#6620) | 2021-03-25 15:26:38 +01:00 |
| helpers | Add PyTorch 1.8 Profiler 5/5 (#6618) | 2021-03-23 20:43:21 +00:00 |
| loggers | Add artifcact_location arg to MLFlow logger (#6677) | 2021-03-26 00:12:03 +01:00 |
| metrics | Simplify deprecations (#6620) | 2021-03-25 15:26:38 +01:00 |
| models | Add on_epoch_start to run at the beginning of every loop irrespective of train/val/test (#6498) | 2021-03-25 14:20:49 +01:00 |
| overrides | Flash predict step (#6577) | 2021-03-23 11:13:13 -04:00 |
| plugins | Feature/double precision (#6595) | 2021-03-24 15:47:58 +05:30 |
| trainer | Remove legacy `Result` parameters (#6016) | 2021-03-28 11:55:08 +02:00 |
| tuner | Fix tuner.scale_batch_size not finding batch size attribute when using datamodule (#5968) | 2021-03-14 09:16:19 +01:00 |
| utilities | Automatically find and run special tests (#6669) | 2021-03-26 17:04:59 +00:00 |
| README.md | Fix pre-commit trailing-whitespace and end-of-file-fixer hooks (#5387) | 2021-01-26 14:27:56 +01:00 |
| `__init__.py` | fixing examples (#6600) | 2021-03-20 18:58:59 +00:00 |
| collect_env_details.py | add copyright to tests (#5143) | 2021-01-05 09:57:37 +01:00 |
| conftest.py | PoC: Accelerator refactor (#5743) | 2021-02-12 15:48:56 -05:00 |
| mnode_tests.txt | Mnodes (#5020) | 2021-02-04 20:55:40 +01:00 |
| special_tests.sh | Automatically find and run special tests (#6669) | 2021-03-26 17:04:59 +00:00 |
| test_profiler.py | Add PyTorch 1.8 Profiler 5/5 (#6618) | 2021-03-23 20:43:21 +00:00 |


# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This provides coverage for most combinations of the important settings. The tests expect the model to reach a reasonable test accuracy in order to pass.

## Running tests

The automatic Travis CI runs execute ONLY CPU-based tests. Although these cover most of the use cases, run the full test suite on a 2-GPU machine to validate it completely.

To run all tests, do the following:

Install Open MPI or another MPI implementation. See the Open MPI documentation for installation instructions.
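For example, on a Debian/Ubuntu machine the distribution packages are usually sufficient (the package names below are an assumption and may differ on other distributions):

```bash
# install Open MPI from the distribution packages (Debian/Ubuntu package names)
sudo apt-get update
sudo apt-get install -y openmpi-bin libopenmpi-dev

# verify the installation
mpirun --version
```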

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install AMP support
bash requirements/install_AMP.sh

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```
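While iterating on a change you usually do not need the full suite; standard pytest selection flags work as expected. A minimal sketch (the paths simply mirror the `tests/` folders listed above):

```bash
# run only the trainer tests
py.test tests/trainer -v

# run tests whose name matches an expression
py.test -v -k "early_stopping"

# stop at the first failure and print local variables
py.test -v -x --showlocals
```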

To test models that require a GPU, make sure to run the above command on a GPU machine. The GPU machine must have (a quick environment check is sketched below this list):

  1. At least 2 GPUs.
  2. NVIDIA Apex installed.
  3. Horovod with NCCL support: `HOROVOD_GPU_OPERATIONS=NCCL pip install horovod`
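A quick sanity check of this environment before kicking off the full run (a sketch, assuming PyTorch, Apex, and Horovod are already installed):

```bash
# PyTorch should report at least 2 GPUs
python -c "import torch; print(torch.cuda.device_count())"

# NVIDIA Apex should be importable
python -c "import apex; print(apex.__file__)"

# Horovod should list NCCL / GPU operations in its build summary
horovodrun --check-build
```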

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA Apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```
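If you prefer a browsable report, coverage.py can also emit HTML (a standard coverage.py feature, not specific to this repository):

```bash
# write an HTML report to htmlcov/
coverage html

# then open htmlcov/index.html in a browser to inspect per-file coverage
```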

## Building test image

You can build the image yourself; note that the build takes a long time, so be prepared.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.4 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.4 .
```

To build other versions, select a different Dockerfile.
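For example, assuming the same Dockerfile accepts other `TORCH_VERSION` values, a PyTorch 1.5 image could be built like this (check the Dockerfile for the versions it actually supports):

```bash
docker image build \
    -t pytorch_lightning:devel-torch1.5 \
    -f dockers/cuda-extras/Dockerfile \
    --build-arg TORCH_VERSION=1.5 .
```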

```bash
docker image list
docker run --rm -it pytorch_lightning:devel-torch1.4 bash
docker image rm pytorch_lightning:devel-torch1.4
```
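To run the test suite inside such a container, something along these lines should work (the mount path, image tag, and install step are assumptions; adjust them to your setup):

```bash
# mount the local checkout into the container and run the CPU tests there
docker run --rm -it -v "$(pwd)":/workspace -w /workspace pytorch_lightning:devel-torch1.4 \
    bash -c "pip install -r requirements/devel.txt && py.test -v"

# add --gpus all (Docker 19.03+) to expose the host GPUs to the container
```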