
# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This covers most combinations of the important settings. A test passes only if the model reaches a reasonable test accuracy.
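The train-then-assert pattern these tests follow can be sketched in plain Python. Note that `run_mnist_training` and the threshold value below are illustrative stand-ins, not the actual Lightning test helpers:

```python
# Sketch of the test pattern: train under a given trainer configuration,
# then assert the model clears an accuracy bar. All names here are
# hypothetical; the real tests use pytorch_lightning's Trainer and models.

def run_mnist_training(accelerator: str = "cpu", precision: int = 32) -> float:
    # Stand-in for a real training run; returns a fake test accuracy.
    return 0.95

def assert_trained_to_threshold(acc: float, threshold: float = 0.9) -> None:
    # A test fails if the trained model does not reach the threshold.
    assert acc >= threshold, f"accuracy {acc:.2f} below threshold {threshold:.2f}"

accuracy = run_mnist_training(accelerator="cpu")
assert_trained_to_threshold(accuracy)
print(f"test accuracy: {accuracy:.2f}")
```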

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```

To test models that require a GPU, make sure to run the above command on a GPU machine. The machine must have at least 2 GPUs to run the distributed tests.
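A common way to express such a hardware requirement is a skip guard on the test itself. The sketch below uses a hypothetical `gpu_count` helper driven by an environment variable so it runs anywhere; a real test would query the CUDA runtime instead (e.g. `torch.cuda.device_count()`):

```python
import os
import unittest

# Hypothetical stand-in for torch.cuda.device_count(); reads an env var so
# this sketch stays runnable on a machine without CUDA.
def gpu_count() -> int:
    return int(os.environ.get("FAKE_GPU_COUNT", "0"))

class TestDistributed(unittest.TestCase):
    # Distributed tests are skipped unless enough GPUs are visible.
    @unittest.skipUnless(gpu_count() >= 2, "requires at least 2 GPUs")
    def test_ddp_roundtrip(self):
        self.assertEqual(1 + 1, 2)

# Run the case programmatically to show the skip behaviour.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestDistributed).run(result)
print("skipped:", len(result.skipped))  # 1 on a machine with fewer than 2 GPUs
```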

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, or NVIDIA/DALI. You can rely on our CI to make sure all these tests pass.

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```
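`coverage xml` writes a Cobertura-format `coverage.xml` whose root element carries an overall `line-rate` attribute; CI scripts sometimes read it directly. A minimal sketch, where the inline string stands in for a real report file:

```python
import xml.etree.ElementTree as ET

# Stand-in for the contents of coverage.xml; a real report has many
# nested <packages>/<classes> elements under the same root.
sample = '<coverage line-rate="0.91" branch-rate="0.75"></coverage>'

root = ET.fromstring(sample)
line_rate = float(root.attrib["line-rate"])
print(f"line coverage: {line_rate:.0%}")  # line coverage: 91%
```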

## Building test image

You can build the image yourself; note that the build takes a long time.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile.

```bash
docker image list
docker run --rm -it pytorch_lightning:devel-torch1.9 bash
docker image rm pytorch_lightning:devel-torch1.9
```