# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2 + amp, etc.). This provides coverage for most combinations of important settings. To pass, the tests expect the model to reach a reasonable test accuracy.

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```

To test models that require a GPU, make sure to run the above command on a GPU machine. The machine must have at least 2 GPUs to run the distributed tests.

Note that this setup will not run tests that require optional packages such as Horovod, FairScale, NVIDIA/apex, or NVIDIA/DALI to be installed. You can rely on our CI to make sure all these tests pass.

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```
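`coverage xml` writes a Cobertura-style `coverage.xml` whose root element carries the overall line rate, which is handy for scripting thresholds. A minimal sketch of reading it with the standard library (the helper name is ours):

```python
import xml.etree.ElementTree as ET

def overall_line_rate(path: str = "coverage.xml") -> float:
    """Return the overall line-coverage fraction (0.0-1.0) from a Cobertura-style report."""
    root = ET.parse(path).getroot()
    return float(root.get("line-rate", "0.0"))
```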

## Building test image

You can build the image yourself; note that this takes a long time, so be prepared.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile (or pass a different `TORCH_VERSION` build argument).

```bash
# list local images
docker image list

# run the test image interactively
docker run --rm -it pytorch_lightning:devel-torch1.9 bash

# remove the image when done
docker image rm pytorch_lightning:devel-torch1.9
```