# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (DDP, DDP2 + AMP, etc.). This covers most combinations of the important settings. A test passes only if the trained model reaches a reasonable test accuracy.

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```

To run tests that require a GPU, execute the command above on a GPU machine. Distributed tests require a machine with at least 2 GPUs.
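On a machine with more than two GPUs, you can pin the test run to specific devices via `CUDA_VISIBLE_DEVICES` (the device ids below are illustrative):

```shell
# Expose only GPUs 0 and 1 to the test run (ids are illustrative)
export CUDA_VISIBLE_DEVICES=0,1

# PyTorch now sees exactly two devices
py.test -v
```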

Note that this setup will not run tests that require optional packages such as Horovod, FairScale, NVIDIA/apex, or NVIDIA/DALI. You can rely on our CI to make sure all of these tests pass.

## Running coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```

## Building the test image

You can build the image yourself; note that this takes a long time.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile or change the `TORCH_VERSION` build argument.

```bash
# list local images
docker image list

# run the image interactively
docker run --rm -it pytorch_lightning:devel-torch1.9 bash

# remove the image
docker image rm pytorch_lightning:devel-torch1.9
```