# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This covers most combinations of important settings. For a test to pass, the model is expected to reach a reasonable test accuracy.

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```
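During development it is usually faster to run only a subset of the suite. The directory path and `-k` expression below are illustrative; these are pytest's standard selection flags:

```shell
# run a single test directory
python -m pytest tests/callbacks -v

# run only tests whose names match an expression
python -m pytest tests -k "checkpoint" -v

# stop at the first failure
python -m pytest tests -x
```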

To test models that require a GPU, make sure to run the above command on a GPU machine. The machine must have at least 2 GPUs to run distributed tests.

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, NVIDIA/DALI, etc. You can rely on our CI to make sure all these tests pass.

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies
# under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```

## Building test image

You can build the image yourself; note that this takes a long time, so be prepared to wait.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile.
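For example, to target a different PyTorch release you could pass a different `TORCH_VERSION` build argument and tag the image accordingly (the `1.10` value here is only illustrative; check the available Dockerfiles for supported versions):

```shell
docker image build -t pytorch_lightning:devel-torch1.10 \
    -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.10 .
```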

```bash
# list local images
docker image list

# start an interactive session in the container
docker run --rm -it pytorch_lightning:devel-torch1.9 bash

# remove the image when done
docker image rm pytorch_lightning:devel-torch1.9
```