# PyTorch-Lightning Tests

Most of the tests in PyTorch Lightning train a BoringModel under various trainer conditions (ddp, ddp2+amp, etc...). Want to add a new test case and not sure how? Talk to us!

## Running tests

**Local:** Testing your work locally will help you speed up the process since it allows you to focus on particular (failing) test cases. To set up a local development environment, install both local and test dependencies:

```bash
# clone the repo
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install required dependencies
python -m pip install ".[dev, examples]"
# install pre-commit (optional)
python -m pip install pre-commit
pre-commit install
```

Additionally, to test backward compatibility with older versions of PyTorch Lightning, you need to download all saved version-checkpoints from the public AWS storage. Run the following script to get them:

```bash
wget https://pl-public-data.s3.amazonaws.com/legacy/checkpoints.zip -P legacy/
unzip -o legacy/checkpoints.zip -d legacy/
```

**Note:** These checkpoints are generated to set baselines for maintaining backward compatibility with legacy versions of PyTorch Lightning. Details of checkpoints for back-compatibility can be found here.

You can run the full test suite in your terminal via this make script:

```bash
make test
```

**Note:** if your computer does not have multiple GPUs or a TPU, these tests are skipped.
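The skipping mentioned above is just conditional test execution based on available hardware. The real suite uses pytest with custom conditions, but the mechanism can be sketched with the standard library's `unittest.skipUnless` (here `GPU_COUNT` is a hypothetical stand-in for `torch.cuda.device_count()` so the sketch stays dependency-free):

```python
import os
import unittest


def gpu_count() -> int:
    # Stand-in for torch.cuda.device_count(); reads a hypothetical
    # environment variable instead of querying CUDA.
    return int(os.environ.get("GPU_COUNT", "0"))


class TestDistributed(unittest.TestCase):
    # The decorator is evaluated at import time: on a machine without
    # 2 GPUs the test is recorded as skipped, not failed.
    @unittest.skipUnless(gpu_count() >= 2, "needs at least 2 GPUs")
    def test_ddp_runs(self):
        self.assertTrue(True)
```

On a laptop without GPUs the test shows up as skipped in the report rather than failing, which is exactly the behavior described above.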

**GitHub Actions:** For convenience, you can also rely on your own GitHub Actions build, which is triggered with each commit. This is useful if you do not test against all required dependency versions locally.

**Docker:** Another option is to use the PyTorch Lightning CUDA base Docker image. You can then run:

```bash
python -m pytest pytorch_lightning tests pl_examples -v
```

You can also run a single test as follows:

```bash
python -m pytest -v tests/trainer/test_trainer_cli.py::test_default_args
```

## Conditional Tests

To test models that require a GPU, make sure to run the above command on a GPU machine. The GPU machine must have at least 2 GPUs to run the distributed tests.

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, NVIDIA/DALI, etc. You can rely on our CI to make sure all these tests pass.

## Standalone Tests

There are certain standalone tests, which you can run using:

```bash
PL_RUN_STANDALONE_TESTS=1 python -m pytest -v tests/trainer/
# or
./tests/standalone_tests.sh tests/trainer
```
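`PL_RUN_STANDALONE_TESTS` is just an environment-variable gate: the standalone tests only execute when it is explicitly set to `1`. A minimal Python sketch of that check (the function name is ours, not Lightning's):

```python
import os


def standalone_enabled(env=None) -> bool:
    # Defaults to the real process environment; a mapping can be passed
    # in for testing.
    env = os.environ if env is None else env
    # Anything other than the exact string "1" leaves the gate closed.
    return env.get("PL_RUN_STANDALONE_TESTS", "0") == "1"
```

This is why the commands above prefix pytest with `PL_RUN_STANDALONE_TESTS=1`: without it, the standalone tests are filtered out.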

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m pytest pytorch_lightning tests pl_examples -v

# print coverage stats
coverage report -m

# exporting results
coverage xml
```

## Building test image

You can build the image on your own; note that this takes a long time, so be prepared.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile.

```bash
docker image list
docker run --rm -it pytorch_lightning:devel-torch1.9 bash
docker image rm pytorch_lightning:devel-torch1.9
```