# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This covers most combinations of important settings. To pass, each test expects the model to reach a reasonable test accuracy.

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```

To test models that require a GPU, run the above command on a GPU machine. The machine must have at least 2 GPUs to run the distributed tests.
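A quick way to confirm a machine meets this requirement is to count the visible GPUs before launching the suite. This is a sketch assuming `nvidia-smi` is on the `PATH`; it reports 0 when no NVIDIA driver is present:

```shell
# count GPUs listed by nvidia-smi; falls back to 0 if the tool is missing
NUM_GPUS=$( (nvidia-smi -L 2>/dev/null || true) | grep -c "^GPU" || true )
echo "visible GPUs: $NUM_GPUS"
if [ "$NUM_GPUS" -lt 2 ]; then
    echo "fewer than 2 GPUs: distributed tests will not run"
fi
```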

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, NVIDIA/DALI, etc. You can rely on our CI to make sure all these tests pass.
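To see which of these optional dependencies are present in your environment, one option is a small import probe. This is a sketch; the package list below is illustrative, not exhaustive:

```shell
# probe a few optional test dependencies; prints one status line per package
for pkg in horovod fairscale apex; do
    if python3 -c "import $pkg" 2>/dev/null; then
        echo "$pkg: installed"
    else
        echo "$pkg: missing"
    fi
done
```

Tests that need a missing package are skipped rather than failed when you run the suite locally.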

## Running coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is installed as part of the dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```

## Building test image

You can build the image on your own; note that it takes a long time, so be prepared.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile.

```bash
# list the available images
docker image list

# run a container interactively
docker run --rm -it pytorch_lightning:devel-torch1.9 bash

# remove the image once you are done
docker image rm pytorch_lightning:devel-torch1.9
```