# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This provides testing for most combinations of important settings. The tests expect the model to reach a reasonable test accuracy in order to pass.

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```
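
If you only need a subset of the tests, the standard pytest selection flags work as well. The path and keyword below are only examples based on the layout of the `tests/` directory:

```bash
# run only the model tests (path is an example from the tests/ layout)
python -m pytest tests/models -v

# run only tests whose names match a keyword expression
python -m pytest tests -k "ddp" -v
```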

To test models that require a GPU, make sure to run the above command on a GPU machine. The GPU machine must have at least 2 GPUs to run distributed tests.
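
Before launching the distributed tests, it can help to confirm that the machine actually exposes at least 2 GPUs; the check below assumes the NVIDIA driver utilities are installed:

```bash
# list the visible NVIDIA GPUs; the distributed tests expect at least 2
nvidia-smi --list-gpus
```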

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, or NVIDIA/DALI. You can rely on our CI to make sure all these tests pass.

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# exporting results
coverage xml
```
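
If you prefer a browsable report, coverage.py can also emit HTML; this is standard coverage.py functionality rather than anything specific to this repo:

```bash
# write an interactive HTML report to htmlcov/
coverage html
# then open htmlcov/index.html in a browser
```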

## Building test image

You can build the image on your own; note that this takes a long time, so be prepared.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile.
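
For example, a build against a different PyTorch version could look like the following; the `1.10` tag is only illustrative and assumes the chosen Dockerfile supports that `TORCH_VERSION`:

```bash
# illustrative only: assumes the Dockerfile accepts this TORCH_VERSION
docker image build -t pytorch_lightning:devel-torch1.10 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.10 .
```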

To list, run, and remove the built image:

```bash
# list the local images
docker image list
# start an interactive shell inside the image
docker run --rm -it pytorch_lightning:devel-torch1.9 bash
# remove the image once you are done
docker image rm pytorch_lightning:devel-torch1.9
```
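
To run the test suite inside the container, one option is to mount your checkout and give the container GPU access; the `/workspace` mount point and `--gpus` usage below are a sketch and assume the NVIDIA Container Toolkit is set up:

```bash
# illustrative: mount the local checkout at /workspace and run the tests with GPU access
docker run --rm -it --gpus all -v "$(pwd)":/workspace -w /workspace pytorch_lightning:devel-torch1.9 \
    bash -c "pip install -r requirements/devel.txt && python -m pytest tests -v"
```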