# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This covers most combinations of important settings. For a test to pass, the trained model is expected to reach a reasonable test accuracy.
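As a rough illustration (not an actual test from the suite), a minimal test in this spirit instantiates a small `LightningModule`, runs it through a `Trainer` with `fast_dev_run=True`, and checks that the run completed; the model, test name, and assertion below are hypothetical:

```python
# Hypothetical minimal sketch of a Trainer test; the real suite trains full
# MNIST models under many more trainer configurations.
import torch
from torch.utils.data import DataLoader, TensorDataset

import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


def test_trainer_fast_dev_run(tmpdir):
    dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
    trainer = pl.Trainer(default_root_dir=tmpdir, fast_dev_run=True)
    trainer.fit(TinyModel(), DataLoader(dataset, batch_size=8))
    # fast_dev_run executes a single training batch
    assert trainer.global_step == 1
```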

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```

To test models that require a GPU, make sure to run the above command on a GPU machine. The machine must have at least 2 GPUs to run the distributed tests.

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, NVIDIA/DALI, etc. You can rely on our CI to make sure all these tests pass.
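Such tests are guarded so they are skipped when the required hardware or package is not available (the suite also has its own `RunIf` helper for this purpose). The snippet below is a hedged sketch using plain `pytest.mark.skipif`, with made-up test names:

```python
# Hypothetical sketch of how hardware- and package-dependent tests are guarded.
import importlib.util

import pytest
import torch


@pytest.mark.skipif(torch.cuda.device_count() < 2, reason="requires at least 2 GPUs")
def test_some_ddp_behaviour():
    ...


@pytest.mark.skipif(importlib.util.find_spec("horovod") is None, reason="requires Horovod")
def test_some_horovod_behaviour():
    ...
```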

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA/apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of the dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```

## Building test image

You can build the image yourself; note that the build takes a long time, so be prepared to wait.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile.

```bash
# list local images
docker image list

# run the image interactively
docker run --rm -it pytorch_lightning:devel-torch1.9 bash

# remove the image
docker image rm pytorch_lightning:devel-torch1.9
```