# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This exercises most combinations of the important settings. To pass, a test expects the trained model to reach a reasonable test accuracy.
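The accuracy gate can be pictured with a small, hypothetical sketch (the real checks live in the individual test files; `assert_reasonable_accuracy` and the threshold below are illustrative names, not PL API):

```python
# Hypothetical sketch of the accuracy gate these tests apply: train a model,
# collect its accuracy over epochs, and fail unless it clears a minimum bar.

def assert_reasonable_accuracy(accuracy_history, threshold=0.7):
    """Fail unless the best recorded accuracy clears `threshold`."""
    best = max(accuracy_history)
    assert best >= threshold, f"best accuracy {best:.2f} is below {threshold}"

# e.g. accuracies logged across three epochs of a toy MNIST run
assert_reasonable_accuracy([0.42, 0.81, 0.93])
```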

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```

To test models that require a GPU, make sure to run the above command on a GPU machine. The machine must have at least 2 GPUs to run the distributed tests.
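As a rough pre-flight check of whether your machine qualifies, you can count the visible NVIDIA GPUs. This stdlib-only sketch shells out to `nvidia-smi`; the tests themselves rely on `torch.cuda`, so treat `visible_gpu_count` as an illustration, not PL API:

```python
import shutil
import subprocess

def visible_gpu_count():
    """Count GPUs listed by `nvidia-smi -L`; 0 when no NVIDIA driver is found."""
    if shutil.which("nvidia-smi") is None:
        return 0
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return sum(1 for line in out.stdout.splitlines() if line.startswith("GPU "))

if visible_gpu_count() < 2:
    print("fewer than 2 GPUs: the distributed tests would be skipped or fail")
```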

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, or NVIDIA/DALI. You can rely on our CI to make sure all these tests pass.
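The skip-if-missing pattern behind that note can be sketched with the standard library (PL's own availability guards differ in detail; `package_available` is an illustrative name):

```python
import importlib.util

def package_available(name):
    """Return True if `name` can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

# a Horovod-only test would be skipped on most dev machines:
print(package_available("horovod"))
# stdlib modules are always present:
print(package_available("json"))  # True
```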

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies
# under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```

## Building test image

You can build the image yourself; note that this takes a long time, so be prepared.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.4 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.4 .
```

To build other versions, select a different Dockerfile.

```bash
# list images, run the container interactively, and remove the image when done
docker image list
docker run --rm -it pytorch_lightning:devel-torch1.4 bash
docker image rm pytorch_lightning:devel-torch1.4
```