# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This covers most combinations of the important settings. To pass, a test expects the model to reach a reasonable test accuracy.

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```

To test models that require a GPU, make sure to run the above command on a GPU machine. The machine needs at least 2 GPUs to run the distributed tests.
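A quick way to check how many GPUs are visible before launching the distributed tests — a minimal sketch, assuming PyTorch is importable (it falls back to 0 if not):

```python
# Count GPUs visible to PyTorch; the distributed tests need at least 2.
try:
    import torch
    n_gpus = torch.cuda.device_count()
except ImportError:  # torch not installed in this environment
    n_gpus = 0

print(f"Visible GPUs: {n_gpus}")
if n_gpus < 2:
    print("Distributed GPU tests will be skipped or fail on this machine.")
```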

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, or NVIDIA/DALI. You can rely on our CI to make sure all these tests pass.
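To see which of these optional packages are importable in your environment, a small stdlib-only probe can help (the package names below are illustrative; adjust them to the tests you care about):

```python
import importlib.util

# Optional test dependencies; these names are examples, not an exhaustive list.
optional = ["horovod", "fairscale", "apex"]

# find_spec returns None when a top-level package cannot be located.
available = {name: importlib.util.find_spec(name) is not None for name in optional}

for name, ok in sorted(available.items()):
    print(f"{name}: {'installed' if ok else 'missing'}")
```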

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```

## Building test image

You can build the image yourself; note that it takes a long time, so be prepared.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.4 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.4 .
```

To build other versions, select a different Dockerfile.

```bash
docker image list
docker run --rm -it pytorch_lightning:devel-torch1.4 bash
docker image rm pytorch_lightning:devel-torch1.4
```