# PyTorch-Lightning Tests

Most PL tests train a full MNIST model under various trainer conditions (ddp, ddp2+amp, etc.). This provides coverage for most combinations of important settings. The tests expect the model to reach a reasonable test accuracy in order to pass.
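For orientation, here is a minimal sketch of the shape such a test takes: build a small model, run it through a `Trainer` under a particular configuration, and assert on the outcome. The `TinyClassifier` module, the random dataset, and the assertion below are illustrative placeholders, not an actual test from this suite.

```python
# Illustrative sketch only: the general pattern of a Lightning test.
# TinyClassifier and the random dataset are placeholders, not real suite code.
import torch
from torch.utils.data import DataLoader, TensorDataset

import pytorch_lightning as pl


class TinyClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


def test_trainer_runs_under_given_config(tmpdir):
    dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
    model = TinyClassifier()
    trainer = pl.Trainer(default_root_dir=tmpdir, max_epochs=1)
    trainer.fit(model, DataLoader(dataset, batch_size=8))
    # Real tests typically assert on metrics or trainer state reached during training.
    assert trainer.global_step > 0
```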

## Running tests

```bash
git clone https://github.com/PyTorchLightning/pytorch-lightning
cd pytorch-lightning

# install dev deps
pip install -r requirements/devel.txt

# run tests
py.test -v
```

To test models that require a GPU, make sure to run the above command on a GPU machine. The machine must have at least 2 GPUs to run the distributed tests.

Note that this setup will not run tests that require specific packages to be installed, such as Horovod, FairScale, NVIDIA/apex, or NVIDIA/DALI. You can rely on our CI to make sure all these tests pass.
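Hardware and package requirements are typically declared on the test itself so that it is skipped automatically wherever they are not met. Below is a sketch assuming the `RunIf` helper defined in `tests/helpers/runif.py`; the available arguments may differ between versions.

```python
# Sketch only: gating a test on available hardware with the RunIf helper
# (argument names may vary between versions).
import torch

from tests.helpers.runif import RunIf


@RunIf(min_gpus=2)
def test_runs_only_with_two_gpus():
    # Skipped automatically on machines with fewer than 2 GPUs.
    assert torch.cuda.device_count() >= 2
```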

## Running Coverage

Make sure to run coverage on a GPU machine with at least 2 GPUs and NVIDIA apex installed.

```bash
cd pytorch-lightning

# generate coverage (coverage is also installed as part of dev dependencies under requirements/devel.txt)
coverage run --source pytorch_lightning -m py.test pytorch_lightning tests examples -v

# print coverage stats
coverage report -m

# export results
coverage xml
```

## Building test image

You can build the image on your own; note that it takes a long time to build, so be prepared.

```bash
git clone <git-repository>
docker image build -t pytorch_lightning:devel-torch1.9 -f dockers/cuda-extras/Dockerfile --build-arg TORCH_VERSION=1.9 .
```

To build other versions, select a different Dockerfile.

```bash
docker image list
docker run --rm -it pytorch_lightning:devel-torch1.9 bash
docker image rm pytorch_lightning:devel-torch1.9
```