fix changelog (#1452)

* fix changelog

* formatting

* add ddp_cpu

* docs

* add another
Jirka Borovec 2020-04-20 23:36:26 +02:00 committed by GitHub
parent 0203938af8
commit bd168819f2
5 changed files with 40 additions and 19 deletions


@@ -9,15 +9,20 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Added
- Added flag `replace_sampler_ddp` to manually disable sampler replacement in ddp ([#1513](https://github.com/PyTorchLightning/pytorch-lightning/pull/1513))
- Added speed parity tests (max 1 sec difference per epoch)([#1482](https://github.com/PyTorchLightning/pytorch-lightning/pull/1482))
- Added `auto_select_gpus` flag to trainer that enables automatic selection of available GPUs on exclusive mode systems.
- Added learining rate finder ([#1347](https://github.com/PyTorchLightning/pytorch-lightning/pull/1347))
- Added support for ddp mode in clusters without SLURM ([#1345](https://github.com/PyTorchLightning/pytorch-lightning/issues/1345))
- Added learning rate finder ([#1347](https://github.com/PyTorchLightning/pytorch-lightning/pull/1347))
- Added `test_dataloaders` parameter to `Trainer.test()` ([#1393](https://github.com/PyTorchLightning/pytorch-lightning/issues/1393))
- Added `terminate_on_nan` flag to trainer that performs a NaN check with each training iteration when set to `True`. ([#1475](https://github.com/PyTorchLightning/pytorch-lightning/pull/1475))
- Added `terminate_on_nan` flag to trainer that performs a NaN check with each training iteration when set to `True` ([#1475](https://github.com/PyTorchLightning/pytorch-lightning/pull/1475))
- Added `ddp_cpu` backend for testing ddp without GPUs ([#1158](https://github.com/PyTorchLightning/pytorch-lightning/pull/1158))
### Changed
@@ -31,42 +36,38 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Deprecated
-
### Removed
-
- Removed `test_dataloaders` parameter from `Trainer.fit()` ([#1393](https://github.com/PyTorchLightning/pytorch-lightning/issues/1393))
-
### Fixed
- Fixed memory leak from opt return ([#1528](https://github.com/PyTorchLightning/pytorch-lightning/pull/1528))
- Fixed saving checkpoint before deleting old ones ([#1453](https://github.com/PyTorchLightning/pytorch-lightning/pull/1453))
- Fixed loggers - flushing last logged metrics even before continuing, e.g. `trainer.test()` results ([#1459](https://github.com/PyTorchLightning/pytorch-lightning/pull/1459))
- Fixed optimizer configuration when `configure_optimizers` returns dict without `lr_scheduler` ([#1443](https://github.com/PyTorchLightning/pytorch-lightning/pull/1443))
- Fixed LightningModule - Mixing hparams and arguments in `LightningModule.__init__()` crashes load_from_checkpoint() ([#1505](https://github.com/PyTorchLightning/pytorch-lightning/pull/1505))
- Added a missing call to the `on_before_zero_grad` model hook ([#1493](https://github.com/PyTorchLightning/pytorch-lightning/pull/1493)).
- Fixed a bug that caused the `callbacks` Trainer argument to reference a global variable ([#1534](https://github.com/PyTorchLightning/pytorch-lightning/pull/1534)).
## [0.7.3] - 2020-04-09
### Added
- Added `rank_zero_warn` for warning only in rank 0 ([#1428](https://github.com/PyTorchLightning/pytorch-lightning/pull/1428))
- Added `test_dataloaders` parameter to `Trainer.test()` ([#1393](https://github.com/PyTorchLightning/pytorch-lightning/issues/1393))
### Changed
- Removed `test_dataloaders` parameter from `Trainer.fit()` ([#1393](https://github.com/PyTorchLightning/pytorch-lightning/issues/1393))
### Fixed
- Fixed optimizer configuration when `configure_optimizers` returns dict without `lr_scheduler` ([#1443](https://github.com/PyTorchLightning/pytorch-lightning/pull/1443))
- Fixed default `DistributedSampler` for DDP training ([#1425](https://github.com/PyTorchLightning/pytorch-lightning/pull/1425))
- Fixed workers warning not on windows ([#1430](https://github.com/PyTorchLightning/pytorch-lightning/pull/1430))
- Fixed returning tuple from `run_training_batch` ([#1431](https://github.com/PyTorchLightning/pytorch-lightning/pull/1431))
@@ -77,7 +78,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
### Added
- Added speed parity tests (max 1 sec difference per epoch)([#1482](https://github.com/PyTorchLightning/pytorch-lightning/pull/1482))
- Added same step loggers' metrics aggregation ([#1278](https://github.com/PyTorchLightning/pytorch-lightning/pull/1278))
- Added parity test between a vanilla MNIST model and lightning model ([#1284](https://github.com/PyTorchLightning/pytorch-lightning/pull/1284))
- Added parity test between a vanilla RNN model and lightning model ([#1351](https://github.com/PyTorchLightning/pytorch-lightning/pull/1351))
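
The Trainer options called out in the Added/Removed entries above can be combined roughly as follows. This is a minimal sketch assuming the 0.7.x-era API: `replace_sampler_ddp`, `auto_select_gpus`, `terminate_on_nan`, the `ddp_cpu` backend, and `test_dataloaders` come from the changelog itself, while argument names such as `distributed_backend` and `num_processes`, and the placeholder `model`/`test_loader` objects, are assumptions for illustration.

```python
import pytorch_lightning as pl

# GPU run: auto-pick a free GPU, keep a custom sampler, and NaN-check every iteration.
gpu_trainer = pl.Trainer(
    gpus=1,
    auto_select_gpus=True,       # choose an available GPU on exclusive-mode systems
    replace_sampler_ddp=False,   # do not swap in the automatic DistributedSampler
    terminate_on_nan=True,       # check for NaN loss/params on each training iteration
)

# CPU-only run that still exercises the DDP code path (handy for tests without GPUs).
cpu_trainer = pl.Trainer(distributed_backend='ddp_cpu', num_processes=2)

# Per the Added/Removed entries, the test dataloader now goes to `Trainer.test()`
# rather than `Trainer.fit()`:
# gpu_trainer.test(model, test_dataloaders=test_loader)
```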


@@ -227,7 +227,8 @@ man_pages = [
# dir menu entry, description, category)
texinfo_documents = [
    (master_doc, project, project + ' Documentation', author, project,
     'One line description of project.', 'Miscellaneous'),
     'The lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.',
     'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------


@@ -9,6 +9,27 @@ __homepage__ = 'https://github.com/PyTorchLightning/pytorch-lightning'
# this has to be simple string, see: https://github.com/pypa/twine/issues/522
__docs__ = "PyTorch Lightning is the lightweight PyTorch wrapper for ML researchers." \
" Scale your models. Write less boilerplate."
__long_docs__ = """
Lightning is a way to organize your PyTorch code to decouple the science code from the engineering.
It's more of a style-guide than a framework.

In Lightning, you organize your code into 3 distinct categories:

1. Research code (goes in the LightningModule).
2. Engineering code (you delete, and is handled by the Trainer).
3. Non-essential research code (logging, etc. this goes in Callbacks).

Although your research/production project might start simple, once you add things like GPU AND TPU training,
16-bit precision, etc, you end up spending more time engineering than researching.
Lightning automates AND rigorously tests those parts for you.

Overall, Lightning guarantees rigorously tested, correct, modern best practices for the automated parts.

Documentation
-------------
- https://pytorch-lightning.readthedocs.io/en/latest
- https://pytorch-lightning.readthedocs.io/en/stable
"""
import logging as python_logging
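
The three-way split this docstring describes can be made concrete with a minimal sketch. It is illustrative only and assumes a LightningModule API of roughly this era; the model, dataset, and hyperparameters below are placeholders, not part of the package.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    """Research code lives here: the model, the loss, the optimizer choice."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self(x), y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        data = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        return DataLoader(data, batch_size=8)


# Engineering code (loops, device placement, checkpointing) is handled by the Trainer;
# logging and other non-essential extras would go into Callbacks passed to it.
trainer = pl.Trainer(max_epochs=1)
trainer.fit(LitClassifier())
```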


@@ -38,7 +38,6 @@ ignore =
[check-manifest]
ignore =
    *.yml
    tox.ini
    .github
    .github/*
    .circleci


@@ -69,13 +69,13 @@ def info_packages():
def nice_print(details, level=0):
    lines = []
    for k in sorted(details):
        key = f'{k}:'
        key = f'* {k}:' if level == 0 else f'- {k}:'
        if isinstance(details[k], dict):
            lines += [level * LEVEL_OFFSET + key]
            lines += nice_print(details[k], level + 1)
        elif isinstance(details[k], (set, list, tuple)):
            lines += [level * LEVEL_OFFSET + key]
            lines += [(level + 1) * LEVEL_OFFSET + v for v in details[k]]
            lines += [(level + 1) * LEVEL_OFFSET + '- ' + v for v in details[k]]
        else:
            template = '{:%is} {}' % KEY_PADDING
            key_val = template.format(key, details[k])
@@ -85,9 +85,9 @@ def nice_print(details, level=0):
def main():
    details = {
        "system": info_system(),
        'cuda': info_cuda(),
        'packages': info_packages(),
        "System": info_system(),
        'CUDA': info_cuda(),
        'Packages': info_packages(),
    }
    lines = nice_print(details)
    text = os.linesep.join(lines)
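
For reference, the bullet formatting this change introduces can be reproduced with a self-contained sketch. `LEVEL_OFFSET` and `KEY_PADDING` are defined outside the hunk shown above, so the values here are assumptions chosen only to make the example runnable.

```python
import os

LEVEL_OFFSET = '\t'   # assumed indent unit; not taken from the diff
KEY_PADDING = 20      # assumed key column width; not taken from the diff


def nice_print(details, level=0):
    """Render a nested dict as Markdown-ish bullets: '* key:' at the top level,
    '- key:' (and '- item') for everything nested below it."""
    lines = []
    for k in sorted(details):
        key = f'* {k}:' if level == 0 else f'- {k}:'
        if isinstance(details[k], dict):
            lines += [level * LEVEL_OFFSET + key]
            lines += nice_print(details[k], level + 1)
        elif isinstance(details[k], (set, list, tuple)):
            lines += [level * LEVEL_OFFSET + key]
            lines += [(level + 1) * LEVEL_OFFSET + '- ' + v for v in details[k]]
        else:
            template = '{:%is} {}' % KEY_PADDING
            lines += [level * LEVEL_OFFSET + template.format(key, details[k])]
    return lines


details = {'System': {'OS': 'Linux', 'python': '3.7.6'}, 'Packages': {'torch': '1.4.0'}}
print(os.linesep.join(nice_print(details)))
# Top-level keys come out as '* Packages:' / '* System:', with each nested
# 'key: value' pair printed on its own '- ' bullet underneath.
```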