prune changelog (#1123)

Jirka Borovec 2020-03-12 15:48:51 +01:00 committed by GitHub
parent 5e013f6e2f
commit 9255e54acb
1 changed file with 1 addition and 199 deletions


@@ -28,22 +28,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
## [0.7.1] - 2020-03-07
### Added
- _None_
### Changed
- _None_
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed `print` issues and `data_loader` ([#1080](https://github.com/PyTorchLightning/pytorch-lightning/pull/1080))
@@ -209,10 +193,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Deprecated `tng_dataloader`
### Removed
- _None_
### Fixed
- Fixed an issue where the number of batches was off by one during training
@@ -235,10 +215,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Changed default for `amp_level` to `O1`
### Deprecated
- _None_
### Removed
- Removed the `print_weights_summary` argument from `Trainer`
@@ -270,14 +246,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Disabled auto GPU loading when restoring weights to prevent out of memory errors
- Changed logging, early stopping and checkpointing to occur by default
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug with samplers that do not specify `set_epoch`
@@ -287,10 +255,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
## [0.5.0] - 2019-09-26
### Added
- _None_
### Changed
- Changed `data_batch` argument to `batch` throughout
@@ -300,14 +264,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
- Changed `gradient_clip` argument to `gradient_clip_val`
- Changed `add_log_row_interval` to `row_log_interval`
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug with tensorboard logging in multi-gpu setup
@@ -329,14 +285,6 @@ memory utilization
- Changed gpu API to take integers as well (e.g. `gpus=2` instead of `gpus=[0, 1]`)
- All models now loaded onto CPU to avoid device and out-of-memory issues in PyTorch
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug where data types that implement `.to` but not `.cuda` would not be properly moved onto the GPU
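The gpu API entry above notes that `gpus` now accepts an integer count (`gpus=2`) as well as an explicit list of device indices (`gpus=[0, 1]`). That amounts to normalizing the int form into a list; a minimal sketch of the idea using a hypothetical `normalize_gpus` helper, not Lightning's actual code:

```python
def normalize_gpus(gpus):
    """Accept an int device count (e.g. gpus=2) or an explicit list of
    device indices (e.g. gpus=[0, 1]) and return a list of indices."""
    if isinstance(gpus, int):
        # an int n means "use the first n devices": indices 0..n-1
        return list(range(gpus))
    return list(gpus)
```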
@@ -350,41 +298,17 @@ memory utilization
- Added `GradientAccumulationScheduler` callback which can be used to schedule changes to the number of accumulation batches
- Added option to skip the validation sanity check by setting `nb_sanity_val_steps = 0`
### Changed
- _None_
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug when setting `nb_sanity_val_steps = 0`
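The `GradientAccumulationScheduler` entry above describes scheduling changes to the number of accumulation batches over epochs. The lookup it implies can be sketched with a hypothetical `accumulation_for_epoch` helper (the dict-based schedule shape is an assumption for illustration, not the callback's actual code):

```python
def accumulation_for_epoch(schedule, epoch):
    """Return the number of batches to accumulate gradients over at `epoch`.

    `schedule` maps a starting epoch to a new accumulation factor,
    e.g. {0: 1, 4: 2, 8: 4}; the most recent entry at or before
    `epoch` stays in effect until the next scheduled change.
    """
    factor = 1
    for start_epoch in sorted(schedule):
        if epoch >= start_epoch:
            factor = schedule[start_epoch]
    return factor
```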
## [0.4.7] - 2019-08-24
### Added
- _None_
### Changed
- Changed the default `val_check_interval` to `1.0`
- Changed defaults for `nb_val_batches`, `nb_tng_batches` and `nb_test_batches` to 0
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug where the full validation set was used despite setting `val_percent_check`
@@ -402,18 +326,6 @@ memory utilization
- Added support for data to be given as a `dict` or `list` with a single gpu
- Added support for `configure_optimizers` to return a single optimizer, two lists (optimizers and schedulers), or a single list
### Changed
- _None_
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug where returning just an optimizer list (i.e. without schedulers) from `configure_optimizers` would throw an `Exception`
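The `configure_optimizers` entry above allows three return shapes. A minimal sketch of how a caller might normalize them into `(optimizers, schedulers)` lists; the helper name is hypothetical and this is not Lightning's actual implementation:

```python
def normalize_optim_result(result):
    """Normalize the three documented return shapes of
    `configure_optimizers` into (optimizers, schedulers) lists:
    a single optimizer, two lists, or a single list of optimizers."""
    # two lists: (optimizers, schedulers)
    if (isinstance(result, (list, tuple)) and len(result) == 2
            and all(isinstance(x, list) for x in result)):
        optimizers, schedulers = result
        return list(optimizers), list(schedulers)
    # a single list of optimizers, no schedulers
    if isinstance(result, (list, tuple)):
        return list(result), []
    # a bare single optimizer
    return [result], []
```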
@@ -424,22 +336,6 @@ memory utilization
- Added `optimizer_step` method that can be overridden to change the standard optimizer behaviour
### Changed
- _None_
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- _None_
## [0.4.4] - 2019-08-12
### Added
@@ -452,14 +348,6 @@ memory utilization
- `validation_step` and `val_dataloader` are now optional
- `lr_scheduler` is now activated after epoch
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug where a warning would show when using `lr_scheduler` in `torch>1.1.0`
@@ -467,70 +355,22 @@ memory utilization
## [0.4.3] - 2019-08-10
### Added
- _None_
### Changed
- _None_
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug where accumulate gradients would scale the loss incorrectly
## [0.4.2] - 2019-08-08
### Added
- _None_
### Changed
- Changed install requirement to `torch==1.2.0`
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- _None_
## [0.4.1] - 2019-08-08
### Added
- _None_
### Changed
- Changed install requirement to `torch==1.1.0`
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- _None_
## [0.4.0] - 2019-08-08
### Added
@@ -542,10 +382,6 @@ memory utilization
- Changed `training_step` and `validation_step`, outputs will no longer be automatically reduced
### Deprecated
- _None_
### Removed
- Removed need for `Experiment` object in `Trainer`
@@ -554,49 +390,15 @@ memory utilization
- Fixed issues with reducing outputs from generative models (such as images and text)
## [0.3.6.1] - 2019-07-27
### Added
- _None_
### Changed
- _None_
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug where `Experiment` object was not process safe, potentially causing logs to be overwritten
## [0.3.6] - 2019-07-25
### Added
- Added a decorator to do lazy data loading internally
### Changed
- _None_
### Deprecated
- _None_
### Removed
- _None_
### Fixed
- Fixed a bug where `Experiment` object was not process safe, potentially causing logs to be overwritten
## [0.3.5] - 2019-MM-DD