* Fuse modules in a QAT-respecting way
* Add compatibility for PyTorch <1.11
In older PyTorch versions, `fuse_modules` used the `Module.training`
flag to determine whether fusion should be QAT-compliant or not; see
https://github.com/pytorch/pytorch/releases/tag/v1.11.0
* Add CHANGELOG for pull #12891
* Fix conditional import of fuse_modules_qat
`torch.ao.quantization.fuse_modules_qat` was only added in
torch 1.11, so it must not be imported unconditionally.
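The version gate described above can be sketched as follows. This is an illustrative helper (`pick_fusion_fn_name` is a hypothetical name, not Lightning's actual code), assuming the API names from the torch 1.11 release notes:

```python
# Hypothetical sketch of the version gate described above: on torch >= 1.11
# the QAT-aware entry point `torch.ao.quantization.fuse_modules_qat` exists;
# on older versions only `fuse_modules` is available, and QAT behaviour is
# driven by the `Module.training` flag instead.
def pick_fusion_fn_name(torch_version: tuple) -> str:
    # Returns the dotted path of the fusion function to use for this version.
    if torch_version >= (1, 11):
        return "torch.ao.quantization.fuse_modules_qat"
    return "torch.quantization.fuse_modules"
```

In practice the same gate is often written as a `try`/`except ImportError` around the `fuse_modules_qat` import, falling back to the legacy `fuse_modules`.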
* Update CHANGELOG.md
Co-authored-by: Akihiro Nitta <nitta@akihironitta.com>