* print thousands as K, M, B, T, ...
* add option to print top-level modules only
* added docstring and spacing
* do not print summary if neither "full" nor "top" is requested
* updated docs showing summary print options
* fix line length for travis
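The summary items above abbreviate parameter counts with K/M/B/T suffixes; the sketch below shows one minimal way such formatting could work. The helper name and exact rounding are illustrative assumptions, not the project's actual implementation.

```python
import math

# Illustrative only: abbreviate large counts as K / M / B / T,
# similar in spirit to the weights-summary formatting above.
PARAMETER_NUM_UNITS = ["", "K", "M", "B", "T"]


def human_readable_count(number: int) -> str:
    """Abbreviate an integer, e.g. 1200 -> '1.2 K', 3000000000 -> '3.0 B'."""
    if number <= 0:
        return "0"
    num_digits = int(math.floor(math.log10(number)) + 1)
    num_groups = min((num_digits - 1) // 3, len(PARAMETER_NUM_UNITS) - 1)
    scaled = number / (10 ** (num_groups * 3))
    return f"{scaled:,.1f} {PARAMETER_NUM_UNITS[num_groups]}".strip()


print(human_readable_count(25_557_032))  # -> '25.6 M'
```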
* cleaning up demos
* cleaning up docs
* cleaned up test_tube logger
* early stopping callback is not enabled by default
* added a default logger
* added default checkpoint callback
* added default checkpoint/loggers
* updated docs
* cleaned demos
* clean up docs around loggers
* Create underlying loggers lazily
This avoids creating duplicate experiments or runs in multi-node DDP.
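A minimal sketch of the lazy-creation idea described above, assuming a logger that only builds its backend experiment on first access; the class name and the stand-in object are illustrative, not the library's code.

```python
from types import SimpleNamespace


class LazyLoggerExample:
    """Illustrative sketch: the underlying experiment is created on first
    access, so non-zero ranks in multi-node DDP never create a duplicate
    experiment or run."""

    def __init__(self, save_dir: str, name: str = "default"):
        self._save_dir = save_dir
        self._name = name
        self._experiment = None  # created lazily on first access

    @property
    def experiment(self):
        if self._experiment is None:
            # In a real logger this would instantiate the backend
            # (e.g. a test_tube Experiment); here a stand-in object.
            self._experiment = SimpleNamespace(save_dir=self._save_dir, name=self._name)
        return self._experiment
```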
* Save hyperparameters automatically
* Update docs for snapshotting hyperparams
* Fix test tube
* Fix test tube pickling
* added nvidia flag set
* added simple cluster template
* sets correct backend for possible combinations of gpu inputs
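As a rough illustration of the input handling implied above, the sketch below normalizes a few `gpus` input shapes and picks a backend. The rules shown are simplified assumptions for illustration, not the Trainer's actual decision logic.

```python
from typing import List, Optional, Union


def parse_gpu_ids(gpus: Union[None, int, str, List[int]]) -> Optional[List[int]]:
    """Normalize the accepted `gpus` inputs (None, int, str, list) to a list of ids."""
    if gpus is None or gpus == 0 or gpus == "":
        return None
    if isinstance(gpus, int):
        return list(range(gpus))
    if isinstance(gpus, str):
        return [int(x) for x in gpus.split(",") if x.strip()]
    return list(gpus)


def choose_backend(gpu_ids: Optional[List[int]], num_nodes: int = 1) -> Optional[str]:
    """Pick a distributed backend from the normalized GPU ids (simplified)."""
    if gpu_ids is None:
        return None      # CPU training
    if num_nodes > 1:
        return "ddp"     # multi-node
    if len(gpu_ids) > 1:
        return "dp"      # single node, multiple GPUs
    return None          # single GPU, no distributed backend needed
```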
* Allow deactivating GPU memory logging in Trainer
Adds the flag `log_gpu_memory` to Trainer to deactivate logging of GPU
memory utilization. On some servers, logging the GPU memory usage can
significantly slow down training.
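A hedged usage sketch for the flag described above; accepted values may vary by version, and `None` is assumed here to disable the logging.

```python
from pytorch_lightning import Trainer

# Assumed usage: None skips GPU memory logging entirely, while a string
# mode (e.g. 'all') enables it. Check your installed version's docs.
trainer = Trainer(log_gpu_memory=None)     # do not log GPU memory utilization
# trainer = Trainer(log_gpu_memory='all')  # log memory stats for all GPUs
```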
* Update Logging.md
* Update trainer.py
* cleaned up progbar
* updated base files
* flake8 fixes