* Renamed `on_sanity_check_start` to `on_train_start` and added `on_train_end` to `ModelHooks`
* changed tests to use `on_train_start` instead of `on_sanity_check_start`
* Fixing comet ml bug and adding functionality
* Updating documents
* Fixing code style issues in comet_logger
* Changing comet_logger experiment to execute lazily
* Adding tests for comet_logger and addressing comments from @Borda
* Setting step_num to an optional keyword argument in log_metrics() to comply with the other loggers
* Adding offline logging mode for comet_ml, updating tests and docs
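Taken together, the comet changes above amount to roughly the following sketch (class layout and argument names are illustrative, not the logger's exact code; comet_ml's `Experiment` and `OfflineExperiment` are the real classes, the latter writing experiment archives to a local directory for later upload):

```python
from comet_ml import Experiment, OfflineExperiment

class CometLogger:
    def __init__(self, api_key=None, save_dir=None, **kwargs):
        if api_key is None and save_dir is None:
            # the real code raises MisconfigurationException (next item)
            raise ValueError('CometLogger requires either api_key or save_dir.')
        self.api_key = api_key
        self.save_dir = save_dir
        self._kwargs = kwargs
        self._experiment = None  # created lazily, not in __init__

    @property
    def experiment(self):
        # Lazy creation: constructing the logger never touches the network.
        # Online mode when an api_key is given, offline mode otherwise.
        if self._experiment is None:
            if self.api_key is not None:
                self._experiment = Experiment(api_key=self.api_key, **self._kwargs)
            else:
                self._experiment = OfflineExperiment(
                    offline_directory=self.save_dir, **self._kwargs)
        return self._experiment

    def log_metrics(self, metrics, step_num=None):
        # step_num is an optional keyword argument, matching the other loggers
        self.experiment.log_metrics(metrics, step=step_num)
```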
* Switching to MisconfigurationException
* Make name and version properties required
* Warn before deleting files in checkpoint directory
* Get default checkpoint path from any logger
* Fix typos
* Uncomment logger tests
* Whitespace
* Update callback_config_mixin.py
Checkpoint and version file names would otherwise just be a number; prepending version_ makes it easy to tell what you're looking at.
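A minimal sketch of how the checkpoint-path items fit together (helper and directory names here are assumptions, not the mixin's actual code): any logger exposing the now-required `name` and `version` properties yields a default checkpoint path, with `version_` prepended and a warning before files in an existing checkpoint directory are touched.

```python
import os
import warnings

def default_checkpoint_path(logger, default_root='lightning_logs'):
    # Hypothetical helper: derive the default checkpoint directory from the
    # logger's required name/version properties, prepending 'version_' so a
    # directory listing is self-explanatory.
    if logger is not None:
        path = os.path.join(default_root, logger.name,
                            f'version_{logger.version}', 'checkpoints')
    else:
        path = os.path.join(default_root, 'checkpoints')
    # Warn before anything in a non-empty checkpoint directory gets deleted.
    if os.path.isdir(path) and os.listdir(path):
        warnings.warn(f'Checkpoint directory {path} is not empty; '
                      'existing checkpoints may be deleted.')
    return path
```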
* Address comments
* Fix broken tests
* #452 Fix ValueError
* #452 Use subprocess.run
* #452 Simplify code for gpu_memory_map
* #452 Simplify code for min max memory
* #452 Add test for get_memory_profile
* #452 Use os.sep
* #452 Use os.linesep
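The #452 items rework the GPU-memory query around subprocess.run; a sketch under those constraints (the repo's exact functions may differ, though `get_memory_profile` and the `os.linesep` split come straight from the commit titles):

```python
import os
import subprocess

def get_gpu_memory_map():
    """Return a dict mapping gpu id -> memory used (MB), queried via nvidia-smi."""
    result = subprocess.run(
        ['nvidia-smi', '--query-gpu=memory.used', '--format=csv,nounits,noheader'],
        encoding='utf-8',
        stdout=subprocess.PIPE,
        check=True,  # raise CalledProcessError instead of failing silently
    )
    # one integer per line; split on os.linesep per the commits above
    gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]
    return {index: memory for index, memory in enumerate(gpu_memory)}

def get_memory_profile(mode):
    # 'min_max' reduces the per-gpu map to its extremes
    memory_map = get_gpu_memory_map()
    if mode == 'min_max':
        return {'min_gpu_mem': min(memory_map.values()),
                'max_gpu_mem': max(memory_map.values())}
    return memory_map
```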
* hpc restore takes priority over non-hpc weights
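In other words (method names below are hypothetical, not the trainer's actual API):

```python
def restore_weights(self, model):
    # An HPC auto-save (written before a cluster wall-time kill) wins over
    # any ordinary, non-hpc checkpoint the run would otherwise load.
    if self.hpc_checkpoint_exists():       # hypothetical helper
        self.restore_hpc_weights(model)    # hypothetical helper
    elif self.resume_checkpoint_path is not None:  # hypothetical attribute
        self.restore(model, self.resume_checkpoint_path)  # hypothetical helper
```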
* Unit tests for num_gpus property as proxy for __parse_gpu_ids.
* Refactoring __parse_gpu_ids
* Moved the function outside the class, as it is
a utility function and does not depend on the class in any way.
* Added unit tests for it.
* Mocked the torch.cuda.device_count function in tests.
This allows the tests to run on machines that do not have gpus.
* Fixed the parse_gpu_ids function to handle the -1 case.
The function now handles the integer -1 the same way it handles the string '-1'.
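The two items above combine naturally in a test: patching torch.cuda.device_count lets the -1 handling run on machines without GPUs (a sketch; the expected return values assume the list-of-ints contract described further below):

```python
from unittest import mock
# import of parse_gpu_ids omitted: its module path depends on the refactor

@mock.patch('torch.cuda.device_count', return_value=8)
def test_parse_gpu_ids_handles_minus_one(mock_device_count):
    # the int -1 and the string '-1' must behave identically: all visible gpus
    assert parse_gpu_ids(-1) == list(range(8))
    assert parse_gpu_ids('-1') == list(range(8))
```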
* Unit tests for root_gpu added.
Added backend as a parameter, because depending on whether the backend is
set, the code currently fails with an exception in certain circumstances
before it would give a wrong answer.
* Moved __set_root_gpu function out of the class.
This function does not depend on the class and can be tested
more easily this way.
Also added unit tests for this function; they simply reuse the
test data from the root_gpu property.
* determine_root_gpu_device passes unit tests.
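A sketch of the behaviour those tests pin down (assuming the gpus argument has already been parsed to a list of ids or None):

```python
def determine_root_gpu_device(gpus):
    # None means CPU; otherwise the root device is simply the first id
    # in the already-parsed list of gpu ids.
    if gpus is None:
        return None
    assert isinstance(gpus, list), 'gpus should be a list of ids at this point'
    assert len(gpus) > 0, 'gpus should not be an empty list'
    return gpus[0]
```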
* num_gpus passes unit tests.
Also added a None test for this function.
* parse_gpu_ids tests changed to reflect the desired state after refactoring.
Planning to refactor parse_gpu_ids to always return a list of ints.
This will simplify code that uses the output of this function.
* parse_gpu_ids always returns lists (contract sketched below)
* parse_gpu_ids checks given ids against available ids
* parse_gpu_ids raises an exception for non-existent ids
* parse_gpu_ids returns None when no gpus are available
* cleaned up determine_root_gpu_device
* cleaned up num_gpus property
* Updated unit tests to reflect changes in the functions
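A sketch of that contract (exception type and parsing details are assumptions; the real code raises Lightning's MisconfigurationException):

```python
import torch

def parse_gpu_ids(gpus):
    # contract: always return a list of ints or None, validate every id,
    # and fail loudly on ids that do not exist on this machine
    if gpus is None:
        return None
    available = list(range(torch.cuda.device_count()))
    if gpus == -1 or gpus == '-1':
        return available if available else None
    if isinstance(gpus, str):
        gpus = [int(x) for x in gpus.split(',') if x.strip()]
    elif isinstance(gpus, int):
        gpus = list(range(gpus))  # gpus=N means "use the first N gpus"
    for gpu_id in gpus:
        if gpu_id not in available:
            raise ValueError(  # MisconfigurationException in the real code
                f'GPU id {gpu_id} requested but only {available} available')
    return gpus if gpus else None
```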
* Flake8 fixes
* Moved fixture code up, above where it is used.
* Updated documentation.
* Changed tests to match the API (illustrated in the sketch below):
  * gpus=-1 or gpus='-1' should use all available gpu devices
  * gpus=N
    * N=0: no gpus should be used
    * N>0: N gpus should be used
  * gpus=list of ints or a comma-separated string of numbers:
    use the gpus indicated by the list or the string
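Illustrating that API (only the gpus argument shown; assumes a machine with at least two GPUs):

```python
from pytorch_lightning import Trainer

# equivalent ways to request the first two gpus
Trainer(gpus=2)        # gpus=N, N>0: use N gpus
Trainer(gpus=[0, 1])   # explicit list of ids
Trainer(gpus='0,1')    # comma-separated string of ids

Trainer(gpus=0)        # N=0: run on CPU, no gpus
Trainer(gpus=-1)       # -1 or '-1': use every available gpu
```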
* Fixed code to pass all the changed tests for parsing gpus param.
* Refactoring parse_gpu_ids function.
* flake8 fixes.
* Updating documentation.
* flake8 fixes.
* Update trainer.py
* Update dp_mixin.py
* Make reduce_distributed_output a standalone function.
Fix imports.
Fix flake8.
* Add comet_ml dependency to tests requirements.txt
* Revert "Make reduce_distributed_output a stand alone function. Fix imports. Fix flake8."
This reverts commit eac0338
* Merge with master.
* moved dp, ddp outside of trainer
* added main mixins
* finished major mixin refactor
* flake8
* changes to seed for tests
* fix test
* no warnings always
* cleaning up demos
* cleaning up docs
* cleaned up test_tube logger