* updated gitignore
* Update README.md
* updated links in ninja file
* updated docs
* Update README.md
* finished callbacks
* fixed left menu
* added callbacks to menu
* added direct links to docs
* fixing TensorBoard (#687)
* flake8
* fix typo
* fix TensorBoardLogger
* drop test_tube dependency
* formatting
* fix tensorboard & tests
* upgrade TensorBoard
* test formatting separately
* try to fix JIT issue
* add tests for 1.4
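As context for the TensorBoard fixes above, a minimal sketch of logging without test_tube, assuming the TensorBoardLogger name and import path used around this release:

```python
# A sketch, not the library's exact API at every version: TensorBoardLogger
# replaces the test_tube Experiment that was previously required.
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# events are written under save_dir/name/version_<n>/
logger = TensorBoardLogger(save_dir="lightning_logs", name="my_experiment")
trainer = Trainer(logger=logger)
```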
* finished rebase
* making private members
* working on trainer docs
* set auto dp if no backend
* fixed lightning import
* cleared spaces
* finished lightning module
* added callbacks
* added loggers
* flake8
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Basic wandb support
* refactor(wandb): remove unused variables and document logger
* docs(wandb): explain how to use WandbLogger
* test(wandb): add tests for WandbLogger
* feat(wandb): add save_dir
* fix(wandb): allow pickle of logger
* fix(wandb): save logs in custom directory
* test(wandb): test import
* docs(wandb): simplify docstring and use doctest
* test: increase number of epochs for satisfactory accuracy
* test(test_load_model_from_checkpoint): ensure we load last checkpoint
Co-authored-by: Chris Van Pelt <vanpelt@wandb.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
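A minimal usage sketch for the WandbLogger work above; save_dir is the option added in the feat(wandb) commit, and the remaining argument names are assumptions based on the logger of this era:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

# save_dir controls where local wandb files are written (the custom-directory
# fix above); the logger is also picklable, per the fix(wandb) commit.
logger = WandbLogger(name="run-1", project="demo", save_dir="wandb_logs")
trainer = Trainer(logger=logger)
```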
* added neptune integration
* added tests for NeptuneLogger, added neptune to docs
* updated link to neptune support
* fixed docstrings, fixed try/except in tests, changed append_tags input
* fixed docstring line length
* bumped epoch number in model restore tests
* added tags support for single strings
* fixed passing neptune token to backend
* fixed project name in offline mode
* added save_top_k=-1 to checkpoint callback
* reformatted initialization of Neptune in online mode
* bumped epoch number to 4 in test_load_model_from_checkpoint
* bumped epoch number to 5
Co-authored-by: William Falcon <waf2107@columbia.edu>
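A minimal sketch of the NeptuneLogger added above; the argument names (api_key, project_name, offline_mode, tags) reflect the logger at the time of these commits and should be treated as assumptions:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import NeptuneLogger

logger = NeptuneLogger(
    api_key=None,             # token is passed through to the backend (fixed above)
    project_name="user/demo",
    offline_mode=True,        # offline mode with a fixed project name (fixed above)
    tags="baseline",          # single strings are accepted as well as lists
)
trainer = Trainer(logger=logger)
```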
* Fixing comet ml bug and adding functionality
* Updating documents
* Fixing code style issues in comet_logger
* Changing comet_logger experiment to execute lazily
* Adding tests for comet_logger and addressing comments from @Borda
* Setting step_num to an optional keyword argument in log_metrics() to be consistent with the other loggers
* Adding offline logging mode for comet_ml, updating tests and docs
* Switching to MisconfigurationException
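A sketch of the CometLogger behaviour described above; the argument names are assumptions based on the logger of this era. The underlying experiment is created lazily, and passing save_dir without an api_key selects the offline mode:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import CometLogger

# no api_key + a save_dir -> the offline logging mode added in these commits
logger = CometLogger(save_dir="comet_logs", project_name="demo")
trainer = Trainer(logger=logger)  # the experiment object is only built on first use
```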
Related issue #432
The old documentation suggested that the way to restore a training session is to use a test_tube Experiment. Trainer no longer takes an experiment as a parameter, so the current way to restore a training session is to pass one via a TestTubeLogger (see the sketch below). Even if this is not the most elegant solution, updating the docs will at least point new users in the right direction.
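A sketch of that restore workflow, assuming the TestTubeLogger arguments of this era: point the logger at the same name and version so training resumes into the existing experiment:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TestTubeLogger

# reuse the name/version of the earlier run instead of passing an Experiment
logger = TestTubeLogger(save_dir="logs", name="my_exp", version=0)
trainer = Trainer(logger=logger)
```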
* Unit tests for the num_gpus property as a proxy for __parse_gpu_ids.
* Refactoring __parse_gpu_ids
* Moved the function outside the class, as it is a utility function and does not depend on the class in any way.
* Added unit tests for it.
* Mocked the torch.cuda.device_count function in tests.
This allows the tests to run on machines that do not have GPUs; a minimal sketch follows.
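The test name below is hypothetical:

```python
import torch
from unittest import mock

@mock.patch("torch.cuda.device_count", return_value=8)
def test_parse_gpu_ids_without_gpus(mock_device_count):
    # code under test now sees 8 "available" GPUs even on a CPU-only machine
    assert torch.cuda.device_count() == 8
```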
* Fixed the parse_gpu_ids function to handle the -1 case.
The function now handles -1 the same way as it does '-1'.
* Added unit tests for root_gpu.
Added backend as a parameter because, depending on whether a backend is set, the code currently fails with an exception in certain circumstances before giving a wrong answer.
* Moved the __set_root_gpu function out of the class.
This function does not depend on the class and can be tested more easily this way.
Also added unit tests for it; they simply reuse the data for the root_gpu property.
* determine_root_gpu_device passes unit tests.
* num_gpus passes unit tests.
Also added a None test for this function.
* parse_gpu_ids tests changed to reflect the desired state after refactoring.
Planning to refactor parse_gpu_ids to always return a list of ints. This will simplify code that uses the output of this function.
* parse_gpu_ids always returns a list
* parse_gpu_ids checks the given ids against the available ids
* parse_gpu_ids raises an exception for non-existent ids
* parse_gpu_ids returns None when no gpus are available
* cleaned up determine_root_gpu_device
* cleaned up num_gpus property
* updated unit tests to reflect the changes in these functions (a sketch of the resulting contract follows this list)
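The sketch below is an illustration of the stated rules, not the library's actual implementation:

```python
from typing import List, Optional, Union

import torch

def parse_gpu_ids(gpus: Union[int, str, List[int], None]) -> Optional[List[int]]:
    available = list(range(torch.cuda.device_count()))
    if gpus is None or gpus == 0:
        return None
    if gpus == -1 or gpus == "-1":
        return available or None       # None when no gpus are available
    if isinstance(gpus, int):
        ids = list(range(gpus))        # gpus=N means "use N gpus"
    elif isinstance(gpus, str):
        ids = [int(x) for x in gpus.split(",")]
    else:
        ids = list(gpus)
    for gpu_id in ids:                 # check given ids against available ids
        if gpu_id not in available:
            raise ValueError(f"GPU id {gpu_id} does not exist")
    return ids                         # always a list for valid input

def determine_root_gpu_device(gpus: Optional[List[int]]) -> Optional[int]:
    # the root GPU is simply the first id in the list, or None without GPUs
    return gpus[0] if gpus else None
```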
* Flake8 fixes
* Moved fixture code above where it is used.
* Updated documentation.
* Changed tests to match the API (usage sketch after this list):
  * gpus=-1 or gpus='-1' should use all available GPU devices
  * gpus=N
    * N=0: no GPUs should be used
    * N>0: N GPUs should be used
  * gpus=list of ints or a comma-separated string of numbers: use the GPUs indicated by the list or the string
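Usage sketches matching that API (the values are examples only):

```python
from pytorch_lightning import Trainer

Trainer(gpus=0)        # no GPUs
Trainer(gpus=2)        # use 2 GPUs
Trainer(gpus=-1)       # use all available GPUs (same as gpus='-1')
Trainer(gpus=[0, 2])   # use the GPUs with ids 0 and 2
Trainer(gpus="0, 2")   # comma-separated string form of the same
```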
* Fixed code to pass all the changed tests for parsing gpus param.
* Refactoring parse_gpu_ids function.
* flake8 fixes.
* Updating documentation.
* flake8 fixes.
* flake8 fixes.
* flake8 fixes
* Update trainer.py
* Update dp_mixin.py
* Make reduce_distributed_output a stand-alone function.
Fix imports.
Fix flake8.
* Add comet_ml dependency to tests requirements.txt
* Revert "Make reduce_distributed_output a stand alone function. Fix imports. Fix flake8."
This reverts commit eac0338
* Merge with master.
* added comet logger
* bug fix in cases where comet was not imported before torch
* fixed the MLFlow logger to be consistent with the docs; updated the CometLogger docs; flake8 compliance
* print thousands as K, M, B, T, ...
* add option to print top-level modules only
* added doc string and added spacing
* do not print summary if neither "full" nor "top"
* updated docs showing summary print options
* fix line length for travis
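A sketch of the thousands formatting described above (K, M, B, T); the function name is illustrative, not necessarily the library's:

```python
def get_human_readable_count(number: int) -> str:
    # 1200 -> '1.2 K', 3250000 -> '3.2 M', and so on up to trillions
    labels = ["", "K", "M", "B", "T"]
    num = float(number)
    index = 0
    while num >= 1000 and index < len(labels) - 1:
        num /= 1000.0
        index += 1
    return f"{num:.1f} {labels[index]}".strip()

print(get_human_readable_count(3_250_000))  # '3.2 M'
```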
* cleaning up demos
* cleaning up docs
* cleaned up test_tube logger
* added lbfgs support
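As background for the lbfgs work, plain PyTorch already requires a closure for this optimizer, which is what the trainer has to accommodate; a minimal sketch, independent of Lightning:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.LBFGS(model.parameters())
x, y = torch.randn(4, 10), torch.randn(4, 1)

def closure():
    # LBFGS re-evaluates the model several times per step,
    # so the loss computation must be wrapped in a closure
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

optimizer.step(closure)
```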
* Fixes #280 (#309)
* added test seeds (#306)
* updated docs
* early stopping callback is not enabled by default
* added a default logger
* added default checkpoint callback
* added default checkpoint/loggers
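A sketch of the resulting defaults; the flag names reflect the Trainer of this era and are assumptions if they have since changed:

```python
from pytorch_lightning import Trainer

trainer = Trainer()  # default logger + default checkpoint callback, no early stopping
trainer = Trainer(logger=False, checkpoint_callback=False)  # opt out of both
```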
* updated docs
* cleaned demos
* clean up docs around loggers
* Create underlying loggers lazily
This avoids creating duplicate experiments or runs in multi-node DDP; a sketch of the pattern follows.
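The sketch below is illustrative, not the library's exact code:

```python
class LazyExperimentLogger:
    """Builds the underlying experiment only on first access, so rank>0
    processes in multi-node DDP that never log do not open duplicate runs."""

    def __init__(self):
        self._experiment = None

    @property
    def experiment(self):
        if self._experiment is None:
            self._experiment = self._create_experiment()  # expensive; deferred
        return self._experiment

    def _create_experiment(self):
        raise NotImplementedError  # each concrete logger builds its own backend object
```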
* Save hyperparameters automatically
* Update docs for snapshotting hyperparams
* Fix test tube
* Fix test tube pickling
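A sketch of the pickling check these fixes enable; loggers must survive a pickle round trip because DDP spawns subprocesses (logger name and arguments as in the sketches above):

```python
import pickle

from pytorch_lightning.loggers import TestTubeLogger

logger = TestTubeLogger(save_dir="logs", name="my_exp")
restored = pickle.loads(pickle.dumps(logger))  # must not raise
```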