* do not include local vars in auto collection
* add test
* add test for model with "self" renamed to "obj"
* skip decorator
* changelog
* changelog
* update docs
* remove obsolete child collection
* generalize *args, **kwargs names
* docs
* also update varargs passed in
* Revert "also update varargs passed in"
This reverts commit 3d7a30dbee07a513ee13e1cc3e08ca5ccdb85734.
* update test
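A minimal sketch of the idea behind the auto collection of constructor arguments, assuming `inspect`-based frame introspection; the helper name `collect_init_args` is illustrative, not the library's exact code. It keeps only declared `__init__` parameters (so locals created inside the constructor are excluded), skips `self`, and folds `**kwargs` in under their own names regardless of what the container is called.

```python
import inspect


def collect_init_args(frame):
    """Illustrative: gather the arguments an __init__ call received, skipping
    ``self``, locals created inside the constructor, and the raw containers."""
    arg_names, _, kwargs_name, local_vars = inspect.getargvalues(frame)
    init_args = {name: local_vars[name] for name in arg_names if name != "self"}
    if kwargs_name and kwargs_name in local_vars:
        # fold keyword varargs in under their declared names
        init_args.update(local_vars[kwargs_name])
    return init_args
```

Inside a constructor this would be invoked roughly as `collect_init_args(inspect.currentframe().f_back)`.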
* refactor and added hook
variant a
variant b
add test
revert rename
add changelog
docs
* resolve merge duplication
* overridden typo
* fix test
* tpu id
* raise if TPU not available
* re-use apply_to_collection function for parsing collections
* comment
* make utility function available to user
* documentation
* move changelog entry to top
* fix tpu transfer call
* fix call
* remove hardcoded string
* improve test
* call model hook by default
* Apply suggestions from code review
* rename utility function
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
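A rough sketch of the `apply_to_collection`-style utility re-used here for parsing batch collections; the function name and exact traversal are approximations of the idea, not the library's code. The point of the related change is that the trainer calls the model's transfer hook by default and only falls back to a generic helper like this one.

```python
import torch


def move_data_to_device(batch, device):
    """Recursively move tensors inside nested lists, tuples and dicts to `device`,
    leaving non-tensor leaves untouched."""
    if isinstance(batch, torch.Tensor):
        return batch.to(device)
    if isinstance(batch, (list, tuple)):
        return type(batch)(move_data_to_device(x, device) for x in batch)
    if isinstance(batch, dict):
        return {k: move_data_to_device(v, device) for k, v in batch.items()}
    return batch
```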
* fix grad norm formula
* grad-norm tracker test
* fixed seed and explicit rtol in grad norm tracking test
* added a docstring for grad norms and forced cast of norm_type to float
* support for inf-norm
* renamed the grad norm test
* docs
* fixed language in docstring
* Apply suggestions from code review
Co-authored-by: Jirka <jirka@pytorchlightning.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
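A hedged sketch of the gradient-norm tracking described above, including the inf-norm case; key names and the helper itself are illustrative, not the exact implementation.

```python
import torch


def grad_norm_dict(model, norm_type):
    """Illustrative: per-parameter and total gradient norms, with inf-norm support."""
    norm_type = float(norm_type)  # forced cast so 2, '2' and 'inf' behave consistently
    norms, all_norms = {}, []
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        param_norm = p.grad.data.norm(norm_type)  # torch handles norm_type=float('inf')
        norms[f"grad_{norm_type}_norm_{name}"] = param_norm.item()
        all_norms.append(param_norm)
    if not all_norms:
        total = 0.0
    elif norm_type == float("inf"):
        total = max(all_norms).item()                      # inf-norm: max of the per-param norms
    else:
        total = torch.stack(all_norms).norm(norm_type).item()  # (sum of norms**p)**(1/p)
    norms[f"grad_{norm_type}_norm_total"] = total
    return norms
```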
* replace ddp spawn with subprocess
* hot fix
The changes are local and limited in nature: we check for a few indicator
environment variables (SLURM_LOCALID, NODE_RANK, GROUP_RANK), in that order.
If more than one is set, a warning is logged.
This patch also fixes a minor bug when comparing the `WORLD_SIZE`
environment variable, which can be a string.
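A hedged sketch of the detection order described above; the function name and exact handling are illustrative, the library's code may differ.

```python
import os
import warnings


def determine_node_rank():
    """Pick the node rank from the first recognised environment variable,
    warning when several candidates are set."""
    candidates = ["SLURM_LOCALID", "NODE_RANK", "GROUP_RANK"]  # checked in this order
    found = {name: os.environ[name] for name in candidates if name in os.environ}
    if len(found) > 1:
        warnings.warn(f"Multiple rank variables set: {found}; using {next(iter(found))}")
    for name in candidates:
        if name in found:
            return int(found[name])
    return 0


# the WORLD_SIZE comparison bug: environment variables are strings, so cast before comparing
world_size = int(os.environ.get("WORLD_SIZE", "1"))
```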
* Join Horovod workers at the end of trainer.fit() to prevent race conditions following training
* flake8
Co-authored-by: Jirka <jirka.borovec@seznam.cz>
* update prog. bar metrics on train epoch end
* changelog
* wip test
* more thorough testing
* comments
* update docs
* move test
Co-authored-by: Jirka <jirka.borovec@seznam.cz>
* disable val and test shuffling
* log
* condition
* shuffle
* refactor
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
* squash and rebase
sanity check hooks
sanity check callback hook finish
moved core progress bar functionality into callback
wip
remove duplicate merge
clean up
imports
docs
sanity check progress bar main
sanity
move callback calls
init progress bar callback
configuration and docs
changelog
rate decorator
pass process_position
disable on rank > 0
position index
is_enabled
remove decorator
refactor init tqdm bars
callback method ordering
cannot reset when disabled
sequence -> list
default values
fix has no attr _time()
move on_val_end to proper place
fix the pickle issue
update warning
properties
check for None
remove old comment
switch order
pull out non-tqdm functionality into base class
documentation for the base class
docs
fix refresh rate issue in validation
restrict type hint of trainer arg
more docs
update trainer docs
rst docs
fix lines too long
fix test
add missing type hints
fix typo
move docstring to __init__ solves doctest failures
remove doctest :(( can't fix the pickle error
fix example
simplify by saving trainer reference
fix docs errors
move docstring
initial value
multiple val checks per epoch
simpler handling of inf dataset sizes
update inf docs
renamed training_tqdm_dict
rename get_tqdm_dict
rename occurrences of tqdm
update changelog
fix doctest
fix formatting errors
added callback tests
progress bar on off test
more tests for progress bar
weird test fix?
add ignored property
disable default progress bar in LR finder
change enable/disable behavior
trying doctest in CI again
undo doctest pickle error
undo doctest pickle error :((
remove progress_bar_callback Trainer arg and fix tests
restore progress bar after auto lr find
update docs
fix rebase
fix wrong negation
* fix fast dev run total
* more thorough testing
* remove old args
* fix merge
* separate tests
* type hint total batches
* reduce if
Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>
* is_disabled
Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>
* is_enabled
Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>
* rename enabled/disabled
* move deprecated api
* remove duplicated test from merge
* fix rename is_disabled
* newline
* test also testprogress for fast dev run
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
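A condensed sketch of the callback-based design the squash above describes: a base class holding trainer state and counters, and a tqdm subclass doing the rendering. Method and attribute names (e.g. `num_training_batches`) are approximations, not the library's exact API.

```python
from tqdm import tqdm


class ProgressBarBase:
    """Non-tqdm bookkeeping: keeps a trainer reference and batch counters."""

    def __init__(self):
        self.trainer = None
        self.train_batch_idx = 0

    def on_init_end(self, trainer):
        self.trainer = trainer  # saving the reference simplifies total-batch queries

    def on_batch_end(self, trainer, pl_module):
        self.train_batch_idx += 1


class ProgressBar(ProgressBarBase):
    """tqdm-specific rendering; disabled on ranks > 0 and honours a refresh rate."""

    def __init__(self, refresh_rate=1, process_position=0):
        super().__init__()
        self.refresh_rate = refresh_rate
        self.process_position = process_position
        self.enabled = True
        self.main_bar = None

    def disable(self):
        self.enabled = False

    def on_epoch_start(self, trainer, pl_module):
        total = getattr(trainer, "num_training_batches", None)  # attribute name assumed
        self.main_bar = tqdm(total=total, position=self.process_position,
                             disable=not self.enabled, desc="Training")

    def on_batch_end(self, trainer, pl_module):
        super().on_batch_end(trainer, pl_module)
        if self.enabled and self.train_batch_idx % self.refresh_rate == 0:
            self.main_bar.update(self.refresh_rate)
```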
* Attempt to fix #1468
* Remove the if statement, it doesn't actually make any difference
* Update docs
* Correct warnings I caused in the last commit
* Add to changelog
* Actually add to changelog
* Clarify documentation and examples
* Update CHANGELOG.md
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* improved docs for core
update links
add references to hooks lifecycle
wip
continue with __init__.py
improve docs for memory.py
improve docs for saving.py
simpler links
fix formatting
* move hooks lifecycle to top of file
* fix doctest import problem
* add missing hook in lifecycle
* Add tests for distributed backend config
* Refactor set_distributed_mode
* Use gloo backend on cpu
* Use 127.0.0.1 instead of 127.0.0.2
Not totally clear on why this is necessary, but it seems to work
* Update LightningDDP so that it works with CPU
* Add ddp_cpu backend and num_processes Trainer arg
* PEP8
* Fix test skipping. Inequalities are hard :/
* Skip ddp_cpu test on Windows
* Make a few more cases fall back to ddp_cpu
* New function name
* Flake8
* Don't test distributed on MacOS with torch < 1.3
Support for distributed in MacOS was added in Torch 1.3.0
* Add ddp_cpu and num_processes to docs
* Parametrize trainer config tests
* Tweak warning
Co-Authored-By: Jirka Borovec <Borda@users.noreply.github.com>
* Remove redundant test
* Replace pass branches with comments
* Add missing warnings import
* save_path -> root_dir
* Use new rank_zero_warn
* Whitespace
* Apply suggestions from code review
* formatting
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: J. Borovec <jirka.borovec@seznam.cz>
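A small sketch of why `ddp_cpu` pairs with the gloo backend: NCCL requires GPUs, while gloo also works on CPU-only machines. The helper below is illustrative; the Trainer-facing side is `Trainer(distributed_backend="ddp_cpu", num_processes=...)` per the commits above.

```python
import os
import torch.distributed as dist


def init_ddp_cpu(process_rank: int, world_size: int):
    """Illustrative: CPU-only DDP initialises the process group with gloo."""
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=process_rank, world_size=world_size)
```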
* Make training_epoch_end behave like validation_epoch_end + minor fixes in docstrings.
* Minor fixes (Borda's comments).
* Detach tensors in batch_output (to avoid possible memory leak) + doc fix.
Co-authored-by: Jean-Baptiste SCHIRATTI <jean-baptisteschiratti@MacBook-Pro-de-Jean-Baptiste.local>
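A minimal sketch of the detaching mentioned above, assuming the step output is a tensor or a (possibly nested) dict of tensors; the helper name is illustrative.

```python
import torch


def detach_step_output(output):
    """Detach tensors in the step output so the autograd graph (and the
    activations it keeps alive) can be freed after the batch."""
    if isinstance(output, torch.Tensor):
        return output.detach()
    if isinstance(output, dict):
        return {k: detach_step_output(v) for k, v in output.items()}
    return output
```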
* fix RunningMean
* changelog
* fix none
* Update supporters.py
just needed to multiply by zero for init
* Revert "Update supporters.py"
This reverts commit 7e0da6c6
* fix NaN
* formatting
Co-authored-by: William Falcon <waf2107@columbia.edu>
* Update docs so users know the desired manner of configuring learning rate schedulers.
* update list
* as note
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
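A short example of the pattern the updated docs point to: schedulers are configured in `configure_optimizers` alongside their optimizers, rather than being stepped manually elsewhere. The model below is a minimal placeholder.

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 4)

    def configure_optimizers(self):
        # return optimizers and schedulers together; the trainer steps the scheduler
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
        return [optimizer], [scheduler]
```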
* check if hparams_type exists in checkpoint dictionary for backward compatibility
* concisely maintain backward compatibility for hparams type
* Bug fix in checkpoint loading (#1132)
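A hedged sketch of the backward-compatibility check: only `hparams_type` is named in the commits above, the other key names and the namespace conversion are illustrative assumptions.

```python
def load_hparams(checkpoint):
    """Illustrative: older checkpoints may not contain the 'hparams_type' key at all."""
    hparams = checkpoint.get("hparams", {})          # key name assumed for illustration
    if checkpoint.get("hparams_type") == "namespace":  # guard keeps old checkpoints loadable
        from argparse import Namespace
        hparams = Namespace(**hparams)
    return hparams
```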
* first pass for LightningModule typehints
* fix return types
* add missing types
* add type annotations to grads.py
* add type annotations to hooks.py
* add type annotation to memory.py
* proper docstring quotation marks
* add type annotations to saving.py
* fix cyclic import problem
* add missing whitespace
* finish type hints for load_from_ methods
* docs: prepare_data does not return anything
* fix auto types in docs
* revert typehint for trainer in hook
* remove unnecessary return docs
* some fixes for memory docs
* revert typing for args kwargs
* added all missing None return types
* remove unused import
* add more details to dict/list return types
* fix line too long
* optimize imports
* linted
* Revert "linted"
This reverts commit 85559611e8.
* remove whitespace
* update
* changelog
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
* training_end renamed to training_step_end
* fix lost model reference
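A short sketch of how the renamed hook is used: `training_step_end` receives the output of `training_step` (gathered across devices under dp/ddp2) and finishes the computation there, as `training_end` did before the rename. The model below is a minimal placeholder.

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 4)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"out": self.layer(x), "y": y}

    def training_step_end(self, outputs):
        # formerly `training_end`: compute the loss once the per-device pieces are gathered
        loss = F.cross_entropy(outputs["out"], outputs["y"])
        return {"loss": loss}
```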
* enabled early stopping/checkpoint even without val step
* name formatting
* version
* testing
* add test
* fix test
* Update model_checkpoint.py
* doctests
* pylint
* tests
* debug
* enabled early stopping/checkpoint even without val step
* fix MNIST download (#1044)
* fix MNIST download
* simple
* name formatting
* version
* testing
* add test
* fix test
* doctests
* tests
* debug
* rebased 1041
* tests
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* removed abstract requirement so LightningModule == nn.Module
* default adam
* docs
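A hedged sketch of the "default adam" idea: once `configure_optimizers` is no longer abstract, the trainer can fall back to a default Adam optimizer when the model does not provide one. The helper name and fallback shape are illustrative.

```python
import torch


def init_optimizers(model):
    """Illustrative fallback: use Adam over all parameters if the model's
    configure_optimizers() returns nothing."""
    optimizers = model.configure_optimizers()
    if optimizers is None:
        return [torch.optim.Adam(model.parameters(), lr=1e-3)]
    return list(optimizers) if isinstance(optimizers, (list, tuple)) else [optimizers]
```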
* added get dataloaders directly using a getter
* deleted decorator
* added prepare_data hook
* refactored dataloader init
* added dataloader reset flag and main loop
* made changes
* fixed bad loaders
* fixed error in .fit with loaders
* fixes #909
* bug fix
* Fixes #902
* new way of passing dataloaders
* fixed docs
* fixed codestyle to follow flake8
* allow val/test be list of dataloaders and smarter checking
* added test
* fix flake error
* fix linking to new test model
* split into multiple test
* fix naming and typo
* minor documentation changes
* remove random file
* Update trainer.py
* better error/warning message
* final adjustments
* update CHANGELOG.md
Co-authored-by: William Falcon <waf2107@columbia.edu>
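A usage sketch of the new way of passing dataloaders to `.fit()`, with val/test accepting a single loader or a list; keyword names here reflect the API described above but are best treated as illustrative, and the `model` is assumed to be a LightningModule defined elsewhere.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

train_loader = DataLoader(TensorDataset(torch.randn(64, 32), torch.randint(0, 4, (64,))), batch_size=8)
val_a = DataLoader(TensorDataset(torch.randn(16, 32), torch.randint(0, 4, (16,))), batch_size=8)
val_b = DataLoader(TensorDataset(torch.randn(16, 32), torch.randint(0, 4, (16,))), batch_size=8)

trainer = pl.Trainer(max_epochs=1)
# loaders go straight to fit(); val (and test) may be a single loader or a list of loaders
trainer.fit(model, train_loader, val_dataloaders=[val_a, val_b])  # `model` assumed defined above
```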
* added tpu docs
* added tpu flags
* add tpu docs + init training call
* amp
* optimizer step
* added auto data transfer to TPU
* fix test pkg create (#873)
* added auto data transfer to TPU
* added test return and print
* Update pytorch_lightning/trainer/trainer.py
Co-Authored-By: Luis Capelo <luiscape@gmail.com>
* Fix segmentation example (#876)
* removed torchvision model and added custom model
* minor fix
* Fixed relative imports issue
* Fix/typo (#880)
* Update greetings.yml
* Changelog (#869)
* Create CHANGELOG.md
* Update CHANGELOG.md
* Update PULL_REQUEST_TEMPLATE.md
* Add PR links to Version 0.6.0 in CHANGELOG.md
* Add PR links for Unreleased in CHANGELOG.md
* Update PULL_REQUEST_TEMPLATE.md
* Fixing Function Signatures (#871)
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Luis Capelo <luiscape@gmail.com>
Co-authored-by: Akshay Kulkarni <akshayk.vnit@gmail.com>
Co-authored-by: Ethan Harris <ewah1g13@soton.ac.uk>
Co-authored-by: Shikhar Chauhan <xssChauhan@users.noreply.github.com>
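A small, heavily hedged fragment showing the torch_xla pieces the TPU commits above lean on (device lookup and the XLA optimizer step); `model` and `optimizer` are assumed to exist, and Lightning wires these calls up automatically rather than requiring them in user code.

```python
import torch_xla.core.xla_model as xm  # torch_xla is assumed installed (TPU images ship it)

device = xm.xla_device()   # the TPU core assigned to this process
model = model.to(device)   # `model` assumed to be an nn.Module/LightningModule

# on TPU the optimizer step goes through torch_xla so the XLA graph is executed
# and gradients are reduced across cores
xm.optimizer_step(optimizer)  # `optimizer` assumed created in configure_optimizers
```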
* added outline of all features
* updated common use cases doc
* updated docs
* remove unnecessary pass statements
* use isinstance for type checks
* remove unnecessary else/elif after return
* remove unnecessary return statements
* move doc string to top
* merge isinstance calls
* remove unnecessary else/elif after raise
* use list comprehension
* do not use len without comparison
* add missing shebang
* revert isinstance check back to type
broke tests, because bool is actually a subclass of int
* add missing period to doc string
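The two-line example below illustrates why the isinstance check was reverted: `bool` is a subclass of `int`, so isinstance-based checks accept booleans where only integers were intended.

```python
isinstance(True, int)   # True  -> a boolean flag passes an "is this an int?" check
type(True) == int       # False -> the stricter type() comparison rejects it
```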
* Fix default ckpt path when logger exists (#771)
* rename logging -> loggers (#767)
* move logging >> loggers
* add warning
* fix tests
* logging alias
* formatting
* formatting
* add more detail to tbptt example (#755)
* add more detail to tbptt example
* warn user about new arg in training_step
Co-authored-by: Vadim Bereznyuk <kuynzereb@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jeremy Jordan <13970565+jeremyjordan@users.noreply.github.com>
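A hedged sketch of the "new arg in training_step" the tbptt example warns about: with truncated BPTT enabled (roughly `Trainer(truncated_bptt_steps=...)`), `training_step` receives the previous split's hidden state and returns the new one. Shapes, the model, and the exact signature here are illustrative.

```python
import torch
import pytorch_lightning as pl


class TBPTTModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.GRU(input_size=8, hidden_size=8, batch_first=True)

    def training_step(self, batch, batch_idx, hiddens):
        # `hiddens` carries the state from the previous tbptt split
        x, y = batch
        out, new_hiddens = self.rnn(x, hiddens)
        loss = torch.nn.functional.mse_loss(out, y)
        return {"loss": loss, "hiddens": new_hiddens}
```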
* updated gitignore
* updated links in ninja file
* updated docs
* finished callbacks
* fixed left menu
* added callbacks to menu
* added direct links to docs
* finished rebase
* making private members
* working on trainer docs
* set auto dp if no backend
* fixed lightning import
* cleared spaces
* finished lightning module
* added callbacks
* added loggers
* flake 8
* fix docs path
* Update theme_variables.jinja
* Update README.md
* fixing TensorBoard (#687)
* flake8
* fix typo
* fix tensorboardlogger
drop test_tube dependence
* formatting
* fix tensorboard & tests
* upgrade Tensorboard
* test formatting separately
* try to fix JIT issue
* add tests for 1.4
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* fix dangling gradients
make sure only the gradients of the current optimizer's parameters are calculated in the training step.
* add note about multiple optimizer gradient update
* Update training_loop.py
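An illustrative sketch of the multiple-optimizer note above: gradients from one optimizer's backward pass can linger on another optimizer's parameters, so the dangling ones are cleared before stepping. This is a sketch of the idea, not the library's exact code.

```python
def zero_other_grads(optimizers, current_opt_idx):
    """Drop gradients accumulated on parameters owned by the other optimizers,
    so each optimizer only steps on its own freshly computed grads."""
    for opt_idx, optimizer in enumerate(optimizers):
        if opt_idx != current_opt_idx:
            optimizer.zero_grad()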
* Renamed `on_sanity_check_start` to `on_train_start` and added `on_train_end` to `ModelHooks`
* changed tests to use `on_train_start` instead of `on_sanity_check_start`
* use print for INFO and lower levels in summarize()
* use logging.INFO instead of magic number
* bring logging.info back for other cases
* move logging config to __init__.py
* prepend the model summary with a newline
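A minimal sketch of the logging setup the last few commits describe, assuming a single package-level configuration; handler details are illustrative.

```python
import logging

# configured once at import time (e.g. in the package __init__.py);
# logging.INFO replaces the previous magic number for the level
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

# the summary is emitted with a leading newline so the table starts on its own line
log.info("\n%s", "model summary table would go here")
```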