samlurye
f90849cc95
Deprecate LightningModule.summarize() in favor of pl.utilities.model_summary.summarize() ( #8513 )
...
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: ananthsub <ananth.subramaniam@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2021-08-03 22:08:51 +00:00
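The commit above replaces the instance method with a standalone utility. A minimal sketch of the new call path, assuming a PL release that includes this change (1.5+) and the import location named in the commit title; `TinyModel` is a hypothetical stand-in, not part of the PR:

```python
from torch import nn
import pytorch_lightning as pl
from pytorch_lightning.utilities.model_summary import summarize


class TinyModel(pl.LightningModule):
    """Hypothetical module used only to exercise the summary utility."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)


# New call path: the standalone utility replaces the deprecated model.summarize().
print(summarize(TinyModel()))
```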
Carlos Mocholí
e63968ab88
Add `pyupgrade` to `pre-commit` ( #8557 )
...
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2021-07-26 14:38:12 +02:00
Carlos Mocholí
a64cc37394
Replace `yapf` with `black` ( #7783 )
...
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2021-07-26 13:37:35 +02:00
Palermo
36b893c43e
Add `ModelSummary.max_depth` ( #8062 )
...
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2021-07-01 12:08:16 +02:00
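A hedged sketch of the `max_depth` knob named above: `1` keeps only top-level children, `-1` recurses into every submodule. The import path is assumed from later releases (at the time of this PR the class lived in `pytorch_lightning.core.memory`), and `NestedModel` is illustrative only:

```python
from torch import nn
import pytorch_lightning as pl
from pytorch_lightning.utilities.model_summary import ModelSummary


class NestedModel(pl.LightningModule):
    """Hypothetical module with nested children, only here to show max_depth."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        return self.head(self.encoder(x))


model = NestedModel()
print(ModelSummary(model, max_depth=1))   # top-level children only: encoder, head
print(ModelSummary(model, max_depth=-1))  # recurse into every submodule
```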
Vatsalya Chaubey
ce93d8bcfd
Handle errors due to uninitialized parameters ( #7642 )
...
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-06-14 15:56:03 +00:00
Rohit Gupta
7ca41734da
Add `dataloader_idx` to batch transfer hooks ( #6241 )
...
* replace with kwargs
* chlog
* fix
* add test
* fix
* device
* deepspeed
* pep
* optional
* docs
* bc
* comments
* pep
* mypy
* pep
* Apply suggestions from code review
* kwargs
* docs
* .
* .
* 1.3 -> 1.4
* kwargs -> step_kwargs
2021-05-13 23:03:55 +05:30
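A sketch of the batch-transfer hook signatures once `dataloader_idx` is threaded through, assuming the PL 1.4-era API; the per-loader branching is purely illustrative, not taken from the PR:

```python
import pytorch_lightning as pl


class MultiLoaderModel(pl.LightningModule):
    def on_before_batch_transfer(self, batch, dataloader_idx):
        # Runs before the batch leaves the host; can branch per dataloader.
        if dataloader_idx == 1:
            batch = batch.float()
        return batch

    def transfer_batch_to_device(self, batch, device, dataloader_idx):
        # Custom move; `device` is the target torch.device chosen by the Trainer.
        return batch.to(device)

    def on_after_batch_transfer(self, batch, dataloader_idx):
        # Runs after the move, e.g. for device-side augmentation.
        return batch
```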
Adrian Wälchli
02fa32b7bc
Handle torch.jit scripted modules in layer summary ( #6511 )
2021-03-15 03:17:42 +01:00
Rohit Gupta
bcc0004955
Add before_batch_transfer and after_batch_transfer hooks ( #3671 )
...
* add hooks
* comment
* docs
* add tests
* make it private
* fix tests
* docs
* chlog
* testcode
* codefactor
* fix doctest
* fix doctest
* suggestions
* is always overridden
* pep and BoringModel
* BoringModel
* docs
* docs
* docs
* fix
* rebase
* rebase
* suggestions
* docs
* suggestions
* try fix docs
* docs
* update name
* yapf
* docs
* rebase
* yapf
2021-02-18 06:58:12 -05:00
Justus Schock
da6dbc8d1d
PoC: Accelerator refactor ( #5743 )
...
* restoring the result from subprocess
* fix queue.get() order for results
* add missing "block_backward_sync" context manager
* add missing "block_backward_sync" context manager
* fix sync_batchnorm
* fix supported gpu-ids for tuple
* fix clip gradients and inf recursion
* accelerator selection: added cluster_environment plugin
* fix torchelastic test
* fix reduce early stopping decision for DDP
* fix tests: callbacks, conversion to lightning optimizer
* fix lightning optimizer does not pickle
* fix setting benchmark and deterministic option
* fix slurm amp test
* fix prepare_data test and determine node_rank
* fix retrieving last path when testing
* remove obsolete plugin argument
* fix test: test_trainer_config
* fix torchscript tests
* fix trainer.model access
* move properties
* fix test_transfer_batch_hook
* fix auto_select_gpus
* fix omegaconf test
* fix test that needs to simulate slurm ddp
* add horovod plugin
* fix test with named arguments
* clean up whitespace
* fix datamodules test
* remove old accelerators
* fix naming
* move old plugins
* move to plugins
* create precision subpackage
* create training_type subpackage
* fix all new import errors
* fix wrong arguments order passed to test
* fix LR finder
* Added sharded training type and amp plugin
* Move clip grad to precision plugin
* Added sharded spawn, select accelerators based on distributed_backend + enable custom fp16 plugin automatically
* Fix import issue, attempting to fix tests
* Fix initial test
* Reflect hook logic from master, should wrap model after move to device
* Optional state consolidation, since master has optimizers not wrapped
* change attribute for instance test
* reset optimizers
optimizers are not used in main process, so state would be wrong.
* legacy
* imports in accel
* legacy2
* trainer imports
* fix import errors after rebase
* move hook to new setup location
* provide unwrapping logic
* fix trainer callback system
* added ddp2 implementation
* fix imports .legacy
* move plugins
* restore legacy
* drop test.py from root
* add tpu accelerator and plugins
* fixes
* fix lightning optimizer merge
* reset bugreportmodel
* unwrapping
* step routing forward
* model access
* unwrap
* opt
* integrate distrib_type
* sync changes
* sync
* fixes
* add forgotten generators
* add missing logic
* update
* import
* missed imports
* import fixes
* isort
* mv f
* changelog
* format
* move helper to parallel plugin
* d
* add world size
* clean up
* duplicate
* activate ddp_sharded and tpu
* set nvidia flags
* remove unused colab var
* use_tpu <-> on_tpu attrs
* make some ddp_cpu and clusterplugin tests pass
* Ref/accelerator connector (#5742 )
* final cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* connector cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* trainer cleanup
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* accelerator cleanup + missing logic in accelerator connector
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* add missing changes to callbacks
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* reflect accelerator changes to lightning module
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* clean cluster envs
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* cleanup plugins
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* add broadcasting
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* yapf
* remove plugin connector
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* plugins
* manual optimization
* update optimizer routing
* add rank to torchelastic
* fix memory mixed precision
* setstate on trainer for pickling in ddp spawn
* add predict method
* add back commented accelerator code
* adapt test for sync_batch_norm to new plugin
* fix deprecated tests
* fix ddp cpu choice when no num_processes are given
* yapf format
* skip a memory test that cannot pass anymore
* fix pickle error in spawn plugin
* x
* avoid
* x
* fix cyclic import in docs build
* add support for sharded
* update typing
* add sharded and sharded_spawn to distributed types
* make unwrap model default
* refactor LightningShardedDataParallel similar to LightningDistributedDataParallel
* update sharded spawn to reflect changes
* update sharded to reflect changes
* Merge 1.1.5 changes
* fix merge
* fix merge
* yapf isort
* fix merge
* yapf isort
* fix indentation in test
* copy over reinit scheduler implementation from dev1.2
* fix apex tracking calls with dev_debugger
* reduce diff to dev1.2, clean up
* fix trainer config test when gpus>0 and num_processes >0 and ddp_cpu
* sort plugin tests legacy/new
* fix error handling for amp on cpu
* fix merge
fix merge
fix merge
* [Feat] Resolve manual_backward (#5837 )
* resolve manual_backward
* resolve flake8
* update
* resolve for ddp_spawn
* resolve flake8
* resolve flake8
* resolve flake8
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* fix tests/accelerator tests on cpu
* [BugFix] Resolve manual optimization (#5852 )
* resolve manual_optimization
* update
* update
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* Remove copy trainer parameters to happen earlier within the loop and add safe guard to get ref model (#5856 )
* resolve a bug
* Accelerator refactor sharded rpc (#5854 )
* rpc branch
* merge
* update handling of rpc
* make devices etc. Optional in RPC
* set devices etc. later if necessary
* remove devices from sequential
* make devices optional in rpc
* fix import
* uncomment everything
* fix cluster selection
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
* resolve bug
* fix assert in rpc test
* resolve a test
* fix docs compilation
* accelerator refactor - fix for sharded parity test (#5866 )
* fix memory issue with ddp_spawn
* x
x
x
x
x
x
x
x
x
* x
* Remove DDP2 as this does not apply
* Add missing pre optimizer hook to ensure lambda closure is called
* fix apex docstring
* [accelerator][BugFix] Resolve some test for 1 gpu (#5863 )
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* update
* update
* revert init
* resolve a bug
* update
* resolve flake8
* update
* update
* update
* revert init
* update
* resolve flake8
* update
* update
* update
* update
* update
* all_gather
* update
* make plugins work, add misconfig for RPC
* update
* update
* remove breaking test
* resolve some tests
* resolve flake8
* revert to ddp_spawn
Co-authored-by: root <root@ip-172-31-88-60.ec2.internal>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
Co-authored-by: Justus Schock <justus.schock@rwth-aachen.de>
* yapf isort
* resolve flake8
* fix apex doctests
* fix apex doctests 2
* resolve docs
* update drone
* clean env
* update
* update
* update
* update
* merge
* Fix RPC related tests, clean out old API, update for new accelerator API [skip ci] (#5881 )
* Fix RPC related tests, clean out old API, update for new accelerator API
* Move tests out of legacy folder, update paths and names
* Update test_remove_1-4.py
* Expose properties for tpu cores/gpus/num_gpus
* Add root GPU property
* Move properties to properties.py
* move tests that were previously in drone
* Fix root GPU property (#5908 )
* Move root GPU to property, remove horovod set as this is handled in horovod plugin, ensure we mock correctly to set GPU accelerator
* Add missing tests back
* fix best model path transfer when no checkpoint callback available
* Fix setup hook order [wip] (#5858 )
* Call trainer setup hook before accelerator setup
* Add test case
* add new test
* typo
* fix callback order in test
Co-authored-by: tchaton <thomas@grid.ai>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* rename ddp sequential -> rpc sequential for special test
* revert
* fix stupid merge problem
* Use property in connector for sampler (#5913 )
* merge the import conflicts
* fix spawning of processes in slurm
* [wip] Fix some bugs for TPU [skip ci] (#5878 )
* fixed for single tpu
* fixed spawn
* fixed spawn
* update
* update
* wip
* resolve bugs
* resolve bug
* update on comment
* removed decorator
* resolve comments
* set to 4
* update
* update
* need cleaning
* update
* update
* update
* resolve flake8
* resolve bugs
* exclude broadcast
* resolve bugs
* change test
* update
* update
* skip if meet fails
* properly raise trace
* update
* add catch
* wrap test
* resolve typo
* update
* typo
Co-authored-by: Lezwon Castelino <lezwon@gmail.com>
Co-authored-by: Your Name <you@example.com>
* resolve some tests
* update
* fix imports
* update
* resolve flake8
* update azure pipeline
* skip a sharded test on cpu that requires a gpu
* resolve tpus
* resolve bug
* resolve flake8
* update
* update utils
* revert permission change on files
* suggestions from carlos
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* remove unrelated formatting changes
* remove incomplete comment
* Update pytorch_lightning/accelerators/__init__.py
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* remove unrelated formatting change
* add types
* warn 1.7 ddp manual backward only if ddp kwarg unset
* yapf + isort
* pep8 unused imports
* fix cyclic import in docs
* Apply suggestions from code review
* typer in accelerator.py
* typo
* Apply suggestions from code review
* formatting
* update on comments
* update typo
* Update pytorch_lightning/trainer/properties.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* update
* suggestion from code review
* suggestion from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: SeanNaren <sean@grid.ai>
Co-authored-by: Jirka Borovec <jirka.borovec@seznam.cz>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Ubuntu <ubuntu@ip-172-31-88-60.ec2.internal>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
Co-authored-by: root <root@ip-172-31-88-60.ec2.internal>
Co-authored-by: Lezwon Castelino <lezwon@gmail.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-02-12 15:48:56 -05:00
Jirka Borovec
b434c479e7
Quantisation ( #5706 )
...
* empty
* sq
* obs
* int
* ts
* helpers
* chlog
* yapf
* avg
* dupl
* Apply suggestions from code review
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* fixes
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* fixes
* note
* warn
* 45
* link
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* Apply suggestions from code review
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
* yapf
* flake8
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2021-02-11 07:04:57 -05:00
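A hedged sketch of wiring up the quantisation-aware-training callback this PR introduces, assuming the 1.x-era callback location; the `qconfig` value shown is an assumption rather than something prescribed by the PR:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import QuantizationAwareTraining

# Attach the callback; during fit it inserts observers/fake-quant modules and
# converts the model to a quantized one when training finishes.
trainer = pl.Trainer(callbacks=[QuantizationAwareTraining(qconfig="fbgemm")])
# trainer.fit(model)  # model is any LightningModule (not shown here)
```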
Jirka Borovec
58c90ba7b7
formatting 5/n: Core ( #5721 )
...
* yapf core
* Update pytorch_lightning/core/lightning.py
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
2021-02-08 14:29:43 -05:00
chaton
5f3372871a
[feat] Add PyTorch Profiler. ( #5560 )
...
* add profiler
* add profiler
* update
* resolve flake8
* update doc
* update changelog
* clean doc
* delete prof file
* merge pr codebase
* update
* update doc
* update doc
* update doc
* update on comments
* update docstring
* update docstring
* try
* update test
* Update pytorch_lightning/profiler/__init__.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* Update pytorch_lightning/profiler/__init__.py
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
* update on comments
* remove old code
* add support for ddp
* resolve flake8
* Update pytorch_lightning/profiler/__init__.py
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
* resolve tests
* resolve flake8
Co-authored-by: Jirka Borovec <jirka.borovec@seznam.cz>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
2021-01-26 06:48:54 -05:00
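A minimal sketch of enabling the profiler added here; both the string shortcut and the explicit object form are assumed from the 1.x-series documentation rather than this PR's diff:

```python
import pytorch_lightning as pl
from pytorch_lightning.profiler import PyTorchProfiler

# Either ask for the profiler by name ...
trainer = pl.Trainer(profiler="pytorch")
# ... or pass a configured instance explicitly.
trainer = pl.Trainer(profiler=PyTorchProfiler())
```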
Jirka Borovec
7b30133a82
flake8 & isort ( #5647 )
2021-01-25 14:31:38 -05:00
NeuralLink
db784225eb
summarize total size of model params in bytes ( #5590 )
...
* simplified model size calc
* fix spaces
* fix newlines
* minor refactor
* Update pytorch_lightning/core/memory.py
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* make model size property
* fix doctest
* Update pytorch_lightning/core/memory.py
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* remove explicit doctest from file
* better docs
* model precalculate size 1.0 mbs
* better comment
* Update tests/core/test_memory.py
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* Update tests/core/test_memory.py
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* merge _model_size into model_size property itself
* minor comment fix
* add feature to changelog
* added precision test
* isort
* minor def name typo
* remove monkeypatch set env as boringmodel won't need any torch hub cache
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2021-01-25 09:35:29 +01:00
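A hedged sketch of the size readout this PR adds; the property name (`LightningModule.model_size`, reported in megabytes) is assumed from later releases and not quoted from the PR itself:

```python
from torch import nn
import pytorch_lightning as pl


class LinearModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(1000, 1000)  # ~1e6 float32 parameters, roughly 4 MB

    def forward(self, x):
        return self.layer(x)


print(LinearModel().model_size)  # estimated parameter size in MB
```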
Jirka Borovec
54d20dc596
Refactor: clean trainer device & distrib getters ( #5300 )
...
* warnings
* .
* .
* flake8
* .
* .
* .
* use_tpu
* use_dp
* .
* use_ddp
* .
* use_horovod
* .
* .
* .
2021-01-12 05:22:37 -05:00
George
b29757da90
Implemented ModelSummary total params values ( #4521 )
...
* Implemented ModelSummary total params values
Signed-off-by: George Corrêa de Araújo <george.gcac@gmail.com>
* Fixed documentation, handling modules that are containers for other modules when calculating total params
Signed-off-by: gca <george.gcac@gmail.com>
* Reduced max line length, updated total number of params layout
Signed-off-by: gca <george.gcac@gmail.com>
* Now using only top-level modules of main module to calculate total params
Signed-off-by: gca <george.gcac@gmail.com>
* Added default value for named_modules param in summarize function
Signed-off-by: gca <george.gcac@gmail.com>
* Removed summary function params, removed unused properties
Signed-off-by: gca <george.gcac@gmail.com>
* Changed from np.prod(shape) to numel
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
* changelog
* Update pytorch_lightning/core/memory.py
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Nicki Skafte <skaftenicki@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-11-22 07:07:52 +01:00
Jonas Haag
8dfbf6371b
Model summary: add 1 decimal place ( #4745 )
...
Show 1999 parameters as 1.9 K and 1000 parameters as 1.0 K, rather than both as 1 K.
Co-authored-by: chaton <thomas@grid.ai>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Sean Naren <sean.narenthiran@gmail.com>
2020-11-20 23:22:21 +01:00
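The behaviour described above lives in the summary's human-readable count helper; a small sketch, with the helper name and import path assumed from later releases (it originally sat in `pytorch_lightning.core.memory`):

```python
from pytorch_lightning.utilities.model_summary import get_human_readable_count

print(get_human_readable_count(1_000))      # "1.0 K" rather than "1 K"
print(get_human_readable_count(1_500_000))  # "1.5 M"
```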
Rohit Gupta
24809b0b26
Refactor GPUStatsMonitor to improve training speed ( #3257 )
...
* Refactor GPUMonitor to improve training speed
* added gpu ids to monitor
* update tests
* added deprecation warning
* pep
* fix test
* fix docs
* fix log_gpu_memory
* move deprecation check
* chlog
* Update CHANGELOG.md
* suggestions and fix
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
2020-09-04 06:02:16 -04:00
Peter Yu
cee5eaf659
flake8 fixes ( #3064 )
...
* flake8 fixes
* fix pep8
* fix pep8
Co-authored-by: William Falcon <waf2107@columbia.edu>
2020-08-20 07:45:22 -04:00
William Falcon
f43028f3ae
added copyright notices ( #3062 )
2020-08-19 22:03:22 -04:00
Jirka Borovec
4354690e55
add apex test ( #2921 )
...
* add apex test
* rename
* level
* events
* wrap
* evt
* miss
* apex
* apex
* apex
* apex
* apex
* apex
* Update tests/models/test_amp.py
Co-authored-by: William Falcon <waf2107@columbia.edu>
* notes
* notes
Co-authored-by: William Falcon <waf2107@columbia.edu>
2020-08-13 10:03:13 -04:00
Jirka Borovec
a6e7aa7796
allow using apex with any PT version ( #2865 )
...
* wip
* setup
* type
* name
* wip
* docs
* imports
* fix if
* fix if
* use_amp
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* fix tests
* Apply suggestions from code review
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
* fix tests
* todos
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
2020-08-08 11:07:32 +02:00
Jirka Borovec
b7d72706c3
clean imports ( #2867 )
...
* clean imports
* miss
2020-08-08 00:33:51 +02:00
William Falcon
e068af9ea8
Ampt ( #2572 )
...
* remove grad scaling tpu
* remove grad scaling tpu
* remove grad scaling tpu
* remove grad scaling tpu
* remove grad scaling tpu
* remove grad scaling tpu
* remove grad scaling tpu
* remove grad scaling tpu
* remove grad scaling tpu
2020-07-09 21:28:11 -04:00
Adrian Wälchli
6bfcfa8671
fix dtype conversion of example_input_array in model summary ( #2510 )
...
* fix dtype conversion
* changelog
2020-07-05 07:17:22 -04:00
Jirka Borovec
0be78d13aa
native amp ( #2373 )
...
* native amp
* typo
* imports
* apex
2020-06-26 21:45:13 -04:00
Adrian Wälchli
f972ab3a82
Fix summary hook handles not getting removed ( #2298 )
...
* detach hooks after completion
* detach hook
* update docs
* add test
* docs
* changelog
2020-06-20 07:38:47 -04:00
Adrian Wälchli
7dc58bd286
Refactor model summary + generalize example input array ( #1773 )
...
* squash
variant a
variant b
add test
revert rename
add changelog
docs
move changelog entry to top
use hooks
wip
wipp
layer summary
clean up, refactor
type hints
rename
remove obsolete code
rename
unused imports
simplify formatting of table and increase readability
doctest
superclass object
update examples
print unknown sizes
more docs and doctest
testing
unknown layers
add rnn test
remove main
restore train mode
test device wip
device
constant
simplify model forward transfer
return summary object in method
extend tests
fix summary for empty module
extend tests
refactor and added hook
variant a
variant b
add test
revert rename
add changelog
docs
move changelog entry to top
remove hardcoded string
simplify
test unknown shapes and all others
comments for tests
fix hparams attribute
* update default
* unused import
* clean up
* replace hardcoded strings
* fix doctest
* fix top/full
* black
* fix rnn test
* fix rnn
* update debugging docs
update docs
typo
update docs
update docs
* add changelog
* extract constant
* setter and getter
* move parity models to test folder
* parameterize mode
2020-06-15 17:05:58 -04:00
Adrian Wälchli
2ab2f7d08d
Improved docs for pytorch_lightning.core (continued) ( #1483 )
...
* improved docs for core
update links
add references to hooks lifecycle
wip
continue with __init__.py
improve docs for memory.py
improve docs for saving.py
simpler links
fix formatting
* move hooks lifecycle to top of file
* fix doctest import problem
* add missing hook in lifecycle
2020-04-16 12:04:55 -04:00
Tullie Murrell
b31edf37bf
Change min max gpu memory to be on their own plots ( #1358 )
2020-04-03 14:59:27 -04:00
Jirka Borovec
22a7264e9a
improve partial Codecov ( #1172 )
...
* ignore in setup
* show report
* abs imports
* abstract pass
* cover loggers
* doctest trains
* locals
* pass
* revert tensorboard
* use tensorboardX
* revert tensorboardX
* fix trains
* Add TrainsLogger.set_credentials (#1179 )
* Add TrainsLogger.set_credentials to control trains server configuration and authentication from code. Sync trains package version.
Fix CI Trains tests
* Add global TrainsLogger set_bypass_mode (#1187 )
* Add global TrainsLogger set_bypass_mode skips all external communication
Co-authored-by: bmartinn <>
* rm some no-cov
Co-authored-by: Martin.B <51887611+bmartinn@users.noreply.github.com>
2020-03-19 09:14:29 -04:00
Jacob Zhong
1a73fa0b03
change default logger to dedicated one ( #1064 )
...
Fix test
Fix format
Update pytorch_lightning/__init__.py
Separate imports
2020-03-17 18:44:00 -04:00
Adrian Wälchli
3c2fd560aa
Type Hints for Lightning Core ( #946 )
...
* first pass for LightningModule typehints
* fix return types
* add missing types
* add type annotations to grads.py
* add type annotations to hooks.py
* add type annotation to memory.py
* proper docstring quotation marks
* add type annotations to saving.py
* fix cyclic import problem
* fix cyclic import problem
* add missing whitespace
* finish type hints for load_from_ methods
* docs: prepare_data does not return anything
* fix auto types in docs
* revert typehint for trainer in hook
* remove unnecessary return docs
* some fixes for memory docs
* revert typing for args kwargs
* added all missing None return types
* remove unused import
* add more details to dict/list return types
* fix line too long
* optimize imports
* linted
* Revert "linted"
This reverts commit 85559611e8.
* remove whitespace
* update
* update
* update
* update
* update
* changelog
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
2020-03-12 12:47:23 -04:00
Hanbyul Kim
563e2ba2c6
resolving documentation warnings ( #833 )
...
* add more underline
* fix LightningModule import error
* remove unneeded blank line
* escape asterisk to fix inline emphasis warning
* add PULL_REQUEST_TEMPLATE.md
* add __init__.py and import imagenet_example
* fix duplicate label
* add noindex option to fix duplicate object warnings
* remove unexpected indent
* refer explicit LightningModule
* fix minor bug
* refer EarlyStopping explicitly
* restore exclude patterns
* change the way how to refer class
* remove unused import
* update badges & drop Travis/Appveyor (#826 )
* drop Travis
* drop Appveyor
* update badges
* fix missing PyPI images & CI badges (#853 )
* docs - anchor links (#848 )
* docs - add links
* add desc.
* add Greeting action (#843 )
* add Greeting action
* Update greetings.yml
Co-authored-by: William Falcon <waf2107@columbia.edu>
* add pep8speaks (#842 )
* advanced profiler describe + cleaned up tests (#837 )
* add py36 compatibility
* add test case to capture previous bug
* clean up tests
* clean up tests
* Update lightning_module_template.py
* Update lightning.py
* respond lint issues
* break long line
* break more lines
* checkout conflicting files from master
* shorten url
* checkout from upstream/master
* remove trailing whitespaces
* remove unused import LightningModule
* fix sphinx bot warnings
* Apply suggestions from code review
just to trigger CI
* Update .github/workflows/greetings.yml
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: William Falcon <waf2107@columbia.edu>
Co-authored-by: Jeremy Jordan <13970565+jeremyjordan@users.noreply.github.com>
2020-02-27 16:07:51 -05:00
Adrian Wälchli
472f394788
Resolve some codefactor issues ( #756 )
...
* remove unnecessary pass statements
* use isinstance for type checks
* remove unnecessary else/elif after return
* remove unnecessary return statements
* move doc string to top
* merge isinstance calls
* remove unnecessary else/elif after raise
* use list comprehension
* do not use len without comparison
* add missing shebang
* revert isinstance check back to type
broke tests, because bool is actually subclass of int
* add missing period to doc string
* remove unnecessary pass statements
* use isinstance for type checks
* remove unnecessary else/elif after return
* remove unnecessary return statements
* move doc string to top
* merge isinstance calls
* remove unnecessary else/elif after raise
* use list comprehension
* do not use len without comparison
* add missing shebang
* revert isinstance check back to type
broke tests, because bool is actually subclass of int
* add missing period to doc string
* Fix default ckpt path when logger exists (#771 )
* rename logging -> loggers (#767 )
* move logging >> loggers
* add warning
* fix tests
* logging alias
* formatting
* formatting
* use isinstance for type checks
* revert isinstance check back to type
broke tests, because bool is actually subclass of int
* add more detail to tbptt example (#755 )
* add more detail to tbptt example
* warn user about new arg in training_step
Co-authored-by: Vadim Bereznyuk <kuynzereb@gmail.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Jeremy Jordan <13970565+jeremyjordan@users.noreply.github.com>
2020-02-01 18:44:05 -05:00
Jirka Borovec
76a1c67d87
rename logging -> loggers ( #767 )
...
* move logging >> loggers
* add warning
* fix tests
* logging alias
* formatting
* formatting
2020-02-01 15:47:58 -05:00
Nicki Skafte
9a6838d349
Removed dependency on pandas, instead use generic csv ( #736 )
...
* removed dependency on pandas, instead use generic csv
* remove mnist files, pushed by accident
* added docstring and small fixes
* Update memory.py
* fixed path
Co-authored-by: William Falcon <waf2107@columbia.edu>
2020-01-29 14:52:23 -05:00
Alexey U. Gudchenko
06242c200a
Fix issue_703: backward compatibility with python3.6 ( #715 )
2020-01-20 14:50:57 -05:00
Jirka Borovec
ea59a99426
update org paths & convert logos ( #685 )
...
* fix typos
* update org paths
* update links from READMe to docs
* add svg logo
* add svg logo-text
* update logos
* testing temp paths
* prune links from readme
* optimize imports
* update logo
* update paths in README
* missing imports
2020-01-20 14:50:31 -05:00
VSJMilewski
d562172b4c
Allow for multiple example inputs when creating summary ( #543 )
2019-12-09 04:42:07 -08:00
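A hedged sketch of the multi-input case named above: setting `example_input_array` to a tuple lets the summary forward several inputs at once. Shapes and the module itself are illustrative only, not taken from the PR:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class TwoInputModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(3 + 5, 2)
        # A tuple here is unpacked as positional arguments when the summary
        # runs a forward pass to record per-layer input/output sizes.
        self.example_input_array = (torch.rand(2, 3), torch.rand(2, 5))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))
```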
Jirka Borovec
3a58937d8b
rename variables nb -> num ( #567 )
...
* rename nb -> num
* flake8
* batch_nb, epoch_nb, gpu_nb, split_nb
* add _num deprecations
2019-12-04 06:57:10 -05:00
Jirka Borovec
9785a3e78e
Refactor: name modules ( #548 )
...
* refactor: rename some modules
* add deprecation warnings
* fix paths
2019-11-26 22:39:18 -05:00