lightning/pytorch_lightning/utilities
Lezwon Castelino d2cd7cb0f9
Add option for weight tying on TPU's (#5441)
* added on_post_move_to_device

* added tests

* docs and refactors

* Update tests/backends/test_tpu_backend.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update docs/source/tpu.rst

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update docs/source/tpu.rst

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update pytorch_lightning/core/decorators.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update pytorch_lightning/core/decorators.py

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>

* Update docs/source/tpu.rst

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* Update pytorch_lightning/core/decorators.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* Update pytorch_lightning/core/decorators.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* Update pytorch_lightning/core/decorators.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* Update pytorch_lightning/core/decorators.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* Update pytorch_lightning/core/hooks.py

Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>

* moved weight sharing module back to test

updated tpu available

* add count to warning

* fix doctest

* import trainer in doctest

* import trainer in doctest

* do not test code as no TPU device

* param count to layer count

* formatting

* update docs

* update import

* update

* resolve tests

* remove legacy accelerator

Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
Co-authored-by: Rohit Gupta <rohitgr1998@gmail.com>
Co-authored-by: tchaton <thomas@grid.ai>
Co-authored-by: Your Name <you@example.com>
2021-02-18 00:03:26 +00:00
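The commit above introduces an `on_post_move_to_device` hook: moving a module to an XLA device re-creates each parameter, which silently breaks Python-level weight tying, so the hook gives modules a place to re-apply the tie afterwards. Below is a minimal, dependency-free sketch of that idea; `TinyModule`, `Param`, and `move_to_device` are hypothetical stand-ins (not the real LightningModule/XLA APIs) used only to show why the hook runs after the move.

```python
class Param:
    """Hypothetical stand-in for a torch.nn.Parameter."""

    def __init__(self, data):
        self.data = data


class TinyModule:
    """Hypothetical stand-in for a LightningModule with tied weights."""

    def __init__(self):
        self.encoder_weight = Param([1.0, 2.0])
        # Weight tying: decoder shares the *same* parameter object.
        self.decoder_weight = self.encoder_weight

    def move_to_device(self):
        # Simulates the device transfer: each parameter is re-created,
        # which silently breaks the tie between encoder and decoder.
        self.encoder_weight = Param(list(self.encoder_weight.data))
        self.decoder_weight = Param(list(self.decoder_weight.data))

    def on_post_move_to_device(self):
        # The hook added by #5441: re-apply weight tying after the move.
        self.decoder_weight = self.encoder_weight


m = TinyModule()
m.move_to_device()
assert m.encoder_weight is not m.decoder_weight  # tie broken by the move
m.on_post_move_to_device()
assert m.encoder_weight is m.decoder_weight  # tie restored by the hook
```

In the real PR the trainer invokes the hook automatically after transferring the model, and the `parameter_validation` decorator warns when the parameter count changes across the move (the "param count to layer count" log entry above refers to refining that warning).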
__init__.py DeepSpeed Integration (#5954) 2021-02-17 15:23:42 -05:00
apply_func.py formatting to PL utils (#5713) 2021-01-30 15:28:59 +01:00
argparse.py formatting to PL utils (#5713) 2021-01-30 15:28:59 +01:00
argparse_utils.py formatting to PL utils (#5713) 2021-01-30 15:28:59 +01:00
cloud_io.py Update cloud_io.py (#4936) 2020-12-04 11:00:06 +01:00
data.py Remove torch<=1.4.0 checks (#5998) 2021-02-16 17:53:40 -05:00
debugging.py Refactor setup_training and remove test_mode (#5388) 2021-02-03 19:41:42 +01:00
device_dtype_mixin.py Add option for weight tying on TPU's (#5441) 2021-02-18 00:03:26 +00:00
device_parser.py PoC: Accelerator refactor (#5743) 2021-02-12 15:48:56 -05:00
distributed.py formatting to PL utils (#5713) 2021-01-30 15:28:59 +01:00
enums.py DeepSpeed Integration (#5954) 2021-02-17 15:23:42 -05:00
exceptions.py formatting to PL utils (#5713) 2021-01-30 15:28:59 +01:00
imports.py DeepSpeed Integration (#5954) 2021-02-17 15:23:42 -05:00
memory.py Make move_metrics_to_cpu work recursively (#6007) 2021-02-16 21:52:42 +00:00
model_helpers.py Fix isort failures in utilities (#5530) 2021-01-15 13:57:40 -05:00
model_utils.py formatting to PL utils (#5713) 2021-01-30 15:28:59 +01:00
parsing.py flake8 + yapf 2021-02-04 20:55:58 +01:00
seed.py formatting to PL utils (#5713) 2021-01-30 15:28:59 +01:00
upgrade_checkpoint.py notices (#4118) 2020-10-13 07:18:07 -04:00
warning_utils.py formatting to PL utils (#5713) 2021-01-30 15:28:59 +01:00
warnings.py Add deprecation warning to ModelCheckpoint when logging val_loss with no monitor (#6012) 2021-02-17 10:46:58 +00:00
xla_device.py Increase TPU check timeout (#5598) 2021-02-04 20:55:41 +01:00
xla_device_utils.py Increase TPU check timeout (#5598) 2021-02-04 20:55:41 +01:00