lightning/tests/tests_fabric/accelerators
byi8220 0afd4e1375
Replace `using_pjrt()` with xla runtime `device_type()` check in xla.py for `torch-xla>=2.5` (#20442)
* Replace `using_pjrt()` with xla runtime `device_type()` check in xla.py

Fixes https://github.com/Lightning-AI/pytorch-lightning/issues/20419

`torch_xla.runtime.using_pjrt()` was removed in https://github.com/pytorch/xla/pull/7787.

This PR replaces references to that function with a check of [`device_type()`](https://github.com/pytorch/xla/blob/master/torch_xla/runtime.py#L83), recreating the behavior of that function minus the manual initialization.
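As a minimal sketch of the version-compatible check described above (the helper name `_using_pjrt` and the `lru_cache` wrapper are illustrative assumptions, not necessarily the exact Lightning implementation):

```python
# Hypothetical compatibility shim: prefer using_pjrt() where it still exists
# (torch-xla < 2.5) and fall back to device_type() on torch-xla >= 2.5,
# where using_pjrt() was removed (pytorch/xla#7787).
from functools import lru_cache


@lru_cache(maxsize=1)
def _using_pjrt() -> bool:
    from torch_xla import runtime as xr  # requires torch-xla to be installed

    if hasattr(xr, "using_pjrt"):  # torch-xla < 2.5
        return xr.using_pjrt()
    # torch-xla >= 2.5: device_type() returns a device string such as "TPU",
    # or None when no XLA device is configured, mirroring the old behavior
    # minus the manual runtime initialization.
    return xr.device_type() is not None
```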

* Added and refactored tests for version compatibility

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* precommit

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2024-11-25 11:47:19 +01:00
File              Last commit                                   Date
__init__.py       Rename LightningLite to Fabric (#16244)       2023-01-04 10:57:18 -05:00
test_cpu.py       ruff: replace isort with ruff +TPU (#17684)   2023-09-26 11:54:55 -04:00
test_cuda.py      bump python 3.9+ (#20413)                     2024-11-25 09:20:17 +01:00
test_mps.py       Drop support for PyTorch 1.11 (#18691)        2023-10-04 20:30:44 +02:00
test_registry.py  bump python 3.9+ (#20413)                     2024-11-25 09:20:17 +01:00
test_xla.py       Replace `using_pjrt()` with xla runtime `device_type()` check in xla.py for `torch-xla>=2.5` (#20442)   2024-11-25 11:47:19 +01:00