# lightning/tests/utilities/test_apply_func_torchtext.py
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
import torch
from pytorch_lightning.utilities.apply_func import move_data_to_device
from pytorch_lightning.utilities.imports import _TORCHTEXT_LEGACY
from tests.helpers.runif import RunIf
from tests.helpers.torchtext_utils import get_dummy_torchtext_data_iterator
@pytest.mark.parametrize("include_lengths", [False, True])
@pytest.mark.parametrize("device", [torch.device("cuda", 0)])
@pytest.mark.skipif(not _TORCHTEXT_LEGACY, reason="torchtext.legacy is deprecated.")
@RunIf(min_gpus=1)
def test_batch_move_data_to_device_torchtext_include_lengths(include_lengths, device):
    data_iterator, _ = get_dummy_torchtext_data_iterator(num_samples=3, batch_size=3, include_lengths=include_lengths)
    data_iter = iter(data_iterator)
    batch = next(data_iter)
    with pytest.deprecated_call(match="The `torchtext.legacy.Batch` object is deprecated"):
        batch_on_device = move_data_to_device(batch, device)
    if include_lengths:
        # tensor with data
        assert batch_on_device.text[0].device == device
        # tensor with length of data
        assert batch_on_device.text[1].device == device
    else:
        assert batch_on_device.text.device == device
@pytest.mark.parametrize("include_lengths", [False, True])
@pytest.mark.skipif(not _TORCHTEXT_LEGACY, reason="torchtext.legacy is deprecated.")
def test_batch_move_data_to_device_torchtext_include_lengths_cpu(include_lengths):
    test_batch_move_data_to_device_torchtext_include_lengths(include_lengths, torch.device("cpu"))