#!/usr/bin/env python
# -*- coding: utf-8 -*-

import unittest
import pytest

from rapidfuzz import process, fuzz, utils

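# every scorer in this list returns a normalized similarity in the range 0 - 100,
# so the generic parametrized tests below can be shared between them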
scorers = [
    fuzz.ratio,
    fuzz.partial_ratio,
    fuzz.token_sort_ratio,
    fuzz.token_set_ratio,
    fuzz.token_ratio,
    fuzz.partial_token_sort_ratio,
    fuzz.partial_token_set_ratio,
    fuzz.partial_token_ratio,
    fuzz.WRatio,
    fuzz.QRatio
]


class RatioTest(unittest.TestCase):
    s1 = "new york mets"
    s1a = "new york mets"
    s2 = "new YORK mets"
    s3 = "the wonderful new york mets"
    s4 = "new york mets vs atlanta braves"
    s5 = "atlanta braves vs new york mets"
    s6 = "new york mets - atlanta braves"

    def testNoProcessor(self):
        self.assertEqual(fuzz.ratio(self.s1, self.s1a), 100)
        self.assertNotEqual(fuzz.ratio(self.s1, self.s2), 100)

    def testPartialRatio(self):
        self.assertEqual(fuzz.partial_ratio(self.s1, self.s3), 100)

    def testTokenSortRatio(self):
        self.assertEqual(fuzz.token_sort_ratio(self.s1, self.s1a), 100)

    def testPartialTokenSortRatio(self):
        self.assertEqual(fuzz.partial_token_sort_ratio(self.s1, self.s1a), 100)
        self.assertEqual(fuzz.partial_token_sort_ratio(self.s4, self.s5), 100)

    def testTokenSetRatio(self):
        self.assertEqual(fuzz.token_set_ratio(self.s4, self.s5), 100)

    def testPartialTokenSetRatio(self):
        self.assertEqual(fuzz.partial_token_set_ratio(self.s4, self.s5), 100)

    def testQuickRatioEqual(self):
        self.assertEqual(fuzz.QRatio(self.s1, self.s1a), 100)

    def testQuickRatioCaseInsensitive(self):
        self.assertEqual(fuzz.QRatio(self.s1, self.s2), 100)

    def testQuickRatioNotEqual(self):
        self.assertNotEqual(fuzz.QRatio(self.s1, self.s3), 100)

    def testWRatioEqual(self):
        self.assertEqual(fuzz.WRatio(self.s1, self.s1a), 100)

    def testWRatioCaseInsensitive(self):
        self.assertEqual(fuzz.WRatio(self.s1, self.s2), 100)

    def testWRatioPartialMatch(self):
        # a partial match is scaled by .9
        self.assertEqual(fuzz.WRatio(self.s1, self.s3), 90)

    def testWRatioMisorderedMatch(self):
        # misordered full matches are scaled by .95
        self.assertEqual(fuzz.WRatio(self.s4, self.s5), 95)

    def testWRatioUnicode(self):
        self.assertEqual(fuzz.WRatio(self.s1, self.s1a), 100)

    def testQRatioUnicode(self):
        self.assertEqual(fuzz.QRatio(self.s1, self.s1a), 100)

    def testIssue76(self):
        self.assertAlmostEqual(fuzz.partial_ratio("physics 2 vid", "study physics physics 2"), 81.81818, places=4)
        self.assertEqual(fuzz.partial_ratio("physics 2 vid", "study physics physics 2 video"), 100)


@pytest.mark.parametrize("scorer", scorers)
def test_empty_string(scorer):
"""
when both strings are empty this is a perfect match
"""
assert scorer("", "") == 100
@pytest.mark.parametrize("scorer", scorers)
def test_none_string(scorer):
"""
when None is passed to a scorer the result should always be 0
"""
assert scorer("test", None) == 0
assert scorer(None, "test") == 0
@pytest.mark.parametrize("scorer", scorers)
def test_simple_unicode_tests(scorer):
"""
some very simple tests using unicode with scorers
to catch relatively obvious implementation errors
"""
s1 = u"ÁÄ"
s2 = "ABCD"
assert scorer(s1, s2) == 0
assert scorer(s1, s1) == 100
2020-05-24 07:57:08 +00:00
2020-12-13 15:55:45 +00:00
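# processor=True is expected to select the default preprocessing
# (utils.default_process); passing the function directly or wrapping it in a
# lambda should give the same result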
@pytest.mark.parametrize("processor", [True, utils.default_process, lambda s: utils.default_process(s)])
@pytest.mark.parametrize("scorer", scorers)
def test_scorer_case_insensitive(processor, scorer):
"""
each scorer should be able to preprocess strings properly
"""
assert scorer(RatioTest.s1, RatioTest.s2, processor=processor) == 100
@pytest.mark.parametrize("processor", [False, None, lambda s: s])
def test_ratio_case_sensitive(processor):
    assert fuzz.ratio(RatioTest.s1, RatioTest.s2, processor=processor) != 100


@pytest.mark.parametrize("scorer", scorers)
def test_custom_processor(scorer):
    """
    Any scorer should accept any type as s1 and s2, as long as it is a string
    after preprocessing.
    """
s1 = ["chicago cubs vs new york mets", "CitiField", "2011-05-11", "8pm"]
s2 = ["chicago cubs vs new york mets", "CitiFields", "2012-05-11", "9pm"]
s3 = ["different string", "CitiFields", "2012-05-11", "9pm"]
assert scorer(s1, s2, processor=lambda event: event[0]) == 100
assert scorer(s2, s3, processor=lambda event: event[0]) != 100
@pytest.mark.parametrize("scorer", scorers)
def test_help(scorer):
    """
    test that all help texts can be printed without throwing an exception,
    since they are implemented in C++ as well
    """
    help(scorer)


if __name__ == '__main__':
    unittest.main()