From 5f0b08e8ab79fd3553472bfbcaf75cca1a29b1a1 Mon Sep 17 00:00:00 2001 From: Max Bachmann Date: Sat, 29 Jan 2022 19:28:26 +0100 Subject: [PATCH] restructure documentation --- CODE_OF_CONDUCT.md | 132 ++++++++++++++++++ .../index.rst} | 0 docs/License.rst | 26 +--- docs/References.rst | 4 + docs/Usage/distance/Levenshtein.rst | 91 +++++++++++- docs/Usage/distance/index.rst | 4 +- docs/changelog_link.rst | 2 + docs/conf.py | 3 +- docs/index.rst | 29 ++-- docs/refs.bib | 39 ++++++ docs/requirements.txt | 5 + src/cython/distance/Indel.pyx | 10 -- src/cython/distance/Levenshtein.pyx | 117 ++-------------- 13 files changed, 295 insertions(+), 167 deletions(-) create mode 100644 CODE_OF_CONDUCT.md rename docs/{Contributing.rst => Contributing/index.rst} (100%) create mode 100644 docs/References.rst create mode 100644 docs/changelog_link.rst create mode 100644 docs/refs.bib create mode 100644 docs/requirements.txt diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 0000000..a43ace8 --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,132 @@ + +# Contributor Covenant Code of Conduct + +## Our Pledge + +We as members, contributors, and leaders pledge to make participation in our +community a harassment-free experience for everyone, regardless of age, body +size, visible or invisible disability, ethnicity, sex characteristics, gender +identity and expression, level of experience, education, socio-economic status, +nationality, personal appearance, race, caste, color, religion, or sexual +identity and orientation. + +We pledge to act and interact in ways that contribute to an open, welcoming, +diverse, inclusive, and healthy community. + +## Our Standards + +Examples of behavior that contributes to a positive environment for our +community include: + +* Demonstrating empathy and kindness toward other people +* Being respectful of differing opinions, viewpoints, and experiences +* Giving and gracefully accepting constructive feedback +* Accepting responsibility and apologizing to those affected by our mistakes, + and learning from the experience +* Focusing on what is best not just for us as individuals, but for the overall + community + +Examples of unacceptable behavior include: + +* The use of sexualized language or imagery, and sexual attention or advances of + any kind +* Trolling, insulting or derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or email address, + without their explicit permission +* Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Enforcement Responsibilities + +Community leaders are responsible for clarifying and enforcing our standards of +acceptable behavior and will take appropriate and fair corrective action in +response to any behavior that they deem inappropriate, threatening, offensive, +or harmful. + +Community leaders have the right and responsibility to remove, edit, or reject +comments, commits, code, wiki edits, issues, and other contributions that are +not aligned to this Code of Conduct, and will communicate reasons for moderation +decisions when appropriate. + +## Scope + +This Code of Conduct applies within all community spaces, and also applies when +an individual is officially representing the community in public spaces. 
+Examples of representing our community include using an official e-mail address, +posting via an official social media account, or acting as an appointed +representative at an online or offline event. + +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported to the community leaders responsible for enforcement at conduct@maxbachmann.de. +All complaints will be reviewed and investigated promptly and fairly. + +All community leaders are obligated to respect the privacy and security of the +reporter of any incident. + +## Enforcement Guidelines + +Community leaders will follow these Community Impact Guidelines in determining +the consequences for any action they deem in violation of this Code of Conduct: + +### 1. Correction + +**Community Impact**: Use of inappropriate language or other behavior deemed +unprofessional or unwelcome in the community. + +**Consequence**: A private, written warning from community leaders, providing +clarity around the nature of the violation and an explanation of why the +behavior was inappropriate. A public apology may be requested. + +### 2. Warning + +**Community Impact**: A violation through a single incident or series of +actions. + +**Consequence**: A warning with consequences for continued behavior. No +interaction with the people involved, including unsolicited interaction with +those enforcing the Code of Conduct, for a specified period of time. This +includes avoiding interactions in community spaces as well as external channels +like social media. Violating these terms may lead to a temporary or permanent +ban. + +### 3. Temporary Ban + +**Community Impact**: A serious violation of community standards, including +sustained inappropriate behavior. + +**Consequence**: A temporary ban from any sort of interaction or public +communication with the community for a specified period of time. No public or +private interaction with the people involved, including unsolicited interaction +with those enforcing the Code of Conduct, is allowed during this period. +Violating these terms may lead to a permanent ban. + +### 4. Permanent Ban + +**Community Impact**: Demonstrating a pattern of violation of community +standards, including sustained inappropriate behavior, harassment of an +individual, or aggression toward or disparagement of classes of individuals. + +**Consequence**: A permanent ban from any sort of public interaction within the +community. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], +version 2.1, available at +[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1]. + +Community Impact Guidelines were inspired by +[Mozilla's code of conduct enforcement ladder][Mozilla CoC]. + +For answers to common questions about this code of conduct, see the FAQ at +[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at +[https://www.contributor-covenant.org/translations][translations]. 
+ +[homepage]: https://www.contributor-covenant.org +[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html +[Mozilla CoC]: https://github.com/mozilla/diversity +[FAQ]: https://www.contributor-covenant.org/faq +[translations]: https://www.contributor-covenant.org/translations diff --git a/docs/Contributing.rst b/docs/Contributing/index.rst similarity index 100% rename from docs/Contributing.rst rename to docs/Contributing/index.rst diff --git a/docs/License.rst b/docs/License.rst index d9adef7..b1dc9ba 100644 --- a/docs/License.rst +++ b/docs/License.rst @@ -1,31 +1,9 @@ License ======= -RapidFuzz is licensed under the MIT license since I believe that everyone should be able to use it without being forced to adopt the GPL license. Thats why the library is based on an older version of fuzzywuzzy that was MIT licensed as well. -This old version of fuzzywuzzy can be found `here `__. +RapidFuzz is free and open-source software licensed under the MIT license. MIT License ----------- -.. code-block:: text - - Copyright © 2021 Max Bachmann - Copyright © 2011 Adam Cohen - - Permission is hereby granted, free of charge, to any person obtaining a copy - of this software and associated documentation files (the "Software"), to - deal in the Software without restriction, including without limitation the - rights to use, copy, modify, merge, publish, distribute, sublicense, and/or - sell copies of the Software, and to permit persons to whom the Software is - furnished to do so, subject to the following conditions: - - The above copyright notice and this permission notice shall be included in - all copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE - AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS - IN THE SOFTWARE. +.. literalinclude:: ../LICENSE diff --git a/docs/References.rst b/docs/References.rst new file mode 100644 index 0000000..3d0c6fb --- /dev/null +++ b/docs/References.rst @@ -0,0 +1,4 @@ +References +========== + +.. bibliography:: diff --git a/docs/Usage/distance/Levenshtein.rst b/docs/Usage/distance/Levenshtein.rst index fdac952..d59a4ac 100644 --- a/docs/Usage/distance/Levenshtein.rst +++ b/docs/Usage/distance/Levenshtein.rst @@ -1,6 +1,16 @@ Levenshtein ----------- +.. automodule:: rapidfuzz.distance.Levenshtein + +This implementation supports the use of different weights for +Insertion/Deletion/Substitution. The uniform Levenshtein distance refers to ``weights=(1,1,1)`` +and the Indel distance refers to ``weights=(1,1,2)``. All other weights are referred to +as the generic Levenshtein distance. + +Functions +^^^^^^^^^ + distance ~~~~~~~~ .. autofunction:: rapidfuzz.distance.Levenshtein.distance @@ -23,4 +33,83 @@ editops opcodes ~~~~~~~ -.. autofunction:: rapidfuzz.distance.Levenshtein.opcodes \ No newline at end of file +.. autofunction:: rapidfuzz.distance.Levenshtein.opcodes + + +Performance +^^^^^^^^^^^ +The Levenshtein module uses different implementations depending on the weights, +which leads to different performance characteristics. The following sections +show the performance for the different possible weights.
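+
+As a quick reference for the following sections, here is a minimal usage sketch
+of how the ``weights`` parameter selects between these variants (the example
+strings and expected results are purely illustrative; the ``score_cutoff``
+behaviour follows the description in the function documentation):
+
+.. code-block:: python
+
+    from rapidfuzz.distance import Levenshtein
+
+    # uniform Levenshtein distance: weights=(1, 1, 1) is the default
+    Levenshtein.distance("lewenstein", "levenshtein")                     # 2
+
+    # Indel distance: a substitution costs as much as insertion + deletion
+    Levenshtein.distance("lewenstein", "levenshtein", weights=(1, 1, 2))  # 3
+
+    # distances above score_cutoff are reported as score_cutoff + 1
+    Levenshtein.distance("aaaa", "bbbb", score_cutoff=2)                  # 3
+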
+ +Uniform +~~~~~~~ +The following image shows a benchmark of the uniform Levenshtein distance in +multiple Python libraries. All of them are implemented either in C/C++ or Cython. +The graph shows that python-Levenshtein is the only library with a time +complexity of ``O(NM)``, while all other libraries have a time complexity of +``O([N/64]M)``. Especially for long strings, RapidFuzz is a lot faster than +all the other tested libraries. + +.. image:: img/uniform_levenshtein.svg + :align: center + + +Indel +~~~~~ +The following image shows a benchmark of the Indel distance in RapidFuzz +and python-Levenshtein. Similar to the normal Levenshtein distance, +python-Levenshtein uses an implementation with a time complexity of ``O(NM)``, +while RapidFuzz has a time complexity of ``O([N/64]M)``. + +.. image:: img/indel_levenshtein.svg + :align: center + + +Implementation Notes +^^^^^^^^^^^^^^^^^^^^ +Depending on the input parameters, different optimized implementations are used +to improve performance. These implementations are described in the following +sections. + +Uniform +~~~~~~~ +The implementation for the uniform Levenshtein distance has a worst-case +performance of ``O([N/64]M)``. It uses the following optimized implementations: + +- If score_cutoff is 0, the similarity can be calculated using a direct comparison, + since no difference between the strings is allowed. The time complexity of + this algorithm is ``O(N)``. + +- A common prefix/suffix of the two compared strings does not affect + the Levenshtein distance, so the affix is removed before calculating the + similarity. + +- If score_cutoff is ≤ 3, the mbleven algorithm is used. This algorithm + checks all possible edit operations that are possible under + the threshold `score_cutoff`. The time complexity of this algorithm is ``O(N)``. + +- If the length of the shorter string is ≤ 64 after removing the common affix, + Hyyrös' algorithm is used, which calculates the Levenshtein distance in + parallel. The algorithm is described by :cite:t:`2003:hyrroe`. The time complexity of this + algorithm is ``O(N)``. + +- If the length of the shorter string is ≥ 64 after removing the common affix, + a blockwise implementation of Hyyrös' algorithm is used, which calculates + the Levenshtein distance in parallel (64 characters at a time). + The algorithm is described by :cite:t:`2003:hyrroe`. The time complexity of this + algorithm is ``O([N/64]M)``. + + +Indel +~~~~~ +The Indel distance is available as a standalone implementation. Further details +can be found `here `__. + + +Generic +~~~~~~~ +The implementation for other weights is based on Wagner-Fischer. +It has a time complexity of ``O(N * M)`` and a memory usage of ``O(N)``. +Further details can be found in :cite:t:`1974:WagFisch`. + diff --git a/docs/Usage/distance/index.rst b/docs/Usage/distance/index.rst index 12d1162..e080afd 100644 --- a/docs/Usage/distance/index.rst +++ b/docs/Usage/distance/index.rst @@ -2,12 +2,12 @@ distance ======== Editops --------------------------- +------- .. autoclass:: rapidfuzz.distance.Editops :members: Opcodes --------------------------- +------- .. autoclass:: rapidfuzz.distance.Opcodes :members: diff --git a/docs/changelog_link.rst b/docs/changelog_link.rst new file mode 100644 index 0000000..05d3e29 --- /dev/null +++ b/docs/changelog_link.rst @@ -0,0 +1,2 @@ + +..
mdinclude:: ../CHANGELOG.md \ No newline at end of file diff --git a/docs/conf.py b/docs/conf.py index 7eddbba..7c306f4 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -31,8 +31,9 @@ release = '2.0.0' # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ - 'sphinx.ext.autodoc', 'sphinx.ext.napoleon' + 'sphinx.ext.autodoc', 'sphinx.ext.napoleon', 'sphinxcontrib.bibtex', 'm2r2' ] +bibtex_bibfiles = ['refs.bib'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] diff --git a/docs/index.rst b/docs/index.rst index 34fd81d..aa57197 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -1,29 +1,18 @@ -Welcome to RapidFuzz's documentation! -===================================== +RapidFuzz +========= .. toctree:: - :maxdepth: 2 + :hidden: Installation - -.. toctree:: - :maxdepth: 3 - Usage/index .. toctree:: - :maxdepth: 1 - - Contributing - -.. toctree:: - :maxdepth: 2 + :caption: Development + :hidden: + Contributing/index + changelog_link License - -Indices and tables -================== - -* :ref:`genindex` -* :ref:`modindex` -* :ref:`search` + References + GitHub Repository \ No newline at end of file diff --git a/docs/refs.bib b/docs/refs.bib new file mode 100644 index 0000000..3b8a34a --- /dev/null +++ b/docs/refs.bib @@ -0,0 +1,39 @@ +@article{2003:hyrroe, + author = {Hyyr\"{o}, Heikki}, + title = {A Bit-Vector Algorithm for Computing Levenshtein and Damerau Edit Distances}, + year = {2003}, + issue_date = {Spring 2003}, + publisher = {Publishing Association Nordic Journal of Computing}, + address = {FIN}, + volume = {10}, + number = {1}, + issn = {1236-6064}, + abstract = {The edit distance between strings A and B is defined as the minimum number of edit operations needed in converting A into B or vice versa. The Levenshtein edit distance allows three types of operations: an insertion, a deletion or a substitution of a character. The Damerau edit distance allows the previous three plus in addition a transposition between two adjacent characters. To our best knowledge the best current practical algorithms for computing these edit distances run in time O(dm) and O(⌈m/w⌉(n + σ)), where d is the edit distance between the two strings, m and n are their lengths (m ≤ n), w is the computer word size and σ is the size of the alphabet. In this paper we present an algorithm that runs in time O(⌈d/w⌉m + ⌈n/w⌉σ) or O(⌈d/w⌉n + ⌈m/w⌉σ). The structure of the algorithm is such, that in practice it is mostly suitable for testing whether the edit distance between two strings is within some pre-determined error threshold. We also present some initial test results with thresholded edit distance computation. In them our algorithm works faster than the original algorithm of Myers.}, + journal = {Nordic J. of Computing}, + month = {mar}, + pages = {29–39}, + numpages = {11}, + keywords = {bit-parallelism, approximate string matching, Levenshtein edit distance, Damerau edit distance} +} + +@article{1974:WagFisch, + author = {Wagner, Robert A. and Fischer, Michael J.}, + title = {The String-to-String Correction Problem}, + year = {1974}, + issue_date = {Jan. 
1974}, + publisher = {Association for Computing Machinery}, + address = {New York, NY, USA}, + volume = {21}, + number = {1}, + issn = {0004-5411}, + url = {https://doi.org/10.1145/321796.321811}, + doi = {10.1145/321796.321811}, + abstract = {The string-to-string correction problem is to determine the distance between two strings as measured by the minimum cost sequence of “edit operations” needed to change the one string into the other. The edit operations investigated allow changing one symbol of a string into another single symbol, deleting one symbol from a string, or inserting a single symbol into a string. An algorithm is presented which solves this problem in time proportional to the product of the lengths of the two strings. Possible applications are to the problems of automatic spelling correction and determining the longest subsequence of characters common to two strings.}, + journal = {J. ACM}, + month = {jan}, + pages = {168–173}, + numpages = {6} +} + + + diff --git a/docs/requirements.txt b/docs/requirements.txt new file mode 100644 index 0000000..6b1c4dd --- /dev/null +++ b/docs/requirements.txt @@ -0,0 +1,5 @@ +Sphinx +furo +numpy +sphinxcontrib-bibtex +m2r2 \ No newline at end of file diff --git a/src/cython/distance/Indel.pyx b/src/cython/distance/Indel.pyx index 9e4f397..71de593 100644 --- a/src/cython/distance/Indel.pyx +++ b/src/cython/distance/Indel.pyx @@ -81,11 +81,6 @@ def distance(s1, s2, *, processor=None, score_cutoff=None): distance : int distance between s1 and s2 - Raises - ------ - ValueError - If unsupported weights are provided a ValueError is thrown - Notes ----- The following implementation is used with a worst-case performance of ``O([N/64]M)``. @@ -186,11 +181,6 @@ def similarity(s1, s2, *, processor=None, score_cutoff=None): similarity : int similarity between s1 and s2 - Raises - ------ - ValueError - If unsupported weights are provided a ValueError is thrown - See Also -------- Indel.distance : Indel distance diff --git a/src/cython/distance/Levenshtein.pyx b/src/cython/distance/Levenshtein.pyx index 72801d3..ac40c86 100644 --- a/src/cython/distance/Levenshtein.pyx +++ b/src/cython/distance/Levenshtein.pyx @@ -1,6 +1,13 @@ # distutils: language=c++ # cython: language_level=3, binding=True, linetrace=True +""" +The Levenshtein (edit) distance is a string metric to measure the +difference between two strings/sequences s1 and s2. +It's defined as the minimum number of insertions, deletions or +substitutions required to transform s1 into s2. +""" + from _initialize import Editops from _initialize cimport Editops, RfEditops from array import array @@ -95,114 +102,6 @@ def distance(s1, s2, *, weights=(1,1,1), processor=None, score_cutoff=None): ValueError If unsupported weights are provided a ValueError is thrown - Notes - ----- - Depending on the input parameters different optimized implementation are used - to improve the performance. - - Insertion = Deletion = Substitution: - This is known as uniform Levenshtein distance and is the distance most commonly - referred to as Levenshtein distance. The following implementation is used - with a worst-case performance of ``O([N/64]M)``. - - - if max is 0 the similarity can be calculated using a direct comparision, - since no difference between the strings is allowed. The time complexity of - this algorithm is ``O(N)``. - - - A common prefix/suffix of the two compared strings does not affect - the Levenshtein distance, so the affix is removed before calculating the - similarity. 
- - - If max is ≤ 3 the mbleven algorithm is used. This algorithm - checks all possible edit operations that are possible under - the threshold `max`. The time complexity of this algorithm is ``O(N)``. - - - If the length of the shorter string is ≤ 64 after removing the common affix - Hyyrös' algorithm is used, which calculates the Levenshtein distance in - parallel. The algorithm is described by [1]_. The time complexity of this - algorithm is ``O(N)``. - - - If the length of the shorter string is ≥ 64 after removing the common affix - a blockwise implementation of Myers' algorithm is used, which calculates - the Levenshtein distance in parallel (64 characters at a time). - The algorithm is described by [3]_. The time complexity of this - algorithm is ``O([N/64]M)``. - - The following image shows a benchmark of the Levenshtein distance in multiple - Python libraries. All of them are implemented either in C/C++ or Cython. - The graph shows, that python-Levenshtein is the only library with a time - complexity of ``O(NM)``, while all other libraries have a time complexity of - ``O([N/64]M)``. Especially for long strings RapidFuzz is a lot faster than - all the other tested libraries. - - .. image:: img/uniform_levenshtein.svg - - - Insertion = Deletion, Substitution >= Insertion + Deletion: - Since every Substitution can be performed as Insertion + Deletion, this variant - of the Levenshtein distance only uses Insertions and Deletions. Therefore this - variant is often referred to as InDel-Distance. The following implementation - is used with a worst-case performance of ``O([N/64]M)``. - - - if max is 0 the similarity can be calculated using a direct comparision, - since no difference between the strings is allowed. The time complexity of - this algorithm is ``O(N)``. - - - if max is 1 and the two strings have a similar length, the similarity can be - calculated using a direct comparision aswell, since a substitution would cause - a edit distance higher than max. The time complexity of this algorithm - is ``O(N)``. - - - A common prefix/suffix of the two compared strings does not affect - the Levenshtein distance, so the affix is removed before calculating the - similarity. - - - If max is ≤ 4 the mbleven algorithm is used. This algorithm - checks all possible edit operations that are possible under - the threshold `max`. As a difference to the normal Levenshtein distance this - algorithm can even be used up to a threshold of 4 here, since the higher weight - of substitutions decreases the amount of possible edit operations. - The time complexity of this algorithm is ``O(N)``. - - - If the length of the shorter string is ≤ 64 after removing the common affix - Hyyrös' lcs algorithm is used, which calculates the Indel distance in - parallel. The algorithm is described by [4]_ and is extended with support - for UTF32 in this implementation. The time complexity of this - algorithm is ``O(N)``. - - - If the length of the shorter string is ≥ 64 after removing the common affix - a blockwise implementation of the Hyyrös' lcs algorithm is used, which calculates - the Levenshtein distance in parallel (64 characters at a time). - The algorithm is described by [4]_. The time complexity of this - algorithm is ``O([N/64]M)``. - - The following image shows a benchmark of the Indel distance in RapidFuzz - and python-Levenshtein. Similar to the normal Levenshtein distance - python-Levenshtein uses a implementation with a time complexity of ``O(NM)``, - while RapidFuzz has a time complexity of ``O([N/64]M)``. 
- - .. image:: img/indel_levenshtein.svg - - - Other weights: - The implementation for other weights is based on Wagner-Fischer. - It has a performance of ``O(N * M)`` and has a memory usage of ``O(N)``. - Further details can be found in [2]_. - - References - ---------- - .. [1] Hyyrö, Heikki. "A Bit-Vector Algorithm for Computing - Levenshtein and Damerau Edit Distances." - Nordic Journal of Computing, Volume 10 (2003): 29-39. - .. [2] Wagner, Robert & Fischer, Michael - "The String-to-String Correction Problem." - J. ACM. 21. (1974): 168-173 - .. [3] Myers, Gene. "A fast bit-vector algorithm for approximate - string matching based on dynamic programming." - Journal of the ACM (JACM) 46.3 (1999): 395-415. - .. [4] Hyyrö, Heikki. "Bit-Parallel LCS-length Computation Revisited" - Proc. 15th Australasian Workshop on Combinatorial Algorithms (AWOCA 2004). - Examples -------- Find the Levenshtein distance between two strings: @@ -411,7 +310,7 @@ def normalized_similarity(s1, s2, *, weights=(1,1,1), processor=None, score_cuto >>> Levenshtein.normalized_similarity("lewenstein", "levenshtein", weights=(1,1,2)) 0.85714285714285 - When a different processor is used s1 and s2 do not have to be strings + When a different processor is used s1 and s2 do not have to be strings >>> Levenshtein.normalized_similarity(["lewenstein"], ["levenshtein"], processor=lambda s: s[0]) 0.81818181818181