Rapid fuzzy string matching in Python and C++ using the Levenshtein Distance
Description • Installation • Usage • License
Description
RapidFuzz is a fast string matching library for Python and C++, which uses the string similarity calculations from FuzzyWuzzy. However, a couple of aspects set RapidFuzz apart from FuzzyWuzzy:
- It is MIT licensed, so it can be used with whichever license you choose for your project, while you are forced to adopt the GPL license when using FuzzyWuzzy
- It provides many string metrics, such as hamming or jaro_winkler, which are not included in FuzzyWuzzy
- It is mostly written in C++ and, on top of this, comes with many algorithmic improvements that make string matching even faster, while still providing the same results. For detailed benchmarks, check the documentation
- Fixes multiple bugs in the partial_ratio implementation
Requirements
- Python 3.6 or later
- On Windows the Visual C++ 2019 redistributable is required
Installation
There are several ways to install RapidFuzz. The recommended methods are to use either pip (the Python package manager) or conda (an open-source, cross-platform package manager).
with pip
RapidFuzz can be installed with pip the following way:
pip install rapidfuzz
There are pre-built binaries (wheels) of RapidFuzz for macOS (10.9 and later), Linux x86_64 and Windows. Wheels for armv6l (Raspberry Pi Zero) and armv7l (Raspberry Pi) are available on piwheels.
✖️ failure "ImportError: DLL load failed"
If you run into this error on Windows, the reason is most likely that the Visual C++ 2019 redistributable is not installed. It is required to locate the C++ libraries (the 2019 version includes the 2015, 2017 and 2019 versions).
with conda
RapidFuzz can be installed with conda:
conda install -c conda-forge rapidfuzz
from git
RapidFuzz can be installed directly from the source distribution by cloning the repository. This requires a C++14 capable compiler.
git clone --recursive https://github.com/maxbachmann/rapidfuzz.git
cd rapidfuzz
pip install .
Usage
Some simple functions are shown below. Complete documentation of all functions can be found here.
Scorers
Scorers in RapidFuzz can be found in the modules fuzz and string_metric.
Simple Ratio
> fuzz.ratio("this is a test", "this is a test!")
96.55171966552734
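For intuition, the value above can be reproduced with a short pure-Python sketch: fuzz.ratio corresponds to a normalized InDel distance (edits limited to insertions and deletions), scaled as 100 * (1 - distance / (len(a) + len(b))). The helper names below are illustrative, not part of the RapidFuzz API.

```python
def indel_distance(a: str, b: str) -> int:
    # InDel distance: minimum insertions + deletions to turn a into b,
    # computed with a classic dynamic-programming table (one row at a time).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            if ca == cb:
                cur.append(prev[j - 1])
            else:
                cur.append(1 + min(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def simple_ratio(a: str, b: str) -> float:
    # Normalize the distance to a 0-100 similarity score.
    total = len(a) + len(b)
    if total == 0:
        return 100.0
    return 100 * (1 - indel_distance(a, b) / total)

print(simple_ratio("this is a test", "this is a test!"))  # ~96.5517
```

The two strings differ by a single inserted character, so the distance is 1 and the score is 100 * (1 - 1/29) ≈ 96.55, matching the output above.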
Partial Ratio
> fuzz.partial_ratio("this is a test", "this is a test!")
100.0
Token Sort Ratio
> fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
90.90908813476562
> fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
100.0
Token Set Ratio
> fuzz.token_sort_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")
83.8709716796875
> fuzz.token_set_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")
100.0
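The idea behind token_sort_ratio can be sketched in a few lines of plain Python: split on whitespace, sort the tokens, rejoin, then apply a simple ratio. Here difflib's SequenceMatcher stands in for RapidFuzz's scorer, so exact scores differ; this is an illustration, not the library's implementation.

```python
from difflib import SequenceMatcher

def token_sort_ratio(a: str, b: str) -> float:
    # Sort the whitespace-separated tokens so that word order is ignored.
    a_sorted = " ".join(sorted(a.split()))
    b_sorted = " ".join(sorted(b.split()))
    # SequenceMatcher.ratio() returns 0..1; scale to 0..100 like RapidFuzz.
    return SequenceMatcher(None, a_sorted, b_sorted).ratio() * 100

print(token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear"))  # 100.0
```

After sorting, both strings become "a bear fuzzy was wuzzy", so the score is a perfect 100.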
Process
The process module makes it easy to compare a string to lists of strings. This is generally more performant than using the scorers directly from Python. Here are some examples of the usage of processors in RapidFuzz:
> from rapidfuzz import process, fuzz
> choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
> process.extract("new york jets", choices, scorer=fuzz.WRatio, limit=2)
[('New York Jets', 100, 1), ('New York Giants', 78.57142639160156, 2)]
> process.extractOne("cowboys", choices, scorer=fuzz.WRatio)
("Dallas Cowboys", 90, 3)
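The (choice, score, index) tuples returned above can be approximated with a plain-Python sketch of extractOne. difflib stands in for the scorer, and there is no early termination, whereas the real processor runs in C++ and skips work via score cutoffs; the function name here is illustrative.

```python
from difflib import SequenceMatcher

def extract_one(query, choices):
    # Score every choice and keep the best (choice, score, index) tuple.
    best = None
    for idx, choice in enumerate(choices):
        score = SequenceMatcher(None, query.lower(), choice.lower()).ratio() * 100
        if best is None or score > best[1]:
            best = (choice, score, idx)
    return best

choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
print(extract_one("cowboys", choices))
```

The exact score differs from fuzz.WRatio, but the best match and its index come out the same as in the example above.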
The full documentation of processors can be found here.
Benchmark
The following benchmark gives a quick performance comparison between RapidFuzz and FuzzyWuzzy. More detailed benchmarks for the string metrics can be found in the documentation. For this simple comparison I generated a list of 10,000 strings of length 10, which is compared to a sample of 100 elements from this list:
import random
import string

words = [
    ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(10))
    for _ in range(10_000)
]
samples = words[::len(words) // 100]
The first benchmark compares the performance of the scorers in FuzzyWuzzy and RapidFuzz when they are used directly from Python in the following way:
for sample in samples:
    for word in words:
        scorer(sample, word)
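A self-contained, runnable version of this benchmark loop is sketched below, with difflib's ratio standing in for a scorer and smaller element counts so it finishes quickly; the helper name and the reduced counts are illustrative.

```python
import random
import string
import time
from difflib import SequenceMatcher

def scorer(a, b):
    # Stand-in scorer; RapidFuzz or FuzzyWuzzy scorers would be used here.
    return SequenceMatcher(None, a, b).ratio() * 100

words = [
    "".join(random.choice(string.ascii_letters + string.digits) for _ in range(10))
    for _ in range(1_000)  # reduced from 10,000 to keep the sketch fast
]
samples = words[::len(words) // 10]  # 10 sample elements

start = time.perf_counter()
for sample in samples:
    for word in words:
        scorer(sample, word)
elapsed = time.perf_counter() - start
print(f"{len(samples) * len(words) / elapsed:.0f} comparisons per second")
```

Swapping the stand-in for fuzz.ratio from RapidFuzz or FuzzyWuzzy reproduces the comparison described here.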
The following graph shows how many elements are processed per second with each of the scorers. There are big performance differences between the scorers, but each of them is faster in RapidFuzz.
The second benchmark compares the performance when the scorers are used in combination with extractOne in the following way:
for sample in samples:
    extractOne(sample, words, scorer=scorer)
The following graph shows how many elements are processed per second with each of the scorers. In RapidFuzz, using scorers through processors like extractOne is a lot faster than using them directly. That's why processors should be used whenever possible.
License
RapidFuzz is licensed under the MIT license, since I believe everyone should be able to use it without being forced to adopt the GPL license. That's why the library is based on an older version of fuzzywuzzy that was MIT licensed as well. This old version of fuzzywuzzy can be found here.