
RapidFuzz

Rapid fuzzy string matching in Python and C++ using the Levenshtein Distance


Description | Installation | Usage | License


Description

RapidFuzz is a fast string matching library for Python and C++ that uses the string similarity calculations from FuzzyWuzzy. However, there are two aspects that set RapidFuzz apart from FuzzyWuzzy:

  1. It is MIT licensed, so it can be used with whichever license you choose for your project, whereas using FuzzyWuzzy forces you to adopt the GPL license.
  2. It is mostly written in C++ and, on top of that, comes with a lot of algorithmic improvements that make string matching even faster, while still providing the same results. More details on these performance improvements, in the form of benchmarks, can be found here

Requirements

Installation

There are several ways to install RapidFuzz. The recommended methods are to use either pip (the Python package manager) or conda (an open-source, cross-platform package manager).

with pip

RapidFuzz can be installed with pip as follows:

pip install rapidfuzz

There are pre-built binaries (wheels) of RapidFuzz for macOS (10.9 and later), Linux x86_64 and Windows. Wheels for armv6l (Raspberry Pi Zero) and armv7l (Raspberry Pi) are available on piwheels.

✖️   failure "ImportError: DLL load failed"

If you run into this error on Windows, the most likely reason is that the Visual C++ 2019 redistributable is not installed. It is required to load the C++ libraries (the 2019 version also covers the 2015 and 2017 versions).

with conda

RapidFuzz can be installed with conda:

conda install -c conda-forge rapidfuzz

from git

RapidFuzz can also be installed directly from source by cloning the repository. This requires a C++14-capable compiler.

git clone https://github.com/maxbachmann/rapidfuzz.git
cd rapidfuzz
pip install .

Usage

Some simple examples are shown below. Complete documentation of all functions can be found here.

Scorers

Scorers in RapidFuzz can be found in the modules fuzz and string_metric.

Simple Ratio

> fuzz.ratio("this is a test", "this is a test!")
96.55171966552734

Partial Ratio

> fuzz.partial_ratio("this is a test", "this is a test!")
100.0

Token Sort Ratio

> fuzz.ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
90.90908813476562
> fuzz.token_sort_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy was a bear")
100.0

Token Set Ratio

> fuzz.token_sort_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")
83.8709716796875
> fuzz.token_set_ratio("fuzzy was a bear", "fuzzy fuzzy was a bear")
100.0
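
The scorers above live in the fuzz module. The string_metric module exposes the underlying edit-distance metrics directly; the following is a minimal sketch, assuming the levenshtein and normalized_levenshtein functions shipped with RapidFuzz 1.x:

from rapidfuzz import string_metric

# plain Levenshtein distance (number of insertions, deletions and substitutions)
string_metric.levenshtein("lewenstein", "levenshtein")  # -> 2

# the same distance normalized to a similarity score between 0 and 100
string_metric.normalized_levenshtein("lewenstein", "levenshtein")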

Process

The process module makes it easy to compare a string against lists of strings. This is generally more performant than calling the scorers directly from Python. Here are some examples of the usage of processors in RapidFuzz:

> from rapidfuzz import process, fuzz
> choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
> process.extract("new york jets", choices, scorer=fuzz.WRatio, limit=2)
[('New York Jets', 100, 1), ('New York Giants', 78.57142639160156, 2)]
> process.extractOne("cowboys", choices, scorer=fuzz.WRatio)
("Dallas Cowboys", 90, 3)

The full documentation of processors can be found here.

Benchmark

I generated a list of 10,000 strings of length 10, which is compared against a sample of 100 elements taken from this list:

import random
import string

words = [
  ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(10))
  for _ in range(10_000)
]
samples = words[::len(words) // 100]

The first benchmark compares the performance of the scorers in FuzzyWuzzy and RapidFuzz when they are used directly from Python in the following way:

for sample in samples:
  for word in words:
    scorer(sample, word)

The following graph shows how many elements are processed per second with each of the scorers. There are big performance differences between the scorers, but each of them is faster in RapidFuzz.

scorer Benchmark

The second benchmark compares the performance when the scorers are used in combination with extractOne in the following way:

for sample in samples:
  extractOne(sample, words, scorer=scorer)

The following graph shows how many elements are processed per second with each of the scorers. In RapidFuzz, using scorers through processors like extractOne is a lot faster than calling them directly, which is why processors should be used whenever possible.

extractOne Benchmark
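
To reproduce a comparison like this on your own data, the standard library's timeit module is sufficient. A minimal sketch, reusing the setup from above; the absolute timings are machine-dependent and are not the numbers shown in the graphs:

import random
import string
import timeit

from rapidfuzz import process, fuzz

# same setup as the benchmark above
words = [
    ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(10))
    for _ in range(10_000)
]
samples = words[::len(words) // 100]

# scorers called directly from Python
direct = timeit.timeit(
    lambda: [fuzz.WRatio(sample, word) for sample in samples for word in words],
    number=1,
)

# the same comparisons routed through the processor
processed = timeit.timeit(
    lambda: [process.extractOne(sample, words, scorer=fuzz.WRatio) for sample in samples],
    number=1,
)

print(f"direct: {direct:.2f}s, extractOne: {processed:.2f}s")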

License

RapidFuzz is licensed under the MIT license, since I believe everyone should be able to use it without being forced to adopt the GPL license. That's why the library is based on an older version of FuzzyWuzzy that was MIT licensed as well. This old version of FuzzyWuzzy can be found here.