spaCy/spacy/lang/hu
Latest commit 565e0eef73 by Adriane Boyd (2020-05-05 10:35:33 +02:00):
Add tokenizer option for token match with affixes

To fix slow tokenizer URL matching (#4374) and to allow `token_match` to
take priority over prefixes and suffixes by default, introduce a new
tokenizer option for a token-match pattern that is applied after prefixes
and suffixes but before infixes.
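As background, here is a minimal sketch of where such an option slots into a custom spaCy `Tokenizer`. In later spaCy releases this option surfaces as the `url_match` keyword argument; the sketch below assumes that name and uses an illustrative URL regex, so it is not the code from this commit.

```python
# Minimal sketch of a custom tokenizer using the affix-aware match option.
# The keyword name `url_match` and the URL pattern are assumptions for
# illustration, not taken from this commit.
import re

from spacy.lang.en import English
from spacy.tokenizer import Tokenizer
from spacy.util import compile_infix_regex, compile_prefix_regex, compile_suffix_regex

nlp = English()
prefix_re = compile_prefix_regex(nlp.Defaults.prefixes)
suffix_re = compile_suffix_regex(nlp.Defaults.suffixes)
infix_re = compile_infix_regex(nlp.Defaults.infixes)
url_re = re.compile(r"https?://\S+")  # illustrative URL pattern only

nlp.tokenizer = Tokenizer(
    nlp.vocab,
    rules=nlp.Defaults.tokenizer_exceptions,
    prefix_search=prefix_re.search,
    suffix_search=suffix_re.search,
    infix_finditer=infix_re.finditer,
    token_match=None,        # still checked before any affixes are stripped
    url_match=url_re.match,  # checked after prefixes/suffixes, before infixes
)

# The leading "(" and trailing ")," are split off as affixes first, and the
# remaining "https://example.com" is then matched as a single token.
print([t.text for t in nlp("See (https://example.com), it loads fast!")])
```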
__init__.py               Move lookup tables out of the core library (#4346)   2019-10-01 00:01:27 +02:00
examples.py
punctuation.py            Revert changes to token_match priority from #4374    2020-03-09 12:09:41 +01:00
stop_words.py
tokenizer_exceptions.py   Add tokenizer option for token match with affixes    2020-05-05 10:35:33 +02:00