Generalize handling of tokenizer special cases (#4259)
* Generalize handling of tokenizer special cases
Handle tokenizer special cases more generally by using the Matcher
internally to match special cases after the affix/token_match
tokenization is complete.
Instead of only matching special cases while processing balanced or
nearly balanced prefixes and suffixes, this recognizes special cases in
a wider range of contexts (see the sketch below):
* Allows arbitrary numbers of prefixes/affixes around special cases
* Allows special cases separated by infixes
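As a minimal sketch of the new behavior (using the public `add_special_case` API; the exact tokenization shown is an expectation, not a quote from the test suite):

```python
from spacy.lang.en import English
from spacy.symbols import ORTH

nlp = English()
# ":>" is already among the built-in emoticon special cases; it is
# re-added here only to make the example self-contained.
nlp.tokenizer.add_special_case(":>", [{ORTH: ":>"}])

# The special case is now also recognized when affixes surround it,
# i.e. after the prefix "(" and suffix ")" are split off:
print([t.text for t in nlp("(:>)")])  # expected: ['(', ':>', ')']
```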
Existing tests/settings that couldn't be preserved as before:
* The emoticon '")' is no longer a supported special case
* The emoticon ':)' in "example:)" is a false positive again
When merged with #4258 (or the relevant cache bugfix), the affix and
token_match properties should be modified to flush and reload all
special cases to use the updated internal tokenization with the Matcher.
* Remove accidentally added test case
* Really remove accidentally added test
* Reload special cases when necessary
Reload special cases when affixes or token_match are modified. Skip
reloading during initialization.
* Update error code number
* Fix offset and whitespace in Matcher special cases
* Fix offset bugs when merging and splitting tokens
* Set final whitespace on final token in inserted special case
* Improve cache flushing in tokenizer
* Separate cache and specials memory (temporarily)
* Flush cache when adding special cases
* Repeated `self._cache = PreshMap()` and `self._specials = PreshMap()`
are necessary due to this bug:
https://github.com/explosion/preshed/issues/21
* Remove reinitialized PreshMaps on cache flush
* Update UD bin scripts
* Update imports for `bin/`
* Add all currently supported languages
* Update subtok merger for new Matcher validation
* Modify blinded check to look at tokens instead of lemmas (for corpora
with tokens but not lemmas like Telugu)
* Use special Matcher only for cases with affixes
* Reinsert specials cache checks during normal tokenization for special
cases as much as possible
* Additionally include specials cache checks while splitting on infixes
* Since the special Matcher needs consistent affix-only tokenization
for the special cases themselves, introduce the argument
`with_special_cases` to tokenize with or without specials cache checks
(see the sketch after this list)
* After normal tokenization, postprocess with special cases Matcher for
special cases containing affixes
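A rough sketch of the resulting control flow (the function names below are illustrative stand-ins, not the actual internal API, which lives in the Tokenizer's Cython internals):

```python
def affix_tokenize(text, with_special_cases=True):
    # Stand-in for the affix/token_match tokenization pass. The flag is
    # disabled when tokenizing the special cases themselves, which need
    # consistent affix-only tokenization.
    return text.split()

def retokenize_special_cases(tokens):
    # Stand-in for the Matcher-based special case retokenization that
    # catches special cases containing affixes.
    return tokens

def tokenize(text, with_special_cases=True):
    tokens = affix_tokenize(text, with_special_cases=with_special_cases)
    if with_special_cases:
        tokens = retokenize_special_cases(tokens)
    return tokens
```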
* Replace PhraseMatcher with Aho-Corasick
Replace PhraseMatcher with the Aho-Corasick algorithm over numpy arrays
of the hash values for the relevant attribute. The implementation is
based on FlashText.
The speed should be similar to the previous PhraseMatcher. It is now
possible to easily remove match IDs and matches don't go missing with
large keyword lists / vocabularies.
Fixes #4308.
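As a highly simplified sketch of the keyword-trie idea (plain Python dicts stand in for the Cython data structures, `TERMINAL` is an illustrative stand-in for the reserved terminal hash key, and matching restarts at each token as in FlashText rather than following failure links):

```python
TERMINAL = object()  # stand-in for the reserved terminal hash key

def add_keyword(trie, hashes, match_id):
    # Extend the trie along the sequence of attribute hash values.
    node = trie
    for h in hashes:
        node = node.setdefault(h, {})
    node.setdefault(TERMINAL, set()).add(match_id)

def find_matches(trie, doc_hashes):
    # Restart a trie walk at every token position.
    matches = []
    for start in range(len(doc_hashes)):
        node = trie
        for end in range(start, len(doc_hashes)):
            node = node.get(doc_hashes[end])
            if node is None:
                break
            for match_id in node.get(TERMINAL, ()):
                matches.append((match_id, start, end + 1))
    return matches
```

Removing a match ID then amounts to deleting its path through the trie, which is why match IDs can now be removed easily.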
* Restore support for pickling
* Fix internal keyword add/remove for numpy arrays
* Add test for #4248, clean up test
* Improve efficiency of special cases handling
* Use PhraseMatcher instead of Matcher
* Improve efficiency of merging/splitting special cases in document
* Process merge/splits in one pass without repeated token shifting
* Merge in place if no splits
* Update error message number
* Remove UD script modifications
Only used for timing/testing, should be a separate PR
* Remove final traces of UD script modifications
* Update UD bin scripts
* Update imports for `bin/`
* Add all currently supported languages
* Update subtok merger for new Matcher validation
* Modify blinded check to look at tokens instead of lemmas (for corpora
with tokens but not lemmas like Telugu)
* Add missing loop for match ID set in search loop
* Remove cruft in matching loop for partial matches
There was a bit of unnecessary code left over from FlashText in the
matching loop to handle partial token matches, which we don't have with
PhraseMatcher.
* Replace dict trie with MapStruct trie
* Fix how match ID hash is stored/added
* Update fix for match ID vocab
* Switch from map_get_unless_missing to map_get
* Switch from numpy array to Token.get_struct_attr
Access token attributes directly in Doc instead of making a copy of the
relevant values in a numpy array.
Add unsatisfactory warning for hash collision with reserved terminal
hash key. (Ideally it would change the reserved terminal hash and redo
the whole trie, but for now, I'm hoping there won't be collisions.)
* Restructure imports to export find_matches
* Implement full remove()
Remove unnecessary trie paths and free unused maps.
Parallel to Matcher, raise KeyError when attempting to remove a match ID
that has not been added.
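For example (using the spaCy v2.x `PhraseMatcher.add` signature):

```python
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher

nlp = English()
matcher = PhraseMatcher(nlp.vocab)
matcher.add("OBAMA", None, nlp("Barack Obama"))
matcher.remove("OBAMA")  # prunes the trie path and frees unused maps
matcher.remove("OBAMA")  # raises KeyError: the match ID is gone
```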
* Switch to PhraseMatcher.find_matches
* Switch to local cdef functions for span filtering
* Switch special case reload threshold to variable
Refer to a variable instead of a hard-coded threshold
* Move more of special case retokenize to cdef nogil
Move as much of the special case retokenization to nogil as possible.
* Rewrap sort as stdsort for OS X
* Rewrap stdsort with specific types
* Switch to qsort
* Fix merge
* Improve cmp functions
* Fix realloc
* Fix realloc again
* Initialize span struct while retokenizing
* Temporarily skip retokenizing
* Revert "Move more of special case retokenize to cdef nogil"
This reverts commit 0b7e52c797cd8ff1548f214bd4186ebb3a7ce8b1.
* Revert "Switch to qsort"
This reverts commit a98d71a942fc9bca531cf5eb05cf89fa88153b60.
* Fix specials check while caching
* Modify URL test with emoticons
The multiple suffix tests result in the emoticon `:>`, which is now
retokenized into one token as a special case after the suffixes are
split off.
* Refactor _apply_special_cases()
* Use cdef ints for span info used in multiple spots
* Modify _filter_special_spans() to prefer earlier
Parallel to #4414, modify _filter_special_spans() so that the earlier
span is preferred for overlapping spans of the same length.
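A minimal Python sketch of that filtering rule (the real code works on C structs; spans here are hypothetical `(start, end)` token offsets):

```python
def filter_special_spans(spans):
    # Prefer longer spans; among equal lengths, prefer the earlier one.
    spans = sorted(spans, key=lambda s: (s[1] - s[0], -s[0]), reverse=True)
    seen = set()
    kept = []
    for start, end in spans:
        if all(i not in seen for i in range(start, end)):
            kept.append((start, end))
            seen.update(range(start, end))
    return sorted(kept)

print(filter_special_spans([(0, 2), (1, 3), (2, 4)]))  # [(0, 2), (2, 4)]
```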
* Replace MatchStruct with Entity
Replace MatchStruct with Entity since the existing Entity struct is
nearly identical.
* Replace Entity with more general SpanC
* Replace MatchStruct with SpanC
* Add error in debug-data if no dev docs are available (see #4575)
* Update azure-pipelines.yml
* Revert "Update azure-pipelines.yml"
This reverts commit ed1060cf59e5895b5fe92ad5b894fd1078ec4c49.
* Use latest wasabi
* Reorganise install_requires
* add dframcy to universe.json (#4580)
* Update universe.json [ci skip]
* Fix multiprocessing for as_tuples=True (#4582)
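e.g. (a minimal sketch; `n_process` enables the multiprocessing that this fixes for `(text, context)` tuple input):

```python
from spacy.lang.en import English

nlp = English()
data = [
    ("Apple is looking at buying U.K. startup for $1 billion", {"id": 1}),
    ("London is a big city in the United Kingdom.", {"id": 2}),
]
for doc, context in nlp.pipe(data, as_tuples=True, n_process=2):
    print(context["id"], len(doc))
```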
* Fix conllu script (#4579)
* force extensions to avoid clash between example scripts
* fix arg order and default file encoding
* add example config for conllu script
* newline
* move extension definitions to main function
* a few more encoding fixes
* Add load_from_docbin example [ci skip]
TODO: upload the file somewhere
* Update README.md
* Add warnings about 3.8 (resolves #4593) [ci skip]
* Fixed typo: Added space between "recognize" and "various" (#4600)
* Fix DocBin.merge() example (#4599)
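Roughly (a hedged sketch of `DocBin.merge` usage, not the corrected docs example itself):

```python
from spacy.lang.en import English
from spacy.tokens import DocBin

nlp = English()
doc_bin = DocBin()
doc_bin.add(nlp("Apple is looking at buying U.K. startup."))
other = DocBin()
other.add(nlp("London is a big city."))
doc_bin.merge(other)  # doc_bin now holds both docs
assert len(doc_bin) == 2
```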
* Replace function registries with catalogue (#4584)
* Replace function registries with catalogue (see the usage sketch after this list)
* Update __init__.py
* Fix test
* Revert unrelated flag [ci skip]
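catalogue exposes registries roughly like this (the namespace and function below are illustrative, not spaCy's actual registry names):

```python
import catalogue

# Create a function registry under an illustrative namespace.
languages = catalogue.create("spacy", "languages")

@languages.register("custom_lang")
def create_lang():
    return "..."

assert languages.get("custom_lang") is create_lang
```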
* Bugfix/dep matcher issue 4590 (#4601)
* add contributor agreement for prilopes
* add test for issue #4590
* fix on_match params for DependencyMatcher (#4590)
* Minor updates to language example sentences (#4608)
* Add punctuation to Spanish example sentences
* Combine multilanguage examples for lang xx
* Add punctuation to nb examples
* Always realloc to a larger size
Avoid potential (unlikely) edge case and cymem error seen in #4604.
* Add error in debug-data if no dev docs are available (see #4575)
* Update debug-data for GoldCorpus / Example
* Ignore None label in misaligned NER data
"""
Example sentences to test spaCy and its language models .
>> > from spacy . lang . de . examples import sentences
>> > docs = nlp . pipe ( sentences )
"""
# combined examples from de/en/es/fr/it/nl/pl/pt/ru
sentences = [
" Die ganze Stadt ist ein Startup: Shenzhen ist das Silicon Valley für Hardware-Firmen " ,
" Wie deutsche Startups die Technologie vorantreiben wollen: Künstliche Intelligenz " ,
" Trend zum Urlaub in Deutschland beschert Gastwirten mehr Umsatz " ,
" Bundesanwaltschaft erhebt Anklage gegen mutmaßlichen Schweizer Spion " ,
" San Francisco erwägt Verbot von Lieferrobotern " ,
" Autonome Fahrzeuge verlagern Haftpflicht auf Hersteller " ,
" Wo bist du? " ,
" Was ist die Hauptstadt von Deutschland? " ,
" Apple is looking at buying U.K. startup for $1 billion " ,
" Autonomous cars shift insurance liability toward manufacturers " ,
" San Francisco considers banning sidewalk delivery robots " ,
" London is a big city in the United Kingdom. " ,
" Where are you? " ,
" Who is the president of France? " ,
" What is the capital of the United States? " ,
" When was Barack Obama born? " ,
" Apple está buscando comprar una startup del Reino Unido por mil millones de dólares. " ,
" Los coches autónomos delegan la responsabilidad del seguro en sus fabricantes. " ,
" San Francisco analiza prohibir los robots delivery. " ,
" Londres es una gran ciudad del Reino Unido. " ,
" El gato come pescado. " ,
" Veo al hombre con el telescopio. " ,
" La araña come moscas. " ,
" El pingüino incuba en su nido. " ,
" Apple cherche à acheter une start-up anglaise pour 1 milliard de dollars " ,
" Les voitures autonomes déplacent la responsabilité de l ' assurance vers les constructeurs " ,
" San Francisco envisage d ' interdire les robots coursiers sur les trottoirs " ,
" Londres est une grande ville du Royaume-Uni " ,
" L’ Italie choisit ArcelorMittal pour reprendre la plus grande aciérie d’ Europe " ,
" Apple lance HomePod parce qu ' il se sent menacé par l ' Echo d ' Amazon " ,
" La France ne devrait pas manquer d ' électricité cet été, même en cas de canicule " ,
" Nouvelles attaques de Trump contre le maire de Londres " ,
" Où es-tu ? " ,
" Qui est le président de la France ? " ,
" Où est la capitale des États-Unis ? " ,
" Quand est né Barack Obama ? " ,
" Apple vuole comprare una startup del Regno Unito per un miliardo di dollari " ,
" Le automobili a guida autonoma spostano la responsabilità assicurativa verso i produttori " ,
" San Francisco prevede di bandire i robot di consegna porta a porta " ,
" Londra è una grande città del Regno Unito. " ,
" Apple overweegt om voor 1 miljard een U.K. startup te kopen " ,
" Autonome auto ' s verschuiven de verzekeringverantwoordelijkheid naar producenten " ,
" San Francisco overweegt robots op voetpaden te verbieden " ,
" Londen is een grote stad in het Verenigd Koninkrijk " ,
" Poczuł przyjemną woń mocnej kawy. " ,
" Istnieje wiele dróg oddziaływania substancji psychoaktywnej na układ nerwowy. " ,
" Powitał mnie biało-czarny kot, płosząc siedzące na płocie trzy dorodne dudki. " ,
" Nowy abonament pod lupą Komisji Europejskiej " ,
" Czy w ciągu ostatnich 48 godzin spożyłeś leki zawierające paracetamol? " ,
" Kto ma ochotę zapoznać się z innymi niż w książkach przygodami Muminków i ich przyjaciół, temu polecam komiks Tove Jansson „Muminki i morze”. " ,
" Apple está querendo comprar uma startup do Reino Unido por 100 milhões de dólares. " ,
" Carros autônomos empurram a responsabilidade do seguro para os fabricantes.. " ,
" São Francisco considera banir os robôs de entrega que andam pelas calçadas. " ,
" Londres é a maior cidade do Reino Unido. " ,
# Translations from English:
" Apple рассматривает возможность покупки стартапа из Соединённого Королевства за $1 млрд " ,
" Беспилотные автомобили перекладывают страховую ответственность на производителя " ,
" В Сан-Франциско рассматривается возможность запрета роботов-курьеров, которые перемещаются по тротуару" ,
" Лондон — это большой город в Соединённом Королевстве " ,
# Native Russian sentences:
# Colloquial:
" Да, нет, наверное! " , # Typical polite refusal
" Обратите внимание на необыкновенную красоту этого города-героя Москвы, столицы нашей Родины! " , # From a tour guide speech
# Examples of Bookish Russian:
# Quote from "The Golden Calf"
" Рио-де-Жанейро — это моя мечта, и не смейте касаться её своими грязными лапами! " ,
# Quotes from "Ivan Vasilievich changes his occupation"
" Ты пошто боярыню обидел, смерд?!! " ,
" Оставь меня, старушка, я в печали! " ,
# Quotes from Dostoevsky:
" Уж коли я, такой же, как и ты, человек грешный, над тобой умилился и пожалел тебя, кольми паче б о г " ,
" В мечтах я нередко, говорит, доходил до страстных помыслов о служении человечеству и может быть действительно пошел бы на крест за людей, если б это вдруг как-нибудь потребовалось, а между тем я двух дней не в состоянии прожить ни с кем в одной комнате, о чем знаю из опыта" ,
" Зато всегда так происходило, что чем более я ненавидел людей в частности, тем пламеннее становилась любовь моя к человечеству вообще " ,
# Quotes from Chekhov:
" Ненужные дела и разговоры всё о б одном отхватывают на свою долю лучшую часть времени, лучшие силы, и в конце концов остается какая-то куцая, бескрылая жизнь, какая-то чепуха, и уйти и бежать нельзя, точно сидишь в сумасшедшем доме или в арестантских ротах! " ,
# Quotes from Turgenev:
" Нравится тебе женщина, старайся добиться толку; а нельзя — ну, не надо, отвернись — земля не клином сошлась " ,
" Узенькое местечко, которое я занимаю, до того крохотно в сравнении с остальным пространством, где меня нет и где дела до меня нет; и часть времени, которую мне удастся прожить, так ничтожна перед вечностью, где меня не было и не будет... " ,
# Quotes from newspapers:
# Komsomolskaya Pravda:
" Н а заседании президиума правительства Москвы принято решение присвоить статус инвестиционного приоритетного проекта города Москвы киностудии Союзмультфильм" ,
" Глава Минобороны Сергей Шойгу заявил, что обстановка на этом стратегическом направлении требует непрерывного совершенствования боевого состава войск " ,
# Argumenty i Facty:
" Н а реплику лже-Говина — дескать, он (Волков) будет лучшим революционером — Стамп с энтузиазмом ответил: Непременно!" ,
]