Commit Graph

14031 Commits

Adriane Boyd bf0cdae8d4
Add token_splitter component (#6726)
* Add long_token_splitter component

Add a `long_token_splitter` component for use with transformer
pipelines. This component splits up long tokens like URLs into smaller
tokens. This is particularly relevant for pretrained pipelines with
`strided_spans`, since the user can't change the length of the span
`window` and may not wish to preprocess the input texts.

The `long_token_splitter` splits tokens that are at least
`long_token_length` characters long into smaller tokens of `split_length`
size.

Notes:

* Since this is intended for use as the first component in a pipeline,
the token splitter does not try to preserve any token annotation.
* API docs to come when the API is stable.

* Adjust API, add test

* Fix name in factory
2021-01-17 19:54:41 +08:00
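
A minimal usage sketch of the splitter described above. The factory name `token_splitter` and the config keys `min_length`/`split_length` are assumptions based on the adjusted API rather than the draft names in this commit.

```python
import spacy

# Sketch: add the splitter first in the pipeline so long tokens (e.g. URLs)
# are broken up before any transformer component sees them.
nlp = spacy.blank("en")
nlp.add_pipe(
    "token_splitter",
    first=True,
    config={"min_length": 25, "split_length": 10},  # assumed config keys
)
doc = nlp("Read https://example.com/a/very/long/path/that/never/seems/to/end")
print([token.text for token in doc])
```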
Adriane Boyd 185fc62f4d
Remove unused is_base_form for mk lemmatizer (#6743)
Remove unimplemented/incorrect is_base_form for Macedonian lemmatizer.
2021-01-17 09:41:35 +01:00
Adriane Boyd 43a752a2a0
Fix assertion in default get oracle sequence usage (#6738)
Remove assertion for default debug value in 
`get_oracle_sequence_from_state`.
2021-01-16 16:07:39 +01:00
Matthew Honnibal f0c696b4aa Fix failed merge of #6694 patch 2021-01-16 13:44:11 +11:00
Adriane Boyd c8b4370865
Add all strings from source models (#6736)
Add all strings from the source model when adding a pipe from a source
model.

Minor:

* Skip `disable=["vocab", "tokenizer"]` when loading a source model from
the config, since this doesn't do anything and is misleading.
2021-01-16 12:26:15 +11:00
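
A short sketch of sourcing a component, which is where the copied strings now matter; `en_core_web_sm` is only an example of an installed source package.

```python
import spacy

# Sketch: sourcing "ner" copies the trained component, and with this change
# all of the source model's vocabulary strings come along as well.
source_nlp = spacy.load("en_core_web_sm")  # assumed to be installed
nlp = spacy.blank("en")
nlp.add_pipe("ner", source=source_nlp)
```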
Adriane Boyd 9328dd5625
Handle unset token.morph in Morphologizer (#6704)
* Handle unset token.morph in Morphologizer

Handle unset `token.morph` in `Morphologizer.initialize` and
`Morphologizer.get_loss`. If both `token.morph` and `token.pos` are
unset, treat the annotation as missing rather than empty.

* Add token.has_morph()
2021-01-15 17:20:10 +01:00
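
A small sketch of the new `token.has_morph()` check, which distinguishes a missing morph annotation from an empty one.

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("An example")
print(doc[0].has_morph())  # False: no morph set yet, treated as missing
doc[0].set_morph("Definite=Ind|PronType=Art")
print(doc[0].has_morph())  # True
```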
Matthew Honnibal 7b3f0c6f1b
Questionable fix for parser training bug with misaligned sentences (#6694)
* Questionable fix for parser training bug with misaligned sentences

* Fix

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2021-01-15 14:18:24 +01:00
Ines Montani 330f9818c0
Merge pull request #6729 from explosion/chore/tidy-up 2021-01-15 13:27:59 +11:00
Ines Montani f9e4ac1283 Fix test 2021-01-15 12:51:02 +11:00
Ines Montani b0b743597c Tidy up and auto-format 2021-01-15 11:57:36 +11:00
Ines Montani e8a97a2bd6
Merge pull request #6720 from adrianeboyd/feature/improved-init-training-config-validation 2021-01-15 11:45:24 +11:00
Ines Montani 57369909c0
Merge pull request #6727 from adrianeboyd/chore/update-develop-from-master-rc3 2021-01-15 11:44:28 +11:00
Ines Montani 8ba5d88b4b
Merge pull request #6691 from svlandeg/feature/missing-dep 2021-01-15 11:43:36 +11:00
Adriane Boyd 681a6195f7 Validate seed and gpu_allocator manually 2021-01-14 16:57:57 +01:00
Adriane Boyd 0c936004d1 Merge remote-tracking branch 'upstream/master' into chore/update-develop-from-master-rc3 2021-01-14 11:49:58 +01:00
Matthew Honnibal 92310a5e26
Merge branch 'develop' into feature/missing-dep 2021-01-14 17:39:01 +11:00
Adriane Boyd e649242927
Prevent overlapping noun chunks for Spanish (#6712)
* Prevent overlapping noun chunks in Spanish noun chunk iterator
* Clean up similar code in Danish noun chunk iterator
2021-01-14 17:33:31 +11:00
Adriane Boyd 9957ed7897
Override language defaults for null token and URL match (#6705)
* Override language defaults for null token and URL match

When the serialized `token_match` or `url_match` is `None`, override the
language defaults to preserve `None` on deserialization.

* Fix fixtures in tests
2021-01-14 17:31:29 +11:00
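
A sketch of the round-trip behavior this commit preserves, assuming `url_match` is disabled directly on the tokenizer.

```python
import spacy

# With url_match explicitly set to None, deserialization should keep None
# instead of falling back to the language defaults.
nlp = spacy.blank("en")
nlp.tokenizer.url_match = None
tok_bytes = nlp.tokenizer.to_bytes()

nlp2 = spacy.blank("en")
nlp2.tokenizer.from_bytes(tok_bytes)
assert nlp2.tokenizer.url_match is None
```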
Matthew Honnibal f277bfdf0f
Add SpanGroup and Graph container types to represent arbitrary annotations (#6696)
* Draft out initial Spans data structure

* Initial span group commit

* Basic span group support on Doc

* Basic test for span group

* Compile span_group.pyx

* Draft addition of SpanGroup to DocBin

* Add deserialization for SpanGroup

* Add tests for serializing SpanGroup

* Fix serialization of SpanGroup

* Add EdgeC and GraphC structs

* Add draft Graph data structure

* Compile graph

* More work on Graph

* Update GraphC

* Upd graph

* Fix walk functions

* Let Graph take nodes and edges on construction

* Fix walking and getting

* Add graph tests

* Fix import

* Add module with the SpanGroups dict thingy

* Update test

* Rename 'span_groups' attribute

* Try to fix c++11 compilation

* Fix test

* Update DocBin

* Try to fix compilation

* Try to fix graph

* Improve SpanGroup docstrings

* Add doc.spans to documentation

* Fix serialization

* Tidy up and add docs

* Update docs [ci skip]

* Add SpanGroup.has_overlap

* WIP updated Graph API

* Start testing new Graph API

* Update Graph tests

* Update Graph

* Add docstring

Co-authored-by: Ines Montani <ines@ines.io>
2021-01-14 17:30:41 +11:00
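
A small sketch of the `doc.spans` container added here; the group name "orgs" and the labels are illustrative.

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")
doc = nlp("Welcome to the Bank of China.")
# doc.spans stores named groups of arbitrary, possibly overlapping spans.
doc.spans["orgs"] = [Span(doc, 3, 6, label="ORG"), Span(doc, 5, 6, label="GPE")]
print(doc.spans["orgs"].has_overlap)  # True
```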
Adriane Boyd 54e8e3c208
Update model-related dependencies (#6725)
* Update pymorphy2 error messages for Russian and Ukrainian
* Add pymorphy2 to pex
* Update spacy-pkuseg version for pex
2021-01-14 17:29:44 +11:00
Ines Montani 29c3ca7e34 Fix SVG integration [ci skip] 2021-01-14 13:33:41 +11:00
svlandeg fec9b81aa2 Merge remote-tracking branch 'upstream/develop' into feature/missing-dep 2021-01-13 17:46:12 +01:00
svlandeg ed53bb979d cleanup 2021-01-13 14:20:05 +01:00
svlandeg 86a4e316b8 fix sent_starts 2021-01-13 13:47:25 +01:00
Antonio Miras b4bd8f347a

spaCy Universe: New project; SpacyDotNet (#6702)
* Universe: SpacyDotNet, a .NET Core spaCy wrapper

* Signed contributor agreement

Co-authored-by: Antonio Miras <antonio@amiras.net>
2021-01-13 12:47:30 +11:00
Ines Montani 31a92b28ae
Merge pull request #6715 from adrianeboyd/feature/before-after-init-callbacks
Add initialize.before_init and after_init callbacks
2021-01-13 12:17:00 +11:00
Ines Montani 97d5a7ba99 Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2021-01-13 12:03:02 +11:00
Ines Montani 8d6448ccf7 Add config resolver test 2021-01-13 12:02:59 +11:00
svlandeg 232e953b14 pytest.approx with absolute eps 2021-01-12 20:32:57 +01:00
svlandeg 5b598bd1d5 formatting 2021-01-12 17:28:41 +01:00
svlandeg a581d82f33 introduce token.has_head and refer to MISSING_DEP_ (WIP) 2021-01-12 17:17:06 +01:00
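
A tentative sketch of the `token.has_head()` check being introduced, assuming it reports a missing head annotation on an unparsed doc.

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("A short example")
# No parser has run, so the head annotation is missing rather than empty.
print(doc[0].has_head())  # False
```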
Adriane Boyd 5fb8b7037a Expand initialize/training config validation
Validate both `[initialize]` and `[training]` in `debug data` and
`nlp.initialize()` with separate config validation error blocks that
indicate which block of the config is being validated.
2021-01-12 17:17:00 +01:00
Adriane Boyd a45d89f09a Add initialize.before_init and after_init callbacks
Add `initialize.before_init` and `initialize.after_init` callbacks to
the config. The `initialize.before_init` callback is a place to
implement one-time tokenizer customizations that are then saved with the
model.
2021-01-12 13:07:44 +01:00
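
A sketch of how such a callback might be registered and wired into the config; the callback name and the special case are illustrative only.

```python
import spacy
from spacy.language import Language

@spacy.registry.callbacks("customize_tokenizer")
def make_customize_tokenizer():
    def customize_tokenizer(nlp: Language) -> None:
        # One-time tokenizer customization applied during nlp.initialize()
        # and then saved out with the trained pipeline.
        nlp.tokenizer.add_special_case("lemme", [{"ORTH": "lem"}, {"ORTH": "me"}])
    return customize_tokenizer

# Referenced from the training config as:
#
# [initialize.before_init]
# @callbacks = "customize_tokenizer"
```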
Adriane Boyd ad43cbb042
Sync missing and misaligned values in Tagger loss (#6689)
Use `None` for both missing and misaligned annotation in
`Tagger.get_loss`, reverting to the default missing value in the loss
function.
2021-01-10 11:30:37 +11:00
Matthew Honnibal c04bab6bae
Fix train loop to avoid swallowing tracebacks (#6693)
* Avoid swallowing tracebacks in train loop

* Format

* Handle first
2021-01-09 08:25:47 +08:00
Sofie Van Landeghem a612a5ba3f
fix small typos (#6698) 2021-01-08 09:39:47 +01:00
Alex Combessie 9cc880014c
Remove questionable French stopwords (#6310)
* Remove questionable French stopwords

* Create alexcombessie.md
2021-01-08 11:36:22 +11:00
Cristiana S Parada 7a0222f260
Update stop_words.py in Portuguese (a,o,e) (#6345)
* Update stop_words.py

Added three additional stopwords: "a" and "o", which mean "the", and "e", which means "and"

* Create cristianasp.md

* zero edit to push CI

Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
2021-01-08 11:35:38 +11:00
Lorena Ciutacu f11002f1f1
add new Romanian stopwords (#6621)
* add contributor agreement

* update ro stopwords list

* add new stopwords
2021-01-08 11:34:47 +11:00
svlandeg dd12c6c8fd allow missing information in deps and heads annotations 2021-01-07 19:10:32 +01:00
svlandeg 1abeca90a6 refer to _parser_internals.nonproj.DELIMITER 2021-01-07 18:58:13 +01:00
Yohei Tamura 411c842a71
convert tuple to list, because the type mismatches (#6625) 2021-01-07 16:42:12 +11:00
Sofie Van Landeghem 75d9019343
Fix types of Tok2Vec encoding architectures (#6442)
* fix TorchBiLSTMEncoder documentation

* ensure the types of the encoding Tok2vec layers are correct

* update references from v1 to v2 for the new architectures
2021-01-07 16:39:27 +11:00
ophelielacroix e3222fdec9
Add (noun chunks) syntax iterators for Danish (#6246)
* add syntax iterators for danish

* add test noun chunks for danish syntax iterators

* add contributor agreement

* update da syntax iterators to remove nested chunks

* add tests for da noun chunks

* Fix test

* add missing import
* fix example

* Prevent overlapping noun chunks

Prevent overlapping noun chunks by tracking the end index of the
previous noun chunk span.

Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
2021-01-07 16:33:00 +11:00
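
An illustrative helper showing the overlap-prevention pattern described above (track the end of the previous chunk and skip candidates that start before it); this is not spaCy's actual iterator code.

```python
def non_overlapping(chunks):
    """Keep candidate chunks that don't overlap the previously kept one.

    chunks: (start, end) token-index pairs with exclusive end, in document order.
    """
    prev_end = 0
    kept = []
    for start, end in chunks:
        if start < prev_end:  # would overlap the previous chunk
            continue
        prev_end = end
        kept.append((start, end))
    return kept

print(non_overlapping([(0, 3), (2, 5), (5, 7)]))  # [(0, 3), (5, 7)]
```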
Sofie Van Landeghem 8c1a23209f
Getting scores out of beam_parser (#6684)
* clean up of ner tests

* beam_parser tests

* implement get_beam_parses and scored_parses for the dep parser

* we don't have to add the parse if there are no arcs
2021-01-07 16:28:27 +11:00
Sofie Van Landeghem 3983bc6b1e
Fix Transformer width in TextCatEnsemble (#6431)
* add convenience method to determine tok2vec width in a model

* fix transformer tok2vec dimensions in TextCatEnsemble architecture

* init function should not be nested to avoid pickle issues
2021-01-06 12:44:04 +01:00
Sofie Van Landeghem 402dbc5bae
Getting scores out of beam_ner (#6575)
* small fixes and formatting

* bring test_issue4313 up-to-date, currently fails

* formatting

* add get_beam_parses method back

* add scored_ents function

* delete tag map
2021-01-06 12:02:32 +01:00
Sofie Van Landeghem 82ae95267a
Docs for pretrain architectures (#6605)
* document pretraining architectures

* formatting

* bit more info

* small fixes
2021-01-06 16:12:30 +11:00
Sofie Van Landeghem 6f7e7d88b9
remove cause without apostrophe from norm exceptions (#6636) 2021-01-06 12:30:30 +08:00
Adriane Boyd bf9096437e
Set default lemmas in retokenizer (#6667)
Instead of unsetting lemmas on retokenized tokens, set the default
lemmas to:

* merge: concatenate any existing lemmas with `SPACY` preserved
* split: use the new `ORTH` values if lemmas were previously set,
  otherwise leave unset
2021-01-06 12:29:44 +08:00
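
A small sketch of the merge behavior, assuming lemmas were set beforehand (here by hand, standing in for a lemmatizer).

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("New York City")
for token in doc:
    token.lemma_ = token.text.lower()  # stand-in for a real lemmatizer
with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[0:3])
# The merged token's default lemma now concatenates the existing lemmas
# (with whitespace preserved) instead of being left unset.
print(doc[0].lemma_)  # "new york city"
```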