Name | Latest commit message | Commit time
data | … |
de | Add language data for German | 2016-09-25 15:44:45 +02:00
en | Move tokenizer data for German into spacy.de.language_data | 2016-09-25 15:37:33 +02:00
fi | … |
it | … |
munge | … |
serialize | * Whitespace | 2016-01-29 03:59:22 +01:00
syntax | Fix Issue #469: Incorrectly cased root label in noun chunk iterator | 2016-09-27 13:13:01 +02:00
tests | Add test for Issue #435 | 2016-09-27 13:52:38 +02:00
tokens | Allow entities to be set by Span, or by 4-tuple (with entity ID) | 2016-09-24 01:17:43 +02:00
zh | * Work on Chinese support | 2016-05-05 11:39:12 +02:00
__init__.pxd | … |
__init__.py | Add spacy.blank() method, that doesn't load data. Don't try to load data if path is falsey | 2016-09-26 11:07:46 +02:00
about.py | * Increment version | 2016-05-09 13:20:00 +02:00
attrs.pxd | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00
attrs.pyx | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00
cfile.pxd | … |
cfile.pyx | Handle pathlib.Path objects in CFile | 2016-09-24 22:01:46 +02:00
deprecated.py | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
download.py | Add parameter to download() for application to not exit if a Model exists. The default behavior is unchanged. | 2016-09-14 10:04:09 -04:00
gold.pxd | … |
gold.pyx | don't require read_json_file to expect particular annotations | 2016-05-02 15:29:30 +02:00
language.py | Temporarily patch handling of default templates for tagger. Need to move these to language_data. | 2016-09-27 13:21:28 +02:00
lemmatizer.py | Pass lemmatizer morphological features, so that rules are sensitive to base/inflected distinction, which is how the WordNet data is designed. See Issue #435 | 2016-09-27 13:52:11 +02:00
lexeme.pxd | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00
lexeme.pyx | Fix Issue #371: Lexeme objects were unhashable. | 2016-09-27 13:22:30 +02:00
matcher.pyx | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
morphology.pxd | … |
morphology.pyx | … |
multi_words.py | … |
orth.pxd | remove text-unidecode dependency | 2016-02-24 08:01:59 +01:00
orth.pyx | introduce lang field for LexemeC to hold language id | 2016-03-10 13:01:34 +01:00
parts_of_speech.pxd | … |
parts_of_speech.pyx | … |
scorer.py | * Accept punct_labels as an argument to the scorer | 2016-02-02 22:59:06 +01:00
strings.pxd | remove internal redundancy and overhead from StringStore | 2016-03-24 15:25:27 +01:00
strings.pyx | remove ujson as default non-dev dependency (still works as fallback if installed), because ujson doesn't ship wheels | 2016-04-12 11:28:07 +02:00
structs.pxd | Initial, limited support for quantified patterns in Matcher, and tracking of ent_id attribute in Token and Span. The quantifiers need a lot more testing, and there are some known problems. The main known problem is that the zero-plus and one-plus quantifiers won't work if a token can match both the quantified pattern expression AND the tail of the match. | 2016-09-21 14:54:55 +02:00
symbols.pxd | German noun chunk iterator now doesn't return tokens more than once | 2016-05-03 16:58:59 +02:00
symbols.pyx | German noun chunk iterator now doesn't return tokens more than once | 2016-05-03 16:58:59 +02:00
tagger.pxd | … |
tagger.pyx | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
tokenizer.pxd | Finish refactoring data loading | 2016-09-24 20:26:17 +02:00
tokenizer.pyx | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
typedefs.pxd | … |
typedefs.pyx | … |
util.py | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
vocab.pxd | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
vocab.pyx | Refactor so that the tokenizer data is read from Python data, rather than from disk | 2016-09-25 14:49:53 +02:00
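The __init__.py and structs.pxd entries above describe two user-facing additions: a spacy.blank() constructor that skips data loading, and quantified Matcher patterns. As a quick orientation, here is a minimal sketch of both ideas using the later, stabilised spaCy 2.x/3.x API; the call signatures at the time of these 2016 commits differed, and the structs.pxd commit message itself notes the quantifiers still had known problems back then.

```python
# Minimal sketch, assuming a current spaCy release (2.x/3.x), not the 2016-era API.
import spacy
from spacy.matcher import Matcher

# spacy.blank() builds a Language object without loading any model data.
nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)

# One-plus quantifier ("OP": "+"): one or more "very" tokens followed by "good".
pattern = [{"LOWER": "very", "OP": "+"}, {"LOWER": "good"}]
matcher.add("VERY_GOOD", [pattern])

doc = nlp("The food was very very good.")
for match_id, start, end in matcher(doc):
    # Prints every match, e.g. "VERY_GOOD very good" and "VERY_GOOD very very good".
    print(nlp.vocab.strings[match_id], doc[start:end].text)
```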