spaCy/spacy/tests
Latest commit: 81a47c01d8 by Matthew Honnibal, "Fix test for empty sentence string." 2016-09-27 19:21:22 +02:00
integration Update tests 2016-09-24 01:17:03 +02:00
matcher Update tests 2016-09-24 01:17:03 +02:00
morphology add __init__.py to empty package dirs 2016-03-14 11:28:03 +01:00
munge Mostly finished loading refactoring. Design is in place, but doesn't work yet. 2016-09-24 15:42:01 +02:00
parser Update tests 2016-09-24 01:17:03 +02:00
print add __init__.py to empty package dirs 2016-03-14 11:28:03 +01:00
serialize Finish refactoring data loading 2016-09-24 20:26:17 +02:00
spans * Add span.sent property, re Issue #366 2016-05-06 00:17:38 +02:00
tagger Add test for Issue #445, fixed in 3cb4d455d, with improved lemmatizer logic 2016-09-27 18:39:46 +02:00
tokenizer * Fix Issue #360: Tokenizer failed when the infix regex matched the start of the string while trying to tokenize multi-infix tokens. 2016-05-09 13:23:47 +02:00
tokens Fix test for empty sentence string. 2016-09-27 19:21:22 +02:00
unit German noun chunk iterator now doesn't return tokens more than once 2016-05-03 16:58:59 +02:00
vectors add __init__.py to empty package dirs 2016-03-14 11:28:03 +01:00
vocab Fix Issue #371: Lexeme objects were unhashable. 2016-09-27 13:22:30 +02:00
website add __init__.py to empty package dirs 2016-03-14 11:28:03 +01:00
__init__.py add __init__.py to empty package dirs 2016-03-14 11:28:03 +01:00
conftest.py Add .blank() method to Parser. Start housing default dep labels and entity types within the Defaults class. 2016-09-26 11:57:54 +02:00
prag_sbd.py
sun.tokens
sun.txt
test_docs.py
test_matcher.py Refactor so that the tokenizer data is read from Python data, rather than from disk 2016-09-25 14:49:53 +02:00
test_pickle.py
tokenizer.sed