_depr_group_by.py      | Refactor around Word objects, adapting tests. Tests passing, except for string views. | 2014-08-23 19:55:06 +02:00
my_test.py             | Initial commit. Tests passing for punctuation handling. Need contractions, file transport, tokenize function, etc. | 2014-07-05 20:51:42 +02:00
sun.tokens             | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00
sun.txt                | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00
test_asciify.py        | Refactor to use tokens class. | 2014-09-10 18:27:44 +02:00
test_canon_case.py     | Add tests for canon_case | 2014-09-01 23:26:49 +02:00
test_contractions.py   | Fixed contraction tests. Need to correct problem with the way case stats and tag stats are supposed to work. | 2014-08-27 20:22:33 +02:00
test_detokenize.py     | Add detokenize method and test | 2014-10-18 18:07:29 +11:00
test_flag_features.py  | Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00
test_infix.py          | Test hyphenation etc | 2014-10-14 20:26:16 +11:00
test_is_punct.py       | Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00
test_lexeme_flags.py   | Add offsets to Tokens class. Some changes to interfaces, and reorganization of spacy.Lang | 2014-10-14 16:17:45 +11:00
test_non_sparse.py     | Add tests for non_sparse string transform | 2014-09-01 23:27:31 +02:00
test_only_punct.py     | Refactor tokenization, enable cache, and ensure we look up specials correctly even when there's confusing punctuation surrounding the token. | 2014-09-16 18:01:46 +02:00
test_orth.py           | Add offsets to Tokens class. Some changes to interfaces, and reorganization of spacy.Lang | 2014-10-14 16:17:45 +11:00
test_post_punct.py     | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00
test_pre_punct.py      | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00
test_rules.py          | Refactor spacy so that chunks return arrays of lexemes, so that there is properly one lexeme per word. | 2014-08-18 19:14:00 +02:00
test_shape.py          | Add tests for word shape features | 2014-09-01 23:26:17 +02:00
test_special_affix.py  | Refactor tokenization, enable cache, and ensure we look up specials correctly even when there's confusing punctuation surrounding the token. | 2014-09-16 18:01:46 +02:00
test_string_loading.py | Add extra tests | 2014-09-25 18:29:42 +02:00
test_surround_punct.py | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00
test_tokenizer.py      | Add offsets to Tokens class. Some changes to interfaces, and reorganization of spacy.Lang | 2014-10-14 16:17:45 +11:00
test_vocab.py          | Add offsets to Tokens class. Some changes to interfaces, and reorganization of spacy.Lang | 2014-10-14 16:17:45 +11:00
test_whitespace.py     | Have tokenizer emit tokens for whitespace other than single spaces | 2014-10-14 20:25:57 +11:00
test_wiki_sun.py       | Pass tests. Need to implement more feature functions. | 2014-08-30 20:36:06 +02:00
tokenizer.sed          | Working tokenization. en doesn't match PTB perfectly. Need to reorganize before adding more schemes. | 2014-07-07 01:15:59 +02:00