spaCy/spacy
Matthew Honnibal f11030aadc * Remove out-dated TODO comment 2015-10-26 12:33:38 +11:00
de * Add rule to ensure ordinals are preserved as single tokens 2015-09-22 12:26:05 +10:00
en * Fix exception for python 2 2015-10-23 01:56:13 +02:00
fi * More work on language-generic parsing 2015-08-28 02:02:33 +02:00
it * Delete extra wordnets 2015-09-13 10:31:37 +10:00
munge * Fix Python3 problem in align_raw 2015-07-28 16:06:53 +02:00
serialize * Fix regression in packer 2015-07-27 21:53:38 +02:00
syntax * Fix whitespace attachment thing 2015-10-13 09:46:38 +02:00
tests * Add tests for pickling vectors, re: Issue #125 2015-10-26 12:31:05 +11:00
tokens added __repr__ that prints text in ipython for doc, token, and span objects 2015-10-21 14:11:46 +03:00
__init__.pxd
__init__.py
_ml.pxd * Very scrappy, likely buggy first-cut pickle implementation, to work on Issue #125: allow pickle for Apache Spark. The current implementation sends stuff to temp files, and does almost nothing to ensure all modifiable state is actually preserved. The Language() instance is a deep tree of extension objects, and if pickling during training, some of the C-data state is hard to preserve. 2015-10-13 13:44:41 +11:00
_ml.pyx * Fix Model.__reduce__ 2015-10-13 15:14:38 +11:00
_nn.py
_nn.pyx
_theano.pxd
_theano.pyx
attrs.pxd * Refactor symbols, so that frequency rank can be derived from the orth id of a word. 2015-10-13 13:44:39 +11:00
attrs.pyx * Map empty string to NULL_ATTR in attrs 2015-10-13 13:44:40 +11:00
cfile.pxd * Add cfile.pyx 2015-07-23 01:10:36 +02:00
cfile.pyx * Fix CFile for Python2 2015-07-25 22:55:53 +02:00
gold.pxd * Remove unused import 2015-07-25 18:11:16 +02:00
gold.pyx * Use io module instead of deprecated codecs module 2015-10-10 14:13:01 +11:00
language.py * Fix language.py for change in StringStore load API 2015-10-23 03:48:12 +11:00
lemmatizer.py Fixing encoding issue #4 2015-10-21 20:45:56 +03:00
lexeme.pxd * Fix ugly py_check_flag and py_set_flag functions in Lexeme 2015-09-15 13:06:18 +10:00
lexeme.pyx * Fix vectors bugs for OOV words 2015-09-22 02:10:25 +02:00
matcher.pyx * Fix Issue #118: Matcher behaves unpredictably when matches overlap. 2015-10-19 16:45:32 +11:00
morphology.pxd * Ensure Morphology can be pickled, to address Issue #125. 2015-10-13 13:44:41 +11:00
morphology.pyx * Ensure Morphology can be pickled, to address Issue #125. 2015-10-13 13:44:41 +11:00
multi_words.py
orth.pxd * Host IS_ flags in attrs.pxd, and add properties for them on Token and Lexeme objects 2015-07-26 16:37:16 +02:00
orth.pyx * Fix type declaration in asciied function 2015-10-09 13:46:57 +11:00
parts_of_speech.pxd * Fix parts_of_speech now that symbols list has been reformed 2015-10-13 13:44:40 +11:00
parts_of_speech.pyx * Fix NAMES list in spacy/parts_of_speech.pyx 2015-10-13 14:18:45 +11:00
scorer.py * Fix training under python3 2015-07-28 14:09:30 +02:00
senses.pxd
senses.pyx
strings.pxd * Replace UniStr, using unicode objects instead 2015-07-22 04:52:05 +02:00
strings.pyx * Load/dump strings with a json file, instead of the hacky strings file we were using. 2015-10-22 21:13:03 +11:00
structs.pxd * Remove const qualifier on LexemeC.repvec 2015-09-15 14:42:51 +10:00
symbols.pxd * Use lower case strings for dependency label names in symbols enum 2015-10-13 13:44:40 +11:00
symbols.pyx * Use lower case strings for dependency label names in symbols enum 2015-10-13 13:44:40 +11:00
tagger.pxd * Tagger training now working. Still need to test load/save of model. Morphology still broken. 2015-08-27 09:16:11 +02:00
tagger.pyx * Very scrappy, likely buggy first-cut pickle implementation, to work on Issue #125: allow pickle for Apache Spark. The current implementation sends stuff to temp files, and does almost nothing to ensure all modifiable state is actually preserved. The Language() instance is a deep tree of extension objects, and if pickling during training, some of the C-data state is hard to preserve. 2015-10-13 13:44:41 +11:00
tokenizer.pxd Add __reduce__ to Tokenizer so that English pickles. 2015-10-23 22:24:03 -07:00
tokenizer.pyx Add __reduce__ to Tokenizer so that English pickles. 2015-10-23 22:24:03 -07:00
typedefs.pxd * Fix type declarations for attr_t. Remove unused id_t. 2015-07-18 22:39:57 +02:00
typedefs.pyx
util.py Merge pull request #126 from tomtung/master 2015-10-10 14:14:57 +11:00
vocab.pxd * Start trying to pickle Vocab 2015-10-13 13:44:41 +11:00
vocab.pyx * Remove out-dated TODO comment 2015-10-26 12:33:38 +11:00
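Several entries above (tagger.pyx, morphology.pxd/pyx, tokenizer.pxd/pyx, vocab.pxd/pyx) track the Issue #125 work on making spaCy objects picklable so a Language() instance can be shipped to Apache Spark workers. The sketch below is not spaCy's implementation; it only illustrates the standard Python __reduce__ protocol that the "Add __reduce__ to Tokenizer so that English pickles" commits rely on, using a hypothetical stand-in class.

    import pickle

    class Tokenizer:
        """Hypothetical stand-in class, not spaCy's actual Tokenizer."""
        def __init__(self, vocab, rules):
            self.vocab = vocab
            self.rules = rules

        def __reduce__(self):
            # Tell pickle how to rebuild the object: call Tokenizer(vocab, rules)
            # on load instead of trying to serialize internal state directly.
            return (self.__class__, (self.vocab, self.rules))

    tok = Tokenizer(vocab={"the": 1}, rules={"don't": ["do", "n't"]})
    restored = pickle.loads(pickle.dumps(tok))
    assert restored.rules == tok.rules

Returning a (callable, args) pair keeps the pickle payload limited to reconstructible state, which is why extension types backed by C data commonly prefer __reduce__ over default __dict__ pickling.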
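The tokens entry above records adding __repr__ so that Doc, Token, and Span objects echo their text in IPython. As a hedged illustration only (a toy class, not spaCy's actual Span), the pattern looks like this:

    class Span:
        """Toy example of the repr-prints-text pattern, not spaCy's Span."""
        def __init__(self, text):
            self.text = text

        def __repr__(self):
            # Echoing the object in a REPL/IPython shows the underlying text
            # rather than the default <Span object at 0x...> form.
            return self.text

    print(repr(Span("Hello world")))  # -> Hello world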