| File | Latest commit message | Last commit date |
| --- | --- | --- |
| __init__.py | Basic punct tests updated and passing | 2014-08-27 19:38:57 +02:00 |
| de.pxd | Add German tokenizer files | 2014-09-25 18:29:13 +02:00 |
| de.pyx | Add German tokenizer files | 2014-09-25 18:29:13 +02:00 |
| en.pxd | Refactor tokenization, splitting it into a clearer life-cycle. | 2014-09-16 13:16:02 +02:00 |
| en.pyx | Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00 |
| lang.pxd | Preparations in place to handle hyphenation etc | 2014-10-10 20:23:23 +11:00 |
| lang.pyx | Preparations in place to handle hyphenation etc | 2014-10-10 20:23:23 +11:00 |
| lexeme.pxd | Rename LexStr_casefix to LexStr_norm and LexInt_i to LexInt_id | 2014-10-14 15:19:07 +11:00 |
| lexeme.pyx | Rename LexStr_casefix to LexStr_norm and LexInt_i to LexInt_id | 2014-10-14 15:19:07 +11:00 |
| orth.py | Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00 |
| ptb3.pxd | Adding PTB3 tokenizer back in, so can understand how much boilerplate is in the docs for multiple tokenizers | 2014-08-29 02:30:27 +02:00 |
| ptb3.pyx | Switch to using a Python ref counted gateway to malloc/free, to prevent memory leaks | 2014-09-17 20:02:26 +02:00 |
| tokens.pxd | Have tokens track tuples that record the start offset and pos tag as well as a lexeme pointer | 2014-10-14 15:21:03 +11:00 |
| tokens.pyx | Have tokens track tuples that record the start offset and pos tag as well as a lexeme pointer | 2014-10-14 15:21:03 +11:00 |
| typedefs.pxd | Add typedefs file | 2014-09-17 23:10:32 +02:00 |
| util.py | Update Lexicon class to expect a list of lexeme dict descriptions | 2014-10-09 14:51:35 +11:00 |
| word.pxd | Switch to new data model, tests passing | 2014-10-10 08:11:31 +11:00 |
| word.pyx | Slight cleaning of tokenizer code | 2014-10-10 19:17:22 +11:00 |
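
The tokens.pxd / tokens.pyx commit above notes that tokens now track tuples recording the start offset and POS tag alongside a lexeme pointer. As a rough illustration only, not the repository's actual Cython definitions, here is a minimal Python sketch of such a per-token record; the names `TokenRecord`, `start`, `pos`, `lex`, and `tokenize` are hypothetical.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class TokenRecord:
    """Hypothetical per-token record: the start offset into the input
    string, a POS tag slot, and a reference standing in for the
    C-level lexeme pointer held by the real Cython tokens."""
    start: int  # character offset where the token begins
    pos: int    # POS tag id (0 = unset at tokenization time)
    lex: Any    # stands in for the lexeme pointer


def tokenize(text: str, lexicon: dict) -> list:
    """Whitespace-split sketch showing how such records could be filled
    in while scanning the input; the real tokenizer is more involved."""
    records, offset = [], 0
    for chunk in text.split(" "):
        if chunk:
            lexeme = lexicon.setdefault(chunk, {"orth": chunk})
            records.append(TokenRecord(start=offset, pos=0, lex=lexeme))
        offset += len(chunk) + 1
    return records


if __name__ == "__main__":
    for tok in tokenize("Hello world", {}):
        print(tok.start, tok.lex["orth"])
```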