💫 Industrial-strength Natural Language Processing (NLP) in Python
# spaCy

Lightning-fast, full-cream natural language tokenization. Each token is a pointer to a rich, shared Lexeme struct.
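
A minimal sketch of that design idea in plain Python (hypothetical names, not spaCy's actual API): each distinct word form is interned once as a rich `Lexeme` record, and tokenizing a text yields references to those shared records rather than fresh string copies.

```python
from dataclasses import dataclass


@dataclass
class Lexeme:
    """One shared record per distinct word form."""
    orth: str        # the word's string form
    lower: str       # lowercased form
    is_alpha: bool   # simple lexical flag, computed once and reused


class Vocab:
    """Interns Lexeme records so repeated words share one object."""

    def __init__(self):
        self._table = {}

    def get(self, word: str) -> Lexeme:
        lex = self._table.get(word)
        if lex is None:
            lex = Lexeme(orth=word, lower=word.lower(), is_alpha=word.isalpha())
            self._table[word] = lex
        return lex


def tokenize(vocab: Vocab, text: str) -> list[Lexeme]:
    """Toy whitespace tokenizer: each token is a reference to a shared Lexeme."""
    return [vocab.get(word) for word in text.split()]


vocab = Vocab()
tokens = tokenize(vocab, "the cat sat on the mat")
assert tokens[0] is tokens[4]   # both "the" tokens point at the same Lexeme
```

Because lexical attributes live on the shared Lexeme rather than on each token, they are computed once per word form and looked up by pointer thereafter.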