mirror of https://github.com/explosion/spaCy.git
1d59fdbd39
* Adapt tokenization methods from `pyvi` to preserve text encoding and whitespace
* Add serialization support similar to Chinese and Japanese

Note: as for Chinese and Japanese, some settings are duplicated in `config.cfg` and `tokenizer/cfg`.
| Name |
|---|
| `__init__.py` |
| `lex_attrs.py` |
| `stop_words.py` |
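
The commit message above describes the behavior but includes no example, so here is a minimal sketch of how the pyvi-backed Vietnamese tokenizer and its serialization might be exercised. The sample sentence, the `/tmp/vi_pipeline` path, and the assumption that pyvi-based segmentation is active by default once `pyvi` is installed are illustrative, not taken from the commit.

```python
# A minimal sketch (not code from the commit): exercising the pyvi-backed
# Vietnamese tokenizer and its serialization. Assumes spaCy and pyvi are
# installed; the sample sentence and the /tmp path are illustrative.
import spacy

# Blank Vietnamese pipeline; with pyvi available, word segmentation comes
# from pyvi while the original text and whitespace are preserved, so
# doc.text round-trips exactly.
nlp = spacy.blank("vi")
text = "Tôi là sinh viên trường đại học Khoa học Tự nhiên."
doc = nlp(text)
print([token.text for token in doc])
assert doc.text == text

# Serialization works like the Chinese and Japanese tokenizers: the
# tokenizer settings are saved with the pipeline and restored on load.
nlp.to_disk("/tmp/vi_pipeline")
nlp2 = spacy.blank("vi").from_disk("/tmp/vi_pipeline")
assert [t.text for t in nlp2(text)] == [t.text for t in doc]
```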