Add English language data so that the tokenizer doesn't require the data download

This commit is contained in:
Matthew Honnibal 2016-09-25 14:49:00 +02:00
parent 82b8cc5efb
commit d7e9acdcdf
1 changed file with 5037 additions and 0 deletions

spacy/en/language_data.py (new file, +5037)

File diff suppressed because it is too large
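Since the diff itself is suppressed, here is a minimal sketch of the idea behind the commit: shipping language data as plain Python structures inside the package, so the tokenizer can consult them at import time instead of fetching a separate data download. The names and entries below are illustrative assumptions, not the actual contents of `spacy/en/language_data.py`.

```python
# Illustrative sketch only: bundle tokenizer special cases as in-package
# Python data, so no runtime data download is needed.
TOKENIZER_EXCEPTIONS = {
    "don't": ["do", "n't"],
    "can't": ["ca", "n't"],
    "U.S.": ["U.S."],  # keep abbreviations as a single token
}

def tokenize(text, exceptions=TOKENIZER_EXCEPTIONS):
    """Whitespace-split, then expand any special-cased chunk in place."""
    tokens = []
    for chunk in text.split():
        # Use the bundled exception entry if one exists, else keep the chunk.
        tokens.extend(exceptions.get(chunk, [chunk]))
    return tokens
```

For example, `tokenize("I can't go")` yields `["I", "ca", "n't", "go"]`, with the contraction split according to the bundled exception table rather than any downloaded model data.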