Merge pull request #1367 from h4iku/fix-spacy101-typo

Fix small typos in spaCy 101 usage document
Ines Montani 2017-09-27 10:00:11 +02:00 committed by GitHub
commit 55b758154d
3 changed files with 4 additions and 4 deletions

@@ -22,7 +22,7 @@ p
 +aside-code.
 from spacy.lang.en import English
-from spacy.lang.en import German
+from spacy.lang.de import German
 nlp_en = English() # includes English data
 nlp_de = German() # includes German data
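
For reference, the corrected snippet runs as written: a minimal sketch assuming a recent spaCy install, with no statistical model needed for plain tokenization.

    from spacy.lang.en import English
    from spacy.lang.de import German   # German lives in spacy.lang.de, not spacy.lang.en

    nlp_en = English()  # blank English pipeline: language data and tokenizer
    nlp_de = German()   # blank German pipeline: language data and tokenizer

    print([t.text for t in nlp_en("This is a sentence.")])
    print([t.text for t in nlp_de("Das ist ein Satz.")])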

@@ -112,4 +112,4 @@ p
 | only works if you actually #[em know] that the document contains that
 | word. To prevent this problem, spaCy will also export the #[code Vocab]
 | when you save a #[code Doc] or #[code nlp] object. This will give you
-| the object and its encoded annotations, plus they "key" to decode it.
+| the object and its encoded annotations, plus the "key" to decode it.
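
The corrected sentence describes spaCy's serialization round trip. A minimal sketch of what it means in code, assuming the to_bytes/from_bytes API of spaCy v2 (details may differ in the release this page documented):

    from spacy.lang.en import English
    from spacy.tokens import Doc
    from spacy.vocab import Vocab

    nlp = English()
    doc = nlp("Give it to me")

    # Annotations are stored as integer IDs; the strings needed to
    # decode them are exported along with the Doc.
    doc_bytes = doc.to_bytes()

    # A fresh Vocab is enough to restore the Doc: the "key" travels with the data.
    new_doc = Doc(Vocab()).from_bytes(doc_bytes)
    print([t.text for t in new_doc])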

@@ -65,7 +65,7 @@ p
 | not designed specifically for chat bots, and only provides the
 | underlying text processing capabilities.
 +item #[strong spaCy is not research software].
-| It's is built on the latest research, but it's designed to get
+| It's built on the latest research, but it's designed to get
 | things done. This leads to fairly different design decisions than
 | #[+a("https://github.com/nltk/nltk") NLTK]
 | or #[+a("https://stanfordnlp.github.io/CoreNLP/") CoreNLP], which were
@@ -87,7 +87,7 @@ p
 +aside
 | If one of spaCy's functionalities #[strong needs a model], it means that
-| you need to have one our the available
+| you need to have one of the available
 | #[+a("/docs/usage/models") statistical models] installed. Models are used
 | to #[strong predict] linguistic annotations, for example, if a word is
 | a verb or a noun.
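
As a concrete illustration of the corrected aside, a sketch assuming one of the published English models has been installed (the model name below is an example):

    import spacy

    # Requires a statistical model, e.g.: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("Apple is looking at buying a startup.")
    for token in doc:
        print(token.text, token.pos_)  # the model predicts, for example, if a word is a verb or a noun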