//- 💫 DOCS > API > ANNOTATION SPECS

include ../../_includes/_mixins

p This document describes the target annotations spaCy is trained to predict.
+h(2, "tokenization") Tokenization

p
    |  Tokenization standards are based on the
    |  #[+a("https://catalog.ldc.upenn.edu/LDC2013T19") OntoNotes 5] corpus.
    |  The tokenizer differs from most by including tokens for significant
    |  whitespace. Any sequence of whitespace characters beyond a single space
    |  (#[code ' ']) is included as a token.
+aside-code("Example").
    from spacy.en import English
    nlp = English(parser=False)
    # note the double space after 'spaces': only whitespace beyond a
    # single space is preserved as its own token
    tokens = nlp('Some\nspaces  and\ttab characters')
    print([t.orth_ for t in tokens])
    # ['Some', '\n', 'spaces', ' ', 'and', '\t', 'tab', 'characters']
p
    |  The whitespace tokens are useful for much the same reason punctuation is
    |  – it's often an important delimiter in the text. By preserving it in the
    |  token output, we are able to maintain a simple alignment between the
    |  tokens and the original string, and we ensure that no information is
    |  lost during processing.
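
p
    |  As a minimal sketch of this round-trip property (the sentence and the
    |  #[code assert] are illustrative, not part of the API), the original
    |  string can be rebuilt by joining each token's text with its trailing
    |  whitespace:

+aside-code("Example").
    from spacy.en import English
    nlp = English(parser=False)
    text = 'Some\nspaces  and\ttab characters'
    tokens = nlp(text)
    # joining each token with its trailing whitespace should
    # reproduce the input exactly, since nothing is discarded
    assert ''.join(t.orth_ + t.whitespace_ for t in tokens) == text
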
+h(2, "sentence-boundary") Sentence boundary detection

p
    |  Sentence boundaries are calculated from the syntactic parse tree, so
    |  features such as punctuation and capitalisation play an important but
    |  non-decisive role in determining the sentence boundaries. Usually this
    |  means that the sentence boundaries will at least coincide with clause
    |  boundaries, even given poorly punctuated text.
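
p
    |  A minimal sketch of reading the resulting boundaries (the example
    |  sentence is illustrative; the parser must be loaded, since the
    |  boundaries are derived from the parse):

+aside-code("Example").
    from spacy.en import English
    nlp = English()
    doc = nlp('This is a sentence. This is another.')
    # doc.sents yields one Span per detected sentence
    for sent in doc.sents:
        print(sent.text)
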
+h(2, "pos-tagging") Part-of-speech Tagging

include _annotation/_pos-tags
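
p
    |  A minimal sketch of accessing the predicted tags (the example phrase
    |  is illustrative): #[code pos_] holds the coarse-grained tag,
    |  #[code tag_] the fine-grained one.

+aside-code("Example").
    from spacy.en import English
    nlp = English()
    doc = nlp('Give it back')
    # coarse-grained and fine-grained part-of-speech tag per token
    print([(t.orth_, t.pos_, t.tag_) for t in doc])
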
+h(2, "lemmatization") Lemmatization

p A "lemma" is the uninflected form of a word. In English, this means:

+list
    +item #[strong Adjectives]: The form like "happy", not "happier" or "happiest"
    +item #[strong Adverbs]: The form like "badly", not "worse" or "worst"
    +item #[strong Nouns]: The form like "dog", not "dogs"; like "child", not "children"
    +item #[strong Verbs]: The form like "write", not "writes", "writing", "wrote" or "written"
+aside("About spaCy's custom pronoun lemma")
    |  Unlike verbs and common nouns, there's no clear base form of a personal
    |  pronoun. Should the lemma of "me" be "I", or should we normalize person
    |  as well, giving "it" — or maybe "he"? spaCy's solution is to introduce a
    |  novel symbol, #[code.u-nowrap -PRON-], which is used as the lemma for
    |  all personal pronouns.
p
    |  The lemmatization data is taken from
    |  #[+a("https://wordnet.princeton.edu") WordNet]. However, we also add a
    |  special case for pronouns: all pronouns are lemmatized to the special
    |  token #[code -PRON-].
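
p
    |  A minimal sketch of reading the lemmas (the example sentence is
    |  illustrative; note the #[code -PRON-] lemma for the pronoun):

+aside-code("Example").
    from spacy.en import English
    nlp = English()
    doc = nlp('I was writing books')
    # each token's lemma, e.g. 'write' for 'writing' and the
    # special '-PRON-' symbol for the personal pronoun 'I'
    print([(t.orth_, t.lemma_) for t in doc])
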
+h(2, "dependency-parsing") Syntactic Dependency Parsing

+table(["Language", "Converter", "Scheme"])
    +row
        +cell English
        +cell #[+a("http://www.clearnlp.com") ClearNLP]
        +cell #[+a("http://www.mathcs.emory.edu/~choi/doc/clear-dependency-2012.pdf") CLEAR Style]

    +row
        +cell German
        +cell #[+a("https://github.com/wbwseeker/tiger2dep") TIGER]
        +cell #[+a("http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/TIGERCorpus/annotation/index.html") TIGER]
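
p
    |  A minimal sketch of reading the dependency annotations (the example
    |  sentence is illustrative): #[code dep_] is the arc label and
    |  #[code head] the governing token.

+aside-code("Example").
    from spacy.en import English
    nlp = English()
    doc = nlp('Autonomous cars shift insurance liability')
    # each token's dependency label and its syntactic head
    print([(t.orth_, t.dep_, t.head.orth_) for t in doc])
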
+h(2, "named-entities") Named Entity Recognition

include _annotation/_named-entities
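
p
    |  A minimal sketch of reading the predicted entities (the example
    |  sentence is illustrative): #[code doc.ents] yields one span per
    |  entity, with its predicted label.

+aside-code("Example").
    from spacy.en import English
    nlp = English()
    doc = nlp('Apple was founded in California in 1976')
    # each entity span's text and predicted type label
    print([(ent.text, ent.label_) for ent in doc.ents])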