When you call `nlp` on a text, spaCy first tokenizes the text to produce a `Doc` object. The `Doc` is then processed in several different steps – this is also referred to as the processing pipeline. The pipeline used by the default models consists of a tagger, a parser and an entity recognizer. Each pipeline component returns the processed `Doc`, which is then passed on to the next component.
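A minimal sketch of what this looks like in code, assuming the small English model `en_core_web_sm` is installed:

```python
import spacy

# Loading a model constructs the Language object with its pipeline
nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)  # e.g. ['tagger', 'parser', 'ner']

# Calling nlp tokenizes the text and runs each component in order
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
for token in doc[:3]:
    print(token.text, token.tag_, token.dep_)
```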
- Name: ID of the pipeline component.
- Component: spaCy's implementation of the component.
- Creates: Objects, attributes and properties modified and set by the component.
| Name | Component | Creates | Description |
| --- | --- | --- | --- |
| `tokenizer` | `Tokenizer` | `Doc` | Segment text into tokens. |
| `tagger` | `Tagger` | `Doc[i].tag` | Assign part-of-speech tags. |
| `parser` | `DependencyParser` | `Doc[i].head`, `Doc[i].dep`, `Doc.sents`, `Doc.noun_chunks` | Assign dependency labels. |
| `ner` | `EntityRecognizer` | `Doc.ents`, `Doc[i].ent_iob`, `Doc[i].ent_type` | Detect and label named entities. |
| `textcat` | `TextCategorizer` | `Doc.cats` | Assign document labels. |
| ... | custom components | `Doc._.xxx`, `Token._.xxx`, `Span._.xxx` | Assign custom attributes, methods or properties. |
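As a sketch of the last row, a custom component is just a callable that receives the `Doc`, sets its own attributes and returns it. The names `char_count` and `char_counter` below are made up for illustration:

```python
import spacy
from spacy.tokens import Doc

nlp = spacy.load("en_core_web_sm")

# Register a custom extension attribute, accessible as doc._.char_count
Doc.set_extension("char_count", default=None)

def char_counter(doc):
    doc._.char_count = len(doc.text)
    return doc  # every component returns the processed Doc

nlp.add_pipe(char_counter, last=True)  # spaCy v2-style: pass the callable
doc = nlp("A short text.")
print(doc._.char_count)  # 13
```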
The processing pipeline always depends on the statistical model and its capabilities. For example, a pipeline can only include an entity recognizer component if the model includes the data to predict entity labels. This is why each model specifies the pipeline to use in its metadata, as a simple list containing the component names:
"pipeline": ["tagger", "parser", "ner"]