| title | tag | source | teaser | api_base_class | api_string_name | api_trainable |
| --- | --- | --- | --- | --- | --- | --- |
| DependencyParser | class | spacy/pipeline/dep_parser.pyx | Pipeline component for syntactic dependency parsing | /api/pipe | parser | true |
A transition-based dependency parser component. The dependency parser jointly learns sentence segmentation and labelled dependency parsing, and can optionally learn to merge tokens that had been over-segmented by the tokenizer. The parser uses a variant of the non-monotonic arc-eager transition system described by Honnibal and Johnson (2014), with the addition of a "break" transition to perform the sentence segmentation. Nivre (2005)'s pseudo-projective dependency transformation is used to allow the parser to predict non-projective parses.
The parser is trained using an imitation learning objective. It follows the actions predicted by the current weights, and at each state, determines which actions are compatible with the optimal parse that could be reached from the current state. The weights are updated such that the scores assigned to the set of optimal actions are increased, while scores assigned to other actions are decreased. Note that more than one action may be optimal for a given state.
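To make the training dynamics concrete, here is a minimal sketch of one oracle-guided update. The helper names `score_actions`, `oracle_actions`, `backprop` and `apply` are hypothetical and do not correspond to spaCy's internals; this only illustrates the objective described above.

```python
# Sketch of one imitation-learning step (hypothetical helpers, not spaCy's API).
def update_state(model, state, gold):
    scores = model.score_actions(state)   # one score per candidate transition
    optimal = gold.oracle_actions(state)  # all actions compatible with the optimal parse
    # Push up every optimal action, push down the rest. Note that several
    # actions can be optimal in the same state.
    gradient = {a: (-1.0 if a in optimal else 1.0) for a in scores}
    model.backprop(state, gradient)
    # Follow the model's own prediction so training visits the same
    # states the parser will reach at prediction time.
    best = max(scores, key=scores.get)
    return state.apply(best)
```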
Assigned Attributes
Dependency predictions are assigned to the `Token.dep` and `Token.head` fields. Besides the dependencies themselves, the parser decides sentence boundaries, which are saved in `Token.is_sent_start` and accessible via `Doc.sents`.
| Location | Value |
| --- | --- |
| `Token.dep` | The type of dependency relation (hash). |
| `Token.dep_` | The type of dependency relation. |
| `Token.head` | The syntactic parent, or "governor", of this token. |
| `Token.is_sent_start` | A boolean value indicating whether the token starts a sentence. After the parser runs, this will be `True` or `False` for all tokens. |
| `Doc.sents` | An iterator over sentences in the `Doc`, determined by `Token.is_sent_start` values. |
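For example, once a trained pipeline with a parser has processed a text, these attributes can be inspected directly (this assumes the `en_core_web_sm` pipeline is installed):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("She ate the pizza. Then she left.")
for token in doc:
    # Dependency label, syntactic head and sentence-start flag set by the parser
    print(token.text, token.dep_, token.head.text, token.is_sent_start)
# Sentence boundaries are available once the parser has run
print([sent.text for sent in doc.sents])
```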
Config and implementation
The default config is defined by the pipeline component factory and describes
how the component should be configured. You can override its settings via the
config
argument on nlp.add_pipe
or in your
config.cfg
for training. See the
model architectures documentation for details on the
architectures and their arguments and hyperparameters.
Example
```python
from spacy.pipeline.dep_parser import DEFAULT_PARSER_MODEL
config = {
    "moves": None,
    "update_with_oracle_cut_size": 100,
    "learn_tokens": False,
    "min_action_freq": 30,
    "model": DEFAULT_PARSER_MODEL,
}
nlp.add_pipe("parser", config=config)
```
| Setting | Description |
| --- | --- |
| `moves` | A list of transition names. Inferred from the data if not provided. Defaults to `None`. |
| `update_with_oracle_cut_size` | During training, cut long sequences into shorter segments by creating intermediate states based on the gold-standard history. The model is not very sensitive to this parameter, so you usually won't need to change it. Defaults to `100`. |
| `learn_tokens` | Whether to learn to merge subtokens that are split relative to the gold standard. Experimental. Defaults to `False`. |
| `min_action_freq` | The minimum frequency of labelled actions to retain. Rarer labelled actions have their label backed off to `"dep"`. While this primarily affects the label accuracy, it can also affect the attachment structure, as the labels are used to represent the pseudo-projectivity transformation. Defaults to `30`. |
| `model` | The `Model` powering the pipeline component. Defaults to `TransitionBasedParser`. |
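The same settings can also be overridden in the `[components.parser]` block of your `config.cfg` for training. A minimal excerpt (the values shown here are just the defaults):

```ini
### config.cfg (excerpt)
[components.parser]
factory = "parser"
learn_tokens = false
min_action_freq = 30
```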
%%GITHUB_SPACY/spacy/pipeline/dep_parser.pyx
DependencyParser.__init__
Example
```python
# Construction via add_pipe with default model
parser = nlp.add_pipe("parser")

# Construction via add_pipe with custom model
config = {"model": {"@architectures": "my_parser"}}
parser = nlp.add_pipe("parser", config=config)

# Construction from class
from spacy.pipeline import DependencyParser
parser = DependencyParser(nlp.vocab, model)
```
Create a new pipeline instance. In your application, you would normally use a
shortcut for this and instantiate the component using its string name and
nlp.add_pipe
.
| Name | Description |
| --- | --- |
| `vocab` | The shared vocabulary. |
| `model` | The `Model` powering the pipeline component. |
| `name` | String name of the component instance. Used to add entries to the `losses` during training. |
| `moves` | A list of transition names. Inferred from the data if not provided. |
| keyword-only | |
| `update_with_oracle_cut_size` | During training, cut long sequences into shorter segments by creating intermediate states based on the gold-standard history. The model is not very sensitive to this parameter, so you usually won't need to change it. Defaults to `100`. |
| `learn_tokens` | Whether to learn to merge subtokens that are split relative to the gold standard. Experimental. Defaults to `False`. |
| `min_action_freq` | The minimum frequency of labelled actions to retain. Rarer labelled actions have their label backed off to `"dep"`. While this primarily affects the label accuracy, it can also affect the attachment structure, as the labels are used to represent the pseudo-projectivity transformation. |
| `scorer` | The scoring method. Defaults to `Scorer.score_deps` for the attribute `"dep"`, ignoring the labels `p` and `punct`, and `Scorer.score_spans` for the attribute `"sents"`. |
DependencyParser.__call__
Apply the pipe to one document. The document is modified in place, and returned.
This usually happens under the hood when the nlp
object is called on a text
and all pipeline components are applied to the Doc
in order. Both
__call__
and
pipe
delegate to the
predict
and
set_annotations
methods.
Example
doc = nlp("This is a sentence.") parser = nlp.add_pipe("parser") # This usually happens under the hood processed = parser(doc)
| Name | Description |
| --- | --- |
| `doc` | The document to process. |
| RETURNS | The processed document. |
DependencyParser.pipe
Apply the pipe to a stream of documents. This usually happens under the hood
when the nlp
object is called on a text and all pipeline components are
applied to the Doc
in order. Both __call__
and
pipe
delegate to the
predict
and
set_annotations
methods.
Example
```python
parser = nlp.add_pipe("parser")
for doc in parser.pipe(docs, batch_size=50):
    pass
```
| Name | Description |
| --- | --- |
| `docs` | A stream of documents. |
| keyword-only | |
| `batch_size` | The number of documents to buffer. Defaults to `128`. |
| YIELDS | The processed documents in order. |
DependencyParser.initialize
Initialize the component for training. get_examples
should be a function that
returns an iterable of Example
objects. At least one example
should be supplied. The data examples are used to initialize the model of
the component and can either be the full training data or a representative
sample. Initialization includes validating the network,
inferring missing shapes and
setting up the label scheme based on the data. This method is typically called
by Language.initialize
and lets you customize
arguments it receives via the
[initialize.components]
block in the
config.
This method was previously called begin_training
.
Example
```python
parser = nlp.add_pipe("parser")
parser.initialize(lambda: examples, nlp=nlp)
```

```ini
### config.cfg
[initialize.components.parser]

[initialize.components.parser.labels]
@readers = "spacy.read_labels.v1"
path = "corpus/labels/parser.json"
```
| Name | Description |
| --- | --- |
| `get_examples` | Function that returns gold-standard annotations in the form of `Example` objects. Must contain at least one `Example`. |
| keyword-only | |
| `nlp` | The current `nlp` object. Defaults to `None`. |
| `labels` | The label information to add to the component, as provided by the `label_data` property after initialization. To generate a reusable JSON file from your data, you should run the `init labels` command. If no labels are provided, the `get_examples` callback is used to extract the labels from the data, which may be a lot slower. |
DependencyParser.predict
Apply the component's model to a batch of Doc
objects, without
modifying them.
Example
```python
parser = nlp.add_pipe("parser")
scores = parser.predict([doc1, doc2])
```
| Name | Description |
| --- | --- |
| `docs` | The documents to predict. |
| RETURNS | A helper class for the parse state (internal). |
DependencyParser.set_annotations
Modify a batch of Doc
objects, using pre-computed scores.
Example
```python
parser = nlp.add_pipe("parser")
scores = parser.predict([doc1, doc2])
parser.set_annotations([doc1, doc2], scores)
```
| Name | Description |
| --- | --- |
| `docs` | The documents to modify. |
| `scores` | The scores to set, produced by `DependencyParser.predict` (an internal helper class for the parse state). |
DependencyParser.update
Learn from a batch of Example
objects, updating the pipe's
model. Delegates to predict
and
get_loss
.
Example
```python
parser = nlp.add_pipe("parser")
optimizer = nlp.initialize()
losses = parser.update(examples, sgd=optimizer)
```
| Name | Description |
| --- | --- |
| `examples` | A batch of `Example` objects to learn from. |
| keyword-only | |
| `drop` | The dropout rate. |
| `sgd` | An optimizer. Will be created via `create_optimizer` if not set. |
| `losses` | Optional record of the loss during training. Updated using the component name as the key. |
| RETURNS | The updated `losses` dictionary. |
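In a full training loop, `update` is typically called repeatedly over shuffled minibatches. A minimal sketch, assuming `train_data` is your own list of `(text, annotations)` pairs in the format accepted by `Example.from_dict`:

```python
import random
from spacy.training import Example
from spacy.util import minibatch

examples = []
for text, annotations in train_data:  # hypothetical (text, dict) pairs
    doc = nlp.make_doc(text)
    examples.append(Example.from_dict(doc, annotations))

optimizer = nlp.initialize(lambda: examples)
for epoch in range(10):
    random.shuffle(examples)
    losses = {}
    for batch in minibatch(examples, size=8):
        parser.update(batch, sgd=optimizer, losses=losses)
    print(epoch, losses["parser"])
```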
DependencyParser.get_loss
Find the loss and gradient of loss for the batch of documents and their predicted scores.
Example
```python
parser = nlp.add_pipe("parser")
scores = parser.predict([eg.predicted for eg in examples])
loss, d_loss = parser.get_loss(examples, scores)
```
| Name | Description |
| --- | --- |
| `examples` | The batch of examples. |
| `scores` | Scores representing the model's predictions. |
| RETURNS | The loss and the gradient, i.e. `(loss, gradient)`. |
DependencyParser.create_optimizer
Create an Optimizer
for the pipeline
component.
Example
```python
parser = nlp.add_pipe("parser")
optimizer = parser.create_optimizer()
```
| Name | Description |
| --- | --- |
| RETURNS | The optimizer. |
DependencyParser.use_params
Modify the pipe's model to use the given parameter values. At the end of the context, the original parameters are restored.
Example
```python
parser = DependencyParser(nlp.vocab)
with parser.use_params(optimizer.averages):
    parser.to_disk("/best_model")
```
| Name | Description |
| --- | --- |
| `params` | The parameter values to use in the model. |
DependencyParser.add_label
Add a new label to the pipe. Note that you don't have to call this method if you
provide a representative data sample to the initialize
method. In this case, all labels found in the sample will be automatically added
to the model, and the output dimension will be
inferred automatically.
Example
```python
parser = nlp.add_pipe("parser")
parser.add_label("MY_LABEL")
```
| Name | Description |
| --- | --- |
| `label` | The label to add. |
| RETURNS | `0` if the label is already present, otherwise `1`. |
DependencyParser.set_output
Change the output dimension of the component's model by calling the model's
attribute resize_output
. This is a function that takes the original model and
the new output dimension nO
, and changes the model in place. When resizing an
already trained model, care should be taken to avoid the "catastrophic
forgetting" problem.
Example
```python
parser = nlp.add_pipe("parser")
parser.set_output(512)
```
| Name | Description |
| --- | --- |
| `nO` | The new output dimension. |
DependencyParser.to_disk
Serialize the pipe to disk.
Example
```python
parser = nlp.add_pipe("parser")
parser.to_disk("/path/to/parser")
```
| Name | Description |
| --- | --- |
| `path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects. |
| keyword-only | |
| `exclude` | String names of serialization fields to exclude. |
DependencyParser.from_disk
Load the pipe from disk. Modifies the object in place and returns it.
Example
```python
parser = nlp.add_pipe("parser")
parser.from_disk("/path/to/parser")
```
| Name | Description |
| --- | --- |
| `path` | A path to a directory. Paths may be either strings or `Path`-like objects. |
| keyword-only | |
| `exclude` | String names of serialization fields to exclude. |
| RETURNS | The modified `DependencyParser` object. |
DependencyParser.to_bytes
Serialize the pipe to a bytestring.
Example

```python
parser = nlp.add_pipe("parser")
parser_bytes = parser.to_bytes()
```
| Name | Description |
| --- | --- |
| keyword-only | |
| `exclude` | String names of serialization fields to exclude. |
| RETURNS | The serialized form of the `DependencyParser` object. |
DependencyParser.from_bytes
Load the pipe from a bytestring. Modifies the object in place and returns it.
Example
```python
parser_bytes = parser.to_bytes()
parser = nlp.add_pipe("parser")
parser.from_bytes(parser_bytes)
```
| Name | Description |
| --- | --- |
| `bytes_data` | The data to load from. |
| keyword-only | |
| `exclude` | String names of serialization fields to exclude. |
| RETURNS | The `DependencyParser` object. |
DependencyParser.labels
The labels currently added to the component.
Example
```python
parser.add_label("MY_LABEL")
assert "MY_LABEL" in parser.labels
```
| Name | Description |
| --- | --- |
| RETURNS | The labels added to the component. |
DependencyParser.label_data
The labels currently added to the component and their internal meta information.
This is the data generated by init labels
and used by
DependencyParser.initialize
to initialize
the model with a pre-defined label set.
Example
```python
labels = parser.label_data
parser.initialize(lambda: [], nlp=nlp, labels=labels)
```
| Name | Description |
| --- | --- |
| RETURNS | The label data added to the component. |
Serialization fields
During serialization, spaCy will export several data fields used to restore
different aspects of the object. If needed, you can exclude them from
serialization by passing in the string names via the exclude
argument.
Example
```python
data = parser.to_disk("/path", exclude=["vocab"])
```
| Name | Description |
| --- | --- |
| `vocab` | The shared `Vocab`. |
| `cfg` | The config file. You usually don't want to exclude this. |
| `model` | The binary model data. You usually don't want to exclude this. |