---
title: TextCategorizer
tag: class
source: spacy/pipeline/pipes.pyx
new: 2
---

This class is a subclass of `Pipe` and follows the same API. The pipeline component is available in the processing pipeline via the ID `"textcat"`.
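For instance, a minimal sketch of creating the component and registering it in a pipeline, assuming the `create_pipe`/`add_pipe` workflow used throughout these docs (the label name is illustrative):

```python
import spacy

nlp = spacy.blank("en")
# Create the component and add it to the end of the pipeline
textcat = nlp.create_pipe("textcat")
textcat.add_label("MY_LABEL")  # illustrative label
nlp.add_pipe(textcat, last=True)
```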

## Default config

This is the default configuration used to initialize the model powering the pipeline component. See the model architectures documentation for details on the architectures and their arguments and hyperparameters. To learn more about how to customize the config and train custom models, check out the training config docs.

https://github.com/explosion/spaCy/blob/develop/spacy/pipeline/defaults/textcat_defaults.cfg

## TextCategorizer.\_\_init\_\_

Create a new pipeline instance. In your application, you would normally use a shortcut for this and instantiate the component using its string name and `nlp.create_pipe`.

Example:

```python
# Construction via create_pipe
textcat = nlp.create_pipe("textcat")

# Construction via create_pipe with custom model
config = {"model": {"@architectures": "my_textcat"}}
textcat = nlp.create_pipe("textcat", config)

# Construction from class with custom model from file
from spacy import util
from spacy.pipeline import TextCategorizer
model = util.load_config("model.cfg", create_objects=True)["model"]
textcat = TextCategorizer(nlp.vocab, model)
```

| Name        | Type              | Description                                  |
| ----------- | ----------------- | -------------------------------------------- |
| `vocab`     | `Vocab`           | The shared vocabulary.                       |
| `model`     | `Model`           | The `Model` powering the pipeline component. |
| `**cfg`     | -                 | Configuration parameters.                    |
| **RETURNS** | `TextCategorizer` | The newly constructed object.                |

## TextCategorizer.\_\_call\_\_

Apply the pipe to one document. The document is modified in place and returned. This usually happens under the hood when the `nlp` object is called on a text and all pipeline components are applied to the `Doc` in order. Both `__call__` and `pipe` delegate to the `predict` and `set_annotations` methods.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
doc = nlp("This is a sentence.")
# This usually happens under the hood
processed = textcat(doc)
```

| Name        | Type  | Description              |
| ----------- | ----- | ------------------------ |
| `doc`       | `Doc` | The document to process. |
| **RETURNS** | `Doc` | The processed document.  |
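The categorizer writes its predictions to the `Doc.cats` attribute, a dictionary mapping each label to a score. A quick illustration (the label and score shown are made up):

```python
doc = nlp("This is a sentence.")
doc = textcat(doc)
print(doc.cats)  # e.g. {"MY_LABEL": 0.98}, keyed by label
```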

## TextCategorizer.pipe

Apply the pipe to a stream of documents. This usually happens under the hood when the `nlp` object is called on a text and all pipeline components are applied to the `Doc` in order. Both `__call__` and `pipe` delegate to the `predict` and `set_annotations` methods.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
for doc in textcat.pipe(docs, batch_size=50):
    pass
```

| Name         | Type            | Description                                             |
| ------------ | --------------- | ------------------------------------------------------- |
| `stream`     | `Iterable[Doc]` | A stream of documents.                                  |
| `batch_size` | int             | The number of texts to buffer. Defaults to `128`.       |
| **YIELDS**   | `Doc`           | Processed documents in the order of the original text.  |

## TextCategorizer.predict

Apply the pipeline's model to a batch of docs, without modifying them.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
scores = textcat.predict([doc1, doc2])
```

| Name        | Type            | Description                               |
| ----------- | --------------- | ----------------------------------------- |
| `docs`      | `Iterable[Doc]` | The documents to predict.                 |
| **RETURNS** | -               | The model's prediction for each document. |

## TextCategorizer.set_annotations

Modify a batch of documents, using pre-computed scores.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
scores = textcat.predict(docs)
textcat.set_annotations(docs, scores)
```

| Name     | Type            | Description                                               |
| -------- | --------------- | --------------------------------------------------------- |
| `docs`   | `Iterable[Doc]` | The documents to modify.                                  |
| `scores` | -               | The scores to set, produced by `TextCategorizer.predict`. |

## TextCategorizer.update

Learn from a batch of documents and gold-standard information, updating the pipe's model. Delegates to `predict` and `get_loss`.

Example:

```python
textcat = TextCategorizer(nlp.vocab, textcat_model)
optimizer = nlp.begin_training()
losses = textcat.update(examples, sgd=optimizer)
```

| Name              | Type                | Description                                                                                            |
| ----------------- | ------------------- | ------------------------------------------------------------------------------------------------------ |
| `examples`        | `Iterable[Example]` | A batch of `Example` objects to learn from.                                                            |
| _keyword-only_    |                     |                                                                                                        |
| `drop`            | float               | The dropout rate.                                                                                      |
| `set_annotations` | bool                | Whether or not to update the `Example` objects with the predictions, delegating to `set_annotations`.  |
| `sgd`             | `Optimizer`         | The `Optimizer` object.                                                                                |
| `losses`          | `Dict[str, float]`  | Optional record of the loss during training. The value keyed by the model's name is updated.           |
| **RETURNS**       | `Dict[str, float]`  | The updated `losses` dictionary.                                                                       |
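Tying the pieces together, a minimal training-loop sketch. The labels and data are illustrative, and this assumes the `Example.from_dict` helper and the `create_pipe`/`add_pipe` workflow shown above; the import path for `Example` may differ between builds:

```python
from spacy.gold import Example  # lives in spacy.training in later v3 builds

textcat = nlp.create_pipe("textcat")
textcat.add_label("POSITIVE")  # illustrative labels
textcat.add_label("NEGATIVE")
nlp.add_pipe(textcat)
optimizer = nlp.begin_training()

# Illustrative training data: text paired with gold-standard category scores
train_data = [("This is great!", {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}})]
losses = {}
for text, annotations in train_data:
    example = Example.from_dict(nlp.make_doc(text), annotations)
    textcat.update([example], sgd=optimizer, losses=losses)
print(losses)
```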

## TextCategorizer.get_loss

Find the loss and gradient of loss for the batch of documents and their predicted scores.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
scores = textcat.predict([eg.predicted for eg in examples])
loss, d_loss = textcat.get_loss(examples, scores)
```

| Name        | Type                | Description                                         |
| ----------- | ------------------- | ---------------------------------------------------- |
| `examples`  | `Iterable[Example]` | The batch of examples.                               |
| `scores`    | -                   | Scores representing the model's predictions.         |
| **RETURNS** | tuple               | The loss and the gradient, i.e. `(loss, gradient)`.  |

## TextCategorizer.begin_training

Initialize the pipe for training, using data examples if available. Return an `Optimizer` object.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
nlp.pipeline.append(textcat)
optimizer = textcat.begin_training(pipeline=nlp.pipeline)
```

| Name           | Type                    | Description                                                                         |
| -------------- | ----------------------- | ------------------------------------------------------------------------------------ |
| `get_examples` | `Iterable[Example]`     | Optional gold-standard annotations in the form of `Example` objects.                |
| `pipeline`     | `List[(str, callable)]` | Optional list of pipeline components that this component is part of.                |
| `sgd`          | `Optimizer`             | An optional `Optimizer` object. Will be created via `create_optimizer` if not set.  |
| **RETURNS**    | `Optimizer`             | An optimizer.                                                                        |

## TextCategorizer.create_optimizer

Create an optimizer for the pipeline component.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
optimizer = textcat.create_optimizer()
```

| Name        | Type        | Description             |
| ----------- | ----------- | ----------------------- |
| **RETURNS** | `Optimizer` | The `Optimizer` object. |

## TextCategorizer.use_params

Modify the pipe's model to use the given parameter values.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
with textcat.use_params(optimizer.averages):
    textcat.to_disk("/best_model")
```

| Name     | Type | Description                                                                                                 |
| -------- | ---- | ----------------------------------------------------------------------------------------------------------- |
| `params` | dict | The parameter values to use in the model. At the end of the context, the original parameters are restored.   |

## TextCategorizer.add_label

Add a new label to the pipe.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
textcat.add_label("MY_LABEL")
```

| Name    | Type | Description       |
| ------- | ---- | ----------------- |
| `label` | str  | The label to add. |
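Note that all labels are typically added before `begin_training` is called, so that the model can be initialized with the full label set.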

## TextCategorizer.to_disk

Serialize the pipe to disk.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
textcat.to_disk("/path/to/textcat")
```

| Name      | Type         | Description                                                                                                            |
| --------- | ------------ | ------------------------------------------------------------------------------------------------------------------------ |
| `path`    | str / `Path` | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or `Path`-like objects.     |
| `exclude` | list         | String names of serialization fields to exclude.                                                                          |

## TextCategorizer.from_disk

Load the pipe from disk. Modifies the object in place and returns it.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
textcat.from_disk("/path/to/textcat")
```

| Name        | Type              | Description                                                                 |
| ----------- | ----------------- | ----------------------------------------------------------------------------- |
| `path`      | str / `Path`      | A path to a directory. Paths may be either strings or `Path`-like objects.    |
| `exclude`   | list              | String names of serialization fields to exclude.                              |
| **RETURNS** | `TextCategorizer` | The modified `TextCategorizer` object.                                        |

## TextCategorizer.to_bytes

Serialize the pipe to a bytestring.

Example:

```python
textcat = TextCategorizer(nlp.vocab)
textcat_bytes = textcat.to_bytes()
```

| Name        | Type  | Description                                           |
| ----------- | ----- | ------------------------------------------------------ |
| `exclude`   | list  | String names of serialization fields to exclude.       |
| **RETURNS** | bytes | The serialized form of the `TextCategorizer` object.   |

## TextCategorizer.from_bytes

Load the pipe from a bytestring. Modifies the object in place and returns it.

Example:

```python
textcat_bytes = textcat.to_bytes()
textcat = TextCategorizer(nlp.vocab)
textcat.from_bytes(textcat_bytes)
```

| Name         | Type              | Description                                      |
| ------------ | ----------------- | ------------------------------------------------ |
| `bytes_data` | bytes             | The data to load from.                           |
| `exclude`    | list              | String names of serialization fields to exclude. |
| **RETURNS**  | `TextCategorizer` | The `TextCategorizer` object.                    |

## TextCategorizer.labels

The labels currently added to the component.

Example:

```python
textcat.add_label("MY_LABEL")
assert "MY_LABEL" in textcat.labels
```

| Name        | Type  | Description                        |
| ----------- | ----- | ---------------------------------- |
| **RETURNS** | tuple | The labels added to the component. |

## Serialization fields

During serialization, spaCy will export several data fields used to restore different aspects of the object. If needed, you can exclude them from serialization by passing in the string names via the `exclude` argument.

Example:

```python
data = textcat.to_disk("/path", exclude=["vocab"])
```

| Name    | Description                                                     |
| ------- | ---------------------------------------------------------------- |
| `vocab` | The shared `Vocab`.                                              |
| `cfg`   | The config file. You usually don't want to exclude this.         |
| `model` | The binary model data. You usually don't want to exclude this.   |