//- 💫 DOCS > USAGE > TRAINING > TEXT CLASSIFICATION
+h(3, "example-textcat") Adding a text classifier to a spaCy model
    +tag-new(2)

p
    |  This example shows how to train a multi-label convolutional neural
    |  network text classifier on IMDB movie reviews, using spaCy's new
    |  #[+api("textcategorizer") #[code TextCategorizer]] component. The
    |  dataset will be loaded automatically via Thinc's built-in dataset
    |  loader. Predictions are available via
    |  #[+api("doc#attributes") #[code Doc.cats]].

+github("spacy", "examples/training/train_textcat.py", 500)
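As noted above, predictions end up in `Doc.cats` as a dictionary mapping each label to a score. A minimal pure-Python sketch of reading off a decision from that dictionary (the `cats` dict below is a hypothetical prediction, not real spaCy output):

```python
# Hypothetical Doc.cats-style output for a trained binary classifier:
# each label added during training maps to a confidence score.
cats = {"POSITIVE": 0.83}

# With a single POSITIVE label, thresholding the score at 0.5 is a
# common way to turn it into a binary decision.
label = "POSITIVE" if cats["POSITIVE"] >= 0.5 else "NEGATIVE"
print(label)  # POSITIVE
```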
+h(4) Step by step guide

+list("numbers")
    +item
        |  #[strong Load the model] you want to start with, or create an
        |  #[strong empty model] using
        |  #[+api("spacy#blank") #[code spacy.blank]] with the ID of your
        |  language. If you're using an existing model, make sure to disable
        |  all other pipeline components during training using
        |  #[+api("language#disable_pipes") #[code nlp.disable_pipes]]. This
        |  way, you'll only be training the text classifier.

    +item
        |  #[strong Add the text classifier] to the pipeline, and add the
        |  labels you want to train – for example, #[code POSITIVE].

    +item
        |  #[strong Load and pre-process the dataset], shuffle the data and
        |  split off a part of it to hold back for evaluation. This way,
        |  you'll be able to see results on each training iteration.

    +item
        |  #[strong Loop over] the training examples and partition them into
        |  batches using spaCy's
        |  #[+api("top-level#util.minibatch") #[code minibatch]] and
        |  #[+api("top-level#util.compounding") #[code compounding]] helpers.

    +item
        |  #[strong Update the model] by calling
        |  #[+api("language#update") #[code nlp.update]], which steps
        |  through the examples and makes a #[strong prediction]. It then
        |  consults the annotations to see whether it was right. If it was
        |  wrong, it adjusts its weights so that the correct prediction will
        |  score higher next time.

    +item
        |  Optionally, you can also #[strong evaluate the text classifier]
        |  on each iteration, by checking how it performs on the development
        |  data held back from the dataset. This lets you print the
        |  #[strong precision], #[strong recall] and #[strong F-score].

    +item
        |  #[strong Save] the trained model using
        |  #[+api("language#to_disk") #[code nlp.to_disk]].

    +item
        |  #[strong Test] the model to make sure the text classifier works
        |  as expected.
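The batching and evaluation steps above can be sketched in pure Python. The `compounding` and `minibatch` functions below are simplified stand-ins for spaCy's utilities (assumptions for illustration, not the real implementations), and `evaluate` shows one way to compute precision, recall and F-score from `Doc.cats`-style score dicts:

```python
def compounding(start, stop, compound):
    """Yield values starting at `start`, multiplied by `compound`
    each step and capped at `stop` (simplified stand-in for
    spaCy's compounding helper)."""
    curr = start
    while True:
        yield min(curr, stop)
        curr *= compound

def minibatch(items, size):
    """Partition `items` into batches whose sizes are drawn from the
    `size` generator (simplified stand-in for spaCy's minibatch helper)."""
    items = iter(items)
    while True:
        batch_size = int(next(size))
        batch = [item for _, item in zip(range(batch_size), items)]
        if not batch:
            return
        yield batch

def evaluate(predictions, golds, threshold=0.5):
    """Compute precision, recall and F-score for a single POSITIVE label.
    `predictions` are Doc.cats-style dicts, `golds` are booleans."""
    tp = fp = fn = 1e-8  # tiny smoothing term avoids division by zero
    for cats, gold in zip(predictions, golds):
        pred = cats["POSITIVE"] >= threshold
        if pred and gold:
            tp += 1
        elif pred and not gold:
            fp += 1
        elif not pred and gold:
            fn += 1
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# Batch sizes grow from 4 toward 32, compounding by 1.5 each step.
batches = list(minibatch(range(20), size=compounding(4.0, 32.0, 1.5)))
print([len(b) for b in batches])  # [4, 6, 9, 1]
```

Growing the batch size over training, as the compounding schedule does, is a common trick: small batches early on give noisier, faster updates, while larger batches later stabilise learning.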