---
title: Scorer
teaser: Compute evaluation scores
tag: class
source: spacy/scorer.py
---

The `Scorer` computes and stores evaluation scores. It's typically created by
[`Language.evaluate`](/api/language#evaluate).

## Scorer.\_\_init\_\_ {#init tag="method"}

Create a new `Scorer`.

> #### Example
>
> ```python
> from spacy.scorer import Scorer
>
> scorer = Scorer()
> ```

| Name         | Type     | Description                                                   |
| ------------ | -------- | ------------------------------------------------------------- |
| `eval_punct` | bool     | Evaluate the dependency attachments to and from punctuation.  |
| **RETURNS**  | `Scorer` | The newly created object.                                     |

## Scorer.score {#score tag="method"}

Update the evaluation scores from a single [`Doc`](/api/doc) /
[`GoldParse`](/api/goldparse) pair.

> #### Example
>
> ```python
> scorer = Scorer()
> scorer.score(doc, gold)
> ```

| Name           | Type        | Description                                                                                                           |
| -------------- | ----------- | --------------------------------------------------------------------------------------------------------------------- |
| `doc`          | `Doc`       | The predicted annotations.                                                                                              |
| `gold`         | `GoldParse` | The correct annotations.                                                                                                |
| `verbose`      | bool        | Print debugging information.                                                                                            |
| `punct_labels` | tuple       | Dependency labels for punctuation. Used to evaluate dependency attachments to punctuation if `eval_punct` is `True`.   |

## Properties

| Name                                                  | Type  | Description                                                                              |
| ----------------------------------------------------- | ----- | ---------------------------------------------------------------------------------------- |
| `token_acc`                                           | float | Tokenization accuracy.                                                                    |
| `tags_acc`                                            | float | Part-of-speech tag accuracy (fine-grained tags, i.e. `Token.tag`).                       |
| `uas`                                                 | float | Unlabelled dependency score.                                                              |
| `las`                                                 | float | Labelled dependency score.                                                                |
| `ents_p`                                              | float | Named entity accuracy (precision).                                                        |
| `ents_r`                                              | float | Named entity accuracy (recall).                                                           |
| `ents_f`                                              | float | Named entity accuracy (F-score).                                                          |
| `ents_per_type` <Tag variant="new">2.1.5</Tag>        | dict  | Scores per entity label. Keyed by label, mapped to a dict of `p`, `r` and `f` scores.    |
| `textcat_f` <Tag variant="new">3.0</Tag>              | float | F-score on positive label for binary classification, macro-averaged F-score otherwise.   |
| `textcat_auc`                                         | float | Macro-averaged AUC ROC score for multilabel classification (`-1` if undefined).          |
| `textcats_f_per_cat` <Tag variant="new">3.0</Tag>     | dict  | F-scores per textcat label, keyed by label.                                               |
| `textcats_auc_per_cat` <Tag variant="new">3.0</Tag>   | dict  | ROC AUC scores per textcat label, keyed by label.                                         |
| `las_per_type` <Tag variant="new">2.2.3</Tag>         | dict  | Labelled dependency scores, keyed by label.                                               |
| `scores`                                              | dict  | All scores, keyed by type.                                                                |
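
To make the pieces above concrete, here is a minimal sketch that scores a
pipeline's named entity predictions by hand with `Scorer.score`. The model name
(`en_core_web_sm`) and the example data are placeholder assumptions, not part of
the API reference above.

```python
import spacy
from spacy.gold import GoldParse
from spacy.scorer import Scorer

# Placeholder model and data – any installed pipeline with an NER component will do.
nlp = spacy.load("en_core_web_sm")
examples = [
    (
        "Apple is looking at buying U.K. startup for $1 billion",
        {"entities": [(0, 5, "ORG"), (27, 31, "GPE"), (44, 54, "MONEY")]},
    ),
]

scorer = Scorer()
for text, annotations in examples:
    # Attach the gold annotations to an unprocessed Doc ...
    gold = GoldParse(nlp.make_doc(text), **annotations)
    # ... and score the pipeline's predictions against them.
    predicted = nlp(text)
    scorer.score(predicted, gold)

print(scorer.ents_p, scorer.ents_r, scorer.ents_f)
print(scorer.scores)  # all scores, keyed by type
```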
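
In practice this loop is usually handled by
[`Language.evaluate`](/api/language#evaluate), which creates the `Scorer`
internally and returns it. A rough sketch, continuing with the same placeholder
model and data as above:

```python
# Language.evaluate expects (Doc, GoldParse) pairs and returns the Scorer.
docs_golds = []
for text, annotations in examples:
    doc = nlp.make_doc(text)
    docs_golds.append((doc, GoldParse(doc, **annotations)))

scorer = nlp.evaluate(docs_golds)
print(scorer.scores)
```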