<figure>

| Pipeline                                                   | Parser | Tagger |  NER |
| ---------------------------------------------------------- | -----: | -----: | ---: |
| [`en_core_web_trf`](/models/en#en_core_web_trf) (spaCy v3) |   95.1 |   97.8 | 89.8 |
| [`en_core_web_lg`](/models/en#en_core_web_lg) (spaCy v3)   |   92.0 |   97.4 | 85.5 |
| `en_core_web_lg` (spaCy v2)                                |   91.9 |   97.2 | 85.5 |
<figcaption className="caption">

**Full pipeline accuracy** on the
[OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19) corpus, reported on
the development set: parser LAS, tagger accuracy and NER F-score.

</figcaption>
</figure>
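
To sanity-check numbers like these on your own data, spaCy's `evaluate` CLI
scores a trained pipeline against annotated examples in the binary `.spacy`
format. A minimal sketch, assuming spaCy v3 is installed and the OntoNotes
development set has already been converted locally (the
`./corpus/onto-dev.spacy` path is a placeholder, since OntoNotes 5.0 is
licensed and isn't bundled):

```bash
# Install the transformer extras and download the pipeline to evaluate.
pip install "spacy[transformers]"
python -m spacy download en_core_web_trf

# Score the pipeline against held-out data in spaCy's binary format.
# ./corpus/onto-dev.spacy is a placeholder for OntoNotes dev data
# converted with `spacy convert`.
python -m spacy evaluate en_core_web_trf ./corpus/onto-dev.spacy --output metrics.json
```

The resulting `metrics.json` contains `dep_las`, `tag_acc` and `ents_f`,
which correspond to the Parser, Tagger and NER columns above.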
<figure>

| Named Entity Recognition System  | OntoNotes | CoNLL '03 |
| -------------------------------- | --------: | --------: |
| spaCy RoBERTa (2020)             |      89.8 |      91.6 |
| Stanza (StanfordNLP)<sup>1</sup> |      88.8 |      92.1 |
| Flair<sup>2</sup>                |      89.7 |      93.1 |
<figcaption className="caption">

**Named entity recognition accuracy (F-score)** on the
[OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19) and
[CoNLL-2003](https://www.aclweb.org/anthology/W03-0419.pdf) corpora. See
[NLP-progress](http://nlpprogress.com/english/named_entity_recognition.html)
for more results. Project template:
[`benchmarks/ner_conll03`](%%GITHUB_PROJECTS/benchmarks/ner_conll03). **1.**
[Qi et al. (2020)](https://arxiv.org/pdf/2003.07082.pdf). **2.**
[Akbik et al. (2018)](https://www.aclweb.org/anthology/C18-1139/).

</figcaption>
</figure>
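
The spaCy result in this table comes from the project template linked in the
caption, which bundles corpus conversion, training and evaluation into a
single workflow. A sketch of running it end to end, assuming the template's
usual `all` workflow (the CoNLL-2003 corpus itself must be supplied
separately for license reasons):

```bash
# Clone the benchmark project from spaCy's projects repo and fetch its
# declared assets; licensed data has to be dropped in manually.
python -m spacy project clone benchmarks/ner_conll03
cd ner_conll03
python -m spacy project assets

# Run the full workflow: corpus conversion, training and evaluation.
python -m spacy project run all
```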