Mirror of https://github.com/explosion/spaCy.git

Commit 319eb508b5:
* Add a `spacy evaluate speed` subcommand

  This subcommand reports the mean batch performance of a model on a data set
  with a 95% confidence interval (see the sketch after this list). For
  reliability, it first performs some warmup rounds, then measures performance
  on batches with randomly shuffled documents. To avoid having too many spaCy
  commands, `speed` is a subcommand of `evaluate`, and accuracy evaluation is
  moved to its own `evaluate accuracy` subcommand.

* Fix import cycle
* Restore `spacy evaluate`, make `spacy benchmark speed` an alias
* Add documentation for `spacy benchmark`
* CREATES -> PRINTS
* WPS -> words/s
* Disable formatting of benchmark speed arguments
* Fail with an error message when trying to speed-bench an empty corpus
* Make it clearer that `benchmark accuracy` is a replacement for `evaluate`
* Fix docstring webpage reference
* tests: check `evaluate` output against `benchmark accuracy`
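The timing procedure described in the first bullet (warmup rounds, then timed passes over randomly shuffled batches, reported as a mean with a 95% confidence interval) can be sketched roughly as follows. This is a minimal illustration, not spaCy's actual implementation: the function `benchmark_speed`, its parameters, and the normal-approximation confidence interval are assumptions for the sketch; the real feature is invoked as `spacy benchmark speed`.

```python
# Minimal sketch (not spaCy's implementation) of the benchmark procedure:
# warmup rounds, timed passes over shuffled batches, mean words/s with a
# 95% confidence interval. All names and defaults here are illustrative.
import random
import statistics
import time

import spacy


def benchmark_speed(model_name, texts, batch_size=32, warmup=3, rounds=10):
    nlp = spacy.load(model_name)
    batches = [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
    if not batches:
        # Mirrors the commit's "fail with an error message" on an empty corpus.
        raise ValueError("Cannot benchmark speed on an empty corpus.")

    # Warmup rounds, so caches and lazily initialized state are primed
    # before any measurements are taken.
    for _ in range(warmup):
        for batch in batches:
            list(nlp.pipe(batch))

    # Timed rounds: shuffle the batches each round, then process everything.
    samples = []
    for _ in range(rounds):
        random.shuffle(batches)
        n_words = 0
        start = time.perf_counter()
        for batch in batches:
            for doc in nlp.pipe(batch):
                n_words += len(doc)
        elapsed = time.perf_counter() - start
        samples.append(n_words / elapsed)

    # 95% CI via the normal approximation: mean +/- 1.96 * standard error.
    mean = statistics.mean(samples)
    stderr = statistics.stdev(samples) / len(samples) ** 0.5
    print(f"{mean:.0f} words/s (95% CI: +/- {1.96 * stderr:.0f})")
```

As the `CREATES -> PRINTS` and `WPS -> words/s` bullets indicate, the real subcommand prints its results as words/s rather than writing output files.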
- cli/
- displacy/
- kb/
- lang/
- matcher/
- ml/
- pipeline/
- tests/
- tokens/
- training/
- __init__.pxd
- __init__.py
- __main__.py
- about.py
- attrs.pxd
- attrs.pyx
- compat.py
- default_config.cfg
- default_config_pretraining.cfg
- errors.py
- glossary.py
- language.py
- lexeme.pxd
- lexeme.pyi
- lexeme.pyx
- lookups.py
- morphology.pxd
- morphology.pyx
- parts_of_speech.pxd
- parts_of_speech.pyx
- pipe_analysis.py
- py.typed
- schemas.py
- scorer.py
- strings.pxd
- strings.pyi
- strings.pyx
- structs.pxd
- symbols.pxd
- symbols.pyx
- tokenizer.pxd
- tokenizer.pyx
- ty.py
- typedefs.pxd
- typedefs.pyx
- util.py
- vectors.pyx
- vocab.pxd
- vocab.pyi
- vocab.pyx