🏷 Add Mypy check to CI and ignore all existing Mypy errors (#9167)
* 🚨 Ignore all existing Mypy errors
* 🏗 Add Mypy check to CI
* Add types-mock and types-requests as dev requirements
* Add additional type ignore directives
* Add types packages to dev-only list in reqs test
* Add types-dataclasses for python 3.6
* Add ignore to pretrain
* 🏷 Improve type annotation on `run_command` helper
The `run_command` helper previously declared that it returned an
`Optional[subprocess.CompletedProcess]`, but it isn't actually possible
for the function to return `None`. These changes modify the type
annotation of the `run_command` helper and remove all now-unnecessary
`# type: ignore` directives.
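A minimal sketch of the narrowed signature (the helper name comes from the commit; the body is an illustrative simplification, not spaCy's actual implementation):

```python
import subprocess
from typing import List, Union


def run_command(command: Union[str, List[str]]) -> subprocess.CompletedProcess:
    """Run a command in a subprocess.

    subprocess.run() always returns a CompletedProcess (or raises an
    exception), so the old Optional[...] return annotation was never
    accurate -- callers no longer need `# type: ignore` to use the result.
    """
    if isinstance(command, str):
        command = command.split()
    return subprocess.run(command, capture_output=True)
```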
* 🔧 Allow variable type redefinition in limited contexts
These changes modify how Mypy is configured to allow variables to have
their type automatically redefined under certain conditions. The Mypy
documentation contains the following example:
```python
def process(items: List[str]) -> None:
    # 'items' has type List[str]
    items = [item.split() for item in items]
    # 'items' now has type List[List[str]]
    ...
```
This configuration change is especially helpful in reducing the number
of `# type: ignore` directives needed to handle the common pattern of:
* Accepting a filepath as a string
* Overwriting the variable using `filepath = ensure_path(filepath)`
These changes enable redefinition and remove all `# type: ignore`
directives rendered redundant by this change.
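The Mypy setting behind this is `allow_redefinition`; a minimal configuration sketch (the generic `[mypy]` section form, not necessarily spaCy's exact setup file):

```ini
[mypy]
allow_redefinition = True
```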
* 🏷 Add type annotation to converters mapping
* 🚨 Fix Mypy error in convert CLI argument verification
* 🏷 Improve type annotation on `resolve_dot_names` helper
* 🏷 Add type annotations for `Vocab` attributes `strings` and `vectors`
* 🏷 Add type annotations for more `Vocab` attributes
* 🏷 Add loose type annotation for gold data compilation
* 🏷 Improve `_format_labels` type annotation
* 🏷 Fix `get_lang_class` type annotation
* 🏷 Loosen return type of `Language.evaluate`
* 🏷 Don't accept `Scorer` in `handle_scores_per_type`
* 🏷 Add `string_to_list` overloads
* 🏷 Fix non-Optional command-line options
* 🙈 Ignore redefinition of `wandb_logger` in `loggers.py`
* ➕ Install `typing_extensions` in Python 3.8+
The `typing_extensions` package states that it should be used when
"writing code that must be compatible with multiple Python versions".
Since spaCy needs to support multiple Python versions, it should be used
when newer `typing` module members are required. One example of this is
`Literal`, which is available starting with Python 3.8.
Previously spaCy tried to import `Literal` from `typing`, falling back
to `typing_extensions` if the import failed. However, Mypy doesn't seem
to be able to understand what `Literal` means when the initial import
fails. Therefore, these changes modify how `compat` imports `Literal` by
always importing it from `typing_extensions`.
These changes also modify how `typing_extensions` is installed, so that
it is a requirement for all Python versions, including those greater
than or equal to 3.8.
* 🏷 Improve type annotation for `Language.pipe`
These changes add a missing overload variant to the type signature of
`Language.pipe`. Additionally, the type signature is enhanced to allow
type checkers to differentiate between the two overload variants based
on the `as_tuple` parameter.
Fixes #8772
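A hedged sketch of how such `Literal`-keyed overloads work in general — the class, parameter name, and return types below are simplified stand-ins, not spaCy's real `Language.pipe` signature:

```python
from typing import Any, Iterable, Iterator, Tuple, overload

try:
    from typing import Literal  # Python 3.8+
except ImportError:  # pragma: no cover
    from typing_extensions import Literal  # type: ignore


class Pipeline:
    # The Literal type of the flag lets a type checker pick the right
    # return type at each call site.
    @overload
    def pipe(self, texts: Iterable[str], as_tuples: Literal[False] = ...) -> Iterator[str]: ...
    @overload
    def pipe(self, texts: Iterable[Tuple[str, Any]], as_tuples: Literal[True]) -> Iterator[Tuple[str, Any]]: ...
    def pipe(self, texts, as_tuples=False):
        for text in texts:
            yield text
```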
* ➖ Don't install `typing-extensions` in Python 3.8+
After more detailed analysis of how to implement Python version-specific
type annotations in spaCy, it has been determined that branching
on a comparison against `sys.version_info` can be statically analyzed by
Mypy well enough to enable us to conditionally use
`typing_extensions.Literal`. This means that we no longer need to
install `typing_extensions` for Python versions greater than or equal to
3.8! 🎉
These changes revert previous changes installing `typing-extensions`
regardless of Python version and modify how we import the `Literal` type
to ensure that Mypy treats it properly.
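The resulting import presumably follows the standard version-branch idiom (a sketch; the actual module in spaCy is `compat`):

```python
import sys

# Mypy understands direct comparisons against sys.version_info, so it
# follows only the branch matching the targeted Python version.
if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal

Answer = Literal["yes", "no"]


def confirm(value: Answer) -> bool:
    return value == "yes"
```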
* resolve mypy errors for Strict pydantic types
* refactor code to avoid missing return statement
* fix types of convert CLI command
* avoid list-set confusion in debug_data
* fix typo and formatting
* small fixes to avoid type ignores
* fix types in profile CLI command and make it more efficient
* type fixes in projects CLI
* put one ignore back
* type fixes for render
* fix render types - the sequel
* fix BaseDefault in language definitions
* fix type of noun_chunks iterator - yields tuple instead of span
* fix types in language-specific modules
* 🏷 Expand accepted inputs of `get_string_id`
`get_string_id` accepts either a string (in which case it returns its
ID) or an ID (in which case it immediately returns the ID). These
changes extend the type annotation of `get_string_id` to indicate that
it can accept either strings or IDs.
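In general form (a simplified stand-in — spaCy's real implementation hashes via the `StringStore`, not Python's built-in `hash`):

```python
from typing import Union


def get_string_id(key: Union[str, int]) -> int:
    # IDs pass straight through; strings are mapped to an ID.
    if isinstance(key, int):
        return key
    return hash(key)  # placeholder for spaCy's stable string hash
```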
* 🏷 Handle override types in `combine_score_weights`
The `combine_score_weights` function allows users to pass an `overrides`
mapping to override data extracted from the `weights` argument. Since it
allows `Optional` dictionary values, the return value may also include
`Optional` dictionary values.
These changes update the type annotations for `combine_score_weights` to
reflect this fact.
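A minimal sketch of why `Optional` propagates into the return type (the merging logic here is illustrative; spaCy's actual helper also normalizes the weights):

```python
from typing import Dict, List, Optional


def combine_score_weights(
    weights: List[Dict[str, float]],
    overrides: Optional[Dict[str, Optional[float]]] = None,
) -> Dict[str, Optional[float]]:
    # An override value of None is allowed, so the combined mapping may
    # contain None -- the return type has to be Dict[str, Optional[float]].
    result: Dict[str, Optional[float]] = {}
    for weight_dict in weights:
        result.update(weight_dict)
    result.update(overrides or {})
    return result
```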
* 🏷 Fix tokenizer serialization method signatures in `DummyTokenizer`
* 🏷 Fix redefinition of `wandb_logger`
These changes fix the redefinition of `wandb_logger` by giving a
separate name to each `WandbLogger` version. For
backwards-compatibility, `spacy.train` still exports `wandb_logger_v3`
as `wandb_logger` for now.
* more fixes for typing in language
* type fixes in model definitions
* 🏷 Annotate `_RandomWords.probs` as `NDArray`
* 🏷 Annotate `tok2vec` layers to help Mypy
* 🐛 Fix `_RandomWords.probs` type annotations for Python 3.6
Also remove an import that I forgot to move to the top of the module 😅
* more fixes for matchers and other pipeline components
* quick fix for entity linker
* fixing types for spancat, textcat, etc
* bugfix for tok2vec
* type annotations for scorer
* add runtime_checkable for Protocol
* type and import fixes in tests
* mypy fixes for training utilities
* few fixes in util
* fix import
* 🐵 Remove unused `# type: ignore` directives
* 🏷 Annotate `Language._components`
* 🏷 Annotate `spacy.pipeline.Pipe`
* add doc as property to span.pyi
* small fixes and cleanup
* explicit type annotations instead of via comment
Co-authored-by: Adriane Boyd <adrianeboyd@gmail.com>
Co-authored-by: svlandeg <sofie.vanlandeghem@gmail.com>
Co-authored-by: svlandeg <svlandeg@github.com>
2021-10-14 13:21:40 +00:00
|
|
|
from typing import Any, Dict, Iterable, List, Optional, Sequence, Set, Tuple, Union
|
|
|
|
from typing import cast, overload
|
2018-11-30 19:16:14 +00:00
|
|
|
from pathlib import Path
|
|
|
|
from collections import Counter
|
|
|
|
import sys
|
💫 Replace ujson, msgpack and dill/pickle/cloudpickle with srsly (#3003)
Remove hacks and wrappers, keep code in sync across our libraries and move spaCy a few steps closer to only depending on packages with binary wheels 🎉
See here: https://github.com/explosion/srsly
Serialization is hard, especially across Python versions and multiple platforms. After dealing with many subtle bugs over the years (encodings, locales, large files) our libraries like spaCy and Prodigy have steadily grown a number of utility functions to wrap the multiple serialization formats we need to support (especially json, msgpack and pickle). These wrapping functions ended up duplicated across our codebases, so we wanted to put them in one place.
At the same time, we noticed that having a lot of small dependencies was making maintenance harder, and making installation slower. To solve this, we've made srsly standalone, by including the component packages directly within it. This way we can provide all the serialization utilities we need in a single binary wheel.
srsly currently includes forks of the following packages:
ujson
msgpack
msgpack-numpy
cloudpickle
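A quick illustration of the unified API (call names like `json_dumps`/`msgpack_loads` are from srsly's documented surface; exact availability depends on the installed version):

```python
import srsly

data = {"text": "hello", "label": "GREETING"}

# One package covers the formats that used to require separate dependencies.
s = srsly.json_dumps(data)
assert srsly.json_loads(s)["text"] == "hello"

b = srsly.msgpack_dumps(data)
assert srsly.msgpack_loads(b)["label"] == "GREETING"
```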
* WIP: replace json/ujson with srsly
* Replace ujson in examples
Use regular json instead of srsly to make code easier to read and follow
* Update requirements
* Fix imports
* Fix typos
* Replace msgpack with srsly
* Fix warning
2018-12-03 00:28:22 +00:00
|
|
|
import srsly
|
2020-08-17 19:38:20 +00:00
|
|
|
from wasabi import Printer, MESSAGES, msg
|
2020-07-11 11:03:53 +00:00
|
|
|
import typer
|
2018-11-30 19:16:14 +00:00
|
|
|
|
2020-07-11 11:03:53 +00:00
|
|
|
from ._util import app, Arg, Opt, show_validation_error, parse_config_overrides
|
2020-09-28 13:09:59 +00:00
|
|
|
from ._util import import_code, debug_cli
|
2020-09-29 21:07:11 +00:00
|
|
|
from ..training import Example
|
2020-09-28 13:09:59 +00:00
|
|
|
from ..training.initialize import get_sourced_components
|
|
|
|
from ..schemas import ConfigSchemaTraining
|
2020-07-30 21:30:54 +00:00
|
|
|
from ..pipeline._parser_internals import nonproj
|
2021-01-07 17:58:13 +00:00
|
|
|
from ..pipeline._parser_internals.nonproj import DELIMITER
|
2021-04-09 09:53:42 +00:00
|
|
|
from ..pipeline import Morphologizer
|
|
|
|
from ..morphology import Morphology
|
2020-06-21 19:35:01 +00:00
|
|
|
from ..language import Language
|
2020-09-29 21:07:11 +00:00
|
|
|
from ..util import registry, resolve_dot_names
|
2021-10-14 13:21:40 +00:00
|
|
|
from ..compat import Literal
|
2020-07-11 11:03:53 +00:00
|
|
|
from .. import util
|
2018-11-30 19:16:14 +00:00
|
|
|
|
|
|
|
|
2019-08-16 08:52:46 +00:00
|
|
|
# Minimum number of expected occurrences of NER label in data to train new label
|
2018-11-30 19:16:14 +00:00
|
|
|
NEW_LABEL_THRESHOLD = 50
|
2019-08-16 08:52:46 +00:00
|
|
|
# Minimum number of expected occurrences of dependency labels
|
|
|
|
DEP_LABEL_THRESHOLD = 20
|
2020-09-03 11:13:03 +00:00
|
|
|
# Minimum number of expected examples to train a new pipeline
|
2018-11-30 19:16:14 +00:00
|
|
|
BLANK_MODEL_MIN_THRESHOLD = 100
|
|
|
|
BLANK_MODEL_THRESHOLD = 2000
|
|
|
|
|
|
|
|
|
2020-07-12 11:53:41 +00:00
|
|
|
@debug_cli.command(
|
2020-09-29 19:39:28 +00:00
|
|
|
"data", context_settings={"allow_extra_args": True, "ignore_unknown_options": True}
|
2020-07-12 11:53:41 +00:00
|
|
|
)
|
2020-07-11 17:17:59 +00:00
|
|
|
@app.command(
|
|
|
|
"debug-data",
|
|
|
|
context_settings={"allow_extra_args": True, "ignore_unknown_options": True},
|
2020-07-12 11:53:41 +00:00
|
|
|
hidden=True, # hide this from main CLI help but still allow it to work with warning
|
2020-07-11 17:17:59 +00:00
|
|
|
)
|
2020-06-21 19:35:01 +00:00
|
|
|
def debug_data_cli(
|
2020-03-09 10:17:20 +00:00
|
|
|
# fmt: off
|
2020-07-11 11:03:53 +00:00
|
|
|
ctx: typer.Context, # This is only used to read additional arguments
|
2020-12-08 09:41:18 +00:00
|
|
|
config_path: Path = Arg(..., help="Path to config file", exists=True, allow_dash=True),
|
2021-03-12 08:51:26 +00:00
|
|
|
code_path: Optional[Path] = Opt(None, "--code-path", "--code", "-c", help="Path to Python file with additional code (registered functions) to be imported"),
|
2020-06-21 11:44:00 +00:00
|
|
|
ignore_warnings: bool = Opt(False, "--ignore-warnings", "-IW", help="Ignore warnings, only show stats and errors"),
|
|
|
|
verbose: bool = Opt(False, "--verbose", "-V", help="Print additional information and explanations"),
|
|
|
|
no_format: bool = Opt(False, "--no-format", "-NF", help="Don't pretty-print the results"),
|
2020-03-09 10:17:20 +00:00
|
|
|
# fmt: on
|
2018-11-30 19:16:14 +00:00
|
|
|
):
|
2019-09-12 13:26:20 +00:00
|
|
|
"""
|
2020-07-12 11:53:41 +00:00
|
|
|
Analyze, debug and validate your training and development data. Outputs
|
|
|
|
useful stats, and can help you find problems like invalid entity annotations,
|
|
|
|
cyclic dependencies, low data labels and more.
|
2020-09-04 10:58:50 +00:00
|
|
|
|
2021-01-30 09:09:38 +00:00
|
|
|
DOCS: https://spacy.io/api/cli#debug-data
|
2019-09-12 13:26:20 +00:00
|
|
|
"""
|
2020-07-12 11:53:41 +00:00
|
|
|
if ctx.command.name == "debug-data":
|
|
|
|
msg.warn(
|
|
|
|
"The debug-data command is now available via the 'debug data' "
|
|
|
|
"subcommand (without the hyphen). You can run python -m spacy debug "
|
|
|
|
"--help for an overview of the other available debugging commands."
|
|
|
|
)
|
2020-07-11 11:03:53 +00:00
|
|
|
overrides = parse_config_overrides(ctx.args)
|
|
|
|
import_code(code_path)
|
2020-06-21 19:35:01 +00:00
|
|
|
debug_data(
|
2020-07-11 11:03:53 +00:00
|
|
|
config_path,
|
|
|
|
config_overrides=overrides,
|
2020-06-21 19:35:01 +00:00
|
|
|
ignore_warnings=ignore_warnings,
|
|
|
|
verbose=verbose,
|
|
|
|
no_format=no_format,
|
|
|
|
silent=False,
|
|
|
|
)
|
|
|
|
|
2018-11-30 19:16:14 +00:00
|
|
|
|
2020-06-21 19:35:01 +00:00
|
|
|
def debug_data(
|
2020-07-11 11:03:53 +00:00
|
|
|
config_path: Path,
|
2020-06-21 19:35:01 +00:00
|
|
|
*,
|
2020-07-11 11:03:53 +00:00
|
|
|
config_overrides: Dict[str, Any] = {},
|
2020-06-21 19:35:01 +00:00
|
|
|
ignore_warnings: bool = False,
|
|
|
|
verbose: bool = False,
|
|
|
|
no_format: bool = True,
|
|
|
|
silent: bool = True,
|
|
|
|
):
|
|
|
|
msg = Printer(
|
|
|
|
no_print=silent, pretty=not no_format, ignore_warnings=ignore_warnings
|
|
|
|
)
|
2018-11-30 19:16:14 +00:00
|
|
|
# Make sure all files and paths exists if they are needed
|
2020-08-02 13:18:30 +00:00
|
|
|
with show_validation_error(config_path):
|
2020-08-14 12:06:22 +00:00
|
|
|
cfg = util.load_config(config_path, overrides=config_overrides)
|
2020-09-27 20:21:31 +00:00
|
|
|
nlp = util.load_model_from_config(cfg)
|
2020-09-29 21:07:11 +00:00
|
|
|
config = nlp.config.interpolate()
|
|
|
|
T = registry.resolve(config["training"], schema=ConfigSchemaTraining)
|
2020-08-04 21:39:19 +00:00
|
|
|
# Use original config here, not resolved version
|
|
|
|
sourced_components = get_sourced_components(cfg)
|
2020-09-28 13:09:59 +00:00
|
|
|
frozen_components = T["frozen_components"]
|
2020-08-04 21:39:19 +00:00
|
|
|
resume_components = [p for p in sourced_components if p not in frozen_components]
|
2020-07-22 11:42:59 +00:00
|
|
|
pipeline = nlp.pipe_names
|
|
|
|
factory_names = [nlp.get_pipe_meta(pipe).factory for pipe in nlp.pipe_names]
|
2020-07-11 11:03:53 +00:00
|
|
|
msg.divider("Data file validation")
|
2018-11-30 19:16:14 +00:00
|
|
|
|
|
|
|
# Create the gold corpus to be able to better analyze data
|
2020-09-29 21:07:11 +00:00
|
|
|
dot_names = [T["train_corpus"], T["dev_corpus"]]
|
|
|
|
train_corpus, dev_corpus = resolve_dot_names(config, dot_names)
|
2021-07-21 20:38:40 +00:00
|
|
|
|
|
|
|
nlp.initialize(lambda: train_corpus(nlp))
|
|
|
|
msg.good("Pipeline can be initialized with data")
|
|
|
|
|
2020-09-29 21:07:11 +00:00
|
|
|
train_dataset = list(train_corpus(nlp))
|
|
|
|
dev_dataset = list(dev_corpus(nlp))
|
2018-11-30 19:16:14 +00:00
|
|
|
msg.good("Corpus is loadable")
|
|
|
|
|
2019-11-11 16:35:27 +00:00
|
|
|
# Create all gold data here to avoid iterating over the train_dataset constantly
|
2020-07-22 11:42:59 +00:00
|
|
|
gold_train_data = _compile_gold(train_dataset, factory_names, nlp, make_proj=True)
|
2020-07-04 14:25:34 +00:00
|
|
|
gold_train_unpreprocessed_data = _compile_gold(
|
2020-07-22 11:42:59 +00:00
|
|
|
train_dataset, factory_names, nlp, make_proj=False
|
2020-07-04 14:25:34 +00:00
|
|
|
)
|
2020-07-22 11:42:59 +00:00
|
|
|
gold_dev_data = _compile_gold(dev_dataset, factory_names, nlp, make_proj=True)
|
2019-08-16 08:52:46 +00:00
|
|
|
|
|
|
|
train_texts = gold_train_data["texts"]
|
|
|
|
dev_texts = gold_dev_data["texts"]
|
2020-09-28 13:09:59 +00:00
|
|
|
frozen_components = T["frozen_components"]
|
2018-11-30 19:16:14 +00:00
|
|
|
|
|
|
|
msg.divider("Training stats")
|
2020-09-28 13:09:59 +00:00
|
|
|
msg.text(f"Language: {nlp.lang}")
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.text(f"Training pipeline: {', '.join(pipeline)}")
|
2020-08-04 21:39:19 +00:00
|
|
|
if resume_components:
|
2020-09-03 11:13:03 +00:00
|
|
|
msg.text(f"Components from other pipelines: {', '.join(resume_components)}")
|
2020-08-04 21:39:19 +00:00
|
|
|
if frozen_components:
|
|
|
|
msg.text(f"Frozen components: {', '.join(frozen_components)}")
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.text(f"{len(train_dataset)} training docs")
|
2020-01-08 15:51:51 +00:00
|
|
|
msg.text(f"{len(dev_dataset)} evaluation docs")
|
2018-11-30 19:16:14 +00:00
|
|
|
|
2019-11-13 14:25:03 +00:00
|
|
|
if not len(gold_dev_data):
|
2019-11-02 15:08:11 +00:00
|
|
|
msg.fail("No evaluation docs")
|
2018-11-30 19:16:14 +00:00
|
|
|
overlap = len(train_texts.intersection(dev_texts))
|
|
|
|
if overlap:
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.warn(f"{overlap} training examples also in evaluation data")
|
2018-11-30 19:16:14 +00:00
|
|
|
else:
|
|
|
|
msg.good("No overlap between training and evaluation data")
|
2020-08-04 21:39:19 +00:00
|
|
|
# TODO: make this feedback more fine-grained and report on updated
|
|
|
|
# components vs. blank components
|
|
|
|
if not resume_components and len(train_dataset) < BLANK_MODEL_THRESHOLD:
|
2020-09-03 11:13:03 +00:00
|
|
|
text = f"Low number of examples to train a new pipeline ({len(train_dataset)})"
|
2019-11-11 16:35:27 +00:00
|
|
|
if len(train_dataset) < BLANK_MODEL_MIN_THRESHOLD:
|
2018-11-30 19:16:14 +00:00
|
|
|
msg.fail(text)
|
|
|
|
else:
|
|
|
|
msg.warn(text)
|
|
|
|
msg.text(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"It's recommended to use at least {BLANK_MODEL_THRESHOLD} examples "
|
|
|
|
f"(minimum {BLANK_MODEL_MIN_THRESHOLD})",
|
2018-11-30 19:16:14 +00:00
|
|
|
show=verbose,
|
|
|
|
)
|
|
|
|
|
|
|
|
msg.divider("Vocab & Vectors")
|
2019-08-16 08:52:46 +00:00
|
|
|
n_words = gold_train_data["n_words"]
|
2018-11-30 19:16:14 +00:00
|
|
|
msg.info(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"{n_words} total word(s) in the data ({len(gold_train_data['words'])} unique)"
|
2018-11-30 19:16:14 +00:00
|
|
|
)
|
2019-08-16 08:52:46 +00:00
|
|
|
if gold_train_data["n_misaligned_words"] > 0:
|
2019-12-22 00:53:56 +00:00
|
|
|
n_misaligned = gold_train_data["n_misaligned_words"]
|
|
|
|
msg.warn(f"{n_misaligned} misaligned tokens in the training data")
|
2019-08-16 08:52:46 +00:00
|
|
|
if gold_dev_data["n_misaligned_words"] > 0:
|
2019-12-22 00:53:56 +00:00
|
|
|
n_misaligned = gold_dev_data["n_misaligned_words"]
|
|
|
|
msg.warn(f"{n_misaligned} misaligned tokens in the dev data")
|
2019-08-16 08:52:46 +00:00
|
|
|
most_common_words = gold_train_data["words"].most_common(10)
|
2018-11-30 19:16:14 +00:00
|
|
|
msg.text(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"10 most common words: {_format_labels(most_common_words, counts=True)}",
|
2018-11-30 19:16:14 +00:00
|
|
|
show=verbose,
|
|
|
|
)
|
|
|
|
if len(nlp.vocab.vectors):
|
|
|
|
msg.info(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"{len(nlp.vocab.vectors)} vectors ({nlp.vocab.vectors.n_keys} "
|
|
|
|
f"unique keys, {nlp.vocab.vectors_length} dimensions)"
|
2018-11-30 19:16:14 +00:00
|
|
|
)
|
2020-04-29 10:56:46 +00:00
|
|
|
n_missing_vectors = sum(gold_train_data["words_missing_vectors"].values())
|
|
|
|
msg.warn(
|
2021-04-27 07:16:35 +00:00
|
|
|
"{} words in training data without vectors ({:.0f}%)".format(
|
2021-05-17 11:27:04 +00:00
|
|
|
n_missing_vectors,
|
|
|
|
100 * (n_missing_vectors / gold_train_data["n_words"]),
|
2020-04-29 10:56:46 +00:00
|
|
|
),
|
|
|
|
)
|
|
|
|
msg.text(
|
2020-05-21 12:14:01 +00:00
|
|
|
"10 most common words without vectors: {}".format(
|
|
|
|
_format_labels(
|
|
|
|
gold_train_data["words_missing_vectors"].most_common(10),
|
|
|
|
counts=True,
|
|
|
|
)
|
|
|
|
),
|
|
|
|
show=verbose,
|
2020-04-29 10:56:46 +00:00
|
|
|
)
|
2018-11-30 19:16:14 +00:00
|
|
|
else:
|
2020-09-03 11:13:03 +00:00
|
|
|
msg.info("No word vectors present in the package")
|
2018-11-30 19:16:14 +00:00
|
|
|
|
2020-07-22 11:42:59 +00:00
|
|
|
if "ner" in factory_names:
|
2018-11-30 19:16:14 +00:00
|
|
|
# Get all unique NER labels present in the data
|
2019-08-18 13:09:16 +00:00
|
|
|
labels = set(
|
2019-11-13 14:25:03 +00:00
|
|
|
label for label in gold_train_data["ner"] if label not in ("O", "-", None)
|
2019-08-18 13:09:16 +00:00
|
|
|
)
|
2019-08-16 08:52:46 +00:00
|
|
|
label_counts = gold_train_data["ner"]
|
2018-11-30 19:16:14 +00:00
|
|
|
model_labels = _get_labels_from_model(nlp, "ner")
|
|
|
|
has_low_data_warning = False
|
|
|
|
has_no_neg_warning = False
|
2018-12-08 10:49:43 +00:00
|
|
|
has_ws_ents_error = False
|
2020-01-06 13:59:28 +00:00
|
|
|
has_punct_ents_warning = False
|
2018-11-30 19:16:14 +00:00
|
|
|
|
|
|
|
msg.divider("Named Entity Recognition")
|
2021-04-09 09:53:42 +00:00
|
|
|
msg.info(f"{len(model_labels)} label(s)")
|
2018-11-30 19:16:14 +00:00
|
|
|
missing_values = label_counts["-"]
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.text(f"{missing_values} missing value(s) (tokens with '-' label)")
|
2021-04-09 09:53:42 +00:00
|
|
|
for label in labels:
|
2019-10-21 10:20:28 +00:00
|
|
|
if len(label) == 0:
|
2021-04-09 09:53:42 +00:00
|
|
|
msg.fail("Empty label found in train data")
|
|
|
|
labels_with_counts = [
|
|
|
|
(label, count)
|
|
|
|
for label, count in label_counts.most_common()
|
|
|
|
if label != "-"
|
|
|
|
]
|
|
|
|
labels_with_counts = _format_labels(labels_with_counts, counts=True)
|
|
|
|
msg.text(f"Labels in train data: {_format_labels(labels)}", show=verbose)
|
|
|
|
missing_labels = model_labels - labels
|
|
|
|
if missing_labels:
|
|
|
|
msg.warn(
|
|
|
|
"Some model labels are not present in the train data. The "
|
|
|
|
"model performance may be degraded for these labels after "
|
|
|
|
f"training: {_format_labels(missing_labels)}."
|
|
|
|
)
|
2019-08-16 08:52:46 +00:00
|
|
|
if gold_train_data["ws_ents"]:
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.fail(f"{gold_train_data['ws_ents']} invalid whitespace entity spans")
|
2018-12-08 10:49:43 +00:00
|
|
|
has_ws_ents_error = True
|
|
|
|
|
2020-01-06 13:59:28 +00:00
|
|
|
if gold_train_data["punct_ents"]:
|
2020-02-18 14:38:18 +00:00
|
|
|
msg.warn(f"{gold_train_data['punct_ents']} entity span(s) with punctuation")
|
2020-01-06 13:59:28 +00:00
|
|
|
has_punct_ents_warning = True
|
|
|
|
|
2021-04-09 09:53:42 +00:00
|
|
|
for label in labels:
|
2018-11-30 19:16:14 +00:00
|
|
|
if label_counts[label] <= NEW_LABEL_THRESHOLD:
|
|
|
|
msg.warn(
|
2021-04-09 09:53:42 +00:00
|
|
|
f"Low number of examples for label '{label}' ({label_counts[label]})"
|
2018-11-30 19:16:14 +00:00
|
|
|
)
|
|
|
|
has_low_data_warning = True
|
|
|
|
|
|
|
|
with msg.loading("Analyzing label distribution..."):
|
2019-11-11 16:35:27 +00:00
|
|
|
neg_docs = _get_examples_without_label(train_dataset, label)
|
2018-11-30 19:16:14 +00:00
|
|
|
if neg_docs == 0:
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.warn(f"No examples for texts WITHOUT new label '{label}'")
|
2018-11-30 19:16:14 +00:00
|
|
|
has_no_neg_warning = True
|
|
|
|
|
|
|
|
if not has_low_data_warning:
|
|
|
|
msg.good("Good amount of examples for all labels")
|
|
|
|
if not has_no_neg_warning:
|
2019-08-16 08:53:38 +00:00
|
|
|
msg.good("Examples without occurrences available for all labels")
|
2018-12-08 10:49:43 +00:00
|
|
|
if not has_ws_ents_error:
|
|
|
|
msg.good("No entities consisting of or starting/ending with whitespace")
|
2020-01-06 13:59:28 +00:00
|
|
|
if not has_punct_ents_warning:
|
|
|
|
msg.good("No entities consisting of or starting/ending with punctuation")
|
2018-11-30 19:16:14 +00:00
|
|
|
|
|
|
|
if has_low_data_warning:
|
|
|
|
msg.text(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"To train a new entity type, your data should include at "
|
|
|
|
f"least {NEW_LABEL_THRESHOLD} instances of the new label",
|
2018-11-30 19:16:14 +00:00
|
|
|
show=verbose,
|
|
|
|
)
|
|
|
|
if has_no_neg_warning:
|
|
|
|
msg.text(
|
|
|
|
"Training data should always include examples of entities "
|
|
|
|
"in context, as well as examples without a given entity "
|
|
|
|
"type.",
|
|
|
|
show=verbose,
|
|
|
|
)
|
2018-12-08 10:49:43 +00:00
|
|
|
if has_ws_ents_error:
|
|
|
|
msg.text(
|
|
|
|
"As of spaCy v2.1.0, entity spans consisting of or starting/ending "
|
|
|
|
"with whitespace characters are considered invalid."
|
|
|
|
)
|
2018-11-30 19:16:14 +00:00
|
|
|
|
2020-01-06 13:59:28 +00:00
|
|
|
if has_punct_ents_warning:
|
|
|
|
msg.text(
|
|
|
|
"Entity spans consisting of or starting/ending "
|
|
|
|
"with punctuation can not be trained with a noise level > 0."
|
|
|
|
)
|
|
|
|
|
2020-07-22 11:42:59 +00:00
|
|
|
if "textcat" in factory_names:
    msg.divider("Text Classification (Exclusive Classes)")
    labels = _get_labels_from_model(nlp, "textcat")
    msg.info(f"Text Classification: {len(labels)} label(s)")
    msg.text(f"Labels: {_format_labels(labels)}", show=verbose)
    missing_labels = labels - set(gold_train_data["cats"])
    if missing_labels:
        msg.warn(
            "Some model labels are not present in the train data. The "
            "model performance may be degraded for these labels after "
            f"training: {_format_labels(missing_labels)}."
        )
    if set(gold_train_data["cats"]) != set(gold_dev_data["cats"]):
        msg.warn(
            "Potential train/dev mismatch: the train and dev labels are "
            "not the same. "
            f"Train labels: {_format_labels(gold_train_data['cats'])}. "
            f"Dev labels: {_format_labels(gold_dev_data['cats'])}."
        )
    if len(labels) < 2:
        msg.fail(
            "The model does not have enough labels. 'textcat' requires at "
            "least two labels due to mutually-exclusive classes, e.g. "
            "LABEL/NOT_LABEL or POSITIVE/NEGATIVE for a binary "
            "classification task."
        )
    if (
        gold_train_data["n_cats_bad_values"] > 0
        or gold_dev_data["n_cats_bad_values"] > 0
    ):
        msg.fail(
            "Unsupported values for cats: the supported values are "
            "1.0/True and 0.0/False."
        )
    if gold_train_data["n_cats_multilabel"] > 0:
        # Note: you should never get here because you run into E895 on
        # initialization first.
        msg.fail(
            "The train data contains instances without mutually-exclusive "
            "classes. Use the component 'textcat_multilabel' instead of "
            "'textcat'."
        )
    if gold_dev_data["n_cats_multilabel"] > 0:
        msg.fail(
            "The dev data contains instances without mutually-exclusive "
            "classes. Use the component 'textcat_multilabel' instead of "
            "'textcat'."
        )

if "textcat_multilabel" in factory_names:
    msg.divider("Text Classification (Multilabel)")
    labels = _get_labels_from_model(nlp, "textcat_multilabel")
    msg.info(f"Text Classification: {len(labels)} label(s)")
    msg.text(f"Labels: {_format_labels(labels)}", show=verbose)
    missing_labels = labels - set(gold_train_data["cats"])
    if missing_labels:
        msg.warn(
            "Some model labels are not present in the train data. The "
            "model performance may be degraded for these labels after "
            f"training: {_format_labels(missing_labels)}."
        )
    if set(gold_train_data["cats"]) != set(gold_dev_data["cats"]):
        msg.warn(
            "Potential train/dev mismatch: the train and dev labels are "
            "not the same. "
            f"Train labels: {_format_labels(gold_train_data['cats'])}. "
            f"Dev labels: {_format_labels(gold_dev_data['cats'])}."
        )
    if (
        gold_train_data["n_cats_bad_values"] > 0
        or gold_dev_data["n_cats_bad_values"] > 0
    ):
        msg.fail(
            "Unsupported values for cats: the supported values are "
            "1.0/True and 0.0/False."
        )
    if gold_train_data["n_cats_multilabel"] > 0:
        if gold_dev_data["n_cats_multilabel"] == 0:
            msg.warn(
                "Potential train/dev mismatch: the train data contains "
                "instances without mutually-exclusive classes while the "
                "dev data contains only instances with mutually-exclusive "
                "classes."
            )
    else:
        msg.warn(
            "The train data contains only instances with "
            "mutually-exclusive classes. You can potentially use the "
            "component 'textcat' instead of 'textcat_multilabel'."
        )
        if gold_dev_data["n_cats_multilabel"] > 0:
            msg.fail(
                "Train/dev mismatch: the dev data contains instances "
                "without mutually-exclusive classes while the train data "
                "contains only instances with mutually-exclusive classes."
            )

if "tagger" in factory_names:
    msg.divider("Part-of-speech Tagging")
    label_list = [label for label in gold_train_data["tags"]]
    model_labels = _get_labels_from_model(nlp, "tagger")
    msg.info(f"{len(label_list)} label(s) in train data")
    labels = set(label_list)
    missing_labels = model_labels - labels
    if missing_labels:
        msg.warn(
            "Some model labels are not present in the train data. The "
            "model performance may be degraded for these labels after "
            f"training: {_format_labels(missing_labels)}."
        )
    labels_with_counts = _format_labels(
        gold_train_data["tags"].most_common(), counts=True
    )
    msg.text(labels_with_counts, show=verbose)

if "morphologizer" in factory_names:
    msg.divider("Morphologizer (POS+Morph)")
    label_list = [label for label in gold_train_data["morphs"]]
    model_labels = _get_labels_from_model(nlp, "morphologizer")
    msg.info(f"{len(label_list)} label(s) in train data")
    labels = set(label_list)
    missing_labels = model_labels - labels
    if missing_labels:
        msg.warn(
            "Some model labels are not present in the train data. The "
            "model performance may be degraded for these labels after "
            f"training: {_format_labels(missing_labels)}."
        )
    labels_with_counts = _format_labels(
        gold_train_data["morphs"].most_common(), counts=True
    )
    msg.text(labels_with_counts, show=verbose)

if "parser" in factory_names:
    has_low_data_warning = False
    msg.divider("Dependency Parsing")

    # profile sentence length
    msg.info(
        f"Found {gold_train_data['n_sents']} sentence(s) with an average "
        f"length of {gold_train_data['n_words'] / gold_train_data['n_sents']:.1f} words."
    )

    # check for documents with multiple sentences
    sents_per_doc = gold_train_data["n_sents"] / len(gold_train_data["texts"])
    if sents_per_doc < 1.1:
        msg.warn(
            f"The training data contains {sents_per_doc:.2f} sentences per "
            f"document. When there are very few documents containing more "
            f"than one sentence, the parser will not learn how to segment "
            f"longer texts into sentences."
        )

    # profile labels
    labels_train = [label for label in gold_train_data["deps"]]
    labels_train_unpreprocessed = [
        label for label in gold_train_unpreprocessed_data["deps"]
    ]
    labels_dev = [label for label in gold_dev_data["deps"]]

    if gold_train_unpreprocessed_data["n_nonproj"] > 0:
        n_nonproj = gold_train_unpreprocessed_data["n_nonproj"]
        msg.info(f"Found {n_nonproj} nonprojective train sentence(s)")
    if gold_dev_data["n_nonproj"] > 0:
        n_nonproj = gold_dev_data["n_nonproj"]
        msg.info(f"Found {n_nonproj} nonprojective dev sentence(s)")
    msg.info(f"{len(labels_train_unpreprocessed)} label(s) in train data")
    msg.info(f"{len(labels_train)} label(s) in projectivized train data")
    labels_with_counts = _format_labels(
        gold_train_unpreprocessed_data["deps"].most_common(), counts=True
    )
    msg.text(labels_with_counts, show=verbose)

    # rare labels in train
    for label in gold_train_unpreprocessed_data["deps"]:
        if gold_train_unpreprocessed_data["deps"][label] <= DEP_LABEL_THRESHOLD:
            msg.warn(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"Low number of examples for label '{label}' "
|
|
|
|
f"({gold_train_unpreprocessed_data['deps'][label]})"
|
2019-08-16 08:52:46 +00:00
|
|
|
)
|
|
|
|
has_low_data_warning = True
|
|
|
|
|
|
|
|
# rare labels in projectivized train
|
|
|
|
rare_projectivized_labels = []
|
|
|
|
for label in gold_train_data["deps"]:
|
2021-01-15 00:57:36 +00:00
|
|
|
if (
|
|
|
|
gold_train_data["deps"][label] <= DEP_LABEL_THRESHOLD
|
|
|
|
and DELIMITER in label
|
|
|
|
):
|
2019-08-18 13:09:16 +00:00
|
|
|
rare_projectivized_labels.append(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"{label}: {gold_train_data['deps'][label]}"
|
2019-08-18 13:09:16 +00:00
|
|
|
)
|
2019-08-16 08:52:46 +00:00
|
|
|
|
|
|
|
if len(rare_projectivized_labels) > 0:
|
2019-08-18 13:09:16 +00:00
|
|
|
msg.warn(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"Low number of examples for {len(rare_projectivized_labels)} "
|
|
|
|
"label(s) in the projectivized dependency trees used for "
|
|
|
|
"training. You may want to projectivize labels such as punct "
|
|
|
|
"before training in order to improve parser performance."
|
2019-08-18 13:09:16 +00:00
|
|
|
)
|
|
|
|
msg.warn(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"Projectivized labels with low numbers of examples: ",
|
|
|
|
", ".join(rare_projectivized_labels),
|
2019-08-18 13:09:16 +00:00
|
|
|
show=verbose,
|
|
|
|
)
|
|
|
|
has_low_data_warning = True
|
2019-08-16 08:52:46 +00:00
|
|
|
|
|
|
|
# labels only in train
|
|
|
|
if set(labels_train) - set(labels_dev):
|
|
|
|
msg.warn(
|
2019-12-22 00:53:56 +00:00
|
|
|
"The following labels were found only in the train data:",
|
|
|
|
", ".join(set(labels_train) - set(labels_dev)),
|
2019-08-18 13:09:16 +00:00
|
|
|
show=verbose,
|
2019-08-16 08:52:46 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
# labels only in dev
|
|
|
|
if set(labels_dev) - set(labels_train):
|
|
|
|
msg.warn(
|
2019-12-22 00:53:56 +00:00
|
|
|
"The following labels were found only in the dev data:",
|
|
|
|
", ".join(set(labels_dev) - set(labels_train)),
|
2019-08-18 13:09:16 +00:00
|
|
|
show=verbose,
|
2019-08-16 08:52:46 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
if has_low_data_warning:
|
|
|
|
msg.text(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"To train a parser, your data should include at "
|
|
|
|
f"least {DEP_LABEL_THRESHOLD} instances of each label.",
|
2019-08-16 08:52:46 +00:00
|
|
|
show=verbose,
|
|
|
|
)
|
|
|
|
|
|
|
|
# multiple root labels
|
|
|
|
if len(gold_train_unpreprocessed_data["roots"]) > 1:
|
|
|
|
msg.warn(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"Multiple root labels "
|
|
|
|
f"({', '.join(gold_train_unpreprocessed_data['roots'])}) "
|
|
|
|
f"found in training data. spaCy's parser uses a single root "
|
|
|
|
f"label ROOT so this distinction will not be available."
|
2019-08-16 08:52:46 +00:00
|
|
|
)
|
|
|
|
|
|
|
|
# these should not happen, but just in case
|
|
|
|
if gold_train_data["n_nonproj"] > 0:
|
|
|
|
msg.fail(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"Found {gold_train_data['n_nonproj']} nonprojective "
|
|
|
|
f"projectivized train sentence(s)"
|
2019-08-16 08:52:46 +00:00
|
|
|
)
|
|
|
|
if gold_train_data["n_cycles"] > 0:
|
|
|
|
msg.fail(
|
2019-12-22 00:53:56 +00:00
|
|
|
f"Found {gold_train_data['n_cycles']} projectivized train sentence(s) with cycles"
|
2019-08-16 08:52:46 +00:00
|
|
|
)
|
|
|
|
|
2018-11-30 19:16:14 +00:00
|
|
|
msg.divider("Summary")
|
|
|
|
good_counts = msg.counts[MESSAGES.GOOD]
|
|
|
|
warn_counts = msg.counts[MESSAGES.WARN]
|
|
|
|
fail_counts = msg.counts[MESSAGES.FAIL]
|
|
|
|
if good_counts:
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.good(f"{good_counts} {'check' if good_counts == 1 else 'checks'} passed")
|
2018-11-30 19:16:14 +00:00
|
|
|
if warn_counts:
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.warn(f"{warn_counts} {'warning' if warn_counts == 1 else 'warnings'}")
|
2018-11-30 19:16:14 +00:00
|
|
|
if fail_counts:
|
2019-12-22 00:53:56 +00:00
|
|
|
msg.fail(f"{fail_counts} {'error' if fail_counts == 1 else 'errors'}")
|
2018-11-30 19:16:14 +00:00
|
|
|
sys.exit(1)
|
|
|
|
|
|
|
|
|
2020-06-21 19:35:01 +00:00
|
|
|
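The rare-label warnings above all follow the same pattern: scan a `Counter` of label frequencies and flag anything at or below a threshold. A minimal stdlib sketch of that pattern (the `LABEL_THRESHOLD` value and `rare_labels` helper are illustrative stand-ins, not part of the file above):

```python
from collections import Counter
from typing import List

# Illustrative threshold, standing in for constants like DEP_LABEL_THRESHOLD.
LABEL_THRESHOLD = 20


def rare_labels(label_counts: Counter, threshold: int = LABEL_THRESHOLD) -> List[str]:
    """Return labels whose example count is at or below the threshold."""
    return sorted(
        label for label, count in label_counts.items() if count <= threshold
    )


deps = Counter({"nsubj": 500, "dobj": 300, "acl||relcl": 3})
print(rare_labels(deps))  # ['acl||relcl']
```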
def _load_file(file_path: Path, msg: Printer) -> Any:
    file_name = file_path.parts[-1]
    if file_path.suffix == ".json":
        with msg.loading(f"Loading {file_name}..."):
            data = srsly.read_json(file_path)
        msg.good(f"Loaded {file_name}")
        return data
    elif file_path.suffix == ".jsonl":
        with msg.loading(f"Loading {file_name}..."):
            data = srsly.read_jsonl(file_path)
        msg.good(f"Loaded {file_name}")
        return data
    msg.fail(
        f"Can't load file extension {file_path.suffix}",
        "Expected .json or .jsonl",
        exits=1,
    )


def _compile_gold(
    examples: Sequence[Example],
    factory_names: List[str],
    nlp: Language,
    make_proj: bool,
) -> Dict[str, Any]:
    data: Dict[str, Any] = {
        "ner": Counter(),
        "cats": Counter(),
        "tags": Counter(),
        "morphs": Counter(),
        "deps": Counter(),
        "words": Counter(),
        "roots": Counter(),
        "ws_ents": 0,
        "punct_ents": 0,
        "n_words": 0,
        "n_misaligned_words": 0,
        "words_missing_vectors": Counter(),
        "n_sents": 0,
        "n_nonproj": 0,
        "n_cycles": 0,
        "n_cats_multilabel": 0,
        "n_cats_bad_values": 0,
        "texts": set(),
    }
    for eg in examples:
        gold = eg.reference
        doc = eg.predicted
        valid_words = [x.text for x in gold]
        data["words"].update(valid_words)
        data["n_words"] += len(valid_words)
        align = eg.alignment
        for token in doc:
            if token.orth_.isspace():
                continue
            if align.x2y.lengths[token.i] != 1:
                data["n_misaligned_words"] += 1
        data["texts"].add(doc.text)
        if len(nlp.vocab.vectors):
            for word in [t.text for t in doc]:
                if nlp.vocab.strings[word] not in nlp.vocab.vectors:
                    data["words_missing_vectors"].update([word])
        if "ner" in factory_names:
            for i, label in enumerate(eg.get_aligned_ner()):
                if label is None:
                    continue
                if label.startswith(("B-", "U-", "L-")) and doc[i].is_space:
                    # "Illegal" whitespace entity
                    data["ws_ents"] += 1
                if label.startswith(("B-", "U-", "L-")) and doc[i].text in [
                    ".",
                    "'",
                    "!",
                    "?",
                    ",",
                ]:
                    # punctuation entity: could be replaced by whitespace when training with noise,
                    # so add a warning to alert the user to this unexpected side effect.
                    data["punct_ents"] += 1
                if label.startswith(("B-", "U-")):
                    combined_label = label.split("-")[1]
                    data["ner"][combined_label] += 1
                elif label == "-":
                    data["ner"]["-"] += 1
        if "textcat" in factory_names or "textcat_multilabel" in factory_names:
            data["cats"].update(gold.cats)
            if any(val not in (0, 1) for val in gold.cats.values()):
                data["n_cats_bad_values"] += 1
            if list(gold.cats.values()).count(1) != 1:
                data["n_cats_multilabel"] += 1
        if "tagger" in factory_names:
            tags = eg.get_aligned("TAG", as_string=True)
            data["tags"].update([x for x in tags if x is not None])
        if "morphologizer" in factory_names:
            pos_tags = eg.get_aligned("POS", as_string=True)
            morphs = eg.get_aligned("MORPH", as_string=True)
            for pos, morph in zip(pos_tags, morphs):
                # POS may align (same value for multiple tokens) when morph
                # doesn't, so if either is misaligned (None), treat the
                # annotation as missing so that truths doesn't end up with an
                # unknown morph+POS combination
                if pos is None or morph is None:
                    pass
                # If both are unset, the annotation is missing (empty morph
                # converted from int is "_" rather than "")
                elif pos == "" and morph == "":
                    pass
                # Otherwise, generate the combined label
                else:
                    label_dict = Morphology.feats_to_dict(morph)
                    if pos:
                        label_dict[Morphologizer.POS_FEAT] = pos
                    label = eg.reference.vocab.strings[
                        eg.reference.vocab.morphology.add(label_dict)
                    ]
                    data["morphs"].update([label])
        if "parser" in factory_names:
            aligned_heads, aligned_deps = eg.get_aligned_parse(projectivize=make_proj)
            data["deps"].update([x for x in aligned_deps if x is not None])
            for i, (dep, head) in enumerate(zip(aligned_deps, aligned_heads)):
                if head == i:
                    data["roots"].update([dep])
                    data["n_sents"] += 1
            if nonproj.is_nonproj_tree(aligned_heads):
                data["n_nonproj"] += 1
            if nonproj.contains_cycle(aligned_heads):
                data["n_cycles"] += 1
    return data

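`_compile_gold` above delegates nonprojectivity detection to `nonproj.is_nonproj_tree`. As a rough, self-contained sketch of what such a check does (an illustrative simplification, not spaCy's implementation): a parse is nonprojective when two dependency arcs cross.

```python
from typing import List


def is_nonprojective(heads: List[int]) -> bool:
    """Simplified check: True if any two dependency arcs cross.
    heads[i] is the index of token i's head; a root points at itself."""
    arcs = [(min(i, h), max(i, h)) for i, h in enumerate(heads) if h != i]
    for a_start, a_end in arcs:
        for b_start, b_end in arcs:
            # Arcs cross when exactly one endpoint of b lies strictly inside a
            if a_start < b_start < a_end < b_end:
                return True
    return False


print(is_nonprojective([2, 3, 2, 2]))  # True: arcs (0, 2) and (1, 3) cross
print(is_nonprojective([1, 1, 1]))     # False: projective chain
```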
@overload
def _format_labels(labels: Iterable[str], counts: Literal[False] = False) -> str:
    ...


@overload
def _format_labels(
    labels: Iterable[Tuple[str, int]],
    counts: Literal[True],
) -> str:
    ...


def _format_labels(
    labels: Union[Iterable[str], Iterable[Tuple[str, int]]],
    counts: bool = False,
) -> str:
    if counts:
        return ", ".join(
            [f"'{l}' ({c})" for l, c in cast(Iterable[Tuple[str, int]], labels)]
        )
    return ", ".join([f"'{l}'" for l in cast(Iterable[str], labels)])

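The `@overload` pair on `_format_labels` above is the same pattern the commit message describes for `string_to_list`: `Literal[False]`/`Literal[True]` on the flag parameter lets Mypy select the matching variant, while runtime behavior lives entirely in the final, un-decorated definition. A self-contained sketch of the pattern (Python 3.8+ for `typing.Literal`; names here are illustrative):

```python
from typing import Iterable, Literal, Tuple, Union, cast, overload


@overload
def format_labels(labels: Iterable[str], counts: Literal[False] = False) -> str:
    ...


@overload
def format_labels(labels: Iterable[Tuple[str, int]], counts: Literal[True]) -> str:
    ...


def format_labels(
    labels: Union[Iterable[str], Iterable[Tuple[str, int]]],
    counts: bool = False,
) -> str:
    # The single runtime implementation; casts tell the checker which
    # element shape each branch is actually iterating over.
    if counts:
        pairs = cast(Iterable[Tuple[str, int]], labels)
        return ", ".join(f"'{label}' ({count})" for label, count in pairs)
    return ", ".join(f"'{label}'" for label in cast(Iterable[str], labels))


print(format_labels(["ROOT", "nsubj"]))             # 'ROOT', 'nsubj'
print(format_labels([("nsubj", 12)], counts=True))  # 'nsubj' (12)
```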
def _get_examples_without_label(data: Sequence[Example], label: str) -> int:
    count = 0
    for eg in data:
        labels = [
            label.split("-")[1]
            for label in eg.get_aligned_ner()
            if label not in ("O", "-", None)
        ]
        if label not in labels:
            count += 1
    return count


def _get_labels_from_model(nlp: Language, pipe_name: str) -> Set[str]:
    if pipe_name not in nlp.pipe_names:
        return set()
    pipe = nlp.get_pipe(pipe_name)
    return set(pipe.labels)