---
title: What's New in v2.3
teaser: New features, backwards incompatibilities and migration guide
menu:
  - ['New Features', 'features']
  - ['Backwards Incompatibilities', 'incompat']
  - ['Migrating from v2.2', 'migrating']
---

## New Features {#features hidden="true"}

spaCy v2.3 features new pretrained models for five languages, word vectors for
all language models, and decreased model size and loading times for models with
vectors. We've added pretrained models for **Chinese, Danish, Japanese, Polish
and Romanian** and updated the training data and vectors for most languages.
Model packages with vectors are about **2×** smaller on disk and load
**2-4×** faster. For the full changelog, see the
[release notes on GitHub](https://github.com/explosion/spaCy/releases/tag/v2.3.0).
For more details and a behind-the-scenes look at the new release,
[see our blog post](https://explosion.ai/blog/spacy-v2-3).

### Expanded model families with vectors {#models}

> #### Example
>
> ```bash
> python -m spacy download da_core_news_sm
> python -m spacy download ja_core_news_sm
> python -m spacy download pl_core_news_sm
> python -m spacy download ro_core_news_sm
> python -m spacy download zh_core_web_sm
> ```

With new model families for Chinese, Danish, Japanese, Polish and Romanian plus
`md` and `lg` models with word vectors for all languages, this release provides
a total of 46 model packages. For models trained using
[Universal Dependencies](https://universaldependencies.org) corpora, the
training data has been updated to UD v2.5 (v2.6 for Japanese, v2.3 for Polish)
and Dutch has been extended to include both UD Dutch Alpino and LassySmall.

<Infobox>

**Models:** [Models directory](/models) **Benchmarks:**
[Release notes](https://github.com/explosion/spaCy/releases/tag/v2.3.0)

</Infobox>

### Chinese {#chinese}

> #### Example
>
> ```python
> from spacy.lang.zh import Chinese
>
> # Load with "default" model provided by pkuseg
> cfg = {"pkuseg_model": "default", "require_pkuseg": True}
> nlp = Chinese(meta={"tokenizer": {"config": cfg}})
>
> # Append words to user dict
> nlp.tokenizer.pkuseg_update_user_dict(["中国", "ABC"])
> ```

This release adds support for
[`pkuseg`](https://github.com/lancopku/pkuseg-python) for word segmentation, and
the new Chinese models ship with a custom pkuseg model trained on OntoNotes. The
Chinese tokenizer can be initialized with both `pkuseg` and custom models, and
the `pkuseg` user dictionary is easy to customize. Note that
[`pkuseg`](https://github.com/lancopku/pkuseg-python) doesn't yet ship with
pre-compiled wheels for Python 3.8. See the
[usage documentation](/usage/models#chinese) for details on how to install it on
Python 3.8.

<Infobox>

**Models:** [Chinese models](/models/zh) **Usage:**
[Chinese tokenizer usage](/usage/models#chinese)

</Infobox>

### Japanese {#japanese}

The updated Japanese language class switches to
[`SudachiPy`](https://github.com/WorksApplications/SudachiPy) for word
segmentation and part-of-speech tagging. Using `SudachiPy` greatly simplifies
installing spaCy for Japanese, which is now possible with a single command:
`pip install spacy[ja]`.

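As a quick check that segmentation works after installation, here's a minimal
sketch; the sample sentence is purely illustrative:

```python
from spacy.lang.ja import Japanese

nlp = Japanese()  # uses SudachiPy for segmentation under the hood
doc = nlp("私は東京に住んでいます。")
for token in doc:
    # SudachiPy provides both the segmentation and the part-of-speech tags
    print(token.text, token.tag_)
```
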
<Infobox>

**Models:** [Japanese models](/models/ja) **Usage:**
[Japanese tokenizer usage](/usage/models#japanese)

</Infobox>

### Small CLI updates

- [`spacy debug-data`](/api/cli#debug-data) provides the coverage of the vectors
  in a base model with `spacy debug-data lang train dev -b base_model`
- [`spacy evaluate`](/api/cli#evaluate) supports `blank:lg` (e.g.
  `spacy evaluate blank:en dev.json`) to evaluate the tokenization accuracy
  without loading a model (see the combined example after this list)
- [`spacy train`](/api/cli#train) on GPU restricts the CPU timing evaluation to
  the first iteration

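For example, the first two commands together, with an illustrative language
code and file names:

```bash
# Check how well a base model's vectors cover your training data
python -m spacy debug-data en train.json dev.json -b en_core_web_md

# Evaluate tokenization accuracy against dev data without a trained model
python -m spacy evaluate blank:en dev.json
```
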
## Backwards incompatibilities {#incompat}

<Infobox title="Important note on models" variant="warning">

If you've been training **your own models**, you'll need to **retrain** them
with the new version. Also don't forget to upgrade all models to the latest
versions. Models for earlier v2 releases (v2.0, v2.1, v2.2) aren't compatible
with models for v2.3. To check if all of your models are up to date, you can run
the [`spacy validate`](/api/cli#validate) command.

</Infobox>

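The command checks the installed models against your spaCy version and shows
which ones need to be updated:

```bash
python -m spacy validate
```
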
> #### Install with lookups data
>
> ```bash
> $ pip install spacy[lookups]
> ```
>
> You can also install
> [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data)
> directly.

- If you're training new models, you'll want to install the package
  [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data), which
  now includes both the lemmatization tables (as in v2.2) and the normalization
  tables (new in v2.3). If you're using pretrained models, **nothing changes**,
  because the relevant tables are included in the model packages.
- Due to the updated Universal Dependencies training data, the fine-grained
  part-of-speech tags will change for many provided language models. The
  coarse-grained part-of-speech tagset remains the same, but the mapping from
  particular fine-grained to coarse-grained tags may show minor differences.
- For French, Italian, Portuguese and Spanish, the fine-grained part-of-speech
  tagsets contain new merged tags related to contracted forms, such as `ADP_DET`
  for French `"au"`, which maps to UPOS `ADP` based on the head `"à"`. This
  increases the accuracy of the models by improving the alignment between
  spaCy's tokenization and Universal Dependencies multi-word tokens used for
  contractions (see the sketch after this list).

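A minimal sketch of what the merged tags look like in practice, assuming the
v2.3 `fr_core_news_sm` model is installed; the example sentence is illustrative:

```python
import spacy

nlp = spacy.load("fr_core_news_sm")
doc = nlp("Il va au marché.")
token = doc[2]  # "au", a contraction of "à" + "le"
# The fine-grained tag uses the new merged ADP_DET; the coarse tag maps to ADP
print(token.text, token.tag_, token.pos_)
```
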
## Migrating from v2.2 {#migrating}

#### Tokenizer settings

In spaCy v2.2.2-v2.2.4, there was a change to the precedence of `token_match`
that gave prefixes and suffixes priority over `token_match`, which caused
problems for many custom tokenizer configurations. This has been reverted in
v2.3 so that `token_match` has priority over prefixes and suffixes, as in v2.2.1
and earlier versions.

A new tokenizer setting `url_match` has been introduced in v2.3.0 to handle
cases like URLs where the tokenizer should remove prefixes and suffixes (e.g., a
comma at the end of a URL) before applying the match. See the full
[tokenizer documentation](/usage/linguistic-features#tokenization) and try out
[`nlp.tokenizer.explain()`](/usage/linguistic-features#tokenizer-debug) when
debugging your tokenizer configuration.

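As a minimal sketch of the debugging workflow, `explain()` shows which rule or
pattern produced each token; the sample text is illustrative:

```python
from spacy.lang.en import English

nlp = English()
# Each entry pairs the name of the matching rule/pattern with the token text
for rule_name, token_text in nlp.tokenizer.explain("Visit https://example.com, thanks!"):
    print(rule_name, token_text)
```
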
#### Warnings configuration

spaCy's custom warnings have been replaced with native Python
[`warnings`](https://docs.python.org/3/library/warnings.html). Instead of
setting `SPACY_WARNING_IGNORE`, use the
[`warnings` filters](https://docs.python.org/3/library/warnings.html#the-warnings-filter)
to manage warnings.

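For example, a sketch that silences a single warning by the code at the start
of its message (`W008` is used here purely as an illustration):

```python
import warnings

# Replaces the old SPACY_WARNING_IGNORE environment variable: filter one
# warning by matching the code prefix of its message
warnings.filterwarnings("ignore", message=r"\[W008\]")
```
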
#### Normalization tables

The normalization tables have moved from the language data in
[`spacy/lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang) to the
package [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data).
If you're adding data for a new language, the normalization table should be
added to `spacy-lookups-data`. See
[adding norm exceptions](/usage/adding-languages#norm-exceptions).

#### Probability and cluster features

> #### Load and save extra prob lookups table
>
> ```python
> from spacy.lang.en import English
> nlp = English()
> doc = nlp("the")
> print(doc[0].prob)  # lazily loads extra prob table
> nlp.to_disk("/path/to/model")  # includes prob table
> ```

The `Token.prob` and `Token.cluster` features, which are no longer used by the
core pipeline components as of spaCy v2, are no longer provided in the
pretrained models to reduce the model size. To keep these features available for
users relying on them, the `prob` and `cluster` features for the most frequent
1M tokens have been moved to
[`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) as
`extra` features for the relevant languages (English, German, Greek and
Spanish).

The extra tables are loaded lazily, so if you have `spacy-lookups-data`
installed and your code accesses `Token.prob`, the full table is loaded into the
model vocab, which will take a few seconds on initial loading. When you save
this model after loading the `prob` table, the full `prob` table will be saved
as part of the model vocab.

If you'd like to include custom `cluster`, `prob`, or `sentiment` tables as part
of a new model, add the data to
[`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) under
the entry point `lg_extra`, e.g. `en_extra` for English. Alternatively, you can
initialize your [`Vocab`](/api/vocab) with the `lookups_extra` argument set to a
[`Lookups`](/api/lookups) object that includes the tables `lexeme_cluster`,
`lexeme_prob`, `lexeme_sentiment` or `lexeme_settings`. `lexeme_settings` is
currently only used to provide a custom `oov_prob`. See examples in the
[`data` directory](https://github.com/explosion/spacy-lookups-data/tree/master/spacy_lookups_data/data)
in `spacy-lookups-data`.

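A minimal sketch of the `lookups_extra` route; the table values below are toy
numbers, not real frequencies:

```python
from spacy.lookups import Lookups
from spacy.vocab import Vocab

lookups = Lookups()
# Toy log-probability table; the real tables cover the most frequent tokens
lookups.add_table("lexeme_prob", {"the": -1.5, "and": -2.0})
# lexeme_settings currently only provides a custom oov_prob
lookups.add_table("lexeme_settings", {"oov_prob": -19.0})

vocab = Vocab(lookups_extra=lookups)
print(vocab["the"].prob)  # served from the custom table
```
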
#### Initializing new models without extra lookups tables

When you initialize a new model with [`spacy init-model`](/api/cli#init-model),
the `prob` table from `spacy-lookups-data` may be loaded as part of the
initialization. If you'd like to omit this extra data as in spaCy's provided
v2.3 models, use the new flag `--omit-extra-lookups`.

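For example, with an illustrative language code and output directory:

```bash
python -m spacy init-model en /path/to/output --omit-extra-lookups
```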