import re
from collections import namedtuple
from pathlib import Path
from typing import Any, Callable, Dict, Optional, Union

import srsly
from thinc.api import Model

from ... import util
from ...errors import Errors
from ...language import BaseDefaults, Language
from ...pipeline import Morphologizer
from ...pipeline.morphologizer import DEFAULT_MORPH_MODEL
from ...scorer import Scorer
from ...symbols import POS
from ...tokens import Doc, MorphAnalysis
from ...training import validate_examples
from ...util import DummyTokenizer, load_config_from_str, registry
from ...vocab import Vocab
from .stop_words import STOP_WORDS
from .syntax_iterators import SYNTAX_ITERATORS
from .tag_bigram_map import TAG_BIGRAM_MAP
from .tag_map import TAG_MAP
from .tag_orth_map import TAG_ORTH_MAP

DEFAULT_CONFIG = """
[nlp]

[nlp.tokenizer]
@tokenizers = "spacy.ja.JapaneseTokenizer"
split_mode = null
"""


@registry.tokenizers("spacy.ja.JapaneseTokenizer")
def create_tokenizer(split_mode: Optional[str] = None):
    def japanese_tokenizer_factory(nlp):
        return JapaneseTokenizer(nlp.vocab, split_mode=split_mode)

    return japanese_tokenizer_factory


class JapaneseTokenizer(DummyTokenizer):
    def __init__(self, vocab: Vocab, split_mode: Optional[str] = None) -> None:
        self.vocab = vocab
        self.split_mode = split_mode
        self.tokenizer = try_sudachi_import(self.split_mode)
        # if we're using split mode A we don't need subtokens
        self.need_subtokens = not (split_mode is None or split_mode == "A")

    def __reduce__(self):
        return JapaneseTokenizer, (self.vocab, self.split_mode)

    def __call__(self, text: str) -> Doc:
        # convert sudachipy.morpheme.Morpheme to DetailedToken and merge continuous spaces
        sudachipy_tokens = self.tokenizer.tokenize(text)
        dtokens = self._get_dtokens(sudachipy_tokens)
        dtokens, spaces = get_dtokens_and_spaces(dtokens, text)

        # create Doc with tag bi-gram based part-of-speech identification rules
        words, tags, inflections, lemmas, norms, readings, sub_tokens_list = (
            zip(*dtokens) if dtokens else [[]] * 7
        )
        sub_tokens_list = list(sub_tokens_list)
        doc = Doc(self.vocab, words=words, spaces=spaces)
        next_pos = None  # for bi-gram rules
        for idx, (token, dtoken) in enumerate(zip(doc, dtokens)):
            token.tag_ = dtoken.tag
            if next_pos:  # already identified in previous iteration
                token.pos = next_pos
                next_pos = None
            else:
                token.pos, next_pos = resolve_pos(
                    token.orth_,
                    dtoken.tag,
                    tags[idx + 1] if idx + 1 < len(tags) else None,
                )
            # if there's no lemma info (it's an unk) just use the surface
            token.lemma_ = dtoken.lemma if dtoken.lemma else dtoken.surface
            morph = {}
            if dtoken.inf:
                # it's normal for this to be empty for non-inflecting types
                morph["Inflection"] = dtoken.inf
            token.norm_ = dtoken.norm
            if dtoken.reading:
                # punctuation is its own reading, but we don't want values like
                # "=" here
                morph["Reading"] = re.sub("[=|]", "_", dtoken.reading)
            token.morph = MorphAnalysis(self.vocab, morph)
        if self.need_subtokens:
            doc.user_data["sub_tokens"] = sub_tokens_list
        return doc
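    # Illustrative sketch (assumes SudachiPy and its core dictionary are
    # installed; not part of the pipeline code): calling the tokenizer
    # directly yields a Doc whose tokens carry Unidic-style tags plus
    # Inflection/Reading features in token.morph.
    #
    #     tokenizer = JapaneseTokenizer(Vocab())
    #     doc = tokenizer("これはペンです。")
    #     for t in doc:
    #         print(t.text, t.tag_, t.lemma_, t.morph)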

    def _get_dtokens(self, sudachipy_tokens, need_sub_tokens: bool = True):
        sub_tokens_list = (
            self._get_sub_tokens(sudachipy_tokens) if need_sub_tokens else None
        )
        dtokens = [
            DetailedToken(
                token.surface(),  # orth
                "-".join([xx for xx in token.part_of_speech()[:4] if xx != "*"]),  # tag
                ";".join([xx for xx in token.part_of_speech()[4:] if xx != "*"]),  # inf
                token.dictionary_form(),  # lemma
                token.normalized_form(),
                token.reading_form(),
                sub_tokens_list[idx]
                if sub_tokens_list
                else None,  # user_data['sub_tokens']
            )
            for idx, token in enumerate(sudachipy_tokens)
            if len(token.surface()) > 0
            # skip empty tokens, which can be produced by characters like …
        ]
        # Sudachi normalizes internally and outputs each space char as a token.
        # This prepares for get_dtokens_and_spaces() to merge consecutive space tokens.
        return [
            t
            for idx, t in enumerate(dtokens)
            if idx == 0
            or not t.surface.isspace()
            or t.tag != "空白"
            or not dtokens[idx - 1].surface.isspace()
            or dtokens[idx - 1].tag != "空白"
        ]

    def _get_sub_tokens(self, sudachipy_tokens):
        # do nothing for default split mode
        if not self.need_subtokens:
            return None

        sub_tokens_list = []  # list of (list of list of DetailedToken | None)
        for token in sudachipy_tokens:
            sub_a = token.split(self.tokenizer.SplitMode.A)
            if len(sub_a) == 1:  # no sub tokens
                sub_tokens_list.append(None)
            elif self.split_mode == "B":
                sub_tokens_list.append([self._get_dtokens(sub_a, False)])
            else:  # "C"
                sub_b = token.split(self.tokenizer.SplitMode.B)
                if len(sub_a) == len(sub_b):
                    dtokens = self._get_dtokens(sub_a, False)
                    sub_tokens_list.append([dtokens, dtokens])
                else:
                    sub_tokens_list.append(
                        [
                            self._get_dtokens(sub_a, False),
                            self._get_dtokens(sub_b, False),
                        ]
                    )
        return sub_tokens_list
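    # Shape of the result above, as an illustrative example (the token and its
    # splits are taken from the SudachiPy documentation): in split mode "C", a
    # long unit such as 選挙管理委員会 contributes an entry of the form
    # [dtokens_at_mode_A, dtokens_at_mode_B] to sub_tokens_list, while a token
    # with no finer split contributes None.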

    def score(self, examples):
        validate_examples(examples, "JapaneseTokenizer.score")
        return Scorer.score_tokenization(examples)

    def _get_config(self) -> Dict[str, Any]:
        return {"split_mode": self.split_mode}

    def _set_config(self, config: Dict[str, Any] = {}) -> None:
        self.split_mode = config.get("split_mode", None)

    def to_bytes(self, **kwargs) -> bytes:
        serializers = {"cfg": lambda: srsly.json_dumps(self._get_config())}
        return util.to_bytes(serializers, [])

    def from_bytes(self, data: bytes, **kwargs) -> "JapaneseTokenizer":
        deserializers = {"cfg": lambda b: self._set_config(srsly.json_loads(b))}
        util.from_bytes(data, deserializers, [])
        self.tokenizer = try_sudachi_import(self.split_mode)
        return self

    def to_disk(self, path: Union[str, Path], **kwargs) -> None:
        path = util.ensure_path(path)
        serializers = {"cfg": lambda p: srsly.write_json(p, self._get_config())}
        util.to_disk(path, serializers, [])

    def from_disk(self, path: Union[str, Path], **kwargs) -> "JapaneseTokenizer":
        path = util.ensure_path(path)
        serializers = {"cfg": lambda p: self._set_config(srsly.read_json(p))}
        util.from_disk(path, serializers, [])
        self.tokenizer = try_sudachi_import(self.split_mode)
        return self
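    # Serialization sketch (illustrative only): the split mode is the only
    # state that is saved; the SudachiPy tokenizer itself is re-created on
    # load via try_sudachi_import().
    #
    #     data = tokenizer.to_bytes()
    #     restored = JapaneseTokenizer(Vocab()).from_bytes(data)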


class JapaneseDefaults(BaseDefaults):
    config = load_config_from_str(DEFAULT_CONFIG)
    stop_words = STOP_WORDS
    syntax_iterators = SYNTAX_ITERATORS
    writing_system = {"direction": "ltr", "has_case": False, "has_letters": False}


class Japanese(Language):
    lang = "ja"
    Defaults = JapaneseDefaults


@Japanese.factory(
    "morphologizer",
    assigns=["token.morph", "token.pos"],
    default_config={
        "model": DEFAULT_MORPH_MODEL,
        "overwrite": True,
        "extend": True,
        "scorer": {"@scorers": "spacy.morphologizer_scorer.v1"},
    },
    default_score_weights={
        "pos_acc": 0.5,
        "morph_micro_f": 0.5,
        "morph_per_feat": None,
    },
)
def make_morphologizer(
    nlp: Language,
    model: Model,
    name: str,
    overwrite: bool,
    extend: bool,
    scorer: Optional[Callable],
):
    return Morphologizer(
        nlp.vocab, model, name, overwrite=overwrite, extend=extend, scorer=scorer
    )
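

# Usage sketch (assuming a blank or trained Japanese pipeline): because this
# factory is registered on the Japanese subclass, adding the component by name
# picks up the extend/overwrite defaults above, so predicted morph features
# are merged with the ones the tokenizer already set.
#
#     import spacy
#     nlp = spacy.blank("ja")
#     nlp.add_pipe("morphologizer")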


# Hold the attributes we need with convenient names
DetailedToken = namedtuple(
    "DetailedToken", ["surface", "tag", "inf", "lemma", "norm", "reading", "sub_tokens"]
)
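# Illustrative instance (field values are hypothetical, not actual Sudachi
# output):
#
#     DetailedToken(
#         surface="食べ", tag="動詞-一般", inf="下一段-バ行;連用形-一般",
#         lemma="食べる", norm="食べる", reading="タベ", sub_tokens=None,
#     )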


def try_sudachi_import(split_mode="A"):
    """SudachiPy is required for Japanese support, so check for it.
    If it's not available, blow up and explain how to fix it.
    split_mode should be one of these values: "A", "B", "C", None->"A"."""
    try:
        from sudachipy import dictionary, tokenizer

        split_mode = {
            None: tokenizer.Tokenizer.SplitMode.A,
            "A": tokenizer.Tokenizer.SplitMode.A,
            "B": tokenizer.Tokenizer.SplitMode.B,
            "C": tokenizer.Tokenizer.SplitMode.C,
        }[split_mode]
        tok = dictionary.Dictionary().create(mode=split_mode)
        return tok
    except ImportError:
        raise ImportError(
            "Japanese support requires SudachiPy and SudachiDict-core "
            "(https://github.com/WorksApplications/SudachiPy). "
            "Install with `pip install sudachipy sudachidict_core` or "
            "install spaCy with `pip install spacy[ja]`."
        ) from None
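

# Note on split modes, per the SudachiPy documentation: "A" gives the shortest
# units, "B" intermediate units, and "C" the longest (named-entity-like) units;
# None falls back to "A" above.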
def resolve_pos(orth, tag, next_tag):
|
2018-05-03 16:38:26 +00:00
|
|
|
"""If necessary, add a field to the POS tag for UD mapping.
|
|
|
|
Under Universal Dependencies, sometimes the same Unidic POS tag can
|
|
|
|
be mapped differently depending on the literal token or its context
|
Add Japanese Model (#5544)
* Add more rules to deal with Japanese UD mappings
Japanese UD rules sometimes give different UD tags to tokens with the
same underlying POS tag. The UD spec indicates these cases should be
disambiguated using the output of a tool called "comainu", but rules are
enough to get the right result.
These rules are taken from Ginza at time of writing, see #3756.
* Add new tags from GSD
This is a few rare tags that aren't in Unidic but are in the GSD data.
* Add basic Japanese sentencization
This code is taken from Ginza again.
* Add sentenceizer quote handling
Could probably add more paired characters but this will do for now. Also
includes some tests.
* Replace fugashi with SudachiPy
* Modify tag format to match GSD annotations
Some of the tests still need to be updated, but I want to get this up
for testing training.
* Deal with case with closing punct without opening
* refactor resolve_pos()
* change tag field separator from "," to "-"
* add TAG_ORTH_MAP
* add TAG_BIGRAM_MAP
* revise rules for 連体詞
* revise rules for 連体詞
* improve POS about 2%
* add syntax_iterator.py (not mature yet)
* improve syntax_iterators.py
* improve syntax_iterators.py
* add phrases including nouns and drop NPs consist of STOP_WORDS
* First take at noun chunks
This works in many situations but still has issues in others.
If the start of a subtree has no noun, then nested phrases can be
generated.
また行きたい、そんな気持ちにさせてくれるお店です。
[そんな気持ち, また行きたい、そんな気持ちにさせてくれるお店]
For some reason て gets included sometimes. Not sure why.
ゲンに連れ添って円盤生物を調査するパートナーとなる。
[て円盤生物, ...]
Some phrases that look like they should be split are grouped together;
not entirely sure that's wrong. This whole thing becomes one chunk:
道の駅遠山郷北側からかぐら大橋南詰現道交点までの1.060kmのみ開通済み
* Use new generic get_words_and_spaces
The new get_words_and_spaces function is simpler than what was used in
Japanese, so it's good to be able to switch to it. However, there was an
issue. The new function works just on text, so POS info could get out of
sync. Fixing this required a small change to the way dtokens (tokens
with POS and lemma info) were generated.
Specifically, multiple extraneous spaces now become a single token, so
when generating dtokens multiple space tokens should be created in a
row.
* Fix noun_chunks, should be working now
* Fix some tests, add naughty strings tests
Some of the existing tests changed because the tokenization mode of
Sudachi changed to the more fine-grained A mode.
Sudachi also has issues with some strings, so this adds a test against
the naughty strings.
* Remove empty Sudachi tokens
Not doing this creates zero-length tokens and causes errors in the
internal spaCy processing.
* Add yield_bunsetu back in as a separate piece of code
Co-authored-by: Hiroshi Matsuda <40782025+hiroshi-matsuda-rit@users.noreply.github.com>
Co-authored-by: hiroshi <hiroshi_matsuda@megagon.ai>
2020-06-04 17:15:43 +00:00
|
|
|
in the sentence. This function returns resolved POSs for both token
|
|
|
|
and next_token by tuple.
|
2018-05-03 16:38:26 +00:00
|
|
|
"""
|
2019-09-13 14:28:12 +00:00
|
|
|
|
    # Some tokens have their UD tag decided based on the POS of the following
    # token.

    # apply orth based mapping
    if tag in TAG_ORTH_MAP:
        orth_map = TAG_ORTH_MAP[tag]
        if orth in orth_map:
            return orth_map[orth], None  # current_pos, next_pos

    # apply tag bi-gram mapping
    if next_tag:
        tag_bigram = tag, next_tag
        if tag_bigram in TAG_BIGRAM_MAP:
            current_pos, next_pos = TAG_BIGRAM_MAP[tag_bigram]
            if current_pos is None:  # apply tag uni-gram mapping for current_pos
                return (
                    TAG_MAP[tag][POS],
                    next_pos,
                )  # only next_pos is identified by tag bi-gram mapping
            else:
                return current_pos, next_pos

    # apply tag uni-gram mapping
    return TAG_MAP[tag][POS], None

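
# A minimal sketch (not part of the original module) of how a caller could
# consume the (current_pos, next_pos) pair produced by resolve_pos above: when
# the tag bi-gram mapping has already fixed the POS of the following token,
# that value is reused and resolve_pos is not consulted again for it. The
# signature resolve_pos(orth, tag, next_tag) is inferred from the body above,
# and `tagged` is an assumed list of (orth, tag) pairs.
def _resolve_pos_sequence(tagged):
    resolved = []
    pending_pos = None
    for i, (orth, tag) in enumerate(tagged):
        next_tag = tagged[i + 1][1] if i + 1 < len(tagged) else None
        if pending_pos is not None:
            pos, pending_pos = pending_pos, None
        else:
            pos, pending_pos = resolve_pos(orth, tag, next_tag)
        resolved.append(pos)
    return resolved
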
def get_dtokens_and_spaces(dtokens, text, gap_tag="空白"):
    # Compare the token surfaces with the text first (ignoring whitespace)
    words = [x.surface for x in dtokens]
    if "".join("".join(words).split()) != "".join(text.split()):
        raise ValueError(Errors.E194.format(text=text, words=words))

    text_dtokens = []
    text_spaces = []
    text_pos = 0
    # handle empty and whitespace-only texts
    if len(words) == 0:
        return text_dtokens, text_spaces
    elif len([word for word in words if not word.isspace()]) == 0:
        assert text.isspace()
        text_dtokens = [DetailedToken(text, gap_tag, "", text, text, None, None)]
        text_spaces = [False]
        return text_dtokens, text_spaces
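
    # both early returns above keep text_dtokens and text_spaces the same
    # length, so the alignment loop below only has to handle texts containing
    # at least one non-whitespace token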

    # align words and dtokens by referring to the text, and insert gap tokens
    # for the whitespace spans
    for i, (word, dtoken) in enumerate(zip(words, dtokens)):
        # skip all space tokens
        if word.isspace():
            continue
        try:
            word_start = text[text_pos:].index(word)
        except ValueError:
            raise ValueError(Errors.E194.format(text=text, words=words)) from None

        # space token
        if word_start > 0:
            w = text[text_pos : text_pos + word_start]
            text_dtokens.append(DetailedToken(w, gap_tag, "", w, w, None, None))
            text_spaces.append(False)
            text_pos += word_start

        # content word
        text_dtokens.append(dtoken)
        text_spaces.append(False)
        text_pos += len(word)
        # if the tokenizer produced a single-space token right after this word,
        # record it as trailing whitespace instead of emitting a gap token
        if i + 1 < len(dtokens) and dtokens[i + 1].surface == " ":
            text_spaces[-1] = True
            text_pos += 1

    # trailing space token
    if text_pos < len(text):
        w = text[text_pos:]
        text_dtokens.append(DetailedToken(w, gap_tag, "", w, w, None, None))
        text_spaces.append(False)

    return text_dtokens, text_spaces
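
# A small, self-contained illustration (not from the spaCy test suite) of the
# alignment performed by get_dtokens_and_spaces. The DetailedToken values are
# hand-built with the same positional pattern as the gap tokens above, and the
# tag strings are illustrative; only the `surface` field drives the alignment.
def _alignment_sketch():
    text = "日本語 です"
    dtokens = [
        DetailedToken("日本語", "名詞-普通名詞-一般", "", "日本語", "日本語", None, None),
        DetailedToken(" ", "空白", "", " ", " ", None, None),
        DetailedToken("です", "助動詞", "", "です", "です", None, None),
    ]
    # the explicit space token is folded into text_spaces, so this returns the
    # two content tokens with text_spaces == [True, False]
    return get_dtokens_and_spaces(dtokens, text)
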

💫 Tidy up and auto-format .py files (#2983)
## Description
- [x] Use [`black`](https://github.com/ambv/black) to auto-format all `.py` files.
- [x] Update the flake8 config to exclude very large files (lemmatization tables etc.)
- [x] Update code to be compatible with flake8 rules
- [x] Fix various small bugs, inconsistencies and messy stuff in the language data
- [x] Update docs to explain the new code style (`black`, `flake8`, when to use `# fmt: off` and `# fmt: on` and what `# noqa` means)
Once #2932 is merged, which auto-formats and tidies up the CLI, we'll be able to run `flake8 spacy` and actually get meaningful results.
At the moment, the code style and linting aren't applied automatically, but I'm hoping that the new [GitHub Actions](https://github.com/features/actions) will let us auto-format pull requests and post comments with relevant linting information.
### Types of change
enhancement, code style
## Checklist
- [x] I have submitted the spaCy Contributor Agreement.
- [x] I ran the tests, and all new and existing tests passed.
- [x] My changes don't require a change to the documentation, or if they do, I've added all required information.
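
As a small, hypothetical illustration of the directives mentioned above (the variable names below are placeholders, not code from this repository): `# fmt: off` / `# fmt: on` tell black to leave a region's manual formatting untouched, and a bare `# noqa` tells flake8 to skip its checks for that line.

# fmt: off
_MANUALLY_ALIGNED = {
    "NOUN":  "noun",
    "VERB":  "verb",
}
# fmt: on
_UNCHECKED_LINE = {"NOUN": "noun", "VERB": "verb", "ADJ": "adjective"}  # noqa
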
__all__ = ["Japanese"]