from typing import Optional, Union, Dict, Any
from pathlib import Path
import srsly
from collections import namedtuple

from .stop_words import STOP_WORDS
from .syntax_iterators import SYNTAX_ITERATORS
from .tag_map import TAG_MAP
from .tag_orth_map import TAG_ORTH_MAP
from .tag_bigram_map import TAG_BIGRAM_MAP
from ...compat import copy_reg
from ...errors import Errors
from ...language import Language
from ...scorer import Scorer
from ...symbols import POS
from ...tokens import Doc
from ...training import validate_examples
from ...util import DummyTokenizer, registry, load_config_from_str
from ... import util


DEFAULT_CONFIG = """
[nlp]

[nlp.tokenizer]
@tokenizers = "spacy.ja.JapaneseTokenizer"
split_mode = null
"""
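
# For example, a user config can switch SudachiPy to a coarser split mode by
# overriding the block above (sketch):
#
#   [nlp.tokenizer]
#   @tokenizers = "spacy.ja.JapaneseTokenizer"
#   split_mode = "B"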


@registry.tokenizers("spacy.ja.JapaneseTokenizer")
def create_tokenizer(split_mode: Optional[str] = None):
    def japanese_tokenizer_factory(nlp):
        return JapaneseTokenizer(nlp, split_mode=split_mode)

    return japanese_tokenizer_factory
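
# Note: the name registered above must match the "@tokenizers" entry in
# DEFAULT_CONFIG so that loading the config resolves to this factory.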


class JapaneseTokenizer(DummyTokenizer):
    def __init__(self, nlp: Language, split_mode: Optional[str] = None) -> None:
        self.vocab = nlp.vocab
        self.split_mode = split_mode
        self.tokenizer = try_sudachi_import(self.split_mode)

    def __call__(self, text: str) -> Doc:
        # convert sudachipy.morpheme.Morpheme to DetailedToken and merge continuous spaces
        sudachipy_tokens = self.tokenizer.tokenize(text)
        dtokens = self._get_dtokens(sudachipy_tokens)
        dtokens, spaces = get_dtokens_and_spaces(dtokens, text)

        # create Doc with tag bi-gram based part-of-speech identification rules
        words, tags, inflections, lemmas, readings, sub_tokens_list = (
            zip(*dtokens) if dtokens else [[]] * 6
        )
        sub_tokens_list = list(sub_tokens_list)
        doc = Doc(self.vocab, words=words, spaces=spaces)
        next_pos = None  # for bi-gram rules
        for idx, (token, dtoken) in enumerate(zip(doc, dtokens)):
            token.tag_ = dtoken.tag
            if next_pos:  # already identified in previous iteration
                token.pos = next_pos
                next_pos = None
            else:
                token.pos, next_pos = resolve_pos(
                    token.orth_,
                    dtoken.tag,
                    tags[idx + 1] if idx + 1 < len(tags) else None,
                )
            # if there's no lemma info (it's an unk) just use the surface
            token.lemma_ = dtoken.lemma if dtoken.lemma else dtoken.surface
        doc.user_data["inflections"] = inflections
        doc.user_data["reading_forms"] = readings
        doc.user_data["sub_tokens"] = sub_tokens_list
        return doc

    def _get_dtokens(self, sudachipy_tokens, need_sub_tokens: bool = True):
        sub_tokens_list = (
            self._get_sub_tokens(sudachipy_tokens) if need_sub_tokens else None
        )
        dtokens = [
            DetailedToken(
                token.surface(),  # orth
                "-".join([xx for xx in token.part_of_speech()[:4] if xx != "*"]),  # tag
                ",".join([xx for xx in token.part_of_speech()[4:] if xx != "*"]),  # inf
                token.dictionary_form(),  # lemma
                token.reading_form(),  # user_data['reading_forms']
                sub_tokens_list[idx]
                if sub_tokens_list
                else None,  # user_data['sub_tokens']
            )
            for idx, token in enumerate(sudachipy_tokens)
            if len(token.surface()) > 0
            # remove empty tokens which can be produced with characters like … that
        ]
        # Sudachi normalizes internally and outputs each space char as a token.
        # This is the preparation for get_dtokens_and_spaces() to merge the continuous space tokens
        return [
            t
            for idx, t in enumerate(dtokens)
            if idx == 0
            or not t.surface.isspace()
            or t.tag != "空白"
            or not dtokens[idx - 1].surface.isspace()
            or dtokens[idx - 1].tag != "空白"
        ]

    def _get_sub_tokens(self, sudachipy_tokens):
        if (
            self.split_mode is None or self.split_mode == "A"
        ):  # do nothing for default split mode
            return None

        sub_tokens_list = []  # list of (list of list of DetailedToken | None)
        for token in sudachipy_tokens:
            sub_a = token.split(self.tokenizer.SplitMode.A)
            if len(sub_a) == 1:  # no sub tokens
                sub_tokens_list.append(None)
            elif self.split_mode == "B":
                sub_tokens_list.append([self._get_dtokens(sub_a, False)])
            else:  # "C"
                sub_b = token.split(self.tokenizer.SplitMode.B)
                if len(sub_a) == len(sub_b):
                    dtokens = self._get_dtokens(sub_a, False)
                    sub_tokens_list.append([dtokens, dtokens])
                else:
                    sub_tokens_list.append(
                        [
                            self._get_dtokens(sub_a, False),
                            self._get_dtokens(sub_b, False),
                        ]
                    )
        return sub_tokens_list
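
    # Shape of the value returned above (sketch): one entry per coarse token,
    # either None (no finer split) or, in mode "C", [dtokens_A, dtokens_B] with
    # the A- and B-mode sub-token lists; in mode "B" only the A-mode list is kept.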

    def score(self, examples):
        validate_examples(examples, "JapaneseTokenizer.score")
        return Scorer.score_tokenization(examples)

    def _get_config(self) -> Dict[str, Any]:
        return {"split_mode": self.split_mode}

    def _set_config(self, config: Dict[str, Any] = {}) -> None:
        self.split_mode = config.get("split_mode", None)

    def to_bytes(self, **kwargs) -> bytes:
        serializers = {"cfg": lambda: srsly.json_dumps(self._get_config())}
        return util.to_bytes(serializers, [])

    def from_bytes(self, data: bytes, **kwargs) -> "JapaneseTokenizer":
        deserializers = {"cfg": lambda b: self._set_config(srsly.json_loads(b))}
        util.from_bytes(data, deserializers, [])
        self.tokenizer = try_sudachi_import(self.split_mode)
        return self

    def to_disk(self, path: Union[str, Path], **kwargs) -> None:
        path = util.ensure_path(path)
        serializers = {"cfg": lambda p: srsly.write_json(p, self._get_config())}
        return util.to_disk(path, serializers, [])

    def from_disk(self, path: Union[str, Path], **kwargs) -> "JapaneseTokenizer":
        path = util.ensure_path(path)
        serializers = {"cfg": lambda p: self._set_config(srsly.read_json(p))}
        util.from_disk(path, serializers, [])
        self.tokenizer = try_sudachi_import(self.split_mode)
        return self
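
    # Serialization note: only the config (split_mode) is persisted; the SudachiPy
    # tokenizer itself is re-created on load via try_sudachi_import().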


class JapaneseDefaults(Language.Defaults):
    config = load_config_from_str(DEFAULT_CONFIG)
    stop_words = STOP_WORDS
    syntax_iterators = SYNTAX_ITERATORS
    writing_system = {"direction": "ltr", "has_case": False, "has_letters": False}


class Japanese(Language):
    lang = "ja"
    Defaults = JapaneseDefaults
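
# A minimal usage sketch (illustrative; requires SudachiPy and SudachiDict-core,
# see try_sudachi_import below):
#
#   nlp = Japanese()
#   doc = nlp("すもももももももものうち")
#   print([(t.text, t.tag_, t.pos_) for t in doc])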


# Hold the attributes we need with convenient names
DetailedToken = namedtuple(
    "DetailedToken", ["surface", "tag", "inf", "lemma", "reading", "sub_tokens"]
)
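# e.g. (illustrative values only):
#   DetailedToken(surface="犬", tag="名詞-普通名詞-一般", inf="", lemma="犬",
#                 reading="イヌ", sub_tokens=None)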


def try_sudachi_import(split_mode="A"):
    """SudachiPy is required for Japanese support, so check for it.
    If it's not available, blow up and explain how to fix it.
    split_mode should be one of these values: "A", "B", "C", None->"A"."""
    try:
        from sudachipy import dictionary, tokenizer

        split_mode = {
            None: tokenizer.Tokenizer.SplitMode.A,
            "A": tokenizer.Tokenizer.SplitMode.A,
            "B": tokenizer.Tokenizer.SplitMode.B,
            "C": tokenizer.Tokenizer.SplitMode.C,
        }[split_mode]
        tok = dictionary.Dictionary().create(mode=split_mode)
        return tok
    except ImportError:
        raise ImportError(
            "Japanese support requires SudachiPy and SudachiDict-core "
            "(https://github.com/WorksApplications/SudachiPy). "
            "Install with `pip install sudachipy sudachidict_core` or "
            "install spaCy with `pip install spacy[ja]`."
        ) from None
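
# Sketch of what the helper returns (assuming SudachiPy is installed):
#
#   tok = try_sudachi_import("B")
#   morphemes = tok.tokenize("テキストを解析する")  # SudachiPy Morpheme objects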


def resolve_pos(orth, tag, next_tag):
    """If necessary, add a field to the POS tag for UD mapping.
    Under Universal Dependencies, sometimes the same Unidic POS tag can
    be mapped differently depending on the literal token or its context
    in the sentence. This function returns the resolved POS values for the
    token and the next token as a tuple.
    """
    # Some tokens have their UD tag decided based on the POS of the following
    # token.

    # apply orth based mapping
    if tag in TAG_ORTH_MAP:
        orth_map = TAG_ORTH_MAP[tag]
        if orth in orth_map:
            return orth_map[orth], None  # current_pos, next_pos

    # apply tag bi-gram mapping
    if next_tag:
        tag_bigram = tag, next_tag
        if tag_bigram in TAG_BIGRAM_MAP:
            current_pos, next_pos = TAG_BIGRAM_MAP[tag_bigram]
            if current_pos is None:  # apply tag uni-gram mapping for current_pos
                return (
                    TAG_MAP[tag][POS],
                    next_pos,
                )  # only next_pos is identified by tag bi-gram mapping
            else:
                return current_pos, next_pos

    # apply tag uni-gram mapping
    return TAG_MAP[tag][POS], None
|
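

# The following is an illustrative sketch only, not part of spaCy's API: it
# shows how the bi-gram -> uni-gram fallback above behaves when exercised in
# isolation. TAG_BIGRAM_MAP, TAG_MAP and POS are the names already used above;
# their concrete entries are not assumed here.
def _demo_bigram_fallback(tag, next_tag):
    """Resolve a coarse POS for `tag`, preferring the (tag, next_tag) bi-gram."""
    current_pos, next_pos = TAG_BIGRAM_MAP.get((tag, next_tag), (None, None))
    if current_pos is not None:
        return current_pos, next_pos
    # fall back to the tag uni-gram mapping, as the code above does
    return TAG_MAP[tag][POS], next_pos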


def get_dtokens_and_spaces(dtokens, text, gap_tag="空白"):
    # First, compare the content of the tokens and the text
    words = [x.surface for x in dtokens]
    if "".join("".join(words).split()) != "".join(text.split()):
        raise ValueError(Errors.E194.format(text=text, words=words))
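    # Note (illustrative, with made-up surfaces): the check above is
    # whitespace-insensitive, so surfaces like ["今日", "は"] are accepted for
    # the text "今日 は", while a genuine mismatch such as ["明日", "は"]
    # would raise E194.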
    text_dtokens = []
    text_spaces = []
    text_pos = 0
    # handle empty and whitespace-only texts
    if len(words) == 0:
        return text_dtokens, text_spaces
    elif len([word for word in words if not word.isspace()]) == 0:
        assert text.isspace()
        text_dtokens = [DetailedToken(text, gap_tag, "", text, None, None)]
        text_spaces = [False]
        return text_dtokens, text_spaces
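    # For example (illustrative only), a whitespace-only text such as "  "
    # comes back as a single gap token tagged with gap_tag ("空白" by default)
    # and a single False entry in text_spaces.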
    # align words and dtokens by referring to the text, and insert gap tokens
    # for the space char spans
    for i, (word, dtoken) in enumerate(zip(words, dtokens)):
        # skip all space tokens
        if word.isspace():
            continue
        try:
            word_start = text[text_pos:].index(word)
        except ValueError:
            raise ValueError(Errors.E194.format(text=text, words=words)) from None
        # space token
        if word_start > 0:
            w = text[text_pos : text_pos + word_start]
            text_dtokens.append(DetailedToken(w, gap_tag, "", w, None, None))
            text_spaces.append(False)
            text_pos += word_start

        # content word
        text_dtokens.append(dtoken)
        text_spaces.append(False)
        text_pos += len(word)
        # check for a single space char right after the word
        if i + 1 < len(dtokens) and dtokens[i + 1].surface == " ":
            text_spaces[-1] = True
            text_pos += 1
    # trailing space token
    if text_pos < len(text):
        w = text[text_pos:]
        text_dtokens.append(DetailedToken(w, gap_tag, "", w, None, None))
        text_spaces.append(False)

    return text_dtokens, text_spaces
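

# The following is an illustrative sketch only, not used by spaCy itself: it
# exercises get_dtokens_and_spaces on a text containing an ASCII space. The
# surfaces and tags are made-up placeholders, and the DetailedToken field
# layout (surface, tag, inf, lemma, reading, sub_tokens) is assumed to match
# the calls above.
def _demo_alignment():
    text = "今日 は"
    dtokens = [
        DetailedToken("今日", "名詞-普通名詞-副詞可能", "", "今日", None, None),
        DetailedToken(" ", "空白", "", " ", None, None),
        DetailedToken("は", "助詞-係助詞", "", "は", None, None),
    ]
    aligned, spaces = get_dtokens_and_spaces(dtokens, text)
    # the bare space token is skipped; spaces == [True, False] records the gap
    return aligned, spaces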


def pickle_japanese(instance):
    return Japanese, tuple()


copy_reg.pickle(Japanese, pickle_japanese)
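

# Illustrative sketch only: the copy_reg registration above reduces any
# Japanese instance to (Japanese, ()), so unpickling simply re-instantiates a
# fresh Japanese object. Running this assumes the tokenizer's dependencies
# (e.g. SudachiPy) are installed.
def _demo_pickle_roundtrip():
    import pickle

    nlp = Japanese()
    restored = pickle.loads(pickle.dumps(nlp))
    return isinstance(restored, Japanese)

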
__all__ = ["Japanese"]