# coding: utf8
from __future__ import unicode_literals
import random
from .templates import TPL_DEP_SVG, TPL_DEP_WORDS, TPL_DEP_ARCS
from .templates import TPL_ENT, TPL_ENTS, TPL_FIGURE, TPL_TITLE, TPL_PAGE
from ..util import minify_html, escape_html


class DependencyRenderer(object):
    """Render dependency parses as SVGs."""
    style = 'dep'

    def __init__(self, options={}):
        """Initialise dependency renderer.

        options (dict): Visualiser-specific options (compact, word_spacing,
            arrow_spacing, arrow_width, arrow_stroke, distance, offset_x,
            color, bg, font)
        """
        self.compact = options.get('compact', False)
        self.word_spacing = options.get('word_spacing', 45)
        self.arrow_spacing = options.get('arrow_spacing',
                                         12 if self.compact else 20)
        self.arrow_width = options.get('arrow_width',
                                       6 if self.compact else 10)
        self.arrow_stroke = options.get('arrow_stroke', 2)
        self.distance = options.get('distance', 150 if self.compact else 175)
        self.offset_x = options.get('offset_x', 50)
        self.color = options.get('color', '#000000')
        self.bg = options.get('bg', '#ffffff')
        self.font = options.get('font', 'Arial')
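
    # The options above are plain overrides; a minimal sketch with
    # illustrative values (not a fixed API surface):
    #     renderer = DependencyRenderer(options={'compact': True, 'bg': '#fafafa'})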

    def render(self, parsed, page=False, minify=False):
        """Render complete markup.

        parsed (list): Dependency parses to render.
        page (bool): Render parses wrapped as full HTML page.
        minify (bool): Minify HTML markup.
        RETURNS (unicode): Rendered SVG or HTML markup.
        """
        # Create a random ID prefix to make sure parses don't receive the
        # same ID, even if they're identical
        id_prefix = random.randint(0, 999)
        rendered = [self.render_svg('{}-{}'.format(id_prefix, i), p['words'], p['arcs'])
                    for i, p in enumerate(parsed)]
        if page:
            content = ''.join([TPL_FIGURE.format(content=svg)
                               for svg in rendered])
            markup = TPL_PAGE.format(content=content)
        else:
            markup = ''.join(rendered)
        if minify:
            return minify_html(markup)
        return markup

    def render_svg(self, render_id, words, arcs):
        """Render SVG.

        render_id (unicode): Unique ID, typically index of document.
        words (list): Individual words and their tags.
        arcs (list): Individual arcs and their start, end, direction and label.
        RETURNS (unicode): Rendered SVG markup.
        """
        self.levels = self.get_levels(arcs)
        self.highest_level = len(self.levels)
        self.offset_y = self.distance/2*self.highest_level+self.arrow_stroke
        self.width = self.offset_x+len(words)*self.distance
        self.height = self.offset_y+3*self.word_spacing
        self.id = render_id
        words = [self.render_word(w['text'], w['tag'], i)
                 for i, w in enumerate(words)]
        arcs = [self.render_arrow(a['label'], a['start'],
                                  a['end'], a['dir'], i)
                for i, a in enumerate(arcs)]
        content = ''.join(words) + ''.join(arcs)
        return TPL_DEP_SVG.format(id=self.id, width=self.width,
                                  height=self.height, color=self.color,
                                  bg=self.bg, font=self.font, content=content)

    def render_word(self, text, tag, i):
        """Render individual word.

        text (unicode): Word text.
        tag (unicode): Part-of-speech tag.
        i (int): Unique ID, typically word index.
        RETURNS (unicode): Rendered SVG markup.
        """
        y = self.offset_y+self.word_spacing
        x = self.offset_x+i*self.distance
        html_text = escape_html(text)
        return TPL_DEP_WORDS.format(text=html_text, tag=tag, x=x, y=y)

    def render_arrow(self, label, start, end, direction, i):
        """Render individual arrow.

        label (unicode): Dependency label.
        start (int): Index of start word.
        end (int): Index of end word.
        direction (unicode): Arrow direction, 'left' or 'right'.
        i (int): Unique ID, typically arrow index.
        RETURNS (unicode): Rendered SVG markup.
        """
        level = self.levels.index(end-start)+1
        x_start = self.offset_x+start*self.distance+self.arrow_spacing
        y = self.offset_y
        x_end = (self.offset_x+(end-start)*self.distance+start*self.distance
                 - self.arrow_spacing*(self.highest_level-level)/4)
        y_curve = self.offset_y-level*self.distance/2
        if self.compact:
            y_curve = self.offset_y-level*self.distance/6
        if y_curve == 0 and len(self.levels) > 5:
            y_curve = -self.distance
        arrowhead = self.get_arrowhead(direction, x_start, y, x_end)
        arc = self.get_arc(x_start, y, y_curve, x_end)
        return TPL_DEP_ARCS.format(id=self.id, i=i, stroke=self.arrow_stroke,
                                   head=arrowhead, label=label, arc=arc)

    def get_arc(self, x_start, y, y_curve, x_end):
        """Render individual arc.

        x_start (int): X-coordinate of arrow start point.
        y (int): Y-coordinate of arrow start and end point.
        y_curve (int): Y-coordinate of Cubic Bézier y_curve point.
        x_end (int): X-coordinate of arrow end point.
        RETURNS (unicode): Definition of the arc path ('d' attribute).
        """
        template = "M{x},{y} C{x},{c} {e},{c} {e},{y}"
        if self.compact:
            template = "M{x},{y} {x},{c} {e},{c} {e},{y}"
        return template.format(x=x_start, y=y, c=y_curve, e=x_end)
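
    # Worked example with illustrative values: the default (non-compact)
    # template above produces a cubic Bézier path such as
    #     get_arc(10, 5, -20, 110)  ->  "M10,5 C10,-20 110,-20 110,5"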

    def get_arrowhead(self, direction, x, y, end):
        """Render individual arrow head.

        direction (unicode): Arrow direction, 'left' or 'right'.
        x (int): X-coordinate of arrow start point.
        y (int): Y-coordinate of arrow start and end point.
        end (int): X-coordinate of arrow end point.
        RETURNS (unicode): Definition of the arrow head path ('d' attribute).
        """
        if direction == 'left':
            pos1, pos2, pos3 = (x, x-self.arrow_width+2, x+self.arrow_width-2)
        else:
            pos1, pos2, pos3 = (end, end+self.arrow_width-2,
                                end-self.arrow_width+2)
        arrowhead = (pos1, y+2, pos2, y-self.arrow_width, pos3,
                     y-self.arrow_width)
        return "M{},{} L{},{} {},{}".format(*arrowhead)
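
    # Worked example with illustrative values (arrow_width=10): a left-pointing
    # head becomes a small triangle path anchored at the arc start,
    #     get_arrowhead('left', 10, 5, 110)  ->  "M10,7 L2,-5 18,-5"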

    def get_levels(self, arcs):
        """Calculate available arc height "levels".
        Used to calculate arrow heights dynamically and without wasting space.

        arcs (list): Individual arcs and their start, end, direction and label.
        RETURNS (list): Arc levels sorted from lowest to highest.
        """
        levels = set(map(lambda arc: arc['end'] - arc['start'], arcs))
        return sorted(list(levels))
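
# Usage sketch (illustrative values, assuming input pre-parsed into dicts with
# the 'words'/'arcs' keys read by render_svg, render_word and render_arrow):
#
#     renderer = DependencyRenderer(options={'compact': True})
#     parsed = [{'words': [{'text': 'This', 'tag': 'DET'},
#                          {'text': 'works', 'tag': 'VERB'}],
#                'arcs': [{'start': 0, 'end': 1, 'label': 'nsubj',
#                          'dir': 'left'}]}]
#     svg_markup = renderer.render(parsed, page=False, minify=False)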


class EntityRenderer(object):
    """Render named entities as HTML."""
    style = 'ent'

    def __init__(self, options={}):
        """Initialise entity renderer.

        options (dict): Visualiser-specific options (colors, ents)
        """
        colors = {'ORG': '#7aecec', 'PRODUCT': '#bfeeb7', 'GPE': '#feca74',
                  'LOC': '#ff9561', 'PERSON': '#aa9cfc', 'NORP': '#c887fb',
                  'FACILITY': '#9cc9cc', 'EVENT': '#ffeb80', 'LAW': '#ff8197',
                  'LANGUAGE': '#ff8197', 'WORK_OF_ART': '#f0d0ff',
                  'DATE': '#bfe1d9', 'TIME': '#bfe1d9', 'MONEY': '#e4e7d2',
                  'QUANTITY': '#e4e7d2', 'ORDINAL': '#e4e7d2',
                  'CARDINAL': '#e4e7d2', 'PERCENT': '#e4e7d2'}
        colors.update(options.get('colors', {}))
        self.default_color = '#ddd'
        self.colors = colors
        self.ents = options.get('ents', None)
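
    # The options above are plain overrides; a minimal sketch with
    # illustrative values (not a fixed API surface):
    #     renderer = EntityRenderer(options={'ents': ['ORG'],
    #                                        'colors': {'ORG': '#ffd54f'}})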

    def render(self, parsed, page=False, minify=False):
        """Render complete markup.

        parsed (list): Documents and their entity spans to render.
        page (bool): Render parses wrapped as full HTML page.
        minify (bool): Minify HTML markup.
        RETURNS (unicode): Rendered HTML markup.
        """
        rendered = [self.render_ents(p['text'], p['ents'],
                                     p.get('title', None)) for p in parsed]
        if page:
            docs = ''.join([TPL_FIGURE.format(content=doc)
                            for doc in rendered])
            markup = TPL_PAGE.format(content=docs)
        else:
            markup = ''.join(rendered)
        if minify:
            return minify_html(markup)
        return markup

    def render_ents(self, text, spans, title):
        """Render entities in text.

        text (unicode): Original text.
        spans (list): Individual entity spans and their start, end and label.
        title (unicode or None): Document title set in Doc.user_data['title'].
        """
        markup = ''
        offset = 0
        for span in spans:
            label = span['label']
            start = span['start']
            end = span['end']
            entity = text[start:end]
            fragments = text[offset:start].split('\n')
            for i, fragment in enumerate(fragments):
                markup += fragment
                if len(fragments) > 1 and i != len(fragments)-1:
                    markup += '</br>'
            if self.ents is None or label.upper() in self.ents:
                color = self.colors.get(label.upper(), self.default_color)
                markup += TPL_ENT.format(label=label, text=entity, bg=color)
            else:
                markup += entity
            offset = end
        markup += text[offset:]
        markup = TPL_ENTS.format(content=markup, colors=self.colors)
        if title:
            markup = TPL_TITLE.format(title=title) + markup
        return markup
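

# Usage sketch (illustrative values, assuming input pre-parsed into dicts with
# the 'text'/'ents' keys and character offsets read by render_ents above):
#
#     renderer = EntityRenderer(options={'ents': ['ORG']})
#     parsed = [{'text': 'Welcome to spaCy by Explosion AI.',
#                'ents': [{'start': 20, 'end': 32, 'label': 'ORG'}],
#                'title': None}]
#     html_markup = renderer.render(parsed, page=True)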