Matthew Honnibal 2019-06-07 20:41:00 +02:00
commit 7f71cf0b02
38 changed files with 1426 additions and 102 deletions

.github/contributors/NirantK.md (new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Nirant Kasliwal |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | |
| GitHub username | NirantK |
| Website (optional) | https://nirantk.com |

.github/contributors/estr4ng7d.md (new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Amey Baviskar |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | 21-May-2019 |
| GitHub username | estr4ng7d |
| Website (optional) | |

.github/contributors/intrafindBreno.md (new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | ------------------------ |
| Name | Breno Faria |
| Company name (if applicable) | IntraFind |
| Title or role (if applicable) | Product Lead |
| Date | 03.06.2019 |
| GitHub username | intrafindBreno |
| Website (optional) | |

.github/contributors/munozbravo.md (new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Germán Muñoz |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | 2019-06-01 |
| GitHub username | munozbravo |
| Website (optional) | |

.github/contributors/nipunsadvilkar.md (new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [x] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Nipun Sadvilkar |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | 31st May, 2019 |
| GitHub username | nipunsadvilkar |
| Website (optional) | https://nipunsadvilkar.github.io/ |

.github/contributors/ujwal-narayan.md (new file, 106 lines)

@@ -0,0 +1,106 @@
# spaCy contributor agreement
This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI GmbH](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.
If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.
## Contributor Agreement
1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.
2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:
* you hereby assign to us joint ownership, and to the extent that such
assignment is or becomes invalid, ineffective or unenforceable, you hereby
grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
royalty-free, unrestricted license to exercise all rights under those
copyrights. This includes, at our option, the right to sublicense these same
rights to third parties through multiple levels of sublicensees or other
licensing arrangements;
* you agree that each of us can do all things in relation to your
contribution as if each of us were the sole owners, and if one of us makes
a derivative work of your contribution, the one who makes the derivative
work (or has it made) will be the sole owner of that derivative work;
* you agree that you will not assert any moral rights in your contribution
against us, our licensees or transferees;
* you agree that we may register a copyright in your contribution and
exercise all ownership rights associated with it; and
* you agree that neither of us has any duty to consult with, obtain the
consent of, pay or render an accounting to the other for any use or
distribution of your contribution.
3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:
* make, have made, use, sell, offer to sell, import, and otherwise transfer
your contribution in whole or in part, alone or in combination with or
included in any product, work or materials arising out of the project to
which your contribution was submitted, and
* at our option, to sublicense these same rights to third parties through
multiple levels of sublicensees or other licensing arrangements.
4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.
5. You covenant, represent, warrant and agree that:
* Each contribution that you submit is and shall be an original work of
authorship and you can legally grant the rights set out in this SCA;
* to the best of your knowledge, each contribution will not violate any
third party's copyrights, trademarks, patents, or other intellectual
property rights; and
* each contribution shall be in compliance with U.S. export control laws and
other applicable export and import laws. You agree to notify us if you
become aware of any circumstance which would make any of the foregoing
representations inaccurate in any respect. We may publicly disclose your
participation in the project, including the fact that you have signed the SCA.
6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statements below. Please do NOT
mark both statements:
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect to my
contributions.
* [ ] I am signing on behalf of my employer or a legal entity and I have the
actual authority to contractually bind that entity.
## Contributor Details
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Ujwal Narayan |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | 17/05/2019 |
| GitHub username | ujwal-narayan |
| Website (optional) | |


@@ -5,13 +5,14 @@ thinc>=7.0.2,<7.1.0
blis>=0.2.2,<0.3.0
murmurhash>=0.28.0,<1.1.0
wasabi>=0.2.0,<1.1.0
srsly>=0.0.5,<1.1.0
srsly>=0.0.6,<1.1.0
# Third party dependencies
numpy>=1.15.0
requests>=2.13.0,<3.0.0
jsonschema>=2.6.0,<3.1.0
plac<1.0.0,>=0.9.6
pathlib==1.0.1; python_version < "3.4"
# Optional dependencies
jsonschema>=2.6.0,<3.1.0
# Development dependencies
cython>=0.25
pytest>=4.0.0,<4.1.0


@@ -209,7 +209,7 @@ def setup_package():
generate_cython(root, "spacy")
setup(
name=about["__title__"],
name="spacy",
zip_safe=False,
packages=PACKAGES,
package_data=PACKAGE_DATA,
@@ -232,9 +232,8 @@ def setup_package():
"blis>=0.2.2,<0.3.0",
"plac<1.0.0,>=0.9.6",
"requests>=2.13.0,<3.0.0",
"jsonschema>=2.6.0,<3.1.0",
"wasabi>=0.2.0,<1.1.0",
"srsly>=0.0.5,<1.1.0",
"srsly>=0.0.6,<1.1.0",
'pathlib==1.0.1; python_version < "3.4"',
],
setup_requires=["wheel"],


@@ -33,7 +33,7 @@ from .. import util
batch_size=("Number of words per training batch", "option", "bs", int),
max_length=("Max words per example.", "option", "xw", int),
min_length=("Min words per example.", "option", "nw", int),
seed=("Seed for random number generators", "option", "s", float),
seed=("Seed for random number generators", "option", "s", int),
n_iter=("Number of iterations to pretrain", "option", "i", int),
n_save_every=("Save model every X batches.", "option", "se", int),
)


@@ -141,8 +141,14 @@ class Errors(object):
E023 = ("Error cleaning up beam: The same state occurred twice at "
"memory address {addr} and position {i}.")
E024 = ("Could not find an optimal move to supervise the parser. Usually, "
"this means the GoldParse was not correct. For example, are all "
"labels added to the model?")
"this means that the model can't be updated in a way that's valid "
"and satisfies the correct annotations specified in the GoldParse. "
"For example, are all labels added to the model? If you're "
"training a named entity recognizer, also make sure that none of "
"your annotated entity spans have leading or trailing whitespace. "
"You can also use the experimental `debug-data` command to "
"validate your JSON-formatted training data. For details, run:\n"
"python -m spacy debug-data --help")
E025 = ("String is too long: {length} characters. Max is 2**30.")
E026 = ("Error accessing token at position {i}: out of bounds in Doc of "
"length {length}.")
@@ -383,6 +389,10 @@ class Errors(object):
E133 = ("The sum of prior probabilities for alias '{alias}' should not exceed 1, "
"but found {sum}.")
E134 = ("Alias '{alias}' defined for unknown entity '{entity}'.")
E135 = ("If you meant to replace a built-in component, use `create_pipe`: "
"`nlp.replace_pipe('{name}', nlp.create_pipe('{name}'))`")
E136 = ("This additional feature requires the jsonschema library to be "
"installed:\npip install jsonschema")
@add_codes


@@ -532,7 +532,7 @@ cdef class GoldParse:
self.labels[i] = deps[i2j_multi[i]]
# Now set NER...This is annoying because if we've split
# got an entity word split into two, we need to adjust the
# BILOU tags. We can't have BB or LL etc.
# BILUO tags. We can't have BB or LL etc.
# Case 1: O -- easy.
ner_tag = entities[i2j_multi[i]]
if ner_tag == "O":


@@ -4,6 +4,7 @@ from __future__ import unicode_literals
from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
from .tag_map import TAG_MAP
from .stop_words import STOP_WORDS
from .lex_attrs import LEX_ATTRS
from .lemmatizer import LOOKUP
from .syntax_iterators import SYNTAX_ITERATORS
@@ -16,6 +17,7 @@ from ...util import update_exc, add_lookups
class SpanishDefaults(Language.Defaults):
lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
lex_attr_getters.update(LEX_ATTRS)
lex_attr_getters[LANG] = lambda text: "es"
lex_attr_getters[NORM] = add_lookups(
Language.Defaults.lex_attr_getters[NORM], BASE_NORMS


@@ -0,0 +1,59 @@
# coding: utf8
from __future__ import unicode_literals
from ...attrs import LIKE_NUM
_num_words = [
"cero",
"uno",
"dos",
"tres",
"cuatro",
"cinco",
"seis",
"siete",
"ocho",
"nueve",
"diez",
"once",
"doce",
"trece",
"catorce",
"quince",
"dieciséis",
"diecisiete",
"dieciocho",
"diecinueve",
"veinte",
"treinta",
"cuarenta",
"cincuenta",
"sesenta",
"setenta",
"ochenta",
"noventa",
"cien",
"mil",
"millón",
"billón",
"trillón",
]
def like_num(text):
if text.startswith(("+", "-", "±", "~")):
text = text[1:]
text = text.replace(",", "").replace(".", "")
if text.isdigit():
return True
if text.count("/") == 1:
num, denom = text.split("/")
if num.isdigit() and denom.isdigit():
return True
if text.lower() in _num_words:
return True
return False
LEX_ATTRS = {LIKE_NUM: like_num}
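For reference, the predicate added above can be exercised in isolation. This is a standalone sketch with only a handful of the `_num_words` entries inlined; the logic mirrors the diff above:

```python
# Standalone sketch of the Spanish like_num predicate (subset of _num_words).
_num_words = {"cero", "uno", "dos", "tres", "diez", "mil", "millón"}

def like_num(text):
    # Strip a leading sign before checking the digits.
    if text.startswith(("+", "-", "±", "~")):
        text = text[1:]
    # Ignore thousand/decimal separators.
    text = text.replace(",", "").replace(".", "")
    if text.isdigit():
        return True
    # Fractions like "3/4" also count as number-like.
    if text.count("/") == 1:
        num, denom = text.split("/")
        if num.isdigit() and denom.isdigit():
            return True
    # Finally, spelled-out numerals (case-insensitive).
    return text.lower() in _num_words
```

With this, `like_num("1.000")`, `like_num("3/4")` and `like_num("Mil")` all return `True`, while `like_num("perro")` returns `False`.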


@@ -4,67 +4,87 @@ from __future__ import unicode_literals
STOP_WORDS = set(
"""
ಮತ
ಅವರ
ಅವರ
ಬಗ
ಆದರ
ಅವರನ
ಆದರ
ತಮ
ದರ
ಿದರ
ಿ
ಬಳಿ
ಅವರಿ
ನಡ
ಿ
ಇದ
ಅವರ
ಕಳ
ಇದ
ಿಿಿದರ
ಿ
ತನ
ಿಿಿ
ಿ
ಈಗ
ಎಲ
ನನ
ನಮ
ಈಗಗಲ
ಇದಕ
ಹಲವ
ಇದ
ಮತ
ಿದರ
ಿ
ಇದರಿ
ಲಕ
ಅದ
ಇದನ
ಿ
ದರ
ಅವರ
ಈಗ
ಿ
ಅಷ
ಇದ
ಿ
ತಮ
ನಮ
ಿದರ
ಮತ
ಇದ
ಇತ
ಎಲ
ನಡ
ಅದನ
ಇಲಿ
ಆಗ
ಿ.
ಅದ
ಇರ
ಅಲಲದ
ಲವ
ದರ
ಿ
ಿ
ಇದರಿ
ನನಗ
ಅಲಲದ
ಎಷ
ಇದರ
ಇಲ
ಕಳ
ಈಗಗಲ
ಿ
ಅದಕ
ಬಗ
ಅವರ
ಇದನ
ಇದ
ಇನ
ಎಲ
ಇರ
ಅವರಿ
ಿ
ಏನ
ಇಲಿ
ನನನನ
ಲವ
ಬಳಿ
ತನ
ಆಗ
ಅಥವ
ಅಲ
ವಲ
ಆದರ
ಮತ
ಇನ
ಅದ
ಆಗಿ
ಅವರನ
ಿ
ನಡಿ
ಇದಕ
ನನ
""".split()
)

spacy/lang/mr/__init__.py Normal file

@@ -0,0 +1,20 @@
# coding: utf8
from __future__ import unicode_literals
from .stop_words import STOP_WORDS
from ...language import Language
from ...attrs import LANG
class MarathiDefaults(Language.Defaults):
lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
lex_attr_getters[LANG] = lambda text: "mr"
stop_words = STOP_WORDS
class Marathi(Language):
lang = "mr"
Defaults = MarathiDefaults
__all__ = ["Marathi"]

spacy/lang/mr/stop_words.py Normal file

@@ -0,0 +1,196 @@
# coding: utf8
from __future__ import unicode_literals
# Source: https://github.com/stopwords-iso/stopwords-mr/blob/master/stopwords-mr.txt, https://github.com/6/stopwords-json/edit/master/dist/mr.json
STOP_WORDS = set(
"""
अतर
आणि
मग
पर
ऐस
आत
तय
अस
हण
आह
जर
हणि
एक
ऐस
मज
एथ
जय
अस
कर
ऐस
हल
ि
आघव
ऊनि
एक
सकळ
एऱहव
ि
ि
ि
तरि
आपण
ि
कर
इय
पड
अधि
अन
अश
असलय
असल
अस
अस
अस
आज
आणि
आत
आपल
आल
आल
आल
आह
आह
एक
एक
कम
करणय
कर
ि
ऊन
तर
तर
तस
ि
पण
पम
परयतन
ि
हणज
हण
हण
यकत
सर
ि
हज
""".split()
)


@@ -53,5 +53,11 @@ BASE_NORMS = {
"US$": "$",
"C$": "$",
"A$": "$",
"" : "$",
"": "$",
"": "$",
"": "$",
"": "$",
"Mex$": "$",
"": "$",
"": "$",
}


@@ -333,6 +333,11 @@ class Language(object):
"""
if name not in self.pipe_names:
raise ValueError(Errors.E001.format(name=name, opts=self.pipe_names))
if not hasattr(component, "__call__"):
msg = Errors.E003.format(component=repr(component), name=name)
if isinstance(component, basestring_) and component in self.factories:
msg += Errors.E135.format(name=name)
raise ValueError(msg)
self.pipeline[self.pipe_names.index(name)] = (name, component)
def rename_pipe(self, old_name, new_name):
@@ -412,7 +417,9 @@ class Language(object):
golds (iterable): A batch of `GoldParse` objects.
drop (float): The dropout rate.
sgd (callable): An optimizer.
RETURNS (dict): Results from the update.
losses (dict): Dictionary to update with the loss, keyed by component.
component_cfg (dict): Config parameters for specific pipeline
components, keyed by component name.
DOCS: https://spacy.io/api/language#update
"""
@@ -593,6 +600,19 @@ class Language(object):
def evaluate(
self, docs_golds, verbose=False, batch_size=256, scorer=None, component_cfg=None
):
"""Evaluate a model's pipeline components.
docs_golds (iterable): Tuples of `Doc` and `GoldParse` objects.
verbose (bool): Print debugging information.
batch_size (int): Batch size to use.
scorer (Scorer): Optional `Scorer` to use. If not passed in, a new one
will be created.
component_cfg (dict): An optional dictionary with extra keyword
arguments for specific components.
RETURNS (Scorer): The scorer containing the evaluation results.
DOCS: https://spacy.io/api/language#evaluate
"""
if scorer is None:
scorer = Scorer()
if component_cfg is None:


@@ -48,7 +48,10 @@ cdef class Matcher:
self._extra_predicates = []
self.vocab = vocab
self.mem = Pool()
self.validator = get_json_validator(TOKEN_PATTERN_SCHEMA) if validate else None
if validate:
self.validator = get_json_validator(TOKEN_PATTERN_SCHEMA)
else:
self.validator = None
def __reduce__(self):
data = (self.vocab, self._patterns, self._callbacks)


@@ -127,7 +127,7 @@ cdef class PhraseMatcher:
and self.attr not in (DEP, POS, TAG, LEMMA):
string_attr = self.vocab.strings[self.attr]
user_warning(Warnings.W012.format(key=key, attr=string_attr))
tags = get_bilou(length)
tags = get_biluo(length)
phrase_key = <attr_t*>mem.alloc(length, sizeof(attr_t))
for i, tag in enumerate(tags):
attr_value = self.get_lex_value(doc, i)
@@ -230,7 +230,7 @@ cdef class PhraseMatcher:
return "matcher:{}-{}".format(string_attr_name, string_attr_value)
def get_bilou(length):
def get_biluo(length):
if length == 0:
raise ValueError(Errors.E127)
elif length == 1:

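The `get_bilou` → `get_biluo` rename above refers to the helper that builds the tag sequence for a phrase. The real implementation returns internal attribute constants; this hypothetical pure-Python sketch only illustrates the BILUO shape (B = begin, I = in, L = last, U = unit):

```python
# Hypothetical sketch of get_biluo: tag sequence for a phrase of `length` tokens.
def get_biluo(length):
    if length == 0:
        raise ValueError("Phrase length must be >= 1")
    if length == 1:
        # Single-token phrases get the "unit" tag.
        return ["U-"]
    # First token is B-, last is L-, everything in between is I-.
    return ["B-"] + ["I-"] * (length - 2) + ["L-"]
```

For example, `get_biluo(3)` yields `["B-", "I-", "L-"]`, which is why sequences like BB or LL can never occur.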

@@ -35,7 +35,17 @@ class PRFScore(object):
class Scorer(object):
"""Compute evaluation scores."""
def __init__(self, eval_punct=False):
"""Initialize the Scorer.
eval_punct (bool): Evaluate the dependency attachments to and from
punctuation.
RETURNS (Scorer): The newly created object.
DOCS: https://spacy.io/api/scorer#init
"""
self.tokens = PRFScore()
self.sbd = PRFScore()
self.unlabelled = PRFScore()
@@ -46,34 +56,46 @@ class Scorer(object):
@property
def tags_acc(self):
"""RETURNS (float): Part-of-speech tag accuracy (fine grained tags,
i.e. `Token.tag`).
"""
return self.tags.fscore * 100
@property
def token_acc(self):
"""RETURNS (float): Tokenization accuracy."""
return self.tokens.precision * 100
@property
def uas(self):
"""RETURNS (float): Unlabelled dependency score."""
return self.unlabelled.fscore * 100
@property
def las(self):
"""RETURNS (float): Labelled depdendency score."""
return self.labelled.fscore * 100
@property
def ents_p(self):
"""RETURNS (float): Named entity accuracy (precision)."""
return self.ner.precision * 100
@property
def ents_r(self):
"""RETURNS (float): Named entity accuracy (recall)."""
return self.ner.recall * 100
@property
def ents_f(self):
"""RETURNS (float): Named entity accuracy (F-score)."""
return self.ner.fscore * 100
@property
def scores(self):
"""RETURNS (dict): All scores with keys `uas`, `las`, `ents_p`,
`ents_r`, `ents_f`, `tags_acc` and `token_acc`.
"""
return {
"uas": self.uas,
"las": self.las,
@@ -84,9 +106,20 @@ class Scorer(object):
"token_acc": self.token_acc,
}
def score(self, tokens, gold, verbose=False, punct_labels=("p", "punct")):
if len(tokens) != len(gold):
gold = GoldParse.from_annot_tuples(tokens, zip(*gold.orig_annot))
def score(self, doc, gold, verbose=False, punct_labels=("p", "punct")):
"""Update the evaluation scores from a single Doc / GoldParse pair.
doc (Doc): The predicted annotations.
gold (GoldParse): The correct annotations.
verbose (bool): Print debugging information.
punct_labels (tuple): Dependency labels for punctuation. Used to
evaluate dependency attachments to punctuation if `eval_punct` is
`True`.
DOCS: https://spacy.io/api/scorer#score
"""
if len(doc) != len(gold):
gold = GoldParse.from_annot_tuples(doc, zip(*gold.orig_annot))
gold_deps = set()
gold_tags = set()
gold_ents = set(tags_to_entities([annot[-1] for annot in gold.orig_annot]))
@@ -96,7 +129,7 @@ class Scorer(object):
gold_deps.add((id_, head, dep.lower()))
cand_deps = set()
cand_tags = set()
for token in tokens:
for token in doc:
if token.orth_.isspace():
continue
gold_i = gold.cand_to_gold[token.i]
@@ -116,7 +149,7 @@ class Scorer(object):
cand_deps.add((gold_i, gold_head, token.dep_.lower()))
if "-" not in [token[-1] for token in gold.orig_annot]:
cand_ents = set()
for ent in tokens.ents:
for ent in doc.ents:
first = gold.cand_to_gold[ent.start]
last = gold.cand_to_gold[ent.end - 1]
if first is None or last is None:

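The percentage properties documented above are thin wrappers around precision/recall/F-score counters. A minimal sketch of the underlying `PRFScore` bookkeeping (named in the hunk header; details assumed) looks like this:

```python
class PRFScore:
    """Precision / recall / F-score counter (simplified sketch)."""

    def __init__(self):
        self.tp = self.fp = self.fn = 0

    def score_set(self, cand, gold):
        # Compare a set of predicted items against the gold set.
        self.tp += len(cand & gold)   # predicted and correct
        self.fp += len(cand - gold)   # predicted but wrong
        self.fn += len(gold - cand)   # missed entirely

    @property
    def precision(self):
        return self.tp / (self.tp + self.fp + 1e-100)

    @property
    def recall(self):
        return self.tp / (self.tp + self.fn + 1e-100)

    @property
    def fscore(self):
        p, r = self.precision, self.recall
        return 2 * (p * r) / (p + r + 1e-100)
```

The tiny `1e-100` term avoids division by zero when nothing has been scored yet; `Scorer.ents_p` and friends then just multiply these values by 100.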

@@ -140,3 +140,28 @@ def test_underscore_mutable_defaults_dict(en_vocab):
assert len(token1._.mutable) == 2
assert token1._.mutable["x"] == ["y"]
assert len(token2._.mutable) == 0
def test_underscore_dir(en_vocab):
"""Test that dir() correctly returns extension attributes. This enables
things like tab-completion for the attributes in doc._."""
Doc.set_extension("test_dir", default=None)
doc = Doc(en_vocab, words=["hello", "world"])
assert "_" in dir(doc)
assert "test_dir" in dir(doc._)
assert "test_dir" not in dir(doc[0]._)
assert "test_dir" not in dir(doc[0:2]._)
def test_underscore_docstring(en_vocab):
"""Test that docstrings are available for extension methods, even though
they're partials."""
def test_method(doc, arg1=1, arg2=2):
"""I am a docstring"""
return (arg1, arg2)
Doc.set_extension("test_docstrings", method=test_method)
doc = Doc(en_vocab, words=["hello", "world"])
assert test_method.__doc__ == "I am a docstring"
assert doc._.test_docstrings.__doc__.rsplit(". ")[-1] == "I am a docstring"


@@ -52,11 +52,13 @@ def test_get_pipe(nlp, name):
assert nlp.get_pipe(name) == new_pipe
@pytest.mark.parametrize("name,replacement", [("my_component", lambda doc: doc)])
def test_replace_pipe(nlp, name, replacement):
@pytest.mark.parametrize("name,replacement,not_callable", [("my_component", lambda doc: doc, {})])
def test_replace_pipe(nlp, name, replacement, not_callable):
with pytest.raises(ValueError):
nlp.replace_pipe(name, new_pipe)
nlp.add_pipe(new_pipe, name=name)
with pytest.raises(ValueError):
nlp.replace_pipe(name, not_callable)
nlp.replace_pipe(name, replacement)
assert nlp.get_pipe(name) != new_pipe
assert nlp.get_pipe(name) == replacement


@@ -0,0 +1,15 @@
# coding: utf8
from __future__ import unicode_literals
import pytest
from spacy.lang.es import Spanish
def test_issue3803():
"""Test that spanish num-like tokens have True for like_num attribute."""
nlp = Spanish()
text = "2 dos 1000 mil 12 doce"
doc = nlp(text)
assert [t.like_num for t in doc] == [True, True, True, True, True, True]


@@ -25,6 +25,11 @@ class Underscore(object):
object.__setattr__(self, "_start", start)
object.__setattr__(self, "_end", end)
def __dir__(self):
# Hack to enable autocomplete on custom extensions
extensions = list(self._extensions.keys())
return ["set", "get", "has"] + extensions
def __getattr__(self, name):
if name not in self._extensions:
raise AttributeError(Errors.E046.format(name=name))
@@ -32,7 +37,16 @@ class Underscore(object):
if getter is not None:
return getter(self._obj)
elif method is not None:
return functools.partial(method, self._obj)
method_partial = functools.partial(method, self._obj)
# Hack to port over docstrings of the original function
# See https://stackoverflow.com/q/27362727/6400719
method_docstring = method.__doc__ or ""
method_docstring_prefix = (
"This method is a partial function and its first argument "
"(the object it's called on) will be filled automatically. "
)
method_partial.__doc__ = method_docstring_prefix + method_docstring
return method_partial
else:
key = self._get_key(name)
if key in self._doc.user_data:


@@ -14,8 +14,11 @@ import functools
import itertools
import numpy.random
import srsly
from jsonschema import Draft4Validator
try:
import jsonschema
except ImportError:
jsonschema = None
try:
import cupy.random
@@ -682,7 +685,9 @@ def get_json_validator(schema):
# validator that's used (e.g. different draft implementation), without
# having to change it all across the codebase.
# TODO: replace with (stable) Draft6Validator, if available
return Draft4Validator(schema)
if jsonschema is None:
raise ValueError(Errors.E136)
return jsonschema.Draft4Validator(schema)
def validate_schema(schema):

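The change above is the standard optional-dependency pattern: import lazily, fall back to `None`, and raise a helpful error only when the feature is actually used. A self-contained sketch (error text inlined from the new `E136` message; exact wiring assumed):

```python
# Optional-dependency pattern: module import always succeeds, even
# when jsonschema isn't installed.
try:
    import jsonschema
except ImportError:
    jsonschema = None

def get_json_validator(schema):
    # Fail loudly only when validation is actually requested.
    if jsonschema is None:
        raise ValueError(
            "This additional feature requires the jsonschema library to be "
            "installed:\npip install jsonschema"
        )
    return jsonschema.Draft4Validator(schema)
```

The benefit is that `import spacy`-style top-level imports keep working without the extra package; only code paths that need schema validation pay the cost.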

@@ -34,11 +34,10 @@ on the issue tracker.
## JSON format
To add a project, fork this repository, edit the [`universe.json`](universe.json)
To add a project, fork this repository, edit the [`universe.json`](meta/universe.json)
and add an object of the following format to the list of `"resources"`. Before
you submit your pull request, make sure to use a linter to verify that your
markup is correct. We'll also be adding linting for the `universe.json` to our
automated GitHub checks soon.
markup is correct.
```json
{
@@ -92,4 +91,4 @@ automated GitHub checks soon.
To separate them from the projects, educational materials also specify
`"type": "education"`. Books can also set a `"cover"` field containing a URL
to a cover image. If available, it's used in the overview and displayed on
the individual book page.
the individual book page.


@@ -510,7 +510,7 @@ described in any single publication. The model is a greedy transition-based
parser guided by a linear model whose weights are learned using the averaged
perceptron loss, via the
[dynamic oracle](http://www.aclweb.org/anthology/C12-1059) imitation learning
strategy. The transition system is equivalent to the BILOU tagging scheme.
strategy. The transition system is equivalent to the BILUO tagging scheme.
## Models and training data {#training}


@@ -119,8 +119,28 @@ Update the models in the pipeline.
| `golds` | iterable | A batch of `GoldParse` objects or dictionaries. Dictionaries will be used to create [`GoldParse`](/api/goldparse) objects. For the available keys and their usage, see [`GoldParse.__init__`](/api/goldparse#init). |
| `drop` | float | The dropout rate. |
| `sgd` | callable | An optimizer. |
| `losses` | dict | Dictionary to update with the loss, keyed by pipeline component. |
| `component_cfg` <Tag variant="new">2.1</Tag> | dict | Config parameters for specific pipeline components, keyed by component name. |
## Language.evaluate {#evaluate tag="method"}
Evaluate a model's pipeline components.
> #### Example
>
> ```python
> scorer = nlp.evaluate(docs_golds, verbose=True)
> print(scorer.scores)
> ```
| Name | Type | Description |
| -------------------------------------------- | -------- | ------------------------------------------------------------------------------------- |
| `docs_golds` | iterable | Tuples of `Doc` and `GoldParse` objects. |
| `verbose` | bool | Print debugging information. |
| `batch_size` | int | The batch size to use. |
| `scorer` | `Scorer` | Optional [`Scorer`](/api/scorer) to use. If not passed in, a new one will be created. |
| `component_cfg` <Tag variant="new">2.1</Tag> | dict | Config parameters for specific pipeline components, keyed by component name. |
## Language.begin_training {#begin_training tag="method"}
Allocate models, pre-process training data and acquire an optimizer.


@@ -128,7 +128,6 @@ The L2 norm of the lexeme's vector representation.
| `text` | unicode | Verbatim text content. |
| `orth` | int | ID of the verbatim text content. |
| `orth_` | unicode | Verbatim text content (identical to `Lexeme.text`). Exists mostly for consistency with the other attributes. |
| `lex_id` | int | ID of the lexeme's lexical type. |
| `rank` | int | Sequential ID of the lexeme's lexical type, used to index into tables, e.g. for word vectors. |
| `flags` | int | Container of the lexeme's binary flags. |
| `norm` | int | The lexeme's norm, i.e. a normalized form of the lexeme text. |


@@ -0,0 +1,58 @@
---
title: Scorer
teaser: Compute evaluation scores
tag: class
source: spacy/scorer.py
---
The `Scorer` computes and stores evaluation scores. It's typically created by
[`Language.evaluate`](/api/language#evaluate).
## Scorer.\_\_init\_\_ {#init tag="method"}
Create a new `Scorer`.
> #### Example
>
> ```python
> from spacy.scorer import Scorer
>
> scorer = Scorer()
> ```
| Name | Type | Description |
| ------------ | -------- | ------------------------------------------------------------ |
| `eval_punct` | bool | Evaluate the dependency attachments to and from punctuation. |
| **RETURNS** | `Scorer` | The newly created object. |
## Scorer.score {#score tag="method"}
Update the evaluation scores from a single [`Doc`](/api/doc) /
[`GoldParse`](/api/goldparse) pair.
> #### Example
>
> ```python
> scorer = Scorer()
> scorer.score(doc, gold)
> ```
| Name | Type | Description |
| -------------- | ----------- | -------------------------------------------------------------------------------------------------------------------- |
| `doc` | `Doc` | The predicted annotations. |
| `gold` | `GoldParse` | The correct annotations. |
| `verbose` | bool | Print debugging information. |
| `punct_labels` | tuple | Dependency labels for punctuation. Used to evaluate dependency attachments to punctuation if `eval_punct` is `True`. |
## Properties
| Name | Type | Description |
| ----------- | ----- | -------------------------------------------------------------------------------------------- |
| `token_acc` | float | Tokenization accuracy. |
| `tags_acc` | float | Part-of-speech tag accuracy (fine grained tags, i.e. `Token.tag`). |
| `uas` | float | Unlabelled dependency score. |
| `las` | float | Labelled dependency score. |
| `ents_p` | float | Named entity accuracy (precision). |
| `ents_r` | float | Named entity accuracy (recall). |
| `ents_f` | float | Named entity accuracy (F-score). |
| `scores` | dict | All scores with keys `uas`, `las`, `ents_p`, `ents_r`, `ents_f`, `tags_acc` and `token_acc`. |


@@ -424,7 +424,7 @@ The L2 norm of the token's vector representation.
| `ent_type` | int | Named entity type. |
| `ent_type_` | unicode | Named entity type. |
| `ent_iob` | int | IOB code of named entity tag. `3` means the token begins an entity, `2` means it is outside an entity, `1` means it is inside an entity, and `0` means no entity tag is set. | |
| `ent_iob_` | unicode | IOB code of named entity tag. `3` means the token begins an entity, `2` means it is outside an entity, `1` means it is inside an entity, and `0` means no entity tag is set. |
| `ent_iob_` | unicode | IOB code of named entity tag. "B" means the token begins an entity, "I" means it is inside an entity, "O" means it is outside an entity, and "" means no entity tag is set. |
| `ent_id` | int | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
| `ent_id_` | unicode | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
| `lemma` | int | Base form of the token, with no inflectional suffixes. |
@@ -468,7 +468,7 @@ The L2 norm of the token's vector representation.
| `prob` | float | Smoothed log probability estimate of token's word type (context-independent entry in the vocabulary). |
| `idx` | int | The character offset of the token within the parent document. |
| `sentiment` | float | A scalar value indicating the positivity or negativity of the token. |
| `lex_id` | int | Sequential ID of the token's lexical type. |
| `lex_id` | int | Sequential ID of the token's lexical type, used to index into tables, e.g. for word vectors. |
| `rank` | int | Sequential ID of the token's lexical type, used to index into tables, e.g. for word vectors. |
| `cluster` | int | Brown cluster ID. |
| `_` | `Underscore` | User space for adding custom [attribute extensions](/usage/processing-pipelines#custom-components-attributes). |


@@ -351,7 +351,7 @@ the two-letter language code.
| `name` | unicode | Two-letter language code, e.g. `'en'`. |
| `cls` | `Language` | The language class, e.g. `English`. |
### util.lang_class_is_loaded (#util.lang_class_is_loaded tag="function" new="2.1")
### util.lang_class_is_loaded {#util.lang_class_is_loaded tag="function" new="2.1"}
Check whether a `Language` class is already loaded. `Language` classes are
loaded lazily, to avoid expensive setup code associated with the language data.


@@ -326,7 +326,7 @@ URLs.
```text
### requirements.txt
spacy>=2.0.0,<3.0.0
https://github.com/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm
https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.1.0/en_core_web_sm-2.1.0.tar.gz#egg=en_core_web_sm
```
Specifying `#egg=` with the package name tells pip which package to expect from


@@ -41,7 +41,7 @@ components. spaCy then does the following:
`Language` class contains the shared vocabulary, tokenization rules and the
language-specific annotation scheme.
2. Iterate over the **pipeline names** and create each component using
[`create_pipe`](/api/anguage#create_pipe), which looks them up in
[`create_pipe`](/api/language#create_pipe), which looks them up in
`Language.factories`.
3. Add each pipeline component to the pipeline in order, using
[`add_pipe`](/api/language#add_pipe).


@@ -405,7 +405,7 @@ class BadHTMLMerger(object):
for match_id, start, end in matches:
spans.append(doc[start:end])
with doc.retokenize() as retokenizer:
for span in hashtags:
for span in spans:
retokenizer.merge(span)
for token in span:
token._.bad_html = True # Mark token as bad HTML
@@ -679,7 +679,7 @@ for match_id, start, end in matches:
if doc.vocab.strings[match_id] == "HASHTAG":
hashtags.append(doc[start:end])
with doc.retokenize() as retokenizer:
for span in spans:
for span in hashtags:
retokenizer.merge(span)
for token in span:
token._.is_hashtag = True


@@ -90,7 +90,8 @@
{ "text": "StringStore", "url": "/api/stringstore" },
{ "text": "Vectors", "url": "/api/vectors" },
{ "text": "GoldParse", "url": "/api/goldparse" },
{ "text": "GoldCorpus", "url": "/api/goldcorpus" }
{ "text": "GoldCorpus", "url": "/api/goldcorpus" },
{ "text": "Scorer", "url": "/api/scorer" }
]
},
{


@@ -1,5 +1,107 @@
{
"resources": [
{
"id": "nlp-architect",
"title": "NLP Architect",
"slogan": "Python lib for exploring Deep NLP & NLU by Intel AI",
"github": "NervanaSystems/nlp-architect",
"pip": "nlp-architect",
"thumb": "https://i.imgur.com/vMideRx.png",
"category": ["standalone", "research"],
"tags": ["pytorch"]
},
{
"id": "NeuroNER",
"title": "NeuroNER",
"slogan": "Named-entity recognition using neural networks",
"github": "Franck-Dernoncourt/NeuroNER",
"pip": "pyneuroner[cpu]",
"code_example": [
"from neuroner import neuromodel",
"nn = neuromodel.NeuroNER(train_model=False, use_pretrained_model=True)"
],
"category": ["ner"],
"tags": ["standalone"]
},
{
"id": "NLPre",
"title": "NLPre",
"slogan": "Natural Language Preprocessing Library for health data and more",
"github": "NIHOPA/NLPre",
"pip": "nlpre",
"code_example": [
"from nlpre import titlecaps, dedash, identify_parenthetical_phrases",
"from nlpre import replace_acronyms, replace_from_dictionary",
"ABBR = identify_parenthetical_phrases()(text)",
"parsers = [dedash(), titlecaps(), replace_acronyms(ABBR),",
" replace_from_dictionary(prefix='MeSH_')]",
"for f in parsers:",
" text = f(text)",
"print(text)"
],
"category": ["scientific"]
},
{
"id": "Chatterbot",
"title": "Chatterbot",
"slogan": "A machine-learning based conversational dialog engine for creating chat bots",
"github": "gunthercox/ChatterBot",
"pip": "chatterbot",
"thumb": "https://i.imgur.com/eyAhwXk.jpg",
"code_example": [
"from chatterbot import ChatBot",
"from chatterbot.trainers import ListTrainer",
"# Create a new chat bot named Charlie",
"chatbot = ChatBot('Charlie')",
"trainer = ListTrainer(chatbot)",
"trainer.train([",
"'Hi, can I help you?',",
"'Sure, I would like to book a flight to Iceland.',",
"'Your flight has been booked.'",
"])",
"",
"response = chatbot.get_response('I would like to book a flight.')"
],
"author": "Gunther Cox",
"author_links": {
"github": "gunthercox"
},
"category": ["conversational", "standalone"],
"tags": ["chatbots"]
},
{
"id": "saber",
"title": "saber",
"slogan": "Deep-learning based tool for information extraction in the biomedical domain",
"github": "BaderLab/saber",
"pip": "saber",
"thumb": "https://raw.githubusercontent.com/BaderLab/saber/master/docs/img/saber_logo.png",
"code_example": [
"from saber.saber import Saber",
"saber = Saber()",
"saber.load('PRGE')",
"saber.annotate('The phosphorylation of Hdm2 by MK2 promotes the ubiquitination of p53.')"
],
"author": "Bader Lab, University of Toronto",
"category": ["scientific"],
"tags": ["keras", "biomedical"]
},
{
"id": "alibi",
"title": "alibi",
"slogan": "Algorithms for monitoring and explaining machine learning models",
"github": "SeldonIO/alibi",
"pip": "alibi",
"thumb": "https://i.imgur.com/YkzQHRp.png",
"code_example": [
"from alibi.explainers import AnchorTabular",
"explainer = AnchorTabular(predict_fn, feature_names)",
"explainer.fit(X_train)",
"explainer.explain(x)"
],
"author": "Seldon",
"category": ["standalone", "research"]
},
{
"id": "spacymoji",
"slogan": "Emoji handling and meta data as a spaCy pipeline component",
@@ -143,7 +245,7 @@
"doc = nlp(my_doc_text)"
],
"author": "tc64",
"author_link": {
"author_links": {
"github": "tc64"
},
"category": ["pipeline"]
@@ -346,7 +448,7 @@
"author_links": {
"github": "huggingface"
},
"category": ["standalone", "conversational"],
"category": ["standalone", "conversational", "models"],
"tags": ["coref"]
},
{
@@ -538,7 +640,7 @@
"twitter": "allenai_org",
"website": "http://allenai.org"
},
"category": ["models", "research"]
"category": ["scientific", "models", "research"]
},
{
"id": "textacy",
@@ -601,7 +703,7 @@
"github": "ahalterman",
"twitter": "ahalterman"
},
"category": ["standalone"]
"category": ["standalone", "scientific"]
},
{
"id": "kindred",
@@ -626,7 +728,7 @@
"author_links": {
"github": "jakelever"
},
"category": ["standalone"]
"category": ["standalone", "scientific"]
},
{
"id": "sense2vec",
@@ -837,6 +939,42 @@
},
"category": ["standalone"]
},
{
"id": "prefect",
"title": "Prefect",
"slogan": "Workflow management system designed for modern infrastructure",
"github": "PrefectHQ/prefect",
"pip": "prefect",
"thumb": "https://i.imgur.com/oLTwr0e.png",
"code_example": [
"from prefect import Flow",
"from prefect.tasks.spacy.spacy_tasks import SpacyNLP",
"import spacy",
"",
"nlp = spacy.load(\"en_core_web_sm\")",
"",
"with Flow(\"Natural Language Processing\") as flow:",
" doc = SpacyNLP(text=\"This is some text\", nlp=nlp)",
"",
"flow.run()"
],
"author": "Prefect",
"author_links": {
"website": "https://prefect.io"
},
"category": ["standalone"]
},
{
"id": "graphbrain",
"title": "Graphbrain",
"slogan": "Automated meaning extraction and text understanding",
"description": "Graphbrain is an Artificial Intelligence open-source software library and scientific research tool. Its aim is to facilitate automated meaning extraction and text understanding, as well as the exploration and inference of knowledge.",
"github": "graphbrain/graphbrain",
"pip": "graphbrain",
"thumb": "https://i.imgur.com/cct9W1E.png",
"author": "Graphbrain",
"category": ["standalone"]
},
{
"type": "education",
"id": "oreilly-python-ds",
@@ -894,6 +1032,23 @@
"author": "Aaron Kramer",
"category": ["courses"]
},
{
"type": "education",
"id": "spacy-course",
"title": "Advanced NLP with spaCy",
"slogan": "spaCy, 2019",
"description": "In this free interactive course, you'll learn how to use spaCy to build advanced natural language understanding systems, using both rule-based and machine learning approaches.",
"url": "https://course.spacy.io",
"image": "https://i.imgur.com/JC00pHW.jpg",
"thumb": "https://i.imgur.com/5RXLtrr.jpg",
"author": "Ines Montani",
"author_links": {
"twitter": "_inesmontani",
"github": "ines",
"website": "https://ines.io"
},
"category": ["courses"]
},
{
"type": "education",
"id": "video-spacys-ner-model",
@@ -937,7 +1092,7 @@
{
"type": "education",
"id": "podcast-nlp-highlights",
"title": "NLP Highlights 78: Where do corpora come from?",
"title": "NLP Highlights #78: Where do corpora come from?",
"slogan": "January 2019",
"description": "Most NLP projects rely crucially on the quality of annotations used for training and evaluating models. In this episode, Matt and Ines of Explosion AI tell us how Prodigy can improve data annotation and model development workflows. Prodigy is an annotation tool implemented as a python library, and it comes with a web application and a command line interface. A developer can define input data streams and design simple annotation interfaces. Prodigy can help break down complex annotation decisions into a series of binary decisions, and it provides easy integration with spaCy models. Developers can specify how models should be modified as new annotations come in in an active learning framework.",
"soundcloud": "559200912",
@@ -952,7 +1107,7 @@
{
"type": "education",
"id": "podcast-init",
"title": "Podcast.__init__ 87: spaCy with Matthew Honnibal",
"title": "Podcast.__init__ #87: spaCy with Matthew Honnibal",
"slogan": "December 2017",
"description": "As the amount of text available on the internet and in businesses continues to increase, the need for fast and accurate language analysis becomes more prominent. This week Matthew Honnibal, the creator of SpaCy, talks about his experiences researching natural language processing and creating a library to make his findings accessible to industry.",
"iframe": "https://www.pythonpodcast.com/wp-content/plugins/podlove-podcasting-plugin-for-wordpress/lib/modules/podlove_web_player/player_v4/dist/share.html?episode=https://www.pythonpodcast.com/?podlove_player4=176",
@@ -968,7 +1123,7 @@
{
"type": "education",
"id": "talk-python-podcast",
"title": "Talk Python 202: Building a software business",
"title": "Talk Python #202: Building a software business",
"slogan": "March 2019",
"description": "One core question around open source is how do you fund it? Well, there is always that PayPal donate button. But that's been a tremendous failure for many projects. Often the go-to answer is consulting. But what if you don't want to trade time for money? You could take things up a notch and change the equation, exchanging value for money. That's what Ines Montani and her co-founder did when they started Explosion AI with spaCy as the foundation.",
"thumb": "https://i.imgur.com/q1twuK8.png",
@@ -996,6 +1151,22 @@
},
"category": ["podcasts"]
},
{
"type": "education",
"id": "analytics-vidhya",
"title": "DataHack Radio #23: The Brains behind spaCy",
"slogan": "June 2019",
    "description": "\"What would you do if you had the chance to pick the brains behind one of the most popular Natural Language Processing (NLP) libraries of our era? A library that has helped usher in the current boom in NLP applications and nurtured tons of NLP scientists? Well, you invite the creators on our popular DataHack Radio podcast and let them do the talking! We are delighted to welcome Ines Montani and Matt Honnibal, the developers of spaCy, a powerful and advanced library for NLP.\"",
"thumb": "https://i.imgur.com/3zJKZ1P.jpg",
"url": "https://www.analyticsvidhya.com/blog/2019/06/datahack-radio-ines-montani-matthew-honnibal-brains-behind-spacy/",
"soundcloud": "630741825",
"author": "Analytics Vidhya",
"author_links": {
"website": "https://www.analyticsvidhya.com",
"twitter": "analyticsvidhya"
},
"category": ["podcasts"]
},
{
"id": "adam_qas",
"title": "ADAM: Question Answering System",
@@ -1054,7 +1225,7 @@
"github": "ecohealthalliance",
        "website": "https://ecohealthalliance.org/"
},
"category": ["research", "standalone"]
"category": ["scientific", "standalone"]
},
{
"id": "self-attentive-parser",
@@ -1376,7 +1547,7 @@
"url": "https://github.com/msg-systems/holmes-extractor",
"description": "Holmes is a Python 3 library that supports a number of use cases involving information extraction from English and German texts, including chatbot, structural search, topic matching and supervised document classification.",
"pip": "holmes-extractor",
"category": ["conversational", "research", "standalone"],
"category": ["conversational", "standalone"],
"tags": ["chatbots", "text-processing"],
"code_example": [
"import holmes_extractor as holmes",
@@ -1415,6 +1586,11 @@
"title": "Research",
"description": "Frameworks and utilities for developing better NLP models, especially using neural networks"
},
{
"id": "scientific",
"title": "Scientific",
"description": "Frameworks and utilities for scientific text processing"
},
{
"id": "visualizers",
"title": "Visualizers",
@@ -1434,6 +1610,11 @@
"id": "standalone",
"title": "Standalone",
"description": "Self-contained libraries or tools that use spaCy under the hood"
},
{
"id": "models",
"title": "Models",
"description": "Third-party pre-trained models for different languages and domains"
}
]
},