---
title: Pipeline Functions
teaser: Other built-in pipeline components and helpers
source: spacy/pipeline/functions.py
menu:
  - ['merge_noun_chunks', 'merge_noun_chunks']
  - ['merge_entities', 'merge_entities']
  - ['merge_subtokens', 'merge_subtokens']
---

## merge_noun_chunks {#merge_noun_chunks tag="function"}

Merge noun chunks into a single token. Also available via the string name
`"merge_noun_chunks"`. After initialization, the component is typically added to
the processing pipeline using [`nlp.add_pipe`](/api/language#add_pipe).

> #### Example
>
> ```python
> texts = [t.text for t in nlp(u"I have a blue car")]
> assert texts == ["I", "have", "a", "blue", "car"]
>
> merge_nps = nlp.create_pipe("merge_noun_chunks")
> nlp.add_pipe(merge_nps)
>
> texts = [t.text for t in nlp(u"I have a blue car")]
> assert texts == ["I", "have", "a blue car"]
> ```

<Infobox variant="warning">

Since noun chunks require part-of-speech tags and the dependency parse, make
sure to add this component _after_ the `"tagger"` and `"parser"` components. By
default, `nlp.add_pipe` will add components to the end of the pipeline and after
all other components.

</Infobox>
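
If you want to make the placement explicit, `nlp.add_pipe` also accepts `before`
and `after` arguments. A minimal sketch (the model name is only an example):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
merge_nps = nlp.create_pipe("merge_noun_chunks")
# Insert the component directly after the parser instead of at the end
nlp.add_pipe(merge_nps, after="parser")
print(nlp.pipe_names)
# e.g. ['tagger', 'parser', 'merge_noun_chunks', 'ner']
```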

| Name        | Type  | Description                                                  |
| ----------- | ----- | ------------------------------------------------------------ |
| `doc`       | `Doc` | The `Doc` object to process, e.g. the `Doc` in the pipeline. |
| **RETURNS** | `Doc` | The modified `Doc` with merged noun chunks.                  |
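
Because the component is a plain function that takes a `Doc` and returns it, you
can also call it directly instead of adding it to the pipeline. A small sketch,
assuming the import path from the source file above:

```python
from spacy.pipeline.functions import merge_noun_chunks

doc = nlp(u"I have a blue car")
doc = merge_noun_chunks(doc)  # returns the same Doc with noun chunks merged
print([t.text for t in doc])  # ['I', 'have', 'a blue car']
```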

## merge_entities {#merge_entities tag="function"}

Merge named entities into a single token. Also available via the string name
`"merge_entities"`. After initialization, the component is typically added to
the processing pipeline using [`nlp.add_pipe`](/api/language#add_pipe).

> #### Example
>
> ```python
> texts = [t.text for t in nlp(u"I like David Bowie")]
> assert texts == ["I", "like", "David", "Bowie"]
>
> merge_ents = nlp.create_pipe("merge_entities")
> nlp.add_pipe(merge_ents)
>
> texts = [t.text for t in nlp(u"I like David Bowie")]
> assert texts == ["I", "like", "David Bowie"]
> ```

<Infobox variant="warning">

Since named entities are set by the entity recognizer, make sure to add this
component _after_ the `"ner"` component. By default, `nlp.add_pipe` will add
components to the end of the pipeline and after all other components.

</Infobox>
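
The merged token keeps the entity annotation, so downstream code can still check
its type. A small usage sketch, assuming an English model that labels "David
Bowie" as `PERSON`:

```python
# With "merge_entities" added to the pipeline
doc = nlp(u"I like David Bowie")
assert doc[2].text == "David Bowie"
assert doc[2].ent_type_ == "PERSON"
```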

| Name        | Type  | Description                                                  |
| ----------- | ----- | ------------------------------------------------------------ |
| `doc`       | `Doc` | The `Doc` object to process, e.g. the `Doc` in the pipeline. |
| **RETURNS** | `Doc` | The modified `Doc` with merged entities.                     |

## merge_subtokens {#merge_subtokens tag="function" new="2.1"}

Merge subtokens into a single token. Also available via the string name
`"merge_subtokens"`. After initialization, the component is typically added to
the processing pipeline using [`nlp.add_pipe`](/api/language#add_pipe).

As of v2.1, the parser is able to predict "subtokens" that should be merged into
one single token later on. This is especially relevant for languages like
Chinese, Japanese or Korean, where a "word" isn't defined as a
whitespace-delimited sequence of characters. Under the hood, this component uses
the [`Matcher`](/api/matcher) to find sequences of tokens with the dependency
label `"subtok"` and then merges them into a single token.
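
For illustration, here's a simplified sketch of that approach. It is not the
exact library source (`merge_subtokens_sketch` is a hypothetical name), but it
shows the `Matcher` pattern and the retokenizer merge step:

```python
from spacy.matcher import Matcher

def merge_subtokens_sketch(doc, label="subtok"):
    # Match one or more consecutive tokens with the given dependency label
    matcher = Matcher(doc.vocab)
    matcher.add("SUBTOK", None, [{"DEP": label, "OP": "+"}])
    spans = [doc[start:end] for _, start, end in matcher(doc)]
    # The "+" operator also yields shorter sub-matches, so keep only
    # maximal spans and skip any that overlap a longer one
    seen = set()
    filtered = []
    for span in sorted(spans, key=len, reverse=True):
        if not any(i in seen for i in range(span.start, span.end)):
            filtered.append(span)
            seen.update(range(span.start, span.end))
    # Merge each remaining span into a single token
    with doc.retokenize() as retokenizer:
        for span in filtered:
            retokenizer.merge(span)
    return doc
```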

> #### Example
>
> Note that this example assumes a custom Chinese model that oversegments and
> was trained to predict subtokens.
>
> ```python
> doc = nlp("拜托")
> print([(token.text, token.dep_) for token in doc])
> # [('拜', 'subtok'), ('托', 'subtok')]
>
> merge_subtok = nlp.create_pipe("merge_subtokens")
> nlp.add_pipe(merge_subtok)
>
> doc = nlp("拜托")
> print([token.text for token in doc])
> # ['拜托']
> ```

<Infobox variant="warning">

Since subtokens are set by the parser, make sure to add this component _after_
the `"parser"` component. By default, `nlp.add_pipe` will add components to the
end of the pipeline and after all other components.

</Infobox>

| Name        | Type    | Description                                                  |
| ----------- | ------- | ------------------------------------------------------------ |
| `doc`       | `Doc`   | The `Doc` object to process, e.g. the `Doc` in the pipeline. |
| `label`     | unicode | The subtoken dependency label. Defaults to `"subtok"`.       |
| **RETURNS** | `Doc`   | The modified `Doc` with merged subtokens.                    |
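
To use a custom dependency label, you can call the function directly with the
`label` argument, or bind the argument with `functools.partial` before adding
the component to the pipeline. A hedged sketch (the `"compound"` label is just
an example):

```python
import functools

# Bind a custom label and register the wrapped component under its own name
merge_compounds = functools.partial(
    nlp.create_pipe("merge_subtokens"), label="compound"
)
nlp.add_pipe(merge_compounds, name="merge_compounds")
```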