When you call `nlp` on a text, spaCy first tokenizes the text to produce a `Doc`
object. The `Doc` is then processed in several different steps – this is also
referred to as the **processing pipeline**. The pipeline used by the
[default models](/models) consists of a tagger, a parser and an entity
recognizer. Each pipeline component returns the processed `Doc`, which is then
passed on to the next component.
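To make this concrete, here's a minimal sketch that loads a model, processes a text and lists the pipeline components. It assumes the small English model `en_core_web_sm` is installed; the example sentence is just an illustration:

```python
import spacy

# Assumes the small English model is installed:
# python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# The tokenizer runs first, then each component in nlp.pipeline processes the Doc
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")

print(nlp.pipe_names)  # component names, e.g. ['tagger', 'parser', 'ner']
print(nlp.pipeline)    # (name, component) tuples with the component functions
```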
![The processing pipeline](../../images/pipeline.svg)
> - **Name:** ID of the pipeline component.
> - **Component:** spaCy's implementation of the component.
> - **Creates:** Objects, attributes and properties modified and set by the
>   component.

| Name          | Component                                                          | Creates                                                     | Description                                      |
| ------------- | ------------------------------------------------------------------ | ----------------------------------------------------------- | ------------------------------------------------ |
| **tokenizer** | [`Tokenizer`](/api/tokenizer)                                      | `Doc`                                                       | Segment text into tokens.                        |
| **tagger**    | [`Tagger`](/api/tagger)                                            | `Doc[i].tag`                                                | Assign part-of-speech tags.                      |
| **parser**    | [`DependencyParser`](/api/dependencyparser)                        | `Doc[i].head`, `Doc[i].dep`, `Doc.sents`, `Doc.noun_chunks` | Assign dependency labels.                        |
| **ner**       | [`EntityRecognizer`](/api/entityrecognizer)                        | `Doc.ents`, `Doc[i].ent_iob`, `Doc[i].ent_type`             | Detect and label named entities.                 |
| **textcat**   | [`TextCategorizer`](/api/textcategorizer)                          | `Doc.cats`                                                  | Assign document labels.                          |
| ...           | [custom components](/usage/processing-pipelines#custom-components) | `Doc._.xxx`, `Token._.xxx`, `Span._.xxx`                    | Assign custom attributes, methods or properties. |
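The **Creates** column is what you can then read off the processed `Doc`. A short sketch, again assuming `en_core_web_sm`; note that the underscore attributes like `token.tag_` give you the human-readable string values:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")

for token in doc:
    # tag and dep are set by the tagger and parser, head is the syntactic parent
    print(token.text, token.tag_, token.dep_, token.head.text)

# Doc.ents is set by the entity recognizer
print([(ent.text, ent.label_) for ent in doc.ents])

# Doc.noun_chunks is available because the parser has run
print([chunk.text for chunk in doc.noun_chunks])
```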
The processing pipeline always **depends on the statistical model** and its
capabilities. For example, a pipeline can only include an entity recognizer
component if the model includes data to make predictions of entity labels. This
is why each model specifies the pipeline to use in its meta data, as a simple
list containing the component names:

```json
"pipeline": ["tagger", "parser", "ner"]
```
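You can read that same list off a loaded model via `nlp.meta`, and disable components you don't need when loading. A rough sketch, assuming `en_core_web_sm` (component names may differ between models):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
print(nlp.meta["pipeline"])  # pipeline listed in the model's meta.json
print(nlp.pipe_names)        # components that were actually added

# Components can also be disabled when loading, e.g. to speed up processing
nlp_light = spacy.load("en_core_web_sm", disable=["parser", "ner"])
print(nlp_light.pipe_names)
```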
import Accordion from 'components/accordion.js'

<Accordion title="Does the order of pipeline components matter?" id="pipeline-components-order">

In spaCy v2.x, the statistical components like the tagger or parser are
independent and don't share any data between themselves. For example, the named
entity recognizer doesn't use any features set by the tagger and parser, and so
on. This means that you can swap them, or remove single components from the
pipeline without affecting the others.

However, custom components may depend on annotations set by other components.
For example, a custom lemmatizer may need the part-of-speech tags assigned, so
it'll only work if it's added after the tagger. The parser will respect
pre-defined sentence boundaries, so if a previous component in the pipeline sets
them, its dependency predictions may be different. Similarly, it matters if you
add the [`EntityRuler`](/api/entityruler) before or after the statistical entity
recognizer: if it's added before, the entity recognizer will take the existing
entities into account when making predictions.
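The position of a component is controlled via [`nlp.add_pipe`](/api/language#add_pipe) and its `before`, `after`, `first` and `last` arguments. As a rough sketch of the `EntityRuler` case described above, assuming `en_core_web_sm` and a made-up pattern:

```python
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.load("en_core_web_sm")

# Add the rule-based EntityRuler *before* the statistical entity recognizer,
# so the recognizer takes its entities into account when predicting
ruler = EntityRuler(nlp)
ruler.add_patterns([{"label": "ORG", "pattern": "MyCorp Inc."}])  # hypothetical pattern
nlp.add_pipe(ruler, before="ner")

print(nlp.pipe_names)  # e.g. ['tagger', 'parser', 'entity_ruler', 'ner']
```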
</Accordion>

<Accordion title="Why is the tokenizer special?" id="pipeline-components-tokenizer">

The tokenizer is a "special" component and isn't part of the regular pipeline.
It also doesn't show up in `nlp.pipe_names`. The reason is that there can only
really be one tokenizer, and while all other pipeline components take a `Doc`
and return it, the tokenizer takes a **string of text** and turns it into a
`Doc`. You can still customize the tokenizer, though. `nlp.tokenizer` is
writable, so you can either create your own
[`Tokenizer` class from scratch](/usage/linguistic-features#native-tokenizers),
or even replace it with an
[entirely custom function](/usage/linguistic-features#custom-tokenizer).
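For example, a bare-bones custom tokenizer only needs to be a callable that takes the text and returns a `Doc`. A minimal sketch, assuming whitespace-only splitting is good enough for your use case:

```python
import spacy
from spacy.tokens import Doc

class WhitespaceTokenizer(object):
    def __init__(self, vocab):
        self.vocab = vocab

    def __call__(self, text):
        # Split on single spaces and build a Doc from the resulting words
        words = text.split(" ")
        return Doc(self.vocab, words=words)

nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = WhitespaceTokenizer(nlp.vocab)
doc = nlp("What's happened to me? he thought.")
print([token.text for token in doc])
```

Because the tokenizer isn't part of the pipeline, replacing it like this doesn't change `nlp.pipe_names`.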
</Accordion>
---