---
title: Corpus
teaser: An annotated corpus
tag: class
source: spacy/training/corpus.py
new: 3
---
This class manages annotated corpora and can be used for training and
development datasets in the [DocBin](/api/docbin) (`.spacy`) format. To
customize the data loading during training, you can register your own
[data readers and batchers](/usage/training#custom-code-readers-batchers).
## Config and implementation {#config}
`spacy.Corpus.v1` is a registered function that creates a `Corpus` of training
or evaluation data. It takes the same arguments as the `Corpus` class and
returns a callable that yields [`Example`](/api/example) objects. You can
replace it with your own registered function in the
[`@readers` registry](/api/top-level#registry) to customize the data loading and
streaming.
> #### Example config
>
> ```ini
> [paths]
> train = "corpus/train.spacy"
>
> [training.train_corpus]
> @readers = "spacy.Corpus.v1"
> path = ${paths.train}
> gold_preproc = false
> max_length = 0
> limit = 0
> ```
| Name | Description |
| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `path` | The directory or filename to read from. Expects data in spaCy's binary [`.spacy` format](/api/data-formats#binary-training). ~~Path~~ |
| `gold_preproc` | Whether to set up the Example object with gold-standard sentences and tokens for the predictions. See [`Corpus`](/api/corpus#init) for details. ~~bool~~ |
| `max_length` | Maximum document length. Longer documents will be split into sentences, if sentence boundaries are available. Defaults to `0` for no limit. ~~int~~ |
| `limit` | Limit corpus to a subset of examples, e.g. for debugging. Defaults to `0` for no limit. ~~int~~ |
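As a sketch of what a replacement reader can look like (the name `my_reader.v1` and its `texts` argument are hypothetical, not part of spaCy), a function in the `@readers` registry returns a callable that takes the `nlp` object and yields [`Example`](/api/example) objects:

```python
from typing import Callable, Iterable, List
import spacy
from spacy.language import Language
from spacy.training import Example

@spacy.registry.readers("my_reader.v1")  # hypothetical name
def create_my_reader(texts: List[str]) -> Callable[[Language], Iterable[Example]]:
    # Return a callable with the same contract as spacy.Corpus.v1:
    # it receives the nlp object and yields Example objects
    def read(nlp: Language) -> Iterable[Example]:
        for text in texts:
            doc = nlp.make_doc(text)
            yield Example.from_dict(doc, {"words": [t.text for t in doc]})
    return read
```

The registered name can then be referenced from the config block, e.g. `@readers = "my_reader.v1"` in `[training.train_corpus]`.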
```python
%%GITHUB_SPACY/spacy/training/corpus.py
```
## Corpus.\_\_init\_\_ {#init tag="method"}
Create a `Corpus` for iterating [Example](/api/example) objects from a file or
directory of [`.spacy` data files](/api/data-formats#binary-training). The
`gold_preproc` setting lets you specify whether to set up the `Example` object
with gold-standard sentences and tokens for the predictions. Gold preprocessing
helps the annotations align to the tokenization, and may result in sequences of
more consistent length. However, it may reduce runtime accuracy due to
train/test skew.
> #### Example
>
> ```python
> from spacy.training import Corpus
>
> # With a single file
> corpus = Corpus("./data/train.spacy")
>
> # With a directory
> corpus = Corpus("./data", limit=10)
> ```
| Name | Description |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| `path` | The directory or filename to read from. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `gold_preproc` | Whether to set up the Example object with gold-standard sentences and tokens for the predictions. Defaults to `False`. ~~bool~~ |
| `max_length` | Maximum document length. Longer documents will be split into sentences, if sentence boundaries are available. Defaults to `0` for no limit. ~~int~~ |
| `limit` | Limit corpus to a subset of examples, e.g. for debugging. Defaults to `0` for no limit. ~~int~~ |
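As a quick round-trip sketch (assuming a writable working directory), a `.spacy` file can be created with [`DocBin`](/api/docbin) and read back with `Corpus`:

```python
import spacy
from spacy.tokens import DocBin
from spacy.training import Corpus

nlp = spacy.blank("en")
# Serialize a couple of Docs to the binary .spacy format
doc_bin = DocBin(docs=[nlp("Hello world"), nlp("Another doc")])
doc_bin.to_disk("./train.spacy")

# Read them back as Example objects
corpus = Corpus("./train.spacy")
examples = list(corpus(nlp))
```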
## Corpus.\_\_call\_\_ {#call tag="method"}
Yield examples from the data.
> #### Example
>
> ```python
> from spacy.training import Corpus
> import spacy
>
> corpus = Corpus("./train.spacy")
> nlp = spacy.blank("en")
> train_data = corpus(nlp)
> ```
| Name | Description |
| ---------- | -------------------------------------- |
| `nlp` | The current `nlp` object. ~~Language~~ |
| **YIELDS** | The examples. ~~Example~~ |
## JsonlTexts {#jsonltexts tag="class"}

Iterate [`Doc`](/api/doc) objects from a file or directory of JSONL
(newline-delimited JSON) formatted raw text files. Can be used to read the raw
text corpus for language model
[pretraining](/usage/embeddings-transformers#pretraining) from a JSONL file.
> #### Tip: Writing JSONL
>
> Our utility library [`srsly`](https://github.com/explosion/srsly) provides a
> handy `write_jsonl` helper that takes a file path and list of dictionaries and
> writes out JSONL-formatted data.
>
> ```python
> import srsly
> data = [{"text": "Some text"}, {"text": "More..."}]
> srsly.write_jsonl("/path/to/text.jsonl", data)
> ```
```json
### Example
{"text": "Can I ask where you work now and what you do, and if you enjoy it?"}
{"text": "They may just pull out of the Seattle market completely, at least until they have autonomous vehicles."}
{"text": "My cynical view on this is that it will never be free to the public. Reason: what would be the draw of joining the military? Right now their selling point is free Healthcare and Education. Ironically both are run horribly and most, that I've talked to, come out wishing they never went in."}
```
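JSONL is plain newline-delimited JSON, so each record can also be parsed independently; a minimal sketch in plain Python (no spaCy required, helper name `iter_jsonl` is illustrative):

```python
import json

def iter_jsonl(path):
    # Yield one dict per non-empty line of a JSONL file
    with open(path, encoding="utf8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

Each record in the example above would come back as a dict with a `"text"` key.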
### JsonlTexts.\_\_init\_\_ {#jsonltexts-init tag="method"}
Initialize the reader.
> #### Example
>
> ```python
> from spacy.training import JsonlTexts
>
> corpus = JsonlTexts("./data/texts.jsonl")
> ```
>
> ```ini
> ### Example config
> [pretraining.corpus]
> @readers = "spacy.JsonlReader.v1"
> path = "corpus/raw_text.jsonl"
> min_length = 0
> max_length = 0
> limit = 0
> ```
| Name | Description |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| `path` | The directory or filename to read from. Expects newline-delimited JSON with a key `"text"` for each record. ~~Union[str, Path]~~ |
| _keyword-only_ | |
| `min_length` | Minimum document length (in tokens). Shorter documents will be skipped. Defaults to `0`, which indicates no limit. ~~int~~ |
| `max_length` | Maximum document length (in tokens). Longer documents will be skipped. Defaults to `0`, which indicates no limit. ~~int~~ |
| `limit` | Limit corpus to a subset of examples, e.g. for debugging. Defaults to `0` for no limit. ~~int~~ |
### JsonlTexts.\_\_call\_\_ {#jsonltexts-call tag="method"}
Yield examples from the data.
> #### Example
>
> ```python
> from spacy.training import JsonlTexts
> import spacy
>
> corpus = JsonlTexts("./texts.jsonl")
> nlp = spacy.blank("en")
> data = corpus(nlp)
> ```
| Name | Description |
| ---------- | -------------------------------------- |
| `nlp` | The current `nlp` object. ~~Language~~ |
| **YIELDS** | The examples. ~~Example~~ |