diff --git a/website/docs/api/cli.md b/website/docs/api/cli.md
index 018105065..3783fcb79 100644
--- a/website/docs/api/cli.md
+++ b/website/docs/api/cli.md
@@ -807,15 +807,16 @@ $ python -m spacy train [config_path] [--output] [--code] [--verbose] [--gpu-id]
 ## pretrain {#pretrain new="2.1" tag="command,experimental"}
 
 Pretrain the "token to vector" ([`Tok2vec`](/api/tok2vec)) layer of pipeline
-components on [raw text](/api/data-formats#pretrain), using an approximate
-language-modeling objective. Specifically, we load pretrained vectors, and train
-a component like a CNN, BiLSTM, etc to predict vectors which match the
-pretrained ones. The weights are saved to a directory after each epoch. You can
-then include a **path to one of these pretrained weights files** in your
+components on raw text, using an approximate language-modeling objective.
+Specifically, we load pretrained vectors, and train a component like a CNN,
+BiLSTM, etc to predict vectors which match the pretrained ones. The weights are
+saved to a directory after each epoch. You can then include a **path to one of
+these pretrained weights files** in your
 [training config](/usage/training#config) as the `init_tok2vec` setting when
 you train your pipeline. This technique may be especially helpful if you have
 little labelled data. See the usage docs on
-[pretraining](/usage/embeddings-transformers#pretraining) for more info.
+[pretraining](/usage/embeddings-transformers#pretraining) for more info. To read
+the raw text, a [`JsonlCorpus`](/api/top-level#JsonlCorpus) is typically used.
@@ -835,7 +836,6 @@ auto-generated by setting `--pretraining` on
 > ```cli
 > $ python -m spacy pretrain config.cfg output_pretrain --paths.raw_text="data.jsonl"
 > ```
-
 ```cli
 $ python -m spacy pretrain [config_path] [output_dir] [--code] [--resume-path] [--epoch-resume] [--gpu-id] [overrides]
 ```
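
A minimal sketch of the round trip the revised paragraph describes, with illustrative names (`output_pretrain` and `model4.bin` stand in for your output directory and whichever epoch's weights file you pick):

```ini
# Excerpt from a training config: point init_tok2vec at a weights file
# produced by `spacy pretrain` (one weights file is saved per epoch).
[paths]
init_tok2vec = "output_pretrain/model4.bin"

[initialize]
# The tok2vec layer is initialized from these pretrained weights
# before training begins.
init_tok2vec = ${paths.init_tok2vec}
```

The raw text itself (`data.jsonl` in the example command above) is newline-delimited JSON in the layout the `JsonlCorpus` reads, one document per line, e.g. `{"text": "A raw paragraph of text."}`.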