From 1fa6d6ba55e8d4c84db8d74a284fec1d60dc32c5 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Thu, 25 Jul 2019 14:24:56 +0200
Subject: [PATCH] Improve consistency of docs examples [ci skip]

---
 website/docs/api/doc.md                    | 10 +++++-----
 website/docs/api/top-level.md              |  2 +-
 website/docs/usage/processing-pipelines.md |  4 ++--
 website/docs/usage/saving-loading.md       |  4 ++--
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/website/docs/api/doc.md b/website/docs/api/doc.md
index bf9801564..b1306ef91 100644
--- a/website/docs/api/doc.md
+++ b/website/docs/api/doc.md
@@ -11,6 +11,11 @@ compressed binary strings. The `Doc` object holds an array of `TokenC` structs.
 The Python-level `Token` and [`Span`](/api/span) objects are views of this
 array, i.e. they don't own the data themselves.
 
+## Doc.\_\_init\_\_ {#init tag="method"}
+
+Construct a `Doc` object. The most common way to get a `Doc` object is via the
+`nlp` object.
+
 > #### Example
 >
 > ```python
@@ -24,11 +29,6 @@ array, i.e. they don't own the data themselves.
 > doc = Doc(nlp.vocab, words=words, spaces=spaces)
 > ```
 
-## Doc.\_\_init\_\_ {#init tag="method"}
-
-Construct a `Doc` object. The most common way to get a `Doc` object is via the
-`nlp` object.
-
 | Name    | Type    | Description                            |
 | ------- | ------- | -------------------------------------- |
 | `vocab` | `Vocab` | A storage container for lexical types. |

diff --git a/website/docs/api/top-level.md b/website/docs/api/top-level.md
index 9d5bdc527..2990a0969 100644
--- a/website/docs/api/top-level.md
+++ b/website/docs/api/top-level.md
@@ -29,7 +29,7 @@ class. The data will be loaded in via
 > nlp = spacy.load("/path/to/en") # unicode path
 > nlp = spacy.load(Path("/path/to/en")) # pathlib Path
 >
-> nlp = spacy.load("en", disable=["parser", "tagger"])
+> nlp = spacy.load("en_core_web_sm", disable=["parser", "tagger"])
 > ```
 
 | Name | Type | Description |

diff --git a/website/docs/usage/processing-pipelines.md b/website/docs/usage/processing-pipelines.md
index 0fa243501..6934374ec 100644
--- a/website/docs/usage/processing-pipelines.md
+++ b/website/docs/usage/processing-pipelines.md
@@ -182,8 +182,8 @@ initializing a Language class via [`from_disk`](/api/language#from_disk).
 ```diff
 - nlp = spacy.load('en', tagger=False, entity=False)
 - doc = nlp(u"I don't want parsed", parse=False)
-+ nlp = spacy.load('en', disable=['ner'])
-+ nlp.remove_pipe('parser')
++ nlp = spacy.load("en", disable=["ner"])
++ nlp.remove_pipe("parser")
 + doc = nlp(u"I don't want parsed")
 ```

diff --git a/website/docs/usage/saving-loading.md b/website/docs/usage/saving-loading.md
index 3c1e51603..81e90dcc7 100644
--- a/website/docs/usage/saving-loading.md
+++ b/website/docs/usage/saving-loading.md
@@ -623,8 +623,8 @@ solves this with a clear distinction between setting up the
 instance and loading the data.
 
 ```diff
-- nlp = spacy.load("en", path="/path/to/data")
-+ nlp = spacy.blank("en").from_disk("/path/to/data")
+- nlp = spacy.load("en_core_web_sm", path="/path/to/data")
++ nlp = spacy.blank("en").from_disk("/path/to/data")
 ```
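The examples this patch standardizes settle on two patterns worth keeping apart: `spacy.load()` takes a full package name such as `"en_core_web_sm"`, while `spacy.blank()` takes a bare two-letter language code. A minimal sketch of the second pattern, combined with the `Doc.__init__` example the first hunk relocates (assumes spaCy v2+ is installed; the `words`/`spaces` values are invented for illustration):

```python
import spacy
from spacy.tokens import Doc

# spacy.blank() expects a language code like "en" -- full package
# names such as "en_core_web_sm" belong to spacy.load() instead.
nlp = spacy.blank("en")

# Construct a Doc directly from words plus per-token trailing-space
# flags, as in the Doc.__init__ example in the docs.
words = ["Hello", ",", "world", "!"]
spaces = [False, True, False, False]
doc = Doc(nlp.vocab, words=words, spaces=spaces)

print(doc.text)  # -> "Hello, world!"
```

The `spaces` flags control whitespace reconstruction, so `doc.text` round-trips the original string exactly.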