From f102164a1f32c28b1c96f3b46a24d188c2398db1 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Fri, 21 Aug 2020 19:34:06 +0200
Subject: [PATCH] Update docs [ci skip]

---
 website/docs/api/cli.md                    |  2 +-
 website/docs/usage/layers-architectures.md | 78 +++++++++++++++-------
 2 files changed, 54 insertions(+), 26 deletions(-)

diff --git a/website/docs/api/cli.md b/website/docs/api/cli.md
index 9cadb2f0f..551147929 100644
--- a/website/docs/api/cli.md
+++ b/website/docs/api/cli.md
@@ -123,7 +123,7 @@ $ python -m spacy init config [output_file] [--lang] [--pipeline] [--optimize] [
 
 | Name               | Description                                                                                                                                                                                                                                                                                                                        |
 | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `output_file`      | Path to output `.cfg` file. If not set, the config is written to stdout so you can pipe it forward to a file. ~~Path (positional)~~                                                                                                                                                                                                |
+| `output_file`      | Path to output `.cfg` file or `-` to write the config to stdout (so you can pipe it forward to a file). Note that if you're writing to stdout, no additional logging info is printed. ~~Path (positional)~~                                                                                                                        |
 | `--lang`, `-l`     | Optional code of the [language](/usage/models#languages) to use. Defaults to `"en"`. ~~str (option)~~                                                                                                                                                                                                                              |
 | `--pipeline`, `-p` | Comma-separated list of trainable [pipeline components](/usage/processing-pipelines#built-in) to include in the model. Defaults to `"tagger,parser,ner"`. ~~str (option)~~                                                                                                                                                         |
 | `--optimize`, `-o` | `"efficiency"` or `"accuracy"`. Whether to optimize for efficiency (faster inference, smaller model, lower memory consumption) or higher accuracy (potentially larger and slower model). This will impact the choice of architecture, pretrained weights and related hyperparameters. Defaults to `"efficiency"`. ~~str (option)~~ |

diff --git a/website/docs/usage/layers-architectures.md b/website/docs/usage/layers-architectures.md
index 8d26a4139..d9f80ce14 100644
--- a/website/docs/usage/layers-architectures.md
+++ b/website/docs/usage/layers-architectures.md
@@ -22,8 +22,29 @@ its model architecture. The architecture is like a recipe for the network, and
 you can't change the recipe once the dish has already been prepared. You have
 to make a new one.
+
+## Type signatures {#type-sigs}
+
+> #### Example
+>
+> ```python
+> from typing import List, Optional
+>
+> import spacy
+> from spacy.tokens import Doc
+> from thinc.api import Model, Softmax, chain, with_array, zero_init
+> from thinc.types import Floats2d
+>
+> @spacy.registry.architectures.register("spacy.Tagger.v1")
+> def build_tagger_model(
+>     tok2vec: Model[List[Doc], List[Floats2d]], nO: Optional[int] = None
+> ) -> Model[List[Doc], List[Floats2d]]:
+>     t2v_width = tok2vec.get_dim("nO") if tok2vec.has_dim("nO") else None
+>     output_layer = Softmax(nO, t2v_width, init_W=zero_init)
+>     softmax = with_array(output_layer)
+>     model = chain(tok2vec, softmax)
+>     model.set_ref("tok2vec", tok2vec)
+>     model.set_ref("softmax", output_layer)
+>     model.set_ref("output_layer", output_layer)
+>     return model
+> ```
+
 The Thinc `Model` class is a **generic type** that can specify its input and
 output types. Python uses a square-bracket notation for this, so the type
 ~~Model[List, Dict]~~ says that each batch of inputs to the model will be a
@@ -43,39 +64,46 @@ are:
 | ~~Ragged~~         | A container to handle variable-length sequence data in an unpadded contiguous array. |
 | ~~Padded~~         | A container to handle variable-length sequence data in a padded contiguous array.    |
 
-The model type-signatures help you figure out which model architectures and
-components can fit together. For instance, the
-[`TextCategorizer`](/api/textcategorizer) class expects a model typed
-~~Model[List[Doc], Floats2d]~~, because the model will predict one row of
-category probabilities per `Doc`. In contrast, the `Tagger` class expects a
-model typed ~~Model[List[Doc], List[Floats2d]]~~, because it needs to predict
-one row of probabilities per token. There's no guarantee that two models with
-the same type-signature can be used interchangeably. There are many other ways
-they could be incompatible. However, if the types don't match, they almost
-surely _won't_ be compatible. This little bit of validation goes a long way,
-especially if you configure your editor or other tools to highlight these errors
-early. Thinc will also verify that your types match correctly when your config
-file is processed at the beginning of training.
+The model type signatures help you figure out which model architectures and
+components can **fit together**. For instance, the
+[`TextCategorizer`](/api/textcategorizer) class expects a model typed
+~~Model[List[Doc], Floats2d]~~, because the model will predict one row of
+category probabilities per [`Doc`](/api/doc). In contrast, the
+[`Tagger`](/api/tagger) class expects a model typed ~~Model[List[Doc],
+List[Floats2d]]~~, because it needs to predict one row of probabilities per
+token.
+
+There's no guarantee that two models with the same type signature can be used
+interchangeably. There are many other ways they could be incompatible. However,
+if the types don't match, they almost surely _won't_ be compatible. This little
+bit of validation goes a long way, especially if you
+[configure your editor](https://thinc.ai/docs/usage-type-checking) or other
+tools to highlight these errors early. Thinc will also verify that your types
+match correctly when your config file is processed at the beginning of training.
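+
+To make the notation concrete, here's a small self-contained sketch of the
+same generic typing in plain Thinc. The layers and dimensions are arbitrary
+choices for illustration, not one of spaCy's registered architectures:
+
+```python
+import numpy
+from thinc.api import Model, Relu, Softmax, chain
+from thinc.types import Floats2d
+
+# chain() composes layers, so the combined model takes its input type from
+# the first layer and its output type from the last one.
+model: Model[Floats2d, Floats2d] = chain(Relu(nO=8), Softmax())
+
+X = numpy.zeros((4, 5), dtype="f")  # a batch of 4 examples, 5 features each
+Y = numpy.zeros((4, 3), dtype="f")  # one row of class probabilities each
+model.initialize(X=X, Y=Y)  # missing dimensions are inferred from the data
+assert model.predict(X).shape == (4, 3)
+```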
 
 ## Defining sublayers {#sublayers}
 
-Model architecture functions often accept sublayers as arguments, so that you
-can try substituting a different layer into the network. Depending on how the
-architecture function is structured, you might be able to define your network
-structure entirely through the [config system](/usage/training#config), using
-layers that have already been defined. The
+Model architecture functions often accept **sublayers as arguments**, so that
+you can try **substituting a different layer** into the network. Depending on
+how the architecture function is structured, you might be able to define your
+network structure entirely through the [config system](/usage/training#config),
+using layers that have already been defined. The
 [transformers documentation](/usage/embeddings-transformers#transformers)
-section shows a common example of swapping in a different sublayer. In most NLP
-neural network models, the most important parts of the network are what we refer
-to as the
+section shows a common example of swapping in a different sublayer.
+
+In most neural network models for NLP, the most important parts of the network
+are what we refer to as the
 [embed and encode](https://explosion.ai/blog/embed-encode-attend-predict) steps.
 These steps together compute dense, context-sensitive representations of the
-tokens. Most of spaCy's default architectures accept a `tok2vec` layer as an
-argument, so you can control this important part of the network separately. This
-makes it easy to switch between transformer, CNN, BiLSTM or other feature
-extraction approaches. And if you want to define your own solution, all you need
-to do is register a ~~Model[List[Doc], List[Floats2d]]~~ architecture function,
-and you'll be able to try it out in any of spaCy components.
+tokens. Most of spaCy's default architectures accept a
+[`tok2vec` embedding layer](/api/architectures#tok2vec-arch) as an argument, so
+you can control this important part of the network separately. This makes it
+easy to **switch between** transformer, CNN, BiLSTM or other feature extraction
+approaches. And if you want to define your own solution, all you need to do is
+register a ~~Model[List[Doc], List[Floats2d]]~~ architecture function, and
+you'll be able to try it out in any of spaCy's components.
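+
+As a rough sketch of what that swapping looks like in code, here's a
+deliberately trivial stand-in layer with the right signature, passed to the
+tagger architecture registered in the example above. `toy_tok2vec` is not a
+real feature extractor, just an illustration of the interface:
+
+```python
+from typing import List
+
+import spacy
+from spacy.tokens import Doc
+from thinc.api import Model
+from thinc.types import Floats2d
+
+def toy_tok2vec(width: int = 8) -> Model[List[Doc], List[Floats2d]]:
+    # One all-zeros row per token: useless for accuracy, but it satisfies
+    # the Model[List[Doc], List[Floats2d]] signature the tagger expects.
+    def forward(model: Model, docs: List[Doc], is_train: bool):
+        vectors = [model.ops.alloc2f(len(doc), width) for doc in docs]
+        return vectors, lambda d_vectors: []
+    return Model("toy_tok2vec", forward, dims={"nO": width})
+
+# Look up the architecture registered above and hand it the replacement
+# sublayer. nO=20 is an arbitrary number of tag classes.
+make_tagger = spacy.registry.architectures.get("spacy.Tagger.v1")
+model = make_tagger(tok2vec=toy_tok2vec(), nO=20)
+
+docs = [spacy.blank("en")("This is a sentence.")]
+model.initialize(X=docs)
+assert model.predict(docs)[0].shape == (5, 20)
+```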
+
+### Registering new architectures