Improve simple training example in v3 migration (#6438)

* Create the examples once
* Use the examples in the initialization
* Provide the batch size
* Fix `begin_training` migration example
Adriane Boyd 2020-11-30 02:39:45 +01:00 committed by GitHub
parent 079f6ea474
commit 1442d2f213
1 changed file with 8 additions and 8 deletions


@@ -969,18 +969,18 @@ The [`Language.update`](/api/language#update),
 raw text and a dictionary of annotations.
 ```python
-### Training loop {highlight="11"}
+### Training loop {highlight="5-8,12"}
 TRAIN_DATA = [
     ("Who is Shaka Khan?", {"entities": [(7, 17, "PERSON")]}),
     ("I like London.", {"entities": [(7, 13, "LOC")]}),
 ]
-nlp.initialize()
+examples = []
+for text, annots in TRAIN_DATA:
+    examples.append(Example.from_dict(nlp.make_doc(text), annots))
+nlp.initialize(lambda: examples)
 for i in range(20):
-    random.shuffle(TRAIN_DATA)
-    for batch in minibatch(TRAIN_DATA):
-        examples = []
-        for text, annots in batch:
-            examples.append(Example.from_dict(nlp.make_doc(text), annots))
+    random.shuffle(examples)
+    for batch in minibatch(examples, size=8):
         nlp.update(examples)
 ```
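Taken together with the commit bullets above (create the examples once, use them in the initialization, provide the batch size), the revised hunk assumes that `nlp`, `random`, `minibatch` and `Example` are already set up earlier on the docs page. A self-contained sketch of the same pattern is below; the blank `en` pipeline, the `ner` component, keeping the optimizer, and passing the current `batch` to `nlp.update` (the docs snippet passes the full `examples` list) are assumptions added for illustration, not part of the diff.

```python
# Self-contained sketch of the revised training loop. Assumptions not present
# in the diff: a blank "en" pipeline with an "ner" component, keeping the
# optimizer returned by initialize, and updating on `batch` instead of the
# full `examples` list.
import random

import spacy
from spacy.training import Example
from spacy.util import minibatch

TRAIN_DATA = [
    ("Who is Shaka Khan?", {"entities": [(7, 17, "PERSON")]}),
    ("I like London.", {"entities": [(7, 13, "LOC")]}),
]

nlp = spacy.blank("en")
nlp.add_pipe("ner")

# Create the Example objects once, outside the training loop
examples = []
for text, annots in TRAIN_DATA:
    examples.append(Example.from_dict(nlp.make_doc(text), annots))

# initialize expects a callable that returns the examples
optimizer = nlp.initialize(lambda: examples)

for i in range(20):
    random.shuffle(examples)
    for batch in minibatch(examples, size=8):
        losses = {}
        nlp.update(batch, sgd=optimizer, losses=losses)
```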
@@ -995,7 +995,7 @@ network,
 setting up the label scheme.
 ```diff
-- nlp.initialize(examples)
+- nlp.begin_training()
 + nlp.initialize(lambda: examples)
 ```
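The fixed migration snippet reduces to the `begin_training` → `initialize` rename plus the new argument type: `initialize` takes a zero-argument function that returns `Example` objects rather than the examples themselves, which is why the docs wrap `examples` in a `lambda`. A minimal sketch of the ported call follows; the blank `en` pipeline, the `ner` component and the single hand-built `Example` are assumptions, not content from the diff.

```python
# Sketch of porting a v2-style begin_training call to v3 initialize.
# Assumed setup: blank "en" pipeline with an "ner" component and one
# hand-built Example; none of this comes from the commit itself.
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
nlp.add_pipe("ner")
examples = [
    Example.from_dict(
        nlp.make_doc("Who is Shaka Khan?"),
        {"entities": [(7, 17, "PERSON")]},
    )
]

# spaCy v2:
#     optimizer = nlp.begin_training()
# spaCy v3: pass a function that returns the Example objects
optimizer = nlp.initialize(lambda: examples)
```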