spaCy/spacy/tests/serialize/test_serialize_extension_at...


# coding: utf-8
from __future__ import unicode_literals
import pytest
from spacy.tokens import Doc, Token
from spacy.vocab import Vocab
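
# A Doc carrying one extension of each kind (attribute with a default,
# getter, and method), plus a Token-level extension. The `en_tokenizer`
# fixture is assumed to be provided by the suite's shared conftest.py.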
@pytest.fixture
def doc_w_attrs(en_tokenizer):
    Doc.set_extension("_test_attr", default=False)
    Doc.set_extension("_test_prop", getter=lambda doc: len(doc.text))
    Doc.set_extension(
        "_test_method", method=lambda doc, arg: "{}{}".format(len(doc.text), arg)
    )
    doc = en_tokenizer("This is a test.")
    doc._._test_attr = "test"
    Token.set_extension("_test_token", default="t0")
    doc[1]._._test_token = "t1"
    return doc

def test_serialize_ext_attrs_from_bytes(doc_w_attrs):
    # Extension values set on the Doc should survive a to_bytes()/from_bytes()
    # round trip into a fresh Doc backed by a new Vocab.
    doc_b = doc_w_attrs.to_bytes()
    doc = Doc(Vocab()).from_bytes(doc_b)
    assert doc._.has("_test_attr")
    assert doc._._test_attr == "test"
    assert doc._._test_prop == len(doc.text)
    assert doc._._test_method("test") == "{}{}".format(len(doc.text), "test")
    # Only doc[1] was given a non-default token value; the other tokens fall
    # back to the registered default "t0".
    assert doc[0]._._test_token == "t0"
    assert doc[1]._._test_token == "t1"
    assert doc[2]._._test_token == "t0"