spaCy/bin/wiki_entity_linking

Latest commit: Default settings to configurations (#4995)
Sofie Van Landeghem, 06f0a8daa0, 2020-02-27 18:42:27 +01:00

Commit message:
* fix grad_clip naming

* cleaning up pretrained_vectors out of cfg

* further refactoring of Model inits

* move Model building out of pipes

* further refactor to require a model config when creating a pipe

* small fixes

* making cfg in nn_parser more consistent

* fixing nr_class for parser

* fixing nn_parser's nO

* fix printing of loss

* architectures in own file per type, consistent naming

* convenience methods default_tagger_config and default_tok2vec_config

* let create_pipe access default config if available for that component

* default_parser_config

* move defaults to separate folder

* allow reading nlp from package or dir with argument 'name'

* architecture spacy.VocabVectors.v1 to read static vectors from file

* cleanup

* default configs for nel, textcat, morphologizer, tensorizer

* fix imports

* fixing unit tests

* fixes and clean up

* fixing defaults, nO, fix unit tests

* restore parser IO

* fix IO

* 'fix' serialization test

* add *.cfg to manifest

* fix example configs with additional arguments

* replace Morphologizer with Tagger

* add IO bit when testing overfitting of tagger (currently failing)

* fix IO - don't initialize when reading from disk

* expand overfitting tests to also check IO goes OK

* remove dropout from HashEmbed to fix Tagger performance

* add defaults for sentrec

* update thinc

* always pass a Model instance to a Pipe

* fix piped_added statement

* remove obsolete W029

* remove obsolete errors

* restore byte checking tests (work again)

* clean up test

* further test cleanup

* convert from config to Model in create_pipe

* bring back error when component is not initialized

* cleanup

* remove calls for nlp2.begin_training

* use thinc.api in imports

* allow setting charembed's nM and nC

* fix for hardcoded nM/nC + unit test

* formatting fixes

* trigger build

| File | Latest commit message | Commit time |
| --- | --- | --- |
| README.md | Friendly error warning for NEL example script (#4881) | 2020-01-14 01:51:14 +01:00 |
| __init__.py | KB extensions and better parsing of WikiData (#4375) | 2019-10-14 12:28:53 +02:00 |
| entity_linker_evaluation.py | Reduce mem usage in training Entity Linker (#4811) | 2020-01-06 14:59:50 +01:00 |
| kb_creator.py | Example class for training data (#4543) | 2019-11-11 17:35:27 +01:00 |
| train_descriptions.py | Default settings to configurations (#4995) | 2020-02-27 18:42:27 +01:00 |
| wiki_io.py | KB extensions and better parsing of WikiData (#4375) | 2019-10-14 12:28:53 +02:00 |
| wiki_namespaces.py | KB extensions and better parsing of WikiData (#4375) | 2019-10-14 12:28:53 +02:00 |
| wikidata_pretrain_kb.py | Friendly error warning for NEL example script (#4881) | 2020-01-14 01:51:14 +01:00 |
| wikidata_processor.py | KB extensions and better parsing of WikiData (#4375) | 2019-10-14 12:28:53 +02:00 |
| wikidata_train_entity_linker.py | fix bugs from porting master to develop | 2020-02-26 08:44:22 +01:00 |
| wikipedia_processor.py | Reduce mem usage in training Entity Linker (#4811) | 2020-01-06 14:59:50 +01:00 |

README.md

Entity Linking with Wikipedia and Wikidata

Step 1: Create a Knowledge Base (KB) and training data

Run wikidata_pretrain_kb.py

  • This script takes the locations of a Wikipedia dump and a Wikidata dump as input, and produces a KB directory plus a training file (see the example invocation after this list)
  • You can set the filtering parameters for KB construction:
    • max_per_alias (-a): maximum number of candidate entities in the KB per alias/synonym
    • min_freq (-f): minimum number of times an entity must occur in the corpus to be included in the KB
    • min_pair (-c): minimum number of times an entity+alias combination must occur in the corpus to be included in the KB
  • Further parameters to set:
    • descriptions_from_wikipedia (-wp): whether to parse descriptions from Wikipedia (True) or Wikidata (False)
    • entity_vector_length (-v): length of the pre-trained entity description vectors
    • lang (-la): language for which to fetch Wikidata information (the dump contains all languages)
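
A rough sketch of what a Step 1 run could look like. The dump file names, the output directory, the base model and their order as positional arguments are assumptions made for illustration; only the flags are taken from the parameter list above, and the values shown are examples, not recommended defaults. Check the script's argument parsing for the exact interface.

```bash
# Sketch of a full Step 1 run (positional arguments and values are illustrative).
python wikidata_pretrain_kb.py \
    wikidata-latest-all.json.bz2 enwiki-latest-pages-articles.xml.bz2 \
    output_kb/ en_core_web_lg \
    -a 10 -f 20 -c 5 -v 64 -la en
```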

Quick testing and rerunning:

  • When trying out the pipeline for a quick test, set limit_prior (-lp), limit_train (-lt) and/or limit_wd (-lw) to read only part of each dump instead of everything.
    • e.g. set -lt 20000 -lp 2000 -lw 3000 -f 1, as in the sketch below
  • If you only want to (re)run certain parts of the pipeline, remove the corresponding output files and they will be recalculated or reparsed on the next run.
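
The same sketch as above, with the limit flags added for a quick test run. The positional arguments remain assumptions; the limit values are the ones suggested in this README.

```bash
# Quick-test run: only read parts of the dumps.
python wikidata_pretrain_kb.py \
    wikidata-latest-all.json.bz2 enwiki-latest-pages-articles.xml.bz2 \
    output_kb/ en_core_web_lg \
    -lt 20000 -lp 2000 -lw 3000 -f 1
```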

Step 2: Train an Entity Linking model

Run wikidata_train_entity_linker.py

  • This takes the KB directory produced in Step 1 and trains an Entity Linking model (see the example invocation at the end of this section)
  • Specify the output directory (-o) in which the final, trained model will be saved
  • You can set the learning parameters for the EL training:
    • epochs (-e): number of training iterations
    • dropout (-p): dropout rate
    • lr (-n): learning rate
    • l2 (-r): L2 regularization
  • Specify the number of training and development (dev) articles with train_articles (-t) and dev_articles (-d), respectively
    • If not specified, the full dataset will be processed, which may take a very long time!
  • Further parameters to set:
    • labels_discard (-l): NER label types to discard during training
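
A rough sketch of a Step 2 run. Passing the KB directory as a positional argument is an assumption, as are the specific values shown; the flags come from the list above.

```bash
# Train the Entity Linking model on the KB built in Step 1
# (flag values are illustrative, not recommended defaults).
python wikidata_train_entity_linker.py \
    output_kb/ \
    -o trained_nel_model/ \
    -e 10 -p 0.5 -n 0.005 -r 1e-6 \
    -t 5000 -d 500
```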