Genie NLP library

This library contains the NLP models for the Genie toolkit for virtual assistants. It is derived from the decaNLP library by Salesforce, but has diverged significantly.

The library is suitable for any NLP task that can be framed as Contextual Question Answering, that is, any task with three parts:

  • text or structured input as the context
  • text input as the question
  • text or structured output as the answer

As the decaNLP paper shows, many different NLP tasks can be framed in this way. Genie primarily uses the library for semantic parsing, dialogue state tracking, and natural language generation given a formal dialogue state, and these are the tasks the models work best on.
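
As an illustration, a single semantic parsing example could be framed as follows; the surface syntax of the formal representation and the exact dataset format depend on the task, so this is only a hypothetical sketch:

  context:  (empty, or the formal dialogue state so far)
  question: show me restaurants nearby
  answer:   now => @com.yelp.restaurant() => notify

The model reads the context and the question, and is trained to generate the answer token by token.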

Installation

genienlp is available on PyPI. You can install it with:

pip3 install genienlp
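
If you prefer to keep the installation isolated from system packages, a typical virtual environment setup (the environment name is arbitrary) is:

python3 -m venv genienlp-env
source genienlp-env/bin/activate
pip3 install genienlp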

After installation, a genienlp command becomes available.

You will likely also want to download the word embeddings ahead of time:

genienlp cache-embeddings --embeddings glove+char -d <embeddingdir>
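
For example, to cache the GloVe and character embeddings under a local .embeddings directory (the name is arbitrary; any writable path works):

genienlp cache-embeddings --embeddings glove+char -d .embeddings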

Usage

Train a model:

genienlp train --tasks almond --train_iterations 50000 --embeddings <embeddingdir> --data <datadir> --save <modeldir>
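
For example, with embeddings cached in .embeddings and a dataset under dataset/ (both paths are hypothetical), the trained model and its checkpoints are written to model/:

genienlp train --tasks almond --train_iterations 50000 --embeddings .embeddings --data dataset --save model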

Generate predictions:

genienlp predict --tasks almond --data <datadir> --path <modeldir>
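
Continuing the hypothetical layout above, this runs the model saved in model/ on the data in dataset/:

genienlp predict --tasks almond --data dataset --path model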

Train a paraphrasing model:

genienlp train-paraphrase --train_data_file <train_data_file> --eval_data_file <dev_data_file> --output_dir <modeldir> --model_type gpt2 --do_train --do_eval --evaluate_during_training --logging_steps 1000 --save_steps 1000 --max_steps 40000 --save_total_limit 2 --gradient_accumulation_steps 16 --per_gpu_eval_batch_size 4 --per_gpu_train_batch_size 4 --num_train_epochs 1 --model_name_or_path <gpt2/gpt2-medium/gpt2-large/gpt2-xl>
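
Note that with --per_gpu_train_batch_size 4 and --gradient_accumulation_steps 16, gradients are accumulated over 16 batches before each optimizer update, so the effective training batch size is 4 × 16 = 64 per GPU. The --model_name_or_path argument selects which pretrained GPT-2 checkpoint to fine-tune; larger checkpoints generally paraphrase better but need more GPU memory.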

Generate paraphrases:

genienlp run-paraphrase --model_type gpt2 --model_name_or_path <modeldir> --temperature 0.3 --repetition_penalty 1.0 --num_samples 4 --length 15 --batch_size 32 --input_file <input tsv file> --input_column 1
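
Here --input_column selects which column of the input TSV holds the sentences to paraphrase, and --num_samples 4 requests four paraphrase candidates per input sentence; a low --temperature such as 0.3 keeps the output close to the input phrasing, while higher values yield more varied (and noisier) paraphrases. A filled-in invocation, with hypothetical paths:

genienlp run-paraphrase --model_type gpt2 --model_name_or_path paraphraser --temperature 0.3 --repetition_penalty 1.0 --num_samples 4 --length 15 --batch_size 32 --input_file dataset/train.tsv --input_column 1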

See genienlp --help and genienlp <command> --help for details about each argument.

Citation

If you use the MultiTask Question Answering model in your work, please cite The Natural Language Decathlon: Multitask Learning as Question Answering.

@article{McCann2018decaNLP,
  title={The Natural Language Decathlon: Multitask Learning as Question Answering},
  author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher},
  journal={arXiv preprint arXiv:1806.08730},
  year={2018}
}

If you use the BERT-LSTM model (Identity encoder + MQAN decoder) in your work, please cite Schema2QA: Answering Complex Queries on the Structured Web with a Neural Model.

@article{Xu2020Schema2QA,
  title={Schema2QA: Answering Complex Queries on the Structured Web with a Neural Model},
  author={Silei Xu and Giovanni Campagna and Jian Li and Monica S. Lam},
  journal={arXiv preprint arXiv:2001.05609},
  year={2020}
}