A decomposable attention model for Natural Language Inference

by Matthew Honnibal, @honnibal. Updated for spaCy 2.0+ and Keras 2.2.2+ by John Stewart, @free-variation.

This directory contains an implementation of the entailment prediction model described by Parikh et al. (2016). The model is notable for its competitive performance with very few parameters.

The model is implemented using Keras and spaCy. Keras is used to build and train the network. spaCy is used to load the GloVe vectors, perform the feature extraction, and help you apply the model at run-time. The following demo code shows how the entailment model can be used at run-time, once the hook is installed to customise the .similarity() method of spaCy's Doc and Span objects:

import spacy

# KerasSimilarityShim is defined in spacy_hook.py in this directory
from spacy_hook import KerasSimilarityShim

def demo(shape):
    nlp = spacy.load('en_vectors_web_lg')
    # Install the trained entailment model as the similarity hook
    nlp.add_pipe(KerasSimilarityShim.load(nlp.path / 'similarity', nlp, shape[0]))

    doc1 = nlp(u'The king of France is bald.')
    doc2 = nlp(u'France has no king.')

    print("Sentence 1:", doc1)
    print("Sentence 2:", doc2)

    entailment_type, confidence = doc1.similarity(doc2)
    print("Entailment type:", entailment_type, "(Confidence:", confidence, ")")

This gives the output Entailment type: contradiction (Confidence: 0.60604566), showing that the system has definite opinions about Bertrand Russell's famous conundrum!

I'm working on a blog post to explain Parikh et al.'s model in more detail. A notebook is available that briefly explains this implementation. I think it is a very interesting example of the attention mechanism, which I didn't understand very well before working through this paper. There are lots of ways to extend the model.
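Until the blog post is ready, here's a rough NumPy sketch of the model's core "attend" step, just to give a flavour of the mechanism (the function names and array shapes here are illustrative, not the actual Keras code in keras_decomposable_attention.py): every word in one sentence attends over every word in the other, producing soft alignments in both directions.

import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(a, b):
    # a: (len_a, dim) and b: (len_b, dim) projected word vectors
    scores = a @ b.T                       # alignment scores e[i, j]
    beta = softmax(scores, axis=1) @ b     # b soft-aligned to each word of a
    alpha = softmax(scores, axis=0).T @ a  # a soft-aligned to each word of b
    return beta, alpha

The aligned and original vectors are then compared and aggregated to predict the entailment label.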

What's where

__main__.py: The script that will be executed. Defines the CLI, the data reading, etc. (all the boring stuff).
spacy_hook.py: Provides a class, KerasSimilarityShim, that lets you use an arbitrary function to customize spaCy's doc.similarity() method. Instead of the default average-of-vectors algorithm, when you call doc1.similarity(doc2), you'll get the result of your_model(doc1, doc2). A minimal sketch of how such a shim works follows this list.
keras_decomposable_attention.py: Defines the neural network model.
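To make that concrete, here is a minimal sketch of how a similarity shim can work in spaCy 2 (the class name SimilarityShim and the model callable are placeholders; see spacy_hook.py for the real implementation):

class SimilarityShim(object):
    def __init__(self, model):
        self.model = model  # any callable (doc1, doc2) -> prediction

    def __call__(self, doc):
        # spaCy consults these hooks before falling back to the default
        # average-of-vectors similarity
        doc.user_hooks['similarity'] = self.predict
        doc.user_span_hooks['similarity'] = self.predict
        return doc

    def predict(self, doc1, doc2):
        return self.model(doc1, doc2)

Adding this component with nlp.add_pipe(SimilarityShim(model)) routes every doc1.similarity(doc2) call through your model.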

Setting up

First, install Keras, spaCy and the spaCy English models (about 1GB of data):

pip install keras
pip install spacy
python -m spacy download en_vectors_web_lg

You'll also want to get Keras working on your GPU, and you will need a backend, such as TensorFlow or Theano. This will depend on your setup, so you're mostly on your own for this step. If you're using AWS, try the NVIDIA AMI. It made things pretty easy.

Once you've installed the dependencies, you can run a small preliminary test of the Keras model:

py.test keras_parikh_entailment/keras_decomposable_attention.py

This compiles the model and fits it with some dummy data. You should see that both tests passed.

Finally, download the Stanford Natural Language Inference corpus from https://nlp.stanford.edu/projects/snli/.
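The corpus is distributed as JSONL files, one JSON object per line. As a rough sketch of reading it (the field names gold_label, sentence1 and sentence2 come from the SNLI distribution; this read_snli helper is only illustrative, the real data reading lives in __main__.py):

import json

def read_snli(path):
    examples = []
    with open(path) as file_:
        for line in file_:
            eg = json.loads(line)
            label = eg['gold_label']
            if label == '-':  # the annotators couldn't agree; skip it
                continue
            examples.append((eg['sentence1'], eg['sentence2'], label))
    return examples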

Running the example

You can run the keras_parikh_entailment/ directory as a script, which executes the file keras_parikh_entailment/__main__.py. If you run the script without arguments, the usage is shown. Running it with -h explains the command-line arguments.

The first thing you'll want to do is train the model:

python keras_parikh_entailment/ train -t <path to SNLI train JSON> -s <path to SNLI dev JSON>

Training takes about 300 epochs for full accuracy, and I haven't rerun the full experiment since refactoring things to publish this example — please let me know if I've broken something. You should get to at least 85% on the development data even after 10-15 epochs.

The other two modes demonstrate run-time usage. I never like relying on the accuracy printed by .fit() methods, and I don't feel confident in a model until a fresh process has loaded it and made predictions without access to the gold labels. I've therefore included an evaluate mode.

python keras_parikh_entailment/ evaluate -s <path to SNLI dev JSON>

Finally, there's also a little demo, which mostly exists to show you how run-time usage will eventually look.

python keras_parikh_entailment/ demo

Getting updates

We should have the blog post explaining the model ready before the end of the week. To get notified when it's published, you can either follow me on Twitter or subscribe to our mailing list.