0.4.0a1
=======

* The paraphrase generation code was extended and can now use BART instead of GPT2. It can also run as a translation task (using the Marian models) [#26, #27, #29, #31].
* Added the ability to override the context and the question used as input to the model [#23].
* Multi-GPU training was tested and fixed [#25].
* Completed support for beam search, including the ability to return multiple results for a given input [#30].
* Misc bug fixes [#32].

0.3.0
=====

* New option: sentence batching. Multiple sentences with related properties can be batched together in microbatches within a larger minibatch [#14, #11].
* Added an option to append the context and question into a single model input [#18, #20, #22].
* Updated the Transformers dependency to 2.9, and fixed compatibility with newer versions [#18, #24].

0.2.0
=====

* No changes since 0.2.0b2. Please see the development releases below for the full list of features in this release.

0.2.0b2
=======

* Misc bug fixes related to inference time [#12, #13].

0.2.0b1
=======

* Added multilingual Almond tasks [#10].

0.2.0a2
=======

* Misc bug fixes [#8, #9].

0.2.0a1
=======

New features:

* Added new tasks for Almond: almond_dialogue_nlu, almond_dialogue_nlg, almond_dialogue_policy.
* Added a new encoder, "Coattention", which encodes the context and question separately, then coattends and applies a BiLSTM layer.
* For the Coattention and Identity encoders, it is now possible to specify the context and question embeddings separately.
* Embeddings in the context, question, and answer can now be untied, by suffixing the name with '@' followed by a unique identifier (e.g. bert-base-uncased@0 and bert-base-uncased@1).
* Added an option to pretrain the context encoder using an MLM objective.
* Added beam search.
* New embedding option: XLM-R (XLM trained with RoBERTa).
* New task: paraphrasing with GPT2. This is not fully integrated with the other tasks yet, but it will be in the future.
* The new command "genienlp export" can be used to save a trained model for inference.

Incompatible changes:

* The --save flag is now required when calling train.

0.1.1
=====

* Fixed publishing on PyPI.

0.1.0
=====

* First release.