2020-08-10 17:47:42 +00:00

0.4.0a1
=======

* The paraphrase generation code was extended and can now use BART instead of GPT2. It can
  also run as a translation task, using the Marian models [#26, #27, #29, #31]; a sketch
  follows this list.
* Added the ability to override the context and the question used as input to the model [#23].
* Multi-GPU training was tested and fixed [#25].
* Completed support for beam search, including the ability to return multiple results for a
  given input [#30].
* Misc bug fixes [#32].
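
A minimal, illustrative sketch of the translation path (Marian) and of beam search returning
multiple candidates per input. It uses the Hugging Face transformers API directly rather than
genienlp's own code, and the checkpoint name and generation settings are assumptions made for
the example::

    # Illustrative only: plain transformers usage, not genienlp's pipeline.
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-en-de"  # assumed example checkpoint
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    sentences = ["Show me nearby restaurants.", "What is the weather tomorrow?"]
    batch = tokenizer(sentences, return_tensors="pt", padding=True)

    # Beam search that returns two candidates per input sentence.
    outputs = model.generate(**batch, num_beams=4, num_return_sequences=2, max_length=64)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))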
2020-06-09 03:50:28 +00:00

0.3.0
=====

* New option: sentence batching. Multiple sentences with related properties can be batched
  together in microbatches within a larger minibatch [#14, #11]; a sketch of the idea
  follows this list.
* Added an option to append the context and question into a single model input [#18, #20, #22].
* Updated the Transformers dependency to 2.9, and fixed compatibility with newer versions [#18, #24].
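
A rough sketch of the sentence-batching idea: examples that share a property are grouped into
microbatches, and several microbatches are packed into one minibatch. The grouping key and the
sizes are made up for illustration; this is not genienlp's implementation::

    # Illustrative sketch: group examples that share a key into microbatches,
    # then pack several microbatches into one minibatch.
    from itertools import groupby

    def make_minibatches(examples, key, microbatch_size, microbatches_per_minibatch):
        examples = sorted(examples, key=key)
        microbatches = []
        for _, group in groupby(examples, key=key):
            group = list(group)
            for i in range(0, len(group), microbatch_size):
                microbatches.append(group[i:i + microbatch_size])
        for i in range(0, len(microbatches), microbatches_per_minibatch):
            yield microbatches[i:i + microbatches_per_minibatch]

    # Example: sentences tagged with the property they share (here, a template id).
    data = [("show me restaurants", 0), ("show me hotels", 0), ("play a song", 1)]
    for minibatch in make_minibatches(data, key=lambda ex: ex[1],
                                      microbatch_size=2, microbatches_per_minibatch=4):
        print(minibatch)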
2020-04-03 17:28:04 +00:00

0.2.0
=====

* No changes since 0.2.0b2.

Please see the development releases below for the full list of features in this release.
2020-04-03 01:49:01 +00:00

0.2.0b2
=======

* Misc bug fixes related to inference time [#12, #13].
2020-03-27 20:08:42 +00:00

0.2.0b1
=======

* Added multilingual Almond tasks [#10].
2020-03-25 01:55:59 +00:00

0.2.0a2
=======

* Misc bug fixes [#8, #9].
2020-03-22 02:08:19 +00:00

0.2.0a1
=======

New features:

* Added new tasks for Almond: almond_dialogue_nlu, almond_dialogue_nlg, almond_dialogue_policy.
* Added a new encoder, "Coattention", which encodes the context and question separately, then
  coattends and applies a BiLSTM layer; a sketch of the idea follows this list.
* For the Coattention and Identity encoders, it is now possible to specify the context and
  question embeddings separately.
* Embeddings in the context, question and answer can now be untied, by suffixing the name with
  '@' followed by a unique identifier (e.g. bert-base-uncased@0 and bert-base-uncased@1).
* Added an option to pretrain the context encoder, using an MLM objective.
* Added beam search.
* New embedding option: XLM-R (XLM trained with the RoBERTa procedure).
* New task: paraphrasing with GPT2. This is not fully integrated with the other tasks yet,
  but it will be in the future.
* A new command, "genienlp export", can be used to save a trained model for inference.
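
A minimal sketch of the coattention-plus-BiLSTM idea described in the encoder item above. The
dimensions and layer choices are assumptions for illustration only; this is not the actual
genienlp Coattention encoder::

    # Illustrative sketch: encode context and question separately, compute an affinity
    # matrix, attend from context to question, and mix the result with a BiLSTM.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CoattentionSketch(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.bilstm = nn.LSTM(2 * dim, dim, batch_first=True, bidirectional=True)

        def forward(self, context, question):
            # context: (batch, n, dim), question: (batch, m, dim)
            affinity = torch.bmm(context, question.transpose(1, 2))       # (batch, n, m)
            c2q = torch.bmm(F.softmax(affinity, dim=2), question)         # (batch, n, dim)
            mixed = torch.cat([context, c2q], dim=2)                      # (batch, n, 2*dim)
            output, _ = self.bilstm(mixed)                                 # (batch, n, 2*dim)
            return output

    encoder = CoattentionSketch(dim=128)
    out = encoder(torch.randn(2, 20, 128), torch.randn(2, 8, 128))
    print(out.shape)  # torch.Size([2, 20, 256])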

Incompatible changes:

* The --save flag is now required when calling train.
2020-01-29 18:23:44 +00:00

0.1.1
=====

* Fixed publishing on PyPI.
2020-01-29 18:01:06 +00:00

0.1.0
=====

* First release.