diff --git a/README.rst b/README.rst
index bac46dec4..844271f2b 100644
--- a/README.rst
+++ b/README.rst
@@ -6,7 +6,7 @@ Cython. spaCy is built on the very latest research, but it isn't researchware.
It was designed from day 1 to be used in real products. It's commercial
open-source software, released under the MIT license.
-💫 **Version 1.2 out now!** `Read the release notes here. <https://github.com/explosion/spaCy/releases>`_
+💫 **Version 1.3 out now!** `Read the release notes here. <https://github.com/explosion/spaCy/releases>`_
.. image:: http://i.imgur.com/wFvLZyJ.png
:target: https://travis-ci.org/explosion/spaCy
@@ -241,8 +241,38 @@ calling ``spacy.load()``, or by passing a ``path`` argument to the ``spacy.en.En
Changelog
=========
-2016-11-04 `v1.2.0 <https://github.com/explosion/spaCy/releases/tag/v1.2.0>`_: *Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese*
--------------------------------------------------------------------------------------------------------------------------------------------
+2016-12-03 `v1.3.0 <https://github.com/explosion/spaCy/releases/tag/v1.3.0>`_: *Improve API consistency*
+--------------------------------------------------------------------------------------------------------------
+
+**✨ API improvements**
+
+* Add ``Span.sentiment`` attribute (see the usage sketch after this list).
+* `#658 <https://github.com/explosion/spaCy/issues/658>`_: Add ``Span.noun_chunks`` iterator (thanks `@pokey <https://github.com/pokey>`_).
+* `#642 <https://github.com/explosion/spaCy/issues/642>`_: Let ``--data-path`` be specified when running download.py scripts (thanks `@ExplodingCabbage <https://github.com/ExplodingCabbage>`_).
+* `#638 <https://github.com/explosion/spaCy/issues/638>`_: Add German stopwords (thanks `@souravsingh <https://github.com/souravsingh>`_).
+* `#614 <https://github.com/explosion/spaCy/issues/614>`_: Fix ``PhraseMatcher`` to work with new ``Matcher`` (thanks `@sadovnychyi <https://github.com/sadovnychyi>`_).
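+
+The new ``Span.sentiment`` attribute and ``Span.noun_chunks`` iterator can be
+used along the following lines. This is a minimal sketch rather than an official
+example: it assumes an English model is already installed, and the sample text
+and token slice are illustrative only.
+
+.. code:: python
+
+    from __future__ import unicode_literals, print_function
+    import spacy
+
+    nlp = spacy.load('en')
+    doc = nlp('London is a big city in the United Kingdom. I like it.')
+
+    # Slicing a Doc yields a Span; both additions live on Span.
+    span = doc[0:9]
+    print(span.sentiment)            # numeric sentiment score for the span
+    for chunk in span.noun_chunks:   # noun chunks restricted to the span
+        print(chunk.text)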
+
+**🔴 Bug fixes**
+
+* Fix issue `#605 <https://github.com/explosion/spaCy/issues/605>`_: ``accept`` argument to ``Matcher`` now rejects matches as expected.
+* Fix issue `#617 <https://github.com/explosion/spaCy/issues/617>`_: ``Vocab.load()`` now works with string paths, as well as ``Path`` objects.
+* Fix issue `#639 <https://github.com/explosion/spaCy/issues/639>`_: Stop words in ``Language`` class now used as expected.
+* Fix issues `#656 <https://github.com/explosion/spaCy/issues/656>`_, `#624 <https://github.com/explosion/spaCy/issues/624>`_: ``Tokenizer`` special-case rules now support arbitrary token attributes (see the sketch after this list).
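+
+A rough sketch of a ``Tokenizer`` special-case rule that attaches token
+attributes beyond ``ORTH``, the behaviour covered by the fixes for #656 and
+#624 above. It assumes the ``Tokenizer.add_special_case`` API described in the
+"Customizing the tokenizer" workflow below; the rule string and attribute
+values are illustrative only.
+
+.. code:: python
+
+    from __future__ import unicode_literals, print_function
+    import spacy
+    from spacy.attrs import ORTH, LEMMA, TAG
+
+    nlp = spacy.load('en')
+
+    # Split "gimme" into two tokens, attaching a lemma and a tag to the
+    # first piece. Attributes other than ORTH are what the fixes above
+    # make usable in special-case rules.
+    nlp.tokenizer.add_special_case('gimme',
+        [{ORTH: 'gim', LEMMA: 'give', TAG: 'VB'},
+         {ORTH: 'me'}])
+
+    doc = nlp('gimme that book')
+    print([(w.orth_, w.lemma_, w.tag_) for w in doc])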
+
+
+**📖 Documentation and examples**
+
+* Add `"Customizing the tokenizer" `_ workflow.
+* Add `"Training the tagger, parser and entity recognizer" `_ workflow.
+* Add `"Entity recognition" `_ workflow.
+* Fix various typos and inconsistencies.
+
+**👥 Contributors**
+
+Thanks to `@pokey <https://github.com/pokey>`_, `@ExplodingCabbage <https://github.com/ExplodingCabbage>`_, `@souravsingh <https://github.com/souravsingh>`_, `@sadovnychyi <https://github.com/sadovnychyi>`_, `@manojsakhwar <https://github.com/manojsakhwar>`_, `@TiagoMRodrigues <https://github.com/TiagoMRodrigues>`_, `@savkov <https://github.com/savkov>`_, `@pspiegelhalter <https://github.com/pspiegelhalter>`_, `@chenb67 <https://github.com/chenb67>`_, `@kylepjohnson <https://github.com/kylepjohnson>`_, `@YanhaoYang <https://github.com/YanhaoYang>`_, `@tjrileywisc <https://github.com/tjrileywisc>`_, `@dechov <https://github.com/dechov>`_, `@wjt <https://github.com/wjt>`_, `@jsmootiv <https://github.com/jsmootiv>`_ and `@blarghmatey <https://github.com/blarghmatey>`_ for the pull requests!
+
+2016-11-04 `v1.2.0 <https://github.com/explosion/spaCy/releases/tag/v1.2.0>`_: *Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese*
+-----------------------------------------------------------------------------------------------------------------------------------------------------------
**✨ Major features and improvements**