.. image:: https://travis-ci.org/spacy-io/spaCy.svg?branch=master
    :target: https://travis-ci.org/spacy-io/spaCy

==============================
spaCy: Industrial-strength NLP
==============================

spaCy is a library for advanced natural language processing in Python and Cython.

Documentation and details: https://spacy.io/

spaCy is built on the very latest research, but it isn't researchware.  It was
designed from day 1 to be used in real products. It's commercial open-source
software, released under the MIT license.

Features
--------

* Labelled dependency parsing (91.8% accuracy on OntoNotes 5)

* Named entity recognition (82.6% accuracy on OntoNotes 5)

* Part-of-speech tagging (97.1% accuracy on OntoNotes 5)

* Easy-to-use word vectors

* All strings mapped to integer IDs

* Export to numpy data arrays

* Alignment maintained to the original string, ensuring easy mark-up calculation

* Range of easy-to-use orthographic features

* No pre-processing required: spaCy takes raw text as input, warts and newlines and all
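The "all strings mapped to integer IDs" feature can be sketched in plain Python. This is an illustrative toy, not spaCy's actual ``StringStore`` (which is implemented in Cython); it only shows the interning idea: each unique string gets one integer ID, and the mapping works in both directions.

.. code:: python

    class ToyStringStore(object):
        """Hypothetical sketch of string interning: string <-> integer ID."""

        def __init__(self):
            self._strings = []   # ID -> string
            self._ids = {}       # string -> ID

        def __getitem__(self, key):
            # Integer key: look up the string. String key: return its ID,
            # interning the string if it hasn't been seen before.
            if isinstance(key, int):
                return self._strings[key]
            if key not in self._ids:
                self._ids[key] = len(self._strings)
                self._strings.append(key)
            return self._ids[key]

    store = ToyStringStore()
    hello_id = store[u'hello']
    assert store[hello_id] == u'hello'   # an ID maps back to its string
    assert store[u'hello'] == hello_id   # repeated lookups reuse the same ID

Working with integer IDs instead of strings keeps feature extraction fast and memory-efficient, and is what makes exporting documents to numpy arrays practical.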

Top Performance
---------------

* Fastest in the world: <50ms per document.  No faster system has ever been
  announced.

* Accuracy within 1% of the current state of the art on all tasks performed
  (parsing, named entity recognition, part-of-speech tagging).  The only more
  accurate systems are an order of magnitude slower or more.

Supports
--------

* CPython 2.6, 2.7, 3.3, 3.4, 3.5 (64 bit only)
* OSX
* Linux
* Windows (Cygwin, MinGW, Visual Studio)


2016-05-0 v0.101.0: Fixed German model
--------------------------------------

* Fix bug that prevented German parses from being deprojectivised.

* Fix bugs in sentence boundary detection.

* Add rich comparison methods to the Lexeme class.

* Add missing Doc.has_vector and Span.has_vector properties.

* Add missing Span.sent property.


2016-05-05 v0.100.7: German!
----------------------------

spaCy finally supports another language, in addition to English. We're lucky to have Wolfgang Seeker on the team, and the new German model is just the beginning.
Now that there are multiple languages, you should consider loading spaCy via the ``spacy.load()`` function. This function also makes it easier to load extra word vector data for English:

.. code:: python

    import spacy
    en_nlp = spacy.load('en', vectors='en_glove_cc_300_1m_vectors')
    de_nlp = spacy.load('de')
    
To support use of the ``load()`` function, there are also two new helper functions: ``spacy.get_lang_class`` and ``spacy.set_lang_class``.
Once the German model is loaded, you can use it just like the English model:

.. code:: python

    doc = nlp(u'''Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie aus freien Inhalten, zu dem du mit deinem Wissen beitragen kannst. Seit Mai 2001 sind 1.936.257 Artikel in deutscher Sprache entstanden.''')
    
    for sent in doc.sents:
        print(sent.root.text, sent.root.n_lefts, sent.root.n_rights)
    
    # (u'ist', 1, 2)
    # (u'sind', 1, 3)
    
The German model provides tokenization, POS tagging, sentence boundary detection, syntactic dependency parsing, recognition of organisation, location and person entities, and word vector representations trained on a mix of open subtitles and Wikipedia data. It doesn't yet provide lemmatisation or morphological analysis, and it doesn't yet recognise numeric entities such as numbers and dates.

Bugfixes
--------

* spaCy < 0.100.7 had a bug in the semantics of the ``Token.__str__`` and ``Token.__unicode__`` built-ins: they included a trailing space.

* Improve handling of "infixed" hyphens. Previously the tokenizer struggled with words containing multiple hyphens, such as "well-to-do".

* Improve handling of periods after mixed-case tokens.

* Improve lemmatization for English special-case tokens.

* Fix bug that allowed spaces to be treated as heads in the syntactic parse.

* Fix bug that led to inconsistent sentence boundaries before and after serialisation.

* Fix bug when deserialising untagged documents.
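The trailing-space fix can be illustrated with a toy whitespace-preserving tokenizer. This is a plain-Python sketch, not spaCy's actual implementation: each token's text excludes trailing whitespace (the corrected ``Token.__str__`` behaviour), while enough information is kept to reconstruct the original string exactly, which is what keeps alignment and mark-up calculation easy.

.. code:: python

    import re

    def tokenize(text):
        """Return (token_text, trailing_whitespace) pairs covering the input."""
        return [(m.group(1), m.group(2)) for m in re.finditer(r'(\S+)(\s*)', text)]

    text = u'Hello world !'
    tokens = tokenize(text)

    # Token text no longer carries a trailing space...
    assert [t for t, ws in tokens] == [u'Hello', u'world', u'!']
    # ...but the original string can still be rebuilt exactly.
    assert u''.join(t + ws for t, ws in tokens) == text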