include ../../_includes/_mixins
p.u-text-large Let's say you're developing a proofreading tool, or possibly an IDE for writers. You're convinced by Stephen King's advice that #[a(href="http://www.brainpickings.org/2013/03/13/stephen-king-on-adverbs" target="_blank") adverbs are not your friend] so you want to #[strong highlight all adverbs]. We'll use one of the examples he finds particularly egregious:
+code.
>>> import spacy.en
>>> from spacy.parts_of_speech import ADV
>>> # Load the pipeline, and call it with some text.
>>> nlp = spacy.en.English()
>>> tokens = nlp(u"Give it back, he pleaded abjectly, its mine.", tag=True, parse=False)
>>> print(u''.join(tok.string.upper() if tok.pos == ADV else tok.string for tok in tokens))
Give it BACK, he pleaded ABJECTLY, its mine.
p Easy enough --- but the problem is that we've also highlighted "back". While "back" is undoubtedly an adverb, we probably don't want to highlight it. If what we're trying to do is flag dubious stylistic choices, we'll need to refine our logic. It turns out only a certain type of adverb is of interest to us.
p There are lots of ways we might do this, depending on just what words we want to flag. The simplest way to exclude adverbs like "back" and "not" is by word frequency: these words are much more common than the prototypical manner adverbs that the style guides are worried about.
p The #[code Lexeme.prob] and #[code Token.prob] attributes give a log probability estimate of the word:
+code.
>>> nlp.vocab[u'back'].prob
-7.403977394104004
>>> nlp.vocab[u'not'].prob
-5.407193660736084
>>> nlp.vocab[u'quietly'].prob
-11.07155704498291
p (The probability estimate is based on counts from a 3 billion word corpus, smoothed using the Simple Good-Turing method.)
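p To get an intuition for these numbers, you can convert a log probability back into an approximate corpus count. This is just a sanity-check sketch, and it assumes the estimates are natural logarithms:
+code.
    >>> from math import exp
    >>> # Expected counts in a 3 billion word corpus, assuming natural-log probabilities
    >>> exp(nlp.vocab[u'back'].prob) * 3e9      # roughly 1.8 million occurrences
    >>> exp(nlp.vocab[u'quietly'].prob) * 3e9   # roughly 47,000 occurrences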
p So we can easily exclude the N most frequent words in English from our adverb marker. Let's try N=1000 for now:
+code.
>>> import spacy.en
>>> from spacy.parts_of_speech import ADV
>>> nlp = spacy.en.English()
>>> # Find log probability of Nth most frequent word
>>> probs = [lex.prob for lex in nlp.vocab]
>>> probs.sort()
>>> is_adverb = lambda tok: tok.pos == ADV and tok.prob < probs[-1000]
>>> tokens = nlp(u"Give it back, he pleaded abjectly, its mine.")
>>> print(u''.join(tok.string.upper() if is_adverb(tok) else tok.string for tok in tokens))
Give it back, he pleaded ABJECTLY, its mine.
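p Sorting the whole probability list is fine at this scale, but you can get the same cut-off more directly. Here's a sketch of an equivalent way to compute #[code probs[-1000]], using the standard library's #[code heapq.nlargest]:
+code.
    >>> import heapq
    >>> # Log probability of the 1000th most frequent word, without a full sort
    >>> threshold = heapq.nlargest(1000, (lex.prob for lex in nlp.vocab))[-1]
    >>> is_adverb = lambda tok: tok.pos == ADV and tok.prob < threshold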
p There are lots of other ways we could refine the logic, depending on just what words we want to flag. Let's say we wanted to only flag adverbs that modified words similar to "pleaded". This is easy to do, as spaCy loads a vector-space representation for every word (by default, the vectors produced by Levy and Goldberg (2014)). Naturally, the vector is provided as a numpy array:
+code.
>>> pleaded = tokens[5]  # the token for "pleaded"
>>> pleaded.vector.shape
(300,)
>>> pleaded.vector[:5]
array([ 0.04229792, 0.07459262, 0.00820188, -0.02181299, 0.07519238], dtype=float32)
p We want to sort the words in our vocabulary by their similarity to "pleaded". There are lots of ways to measure the similarity of two vectors. We'll use the cosine metric:
+code.
>>> from numpy import dot
>>> from numpy.linalg import norm
>>> cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2))
>>> words = [w for w in nlp.vocab if w.has_vector]
>>> words.sort(key=lambda w: cosine(w.vector, pleaded.vector))
>>> words.reverse()
>>> print('1-20', ', '.join(w.orth_ for w in words[0:20]))
1-20 pleaded, pled, plead, confessed, interceded, pleads, testified, conspired, motioned, demurred, countersued, remonstrated, begged, apologised, consented, acquiesced, petitioned, quarreled, appealed, pleading
>>> print('50-60', ', '.join(w.orth_ for w in words[50:60]))
50-60 counselled, bragged, backtracked, caucused, refiled, dueled, mused, dissented, yearned, confesses
>>> print('100-110', ', '.join(w.orth_ for w in words[100:110]))
100-110 cabled, ducked, sentenced, perjured, absconded, bargained, overstayed, clerked, confided, sympathizes
>>> print('1000-1010', ', '.join(w.orth_ for w in words[1000:1010]))
1000-1010 scorned, baled, righted, requested, swindled, posited, firebombed, slimed, deferred, sagged
>>> print('50000-50010', ', '.join(w.orth_ for w in words[50000:50010]))
50000-50010 fb, ford, systems, puck, anglers, ik, tabloid, dirty, rims, artists
p As you can see, the similarity model that these vectors give us is excellent — we're still getting meaningful results at 1000 words, off a single prototype! The only problem is that the list really contains two clusters of words: one associated with the legal meaning of "pleaded", and one for the more general sense. Sorting out these clusters is an area of active research.
p A simple work-around is to average the vectors of several words, and use that as our target:
+code.
>>> say_verbs = ['pleaded', 'confessed', 'remonstrated', 'begged', 'bragged', 'confided', 'requested']
>>> say_vector = sum(nlp.vocab[verb].vector for verb in say_verbs) / len(say_verbs)
>>> words.sort(key=lambda w: cosine(w.vector, say_vector))
>>> words.reverse()
>>> print('1-20', ', '.join(w.orth_ for w in words[0:20]))
1-20 bragged, remonstrated, enquired, demurred, sighed, mused, intimated, retorted, entreated, motioned, ranted, confided, countersued, gestured, implored, interceded, muttered, marvelled, bickered, despaired
>>> print('50-60', ', '.join(w.orth_ for w in words[50:60]))
50-60 flaunted, quarrelled, ingratiated, vouched, agonized, apologised, lunched, joked, chafed, schemed
>>> print('1000-1010', ', '.join(w.orth_ for w in words[1000:1010]))
1000-1010 hoarded, waded, ensnared, clamoring, abided, deploring, shriveled, endeared, rethought, berate
p These definitely look like words that King might scold a writer for attaching adverbs to. Recall that our original adverb highlighting function looked like this:
+code.
>>> import spacy.en
>>> from spacy.parts_of_speech import ADV
>>> # Load the pipeline, and call it with some text.
>>> nlp = spacy.en.English()
>>> tokens = nlp("Give it back, he pleaded abjectly, its mine.",
tag=True, parse=False)
>>> print(''.join(tok.string.upper() if tok.pos == ADV else tok.string for tok in tokens))
Give it BACK, he pleaded ABJECTLY, its mine.
p We wanted to refine the logic so that only adverbs modifying evocative verbs of communication, like "pleaded", were highlighted. We've now built a vector that represents that type of word, so we can highlight adverbs based on subtler logic, homing in on the adverbs that seem most stylistically problematic, given our starting assumptions:
+code.
>>> import numpy
>>> from numpy import dot
>>> from numpy.linalg import norm
>>> import spacy.en
>>> from spacy.parts_of_speech import ADV, VERB
>>> cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2))
>>> def is_bad_adverb(token, target_verb, tol):
...     if token.pos != ADV:
...         return False
...     elif token.head.pos != VERB:
...         return False
...     elif cosine(token.head.vector, target_verb) < tol:
...         return False
...     else:
...         return True
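p To put the pieces together, here's a sketch of how you might call the function, reusing the averaged #[code say_vector] from above. The tolerance value of 0.5 is an arbitrary starting point; you'd tune it by eye against the ranked similarity lists:
+code.
    >>> nlp = spacy.en.English()
    >>> say_verbs = ['pleaded', 'confessed', 'remonstrated', 'begged', 'bragged', 'confided', 'requested']
    >>> say_vector = sum(nlp.vocab[verb].vector for verb in say_verbs) / len(say_verbs)
    >>> tokens = nlp(u"Give it back, he pleaded abjectly, its mine.")
    >>> # With a suitable tol, this should flag "abjectly" but leave "back" alone
    >>> print(u''.join(tok.string.upper() if is_bad_adverb(tok, say_vector, tol=0.5) else tok.string
    ...                for tok in tokens))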
p This example was somewhat contrived — and, truth be told, I've never really bought the idea that adverbs are a grave stylistic sin. But hopefully it got the message across: state-of-the-art NLP technologies are very powerful. spaCy gives you easy and efficient access to them, which lets you build all sorts of useful products and features that were previously impossible.