include ../_includes/_mixins

+lead.
    Because we've been doing the same thing for a long time, sometimes we get very used to talking about our work in words that most people don't use. So, here's another take on it, in the style of #[a(href="https://xkcd.com/thing-explainer/" target="_blank") thing explainer].

p When I was little, my favorite TV shows all had talking computers. Now I'm big and there are still no talking computers. At least, not really talking. We can make them, like, #[em say] things — but I want them to #[em tell us] things. And I want them to listen, and to read. Why is this so hard?

p It turns out that almost anything we say could mean many, many different things, but we don't notice, because almost all of those meanings would be weird or stupid or just not possible. If I say:

+example: a(href="http://spacy.io/demos/displacy?full=I%20saw%20a%20movie%20in%20a%20dress" target="_blank") I saw a movie in a dress

p Would you ever ask me,

+example.
    “Were you in the dress, or was the movie in the dress?”

p It's weird to even think of that. But a computer just might, because there are other cases like:

+example: a(href="http://spacy.io/demos/displacy?full=The%20TV%20showed%20a%20girl%20in%20a%20dress" target="_blank") The TV showed a girl in a dress

p Where the words hang together in the other way. People used to think that the answer was to tell the computer lots and lots of facts. But then you wake up one day and you're writing facts like #[em movies do not wear dresses], and you wonder where it all went wrong. Actually, it's even worse than that. Not only are there too many facts, most of them are not even really facts! #[a(href="https://en.wikipedia.org/wiki/Cyc" target="_blank") People really tried this]. We've found that the world is made up of #[em if]s and #[em but]s.

+aside('Unconstrained Vocabulary').
    If you have a fixed constraint like #[em People wear dresses], and #[em Movies are not people], how does the system cope when someone talks about #[em dressing a script]? Even if nobody has ever said this before, someone might in future. Language is creative, and exceptions are the rule.

p These days we just show the computer lots and lots and lots of words. We gave up trying to get it to understand what a “dress” is. We let #[em dress] be just some letters. But if it's seen around #[em girl] enough times (which is just some other letters, which are seen around some #[strong other] other letters), the computer can make good guesses.

p It doesn't always guess right, but we can tell how often it does, and we can think of ways to help it learn better. We have a number, and we can slowly make it bigger, a little bit by a little bit.

p (One thing I've learned is, people are great at making a number bigger, if you pay a lot of them to try. The key is to pick numbers where, if they make the number bigger, they can't help but have done something actually good. This is harder than it sounds. Some say no numbers are like this. I ask them to show me much good being done another way, but they never can.)

+aside("Goodhart's Law").
    The potential problem with focusing on a benchmark task is #[a(href="https://en.wikipedia.org/wiki/Goodhart%27s_law" target="_blank") Goodhart's Law]. The AI community is conscious of the problem and has done well at averting it.
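p If you want to see what “seen around” means in numbers, here is a very small sketch, in Python. It is not how real systems work (they use far bigger counts and much cleverer math), but it shows the heart of the trick: count which words show up near which other words, and use the counts to guess. The four little sentences are made up for the example.

+code.
    from collections import Counter
    from itertools import combinations

    # Four made-up sentences, standing in for lots and lots of words.
    sentences = [
        "a girl in a dress".split(),
        "the girl wore the dress".split(),
        "the movie was long".split(),
        "we saw a movie".split(),
    ]

    # Count how often each pair of words is seen in the same sentence.
    seen_with = Counter()
    for words in sentences:
        for pair in combinations(set(words), 2):
            seen_with[frozenset(pair)] += 1

    # "dress" is just some letters, but the counts already tell us that
    # those letters hang around "girl", and never around "movie".
    print(seen_with[frozenset(("dress", "girl"))])   # 2
    print(seen_with[frozenset(("dress", "movie"))])  # 0

p Real systems turn counts like these into lists of numbers for each word, so that words seen around the same words end up with numbers that are close together. That is where the good guesses come from.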
+pullquote("Instead of telling the computer facts, what we needed to do was tell it how to learn.") p The ideas we come up with for getting the computer to talk, listen or read a little better can be used to get it to see or plan a little better, and the other way around. Once we stopped telling it things like “#[em movies do not wear dresses]”, things really took off. p Each bit of work still only makes our numbers a little bit bigger, and the bigger the numbers go, the harder they are to raise. But that is a good problem to have. Now that computers can read quite well, I think we should be able to do pretty great things. What should we get them to read?