Commit Graph

9053 Commits

Author SHA1 Message Date
Matthew Honnibal aba4a7c7ea * Remove ptb3 file from setup 2014-09-25 18:41:25 +02:00
Matthew Honnibal bc460de171 * Add extra tests 2014-09-25 18:29:42 +02:00
Matthew Honnibal 93505276ed * Add German tokenizer files 2014-09-25 18:29:13 +02:00
Matthew Honnibal 2e44fa7179 * Add util.py 2014-09-25 18:26:22 +02:00
Matthew Honnibal c4cd3bc57a * Add prefix and suffix data files 2014-09-25 18:24:52 +02:00
Matthew Honnibal 2d4e5ceafd * Remove old docs stuff 2014-09-25 18:24:05 +02:00
Matthew Honnibal b15619e170 * Use PointerHash instead of locally provided _hashing module 2014-09-25 18:23:35 +02:00
Matthew Honnibal ed446c67ad * Add typedefs file 2014-09-17 23:10:32 +02:00
Matthew Honnibal 316a57c4be * Remove own memory classes, which have now been broken out into their own package 2014-09-17 23:10:07 +02:00
Matthew Honnibal ac522e2553 * Switch from own memory class to cymem, in pip 2014-09-17 23:09:24 +02:00
Matthew Honnibal 6266cac593 * Switch to using a Python ref counted gateway to malloc/free, to prevent memory leaks 2014-09-17 20:02:26 +02:00
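The "Python ref counted gateway to malloc/free" described in this commit (later broken out as the cymem package, per the commits above) can be sketched in pure Python with ctypes. The `Pool` class here is an illustrative assumption, not the project's actual implementation, and loading libc via `ctypes.CDLL(None)` assumes a POSIX platform:

```python
import ctypes

# On POSIX, CDLL(None) exposes the symbols of the running process,
# including libc's malloc and free.
_libc = ctypes.CDLL(None)
_libc.malloc.restype = ctypes.c_void_p
_libc.malloc.argtypes = [ctypes.c_size_t]
_libc.free.argtypes = [ctypes.c_void_p]

class Pool:
    """Owns every pointer it hands out; Python's reference counting
    then frees them all when the pool itself is collected."""

    def __init__(self):
        self._addresses = []

    def alloc(self, nbytes):
        addr = _libc.malloc(nbytes)
        if not addr:
            raise MemoryError(nbytes)
        self._addresses.append(addr)
        return addr

    def __del__(self):
        for addr in self._addresses:
            _libc.free(addr)
```

The point of the design: C-level code allocates through a pool object instead of calling malloc directly, so a forgotten free can no longer leak — the memory's lifetime is tied to a ref-counted Python object.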
Matthew Honnibal 5a20dfc03e * Add memory management code 2014-09-17 20:02:06 +02:00
Matthew Honnibal 0152831c89 * Refactor tokenization, enable cache, and ensure we look up specials correctly even when there's confusing punctuation surrounding the token. 2014-09-16 18:01:46 +02:00
Matthew Honnibal 143e51ec73 * Refactor tokenization, splitting it into a clearer life-cycle. 2014-09-16 13:16:02 +02:00
Matthew Honnibal c396581a0b * Fiddle with the way strings are interned in lexeme 2014-09-15 06:34:45 +02:00
Matthew Honnibal 0bb547ab98 * Fix memory error in cache, where entry wasn't being null-terminated. Various other changes, some good for performance 2014-09-15 06:34:10 +02:00
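The null-termination bug fixed in this commit is easy to reproduce in miniature: if a cache entry is a zero-terminated array of token ids and the writer forgets the terminator, the reader walks into whatever ids a previous, longer entry left behind in the slot. A purely illustrative sketch:

```python
ENTRY_SIZE = 8  # fixed-size slot; token ids are non-zero, 0 marks the end

def write_entry(slot, ids):
    for i, tok_id in enumerate(ids):
        slot[i] = tok_id
    slot[len(ids)] = 0  # the terminator the buggy version omitted

def read_entry(slot):
    ids = []
    for tok_id in slot:
        if tok_id == 0:
            break
        ids.append(tok_id)
    return ids
```

Without the terminator line, reusing a slot for a shorter entry leaves the old entry's tail visible to the reader — the classic stale-memory symptom.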
Matthew Honnibal 7959141d36 * Add a few abbreviations, to get tests to pass 2014-09-15 06:32:18 +02:00
Matthew Honnibal db191361ee * Add new tests for fancier tokenization cases 2014-09-15 06:31:58 +02:00
Matthew Honnibal 6fc06bfe2f * Hack a hard-cased unit in to get a test to pass 2014-09-15 06:31:35 +02:00
Matthew Honnibal d235299260 * Few nips and tucks to hash table 2014-09-15 05:03:44 +02:00
Matthew Honnibal e68a431e5e * Pass only the tokens vector to _tokenize, instead of the whole python object. 2014-09-15 04:01:38 +02:00
Matthew Honnibal 08cef75ffd * Switch to using a heap-allocated vector in tokens 2014-09-15 03:46:14 +02:00
Matthew Honnibal f77b7098c0 * Upd Tokens to use vector, with bounds checking. 2014-09-15 03:22:40 +02:00
Matthew Honnibal 0f6bf2a2ee * Fix niggling memory error, which was caused by bug in the way tokens resized their internal vector. 2014-09-15 02:08:39 +02:00
Matthew Honnibal 5dcc1a426a * Update tokenization tests for new tokenizer rules 2014-09-15 01:32:51 +02:00
Matthew Honnibal df24e3708c * Move EnglishTokens stuff to Tokens 2014-09-15 01:31:44 +02:00
Matthew Honnibal bd08cb09a2 * Remove short-circuiting of initial_size argument for PointerHash 2014-09-15 01:30:49 +02:00
Matthew Honnibal f3393cf57c * Improve interface for PointerHash 2014-09-13 17:29:58 +02:00
Matthew Honnibal 45865be37e * Switch hash interface, using void* instead of size_t, to avoid casts. 2014-09-13 17:02:06 +02:00
Matthew Honnibal 0447279c57 * PointerHash working, efficiency is good. 6-7 mins 2014-09-13 16:43:59 +02:00
Matthew Honnibal 85d68e8e95 * Replaced cache with own hash table. Similar timing 2014-09-13 03:14:43 +02:00
Matthew Honnibal c8db76e3e1 * Add initial work on simple hash table 2014-09-13 02:02:41 +02:00
Matthew Honnibal afdc9b7ac2 * More performance fiddling, particularly moving the specials into the cache, so that we can just lookup the cache in _tokenize 2014-09-13 00:59:34 +02:00
Matthew Honnibal 7d239df4c8 * Fiddle with declarations, for small efficiency boost 2014-09-13 00:31:53 +02:00
Matthew Honnibal a8e7cce30f * Efficiency tweaks 2014-09-13 00:14:05 +02:00
Matthew Honnibal 126a8453a5 * Fix performance issues by implementing a better cache. Add own String struct to help 2014-09-12 23:50:37 +02:00
Matthew Honnibal 9298e36b36 * Move special tokenization into its own lookup table, away from the cache. 2014-09-12 19:43:14 +02:00
Matthew Honnibal 985bc68327 * Fix bug with trailing punct on contractions. Reduced efficiency, and slightly hacky implementation. 2014-09-12 18:26:26 +02:00
Matthew Honnibal 7eab281194 * Fiddle with token features 2014-09-12 15:49:55 +02:00
Matthew Honnibal 5aa591106b * Fiddle with token features 2014-09-12 15:49:36 +02:00
Matthew Honnibal 1533041885 * Update the split_one method, so that it doesn't need to cast back to a Python object 2014-09-12 05:10:59 +02:00
Matthew Honnibal 4817277d66 * Replace main lexicon dict with dense_hash_map. May be unsuitable, if strings need recovery. 2014-09-12 04:29:09 +02:00
Matthew Honnibal 8b20e9ad97 * Delete unused _split method 2014-09-12 04:03:52 +02:00

Matthew Honnibal a4863686ec * Changed cache to use a linked-list data structure, to take out Python list code. Taking 6-7 mins for gigaword. 2014-09-12 03:30:50 +02:00
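The switch described here — replacing Python list buckets with C-style chained nodes — might look like this sketch (class and method names are hypothetical):

```python
class _Node:
    """One chained entry; in the C version this is a struct with a next pointer."""
    __slots__ = ("key", "value", "next")

    def __init__(self, key, value, next_node):
        self.key = key
        self.value = value
        self.next = next_node

class ChainedCache:
    """Fixed-size hash table whose buckets are singly linked lists."""

    def __init__(self, n_buckets=1024):
        self._buckets = [None] * n_buckets

    def _index(self, key):
        return hash(key) % len(self._buckets)

    def set(self, key, value):
        i = self._index(key)
        # Prepend: new entries sit at the head of the chain.
        self._buckets[i] = _Node(key, value, self._buckets[i])

    def get(self, key):
        node = self._buckets[self._index(key)]
        while node is not None:
            if node.key == key:
                return node.value
            node = node.next
        return None
```

The appeal over Python lists is that lookups and inserts touch only plain structs and pointers, with no Python-object overhead on the hot path.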
Matthew Honnibal 51e2006a65 * Increase cache size. Processing now 6-7 mins 2014-09-12 02:52:34 +02:00
Matthew Honnibal e096f30161 * Tweak signatures and refactor slightly. Processing gigaword taking 8-9 mins. Tests passing, but some sort of memory bug on exit. 2014-09-12 02:43:36 +02:00
Matthew Honnibal 073ee0de63 * Restore dense_hash_map for cache dictionary. Seems to double efficiency 2014-09-12 02:23:51 +02:00
Matthew Honnibal 3c928fb5e0 * Switch to 64 bit hashes, for better reliability 2014-09-12 02:04:47 +02:00
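The reliability gain from 64-bit hashes comes down to collision odds: by the birthday bound, the probability of at least one collision among n keys hashed into b bits is approximately 1 - exp(-n(n-1)/2^(b+1)). A quick check at a vocabulary of around a million distinct strings (the exact scale here is an assumption) shows why 32 bits was risky:

```python
import math

def collision_probability(n_keys, bits):
    """Birthday-bound approximation of P(at least one collision)
    among n_keys uniformly distributed hashes of the given width."""
    return -math.expm1(-n_keys * (n_keys - 1) / 2 ** (bits + 1))

# With ~1M distinct strings, a 32-bit hash collides almost surely,
# while a 64-bit hash makes any collision vanishingly unlikely.
p32 = collision_probability(1_000_000, 32)
p64 = collision_probability(1_000_000, 64)
```

Since the lexicon keys lexemes by hash, a single collision silently merges two different strings, so the wider hash buys correctness, not just comfort.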
Matthew Honnibal 2389bd1b10 * Improve cache mechanism by including a random element depending on the size of the cache. 2014-09-12 00:19:16 +02:00
Matthew Honnibal c8f7c8bfde * Moving to storing LexemeC structs internally 2014-09-11 21:54:34 +02:00