mirror of https://github.com/mahmoud/boltons.git
cacheutils docs updates
This commit is contained in:
parent 4981e96633
commit 76e47cd139
@@ -261,8 +261,8 @@ class LRI(dict):
     *on_miss* is a callable that accepts the missing key (as opposed
     to :class:`collections.defaultdict`'s "default_factory", which
-    accepts no arguments.) Also note that, unlike the :class:`LRU`,
-    the ``LRI`` is not yet instrumented with statistics tracking.
+    accepts no arguments.) Also note that, like the :class:`LRU`,
+    the ``LRI`` is instrumented with statistics tracking.
 
     >>> cap_cache = LRI(max_size=2)
     >>> cap_cache['a'], cap_cache['b'] = 'A', 'B'
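The docstring change above concerns two LRI features: the *on_miss* hook, which (unlike ``defaultdict``'s ``default_factory``) receives the missing key, and hit/miss statistics tracking. As a rough, simplified sketch only — not boltons' actual implementation, though the counter names follow its convention — an insertion-order cache with these features might look like:

```python
class MiniLRI(dict):
    """Sketch of a Least-Recently *Inserted* cache: when full, a new
    insertion evicts the oldest-inserted key, regardless of reads."""

    def __init__(self, max_size=128, on_miss=None):
        super().__init__()
        self.max_size = max_size
        self.on_miss = on_miss
        self.hit_count = self.miss_count = self.soft_miss_count = 0
        self._order = []  # keys in insertion order, oldest first

    def __setitem__(self, key, value):
        if key not in self:
            if len(self) >= self.max_size:
                oldest = self._order.pop(0)      # evict oldest insertion
                dict.__delitem__(self, oldest)
            self._order.append(key)
        dict.__setitem__(self, key, value)

    def __getitem__(self, key):
        try:
            value = dict.__getitem__(self, key)
        except KeyError:
            self.miss_count += 1
            if self.on_miss is None:
                raise
            self.soft_miss_count += 1            # miss handled by on_miss
            value = self.on_miss(key)            # on_miss gets the missing key
            self[key] = value
        else:
            self.hit_count += 1
        return value


# Usage: on_miss fills in missing values from the key itself.
cache = MiniLRI(max_size=2, on_miss=lambda k: k.upper())
first = cache['a']   # soft miss: computed by on_miss and cached
cache['b'] = 'B'
cache['c'] = 'C'     # at capacity: evicts 'a', the oldest insertion
```

Note that reads never reorder anything here — that read-side bookkeeping is exactly what distinguishes the LRU below.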
@@ -20,11 +20,11 @@ Least-Recently Used (LRU)
 -------------------------
 
 The :class:`LRU` is the more advanced cache, but it's still quite
-simple. When it reaches capacity, it replaces the least-recently used
-item. This strategy makes the LRU a more effective cache than the LRI
-for a wide variety of applications, but also entails more operations
-for all of its APIs, especially reads. Unlike the :class:`LRI`, the
-LRU has threadsafety built in.
+simple. When it reaches capacity, a new insertion replaces the
+least-recently used item. This strategy makes the LRU a more effective
+cache than the LRI for a wide variety of applications, but also
+entails more operations for all of its APIs, especially reads. Unlike
+the :class:`LRI`, the LRU has threadsafety built in.
 
 .. autoclass:: boltons.cacheutils.LRU
    :members:
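The revised docs above make two claims worth unpacking: LRU reads entail extra operations (each read must refresh the item's recency), and the LRU is threadsafe. A minimal sketch of both ideas — using ``OrderedDict`` and a coarse lock, not boltons' actual implementation — could look like:

```python
import threading
from collections import OrderedDict


class MiniLRU:
    """Sketch of a Least-Recently *Used* cache: reads refresh recency,
    so eviction targets the item untouched the longest."""

    def __init__(self, max_size=128):
        self.max_size = max_size
        self._lock = threading.RLock()   # coarse-grained threadsafety
        self._data = OrderedDict()       # least-recently used key first

    def __getitem__(self, key):
        with self._lock:
            value = self._data[key]
            self._data.move_to_end(key)  # the extra work reads entail
            return value

    def __setitem__(self, key, value):
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)
            elif len(self._data) >= self.max_size:
                self._data.popitem(last=False)  # evict least-recently used
            self._data[key] = value

    def __contains__(self, key):
        with self._lock:
            return key in self._data


# Usage: a read keeps 'a' alive; the LRI sketch above would evict it anyway.
lru = MiniLRU(max_size=2)
lru['a'] = 1
lru['b'] = 2
_ = lru['a']     # refreshes 'a', making 'b' the least-recently used
lru['c'] = 3     # evicts 'b', not 'a'
```

The lock makes each operation atomic across threads, which is the cost — and the point — of "threadsafety built in."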