
``cacheutils`` - Caches and caching
===================================

.. automodule:: boltons.cacheutils

Least-Recently Inserted (LRI)
-----------------------------

The :class:`LRI` is the simpler of the two caches, implementing a
first-in, first-out (FIFO) approach to cache eviction. If the use case
calls for simple, very-low-overhead caching of somewhat expensive local
operations (e.g., string operations), then the LRI is likely the right
choice.

.. autoclass:: boltons.cacheutils.LRI
   :members:
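
As a quick, hedged usage sketch (assuming the ``max_size`` keyword and
the dict-style access described above), FIFO eviction looks like this:

.. code-block:: python

    from boltons.cacheutils import LRI

    # An LRI acts like a bounded dict: once max_size is exceeded, the
    # oldest *insertion* is evicted, no matter how often it was read.
    cache = LRI(max_size=2)
    cache['a'] = 1
    cache['b'] = 2
    cache['c'] = 3   # over capacity: 'a', the earliest insertion, goes

    'a' in cache     # False; 'b' and 'c' remain
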
Least-Recently Used (LRU)
-------------------------

The :class:`LRU` is the more advanced cache, but it's still quite
simple. When it reaches capacity, a new insertion replaces the
least-recently used item. This strategy makes the LRU a more effective
cache than the LRI for a wide variety of applications, but it also
entails more operations for all of its APIs, especially reads. Unlike
the :class:`LRI`, the LRU has threadsafety built in.

.. autoclass:: boltons.cacheutils.LRU
   :members:
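
To make the contrast with the LRI concrete, here is a hedged sketch
(again assuming the ``max_size`` keyword and dict-style access); note
how a read refreshes an entry's recency:

.. code-block:: python

    from boltons.cacheutils import LRU

    cache = LRU(max_size=2)
    cache['a'] = 1
    cache['b'] = 2
    _ = cache['a']   # reading 'a' marks it as recently used
    cache['c'] = 3   # eviction removes 'b', now the least-recently used
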
Automatic function caching
--------------------------

Continuing in the theme of cache tunability and experimentation,
``cacheutils`` also offers a way to pluggably cache function return
values: the :func:`cached` function decorator.

.. autofunction:: boltons.cacheutils.cached

Similar functionality can be found in Python 3.2's
:func:`functools.lru_cache` decorator, but the functools approach does
not support swapping in a different cache strategy or sharing one cache
object across multiple functions.
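
For example, a single cache object can back several functions at once.
A sketch, assuming the decorator's default scoping keeps each
function's entries from colliding in the shared cache:

.. code-block:: python

    from boltons.cacheutils import LRU, cached

    shared = LRU(max_size=128)   # one eviction policy for both functions

    @cached(shared)
    def square(n):
        return n * n

    @cached(shared)
    def cube(n):
        return n * n * n

    square(3)   # computed once, then served from the shared LRU
    cube(3)
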
Threshold-bounded Counting
--------------------------

.. autoclass:: boltons.cacheutils.ThresholdCounter
:members:
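
A hedged usage sketch (assuming the ``threshold`` keyword and the
Counter-style ``add`` and ``most_common`` methods):

.. code-block:: python

    from boltons.cacheutils import ThresholdCounter

    # Counts a stream while periodically discarding items whose observed
    # frequency falls below the threshold, keeping memory bounded.
    tc = ThresholdCounter(threshold=0.1)
    for item in ['a'] * 50 + ['b'] * 40 + ['c']:
        tc.add(item)

    tc.most_common(1)   # 'a' dominates the stream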