
To each his own:

Add a one-line decorator to the naive, exponential, recursive nth Fibonacci number function to automatically memoize it using an LRU cache of size 3, reducing it to linear run-time.

    from functools import lru_cache

    @lru_cache(3)
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

Edit: I've received a few requests to explain how this code works.

Determining when one can benefit from memoizing a function often requires considerable thought. However, the implementation details of converting a non-memoized function to a memoized one are often completely mechanical. (Note: the details are slightly more nuanced than the code below suggests; see https://github.com/foodhype/memogen/blob/master/memogen.py for the gory details.)

Non-memoized version

    def some_function(some_args):
        if base_case:
            # Calculate base case result
            return some_result
        else:
            # Recursively calculate some result
            return some_result

Memoized version

    def some_function(some_args, some_cache):
        if some_args in some_cache:
            return some_cache[some_args]
        else:
            if base_case:
                # Calculate base case result
                return some_result
            else:
                # Recursively calculate some result
                some_cache[some_args] = some_result
                return some_result
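Because the transformation is mechanical, it can also be packaged as a decorator, so the function body never has to mention the cache at all. Here is a minimal sketch (the name `memoize` and the unbounded dict are illustrative; this is not the memogen implementation):

```python
def memoize(func):
    """Wrap func so results are remembered by argument tuple (unbounded, for illustration)."""
    cache = {}

    def wrapper(*args):
        if args not in cache:            # miss: compute once and remember
            cache[args] = func(*args)
        return cache[args]               # hit, or the freshly stored result

    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

This is essentially what `lru_cache` does, minus the capacity bound and eviction policy.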

Assume for convenience that we define a "fixed-capacity cache" to be a map that can store no more than C keys (where C is the capacity of the cache), and further assume that adding a new key to a full cache requires evicting some key-value pair to make room for the new entry. The criterion used to decide which entry to evict is known as the cache's "eviction policy". An LRU cache is a fixed-capacity cache whose eviction policy is to evict the "least recently used" entry.
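For concreteness, a toy LRU cache can be sketched with `collections.OrderedDict`, which remembers insertion order and lets us move a key to the "most recent" end on each access (the class name and API here are made up for illustration):

```python
from collections import OrderedDict

class LRUCache:
    """Toy fixed-capacity cache with a least-recently-used eviction policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()       # least recently used key first

    def get(self, key):
        # Raises KeyError on a miss; a real cache would handle that case.
        self.entries.move_to_end(key)      # mark key as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used entry
```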

In Python 3, the functools.lru_cache decorator converts a non-memoized function into a memoized version backed by an LRU cache.

It turns out that keeping only the three "most recently used" Fibonacci results at any given time is enough to avoid recomputing fib for the same n: fib(n) depends only on n itself (if n < 2) or on fib(n - 1) and fib(n - 2) (if n >= 2), so an LRU cache of size 3 suffices. Since no value of fib is ever recomputed, and fib(n) depends on at most fib(0), fib(1), ..., fib(n - 1), the run-time is O(n).
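One way to see the linear behavior directly is `cache_info()`, which lru_cache attaches to the decorated function. Under the assumption that `fib(n - 1) + fib(n - 2)` evaluates left to right, each of fib(0) through fib(n) is computed exactly once (one miss apiece), and each second recursive lookup is served from the size-3 cache:

```python
from functools import lru_cache

@lru_cache(3)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
# cache_info() reports hits, misses, maxsize, and current size.
print(fib.cache_info())
```

For fib(30) this shows 31 misses (one per distinct argument 0..30) rather than the exponential call count of the undecorated version.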

The funny thing is that this isn't even close to the most efficient method, because the recursion still uses O(n) stack space. A trivial iterative method achieves O(n) run-time in O(1) space, and the run-time can be further reduced to O(log(n)) with O(1) space by using Binet's formula with the floating-point precision configured to account for estimation error (e.g. https://github.com/foodhype/common_algorithms/blob/master/fib.py).
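For reference, the trivial iterative version mentioned above might look like this (a sketch; the function name `fib_iter` is mine, and this is not the code at the linked repo):

```python
def fib_iter(n):
    """Compute the nth Fibonacci number in O(n) time and O(1) space."""
    a, b = 0, 1                 # a = fib(0), b = fib(1)
    for _ in range(n):
        a, b = b, a + b         # slide the two-value window forward
    return a
```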

The beauty here is in the power of a single line of code.
