Re: [Haskell-cafe] Reduction Sequence of simple Fibonacci sequence implementation

wren ng thornton
Mon Aug 31 02:00:04 EDT 2009

staafmeister wrote:
> The overhead is an O(1) overhead for the function, because it needs to
> check whether a computation has already been performed. And the space
> overhead is not so big, because for every data object in memory there
> are only a couple of references to be stored in lookup tables.
> So although there is space overhead, it is not excessive.

Actually, having worked on the design of the storage layer for a 
language that does automatic pervasive memoization (Dyna2), I can tell 
you that the space overhead for such languages can be incredibly large.

Determining whether you already have a memo can take much longer than 
O(1) when you have so many of them. Determining when it's worthwhile to 
memoize a result versus recalculating it should it be needed again; 
determining which memos should be flushed when memory pressure gets too 
high; determining the most efficient storage representation for a 
particular memotable... all of these are extremely difficult questions, 
and still open questions in the community. The databases community is 
still alive and kicking, and they only have to deal with the easier 
parts of these problems.

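(Not from the original post, but to make the lookup cost concrete: here 
is a minimal sketch of explicit memoization where the memo table is a 
Data.Map threaded through the computation. The names fibMemo/fib are my 
own; the point is that each "have we computed this already?" check is an 
O(log n) Map lookup in the number of memos, not O(1).)

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- Fibonacci with an explicitly threaded memo table. Every call first
-- consults the table; that lookup costs O(log n) in the table size.
fibMemo :: Integer -> Map Integer Integer -> (Integer, Map Integer Integer)
fibMemo n table
  | n < 2     = (n, table)
  | otherwise =
      case Map.lookup n table of
        Just v  -> (v, table)              -- memo hit: no recomputation
        Nothing ->
          let (a, t1) = fibMemo (n - 1) table
              (b, t2) = fibMemo (n - 2) t1
              v       = a + b
          in (v, Map.insert n v t2)        -- memo miss: compute and store

fib :: Integer -> Integer
fib n = fst (fibMemo n Map.empty)
```

And this is the easy case: one function, one argument type, one table.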
To see why it takes so much space, consider this: for some value, v, how 
many functions can accept v as an argument? That is the upper limit on 
the number of memos you need to keep while v is live. And since we're 
referentially transparent, a value (as opposed to a variable) is live as 
long as its type is live: the result of f on this v will be the same as 
on every other equal v, and as long as the type of v lives we can 
produce another such v, so we must keep the memo.


Memoization in the small is easy. There are a number of different ways 
to go about it, and they all get the job done.
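(Again not from the original post, but for illustration: one of the 
standard "in the small" tricks in Haskell is to back a single function 
with a lazily shared list of its results. A minimal sketch:)

```haskell
-- Memoization in the small: memoFib indexes into a lazy list whose
-- elements are computed on demand and then shared, so each fib value
-- is computed at most once.
memoFib :: Int -> Integer
memoFib = (fibs !!)
  where
    fibs = map go [0 ..]   -- one thunk per argument, forced at most once
    go 0 = 0
    go 1 = 1
    go n = memoFib (n - 1) + memoFib (n - 2)
```

This works precisely because it is local: one function, a dense integer 
domain, and a table whose lifetime the programmer controls.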

But once you make it pervasive, the story changes. It's like managing 
memory: in the small you can do it manually, but once the idea of 
pervasive allocation shows up (either from OOP or from closures), doing 
it manually is a good way to ensure your project fails.

And once you make it automatic, the story changes again. Here, a better 
analogy is with pervasive laziness. Having pervasive laziness and making 
it automatic/invisible to the coder is easy. Making it efficient enough 
to be invisible to the user is another matter entirely. The key to 
automating pervasive laziness is to find out when you can get away with 
not doing it; and the key to automating memoization is the same. Of 
course, memoization has the additional problem that, when you can't get 
rid of it, you need to choose one of many different models and data 
structures for storing your memos; and those choices circle back to 
influence whether you should even bother memoizing.
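(A small illustration, mine rather than the original author's, of why 
the representation choice matters even in toy cases: for a dense integer 
domain a flat array beats a search tree, but it only works when the 
argument range is known up front. The name fibArray is hypothetical.)

```haskell
import Data.Array

-- Dense-domain memo table: a lazy boxed array keyed by the argument.
-- Lookups are O(1), but the domain 0..bound must be fixed in advance;
-- for a sparse or unbounded domain this representation wastes space
-- or simply doesn't apply, and a Map or trie is the better model.
fibArray :: Int -> Int -> Integer
fibArray bound = go
  where
    memo = listArray (0, bound) [compute i | i <- [0 .. bound]]
    compute 0 = 0
    compute 1 = 1
    compute i = go (i - 1) + go (i - 2)
    go i = memo ! i   -- forces the thunk once; later hits are free
```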

Live well,
