
Hi Michael:
This is an interesting question, and I think you might indeed be able to
"hack" into the recursive calls if you are willing to be explicit with your
fixed-point calculation. Note that you'll have to *do* the work of keeping
track of "visited" nodes, so to speak; there's no free lunch. But the
construction of the table in a monadic form can be incorporated using
what's known as "monadic memoization mixins." See this paper by Brown and
Cook for details:
https://www.cs.utexas.edu/users/wcook/Drafts/2009/sblp09-memo-mixins.pdf
The idea is to detour through a memo-routine before each recursive call to
do the bookkeeping necessary. The underlying monad keeps track of the data
you are memoizing.
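Very roughly, the shape is something like this (just a sketch of the idea, not the paper's exact formulation, using a State monad over a Data.Map as the memo table and Fibonacci as a stand-in for the recursion):

import qualified Data.Map as Map
import Control.Monad.State

-- An "open" recursive function: the recursive call is a parameter.
openFib :: Monad m => (Integer -> m Integer) -> Integer -> m Integer
openFib _    n | n < 2 = return n
openFib self n         = (+) <$> self (n - 1) <*> self (n - 2)

-- The memoizing mixin: consult the table before delegating to the
-- open-recursive function, and record the result afterwards.
memoMixin :: Ord a
          => ((a -> State (Map.Map a b) b) -> a -> State (Map.Map a b) b)
          -> a -> State (Map.Map a b) b
memoMixin f = go
  where
    go x = do
      table <- get
      case Map.lookup x table of
        Just y  -> return y
        Nothing -> do
          y <- f go x
          modify (Map.insert x y)
          return y

memoFib :: Integer -> Integer
memoFib n = evalState (memoMixin openFib n) Map.empty

The point is that openFib never mentions the table; the mixin inserts the bookkeeping around each recursive call, and your "visited" marker would live in exactly that bookkeeping step.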
However, before diving into memo-mixins, you might want to first implement
the algorithm with an explicit "visited" table in monadic style, just to
appreciate what the mixins will buy you. For fun, I coded your minimize
example in idiomatic Haskell here:
https://gist.github.com/LeventErkok/615afe9a0e24249daedab3c623241275
Enjoy,
-Levent.
On Sat, Dec 10, 2016 at 8:50 AM, Michael George wrote:
I've come across a slight variant of memoization, and I'm curious whether there is a better way to think about it (I have yet to reach Haskell Enlightenment), or whether it has been thought about before.
I'm writing a program that does DFA minimization, as described here: https://www.tutorialspoint.com/automata_theory/dfa_minimization.htm.
For me, the simplest way to understand the algorithm is that we form an equivalence relation, where two states are different if either one is final and the other isn't, or if there are transitions from them to states that are different:
different a b = isFinal a && not (isFinal b)
             || isFinal b && not (isFinal a)
             || exists chars (\c -> different (step a c) (step b c))
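(Here isFinal, step, chars, and exists are whatever the DFA at hand provides; just so the snippets below have something concrete to refer to, imagine a toy setup along these lines.)

type DFAState = Int

-- A toy DFA over the alphabet {'a','b'}; state 2 is the only accepting state.
states :: [DFAState]
states = [0, 1, 2]

chars :: [Char]
chars = "ab"

isFinal :: DFAState -> Bool
isFinal s = s == 2

step :: DFAState -> Char -> DFAState
step 0 'a' = 1
step 0 'b' = 0
step 1 'a' = 1
step 1 'b' = 2
step 2 'a' = 1
step 2 'b' = 0
step s _   = s  -- catch-all, unreachable for this alphabet and state set

-- exists, as used above: does some element satisfy the predicate?
exists :: [a] -> (a -> Bool) -> Bool
exists xs p = any p xs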
This definition diverges; it never decides that two states are the same. The standard algorithm instead builds a table: it starts with the assumption that all states are the same and iteratively applies this definition until it converges.
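Concretely, that iteration could be rendered naively like this (using the toy DFA above; the set holds the pairs currently marked as different, and we iterate upward from the empty table until nothing changes):

import qualified Data.Set as Set

-- One pass of the table-filling rule: a pair gets marked if one state is
-- final and the other isn't, or if some character leads to a marked pair.
refine :: Set.Set (DFAState, DFAState) -> Set.Set (DFAState, DFAState)
refine marked = Set.fromList
  [ (a, b)
  | a <- states, b <- states
  , isFinal a /= isFinal b
      || any (\c -> (step a c, step b c) `Set.member` marked) chars
  ]

-- Iterate from "no pair is different" up to a fixed point; what remains
-- unmarked at the end is the equivalence we want.
differentPairs :: Set.Set (DFAState, DFAState)
differentPairs = go Set.empty
  where
    go marked
      | marked' == marked = marked
      | otherwise         = go marked'
      where marked' = refine marked

There are obviously faster ways to do this; the point is just the shape of the fixed-point computation.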
I'm thinking of that algorithm as a memoized version of the implementation of different above. But it's not quite:
different = memoFix2 different'
  where
    different' different'' a b =
         isFinal a && not (isFinal b)
      || isFinal b && not (isFinal a)
      || exists chars (\c -> different'' (step a c) (step b c))
for the same reason: it diverges. But with memoization I can detect the divergence by putting a marker in the memo table. If the computation makes a recursive call on an input that has been started but not yet computed, then (in this case) I want to return False.
In general, I think what I want is
memoFixWithDefault :: (a -> b) -> ((a -> b) -> (a -> b)) -> (a -> b)
Where (memoFixWithDefault default f) is like (memoFix f) except that if (memoFix f a) would loop, then (memoFixWithDefault default f a) = default a.
I suspect there's a nice denotational way of describing this, but my denotational semantics is rusty (maybe it's the least fixed point of f that is greater than g or something, but that's not it). I suspect that there needs to be a nice relationship between f and g for memoFixWithDefault f g to be sensible. I also suspect that there's a cute 3-line implementation like all the other implementations of memoize out there, but I don't see it. I also suspect that depth-first search has a cute implementation using this combinator.
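To make the idea concrete, here is roughly the shape I have in mind, though I had to make the open recursion monadic to thread the table, so the type isn't quite the one above, and "started but not finished" is only an approximation of "would loop":

import qualified Data.Map as Map
import Control.Monad.State

-- Nothing means "started but not finished"; Just y is a memoized result.
type Memo a b = Map.Map a (Maybe b)

memoFixWithDefaultM :: Ord a
                    => (a -> b)   -- the default, used on re-entrant calls
                    -> ((a -> State (Memo a b) b) -> a -> State (Memo a b) b)
                    -> a -> b
memoFixWithDefaultM def f x0 = evalState (go x0) Map.empty
  where
    go x = do
      table <- get
      case Map.lookup x table of
        Just (Just y) -> return y            -- already computed: reuse it
        Just Nothing  -> return (def x)      -- in progress: this call would loop
        Nothing       -> do
          modify (Map.insert x Nothing)      -- mark as started
          y <- f go x
          modify (Map.insert x (Just y))     -- record the finished result
          return y

For different (uncurried to pairs) the default would be const False, i.e. treat a pair as "same" until proven different, which is exactly the table-filling assumption. Results cached under such an in-progress assumption are only trustworthy when f and the default are related nicely, which is the relationship I can't quite pin down.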
So I was wondering if other people had thoughts or references before I went off and thought about it.
Thanks!
-M