
Hi!

There is the MonadFix class with the mfix method. However, there are situations where you need a fixed point operator of type a -> f a for some f, but f is not necessarily a monad. What about adding a FunctorFix class that is identical to MonadFix, except that it has a Functor, not a Monad, superclass constraint?

All the best,
Wolfgang
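For concreteness, a minimal sketch of the proposed class, mirroring mfix's type (this sketch is not part of the original message):

class Functor f => FunctorFix f where
    ffix :: (a -> f a) -> f a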

I assume you want to impose the MonadFix sliding law,
ffix (fmap h . f) = fmap h (ffix (f . h)), for strict h.
Do you also want the nesting law?
ffix (\x -> ffix (\y -> f x y)) = ffix (\x -> f x x)
Are there any other laws you'd like to add in place of the seemingly irrelevant purity and left shrinking laws?
Can you give some sample instances and how one might use them?
On Wed, Aug 30, 2017 at 2:59 PM, Wolfgang Jeltsch wrote:

Hi!

Both the sliding law and the nesting law seem to make sense for FunctorFix. The other two laws seem to fundamentally rely on the existence of return (purity law) and (>>=) (left shrinking).

However, there is also the scope change law, mentioned on page 19 of Levent Erkok’s thesis (http://digitalcommons.ohsu.edu/etd/164/). This law can be formulated based on fmap, without resorting to return and (>>=). Levent proves it using all four MonadFix axioms. I do not know whether it is possible to derive it just from sliding, nesting, and the Functor laws, or whether we would need to require it explicitly.

Every type that is an instance of MonadFix should be an instance of FunctorFix, with ffix being the same as mfix. At the moment, I cannot come up with a FunctorFix instance that is not an instance of Monad.

My desire for FunctorFix comes from my work on the new version of the incremental-computing package. In this package, I have certain operations that were supposed to work for all functors. I found out that I need these functors to have mfix-like operations, but I do not want to impose a Monad constraint on them, because I do not need return or (>>=).

All the best,
Wolfgang

On Wednesday, 30 Aug 2017, at 16:30 -0400, David Feuer wrote:

I think that in addition to nesting and sliding, we should have the
following law:
ffix (\x -> fmap (f x) g) = fmap (\y -> fix (\x -> f x y)) g
I guess I'd call this the "pure left shrinking" law because it is the
composition of left shrinking and purity:
ffix (\x -> fmap (f x) g)
=
ffix (\x -> g >>= \y -> return (f x y))
= {left shrinking}
g >>= \y -> ffix (\x -> return (f x y))
= {purity}
g >>= \y -> return (fix (\x -> f x y))
=
fmap (\y -> fix (\x -> f x y)) g
This is powerful enough to prove the scope change law, but is
significantly simpler:
ffix (\~(a, b) -> fmap (\a' -> (a', h a' a b)) (f a))
=
ffix (\t -> fmap (\a' -> (a', h a' (fst t) (snd t))) (f (fst t)))
= {nesting}
ffix (\t1 -> ffix (\t2 -> fmap (\a' -> (a', h a' (fst t1) (snd t1))) (f (fst t2))))
=
ffix (\~(a, b) -> ffix (fmap (\a' -> (a', h a' a b)) . f . fst))
= {sliding}
ffix (\~(a, b) -> fmap (\a' -> (a', h a' a b)) (ffix (f . fst . (\a' -> (a', h a' a b)))))
=
ffix (\~(a, b) -> fmap (\a' -> (a', h a' a b)) (ffix f))
= {pure left shrinking}
fmap (\a' -> fix (\~(a, b) -> (a', h a' a b))) (ffix f)
Moreover, it seems necessary to prove that ffix interacts well with
constant functions:
ffix (const a)
=
ffix (\_ -> fmap id a)
=
fmap (\y -> fix (\_ -> id y)) a
=
fmap id a
=
a
In addition, when the functor in question is in fact a monad, it implies purity:
ffix (return . f)
=
ffix (\x -> return (f x))
=
ffix (\x -> fmap (\_ -> f x) (return ()))
=
fmap (\_ -> fix (\x -> f x)) (return ())
=
return (fix f)
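As a concrete sanity check (a made-up example, not part of the proof above), the pure left shrinking law can be tested for Maybe with mfix standing in for ffix:

import Control.Monad.Fix (mfix)
import Data.Function (fix)

f :: [Int] -> Int -> [Int]
f xs y = y : take 2 xs

g :: Maybe Int
g = Just 7

lhs, rhs :: Maybe [Int]
lhs = mfix (\x -> fmap (f x) g)          -- Just [7,7,7]
rhs = fmap (\y -> fix (\x -> f x y)) g   -- Just [7,7,7]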
Sincerely,
Jonathan
On Fri, Sep 1, 2017 at 4:49 PM, Wolfgang Jeltsch wrote:

Jonathan, thanks a lot for working this out. Impressive!

So we want the following laws for FunctorFix:

Pure left shrinking:
ffix (\x -> fmap (f x) g) = fmap (\y -> fix (\x -> f x y)) g

Sliding:
ffix (fmap h . f) = fmap h (ffix (f . h)), for strict h

Nesting:
ffix (\x -> ffix (\y -> f x y)) = ffix (\x -> f x x)

Levent Erkok’s thesis also mentions a strictness law for monadic fixed points, which is not mentioned in the documentation of Control.Monad.Fix. It goes as follows:

Strictness:
f ⊥ = ⊥ ⇔ mfix f = ⊥

Does this hold automatically, or did the designers of Control.Monad.Fix consider it inappropriate to require this?

All the best,
Wolfgang

On Saturday, 2 Sep 2017, at 14:08 -0500, Jonathan S wrote:

As long as we're going down this path, we should also consider
ApplicativeFix. All the laws except left shrinking make immediate
sense in that context. That surely has a law or two of its own. For
example, I'd expect that
afix (\x -> a *> f x) = a *> afix f
I don't know if it has anything more interesting.
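A minimal sketch of what such a class could look like, by analogy with FunctorFix (the name afix matches the law above; this is not a settled definition):

class Applicative f => ApplicativeFix f where
    afix :: (a -> f a) -> f a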
On Tue, Sep 5, 2017 at 6:11 PM, Wolfgang Jeltsch wrote:

I've structured this email with a bunch of sections because it is a
bit overly long.
# A replacement for `FunctorFix`
After thinking about this problem a bit more, I'm actually thinking
that we might want a slightly stronger class (bikeshedding is
welcome):
class Functor f => Unapply f where
    -- | This has a single law:
    -- > fmap (\g -> g x) (unapply f) == f x
    unapply :: (a -> f b) -> f (a -> b)

ffix :: Unapply f => (a -> f a) -> f a
ffix = fmap fix . unapply
For efficiency, we'd probably want to add more methods to the class;
I'll get back to that.
The semantics of unapply is to take a function that produces a
container of some fixed "shape" and fill that shape with functions
that extract the result at the given position. Since that isn't a
clear description at all, here is an example:
unapply (\x -> [f x, g x]) == [f, g]
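Correspondingly, the derived ffix on lists would behave as follows (my own illustration, using the list instance given just below):

ffix (\x -> [0 : x, 1 : x])
  = fmap fix (unapply (\x -> [0 : x, 1 : x]))
  = fmap fix [\x -> 0 : x, \x -> 1 : x]
  = [repeat 0, repeat 1]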
The implementations of this class follow the same general pattern. The
input function is evaluated at bottom to figure out the correct shape,
and then that shape is filled with copies of the input function
composed with projection functions. For example:
instance Unapply [] where
    unapply f = case f (error "Strict function passed to unapply") of
        []    -> []
        (_:_) -> head . f : unapply (tail . f)
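Following the same pattern, a sketch of an instance for Maybe (my guess; it is not given in this message) could be:

instance Unapply Maybe where
    unapply f = case f (error "Strict function passed to unapply") of
        Nothing -> Nothing
        Just _  -> Just (unJust . f)
      where
        unJust (Just x) = x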
Notably, while the only interesting new instance I figured out for
FunctorFix was for the sum of functors (:+:), I couldn't figure out a
way to implement FunctorFix for the composition of two functors (:.:).
Unapply, however, clearly is closed under functor composition:
instance (Unapply f, Unapply g) => Unapply (f :.: g) where
    unapply f = Comp1 (fmap unapply (unapply (unComp1 . f)))
# Proofs
Now, this simple function with its one law is enough to derive *all*
the laws we want for ffix (or even afix or mfix), using the fact that
forall x. fmap (\g -> g x) u = fmap (\g -> g x) v
implies u = v (up to the existence of `seq`, that is; I think things
still work out with `seq` in the picture, but they get a lot messier).
## Lemmas
### Left lemma
Forall x,
fmap (\g -> g x) (unapply (fmap h . f))
= {constant application}
fmap h (f x)
= {constant application}
fmap h (fmap (\g -> g x) (unapply f))
=
fmap (\g -> h (g x)) (unapply f)
=
fmap (\g' -> g' x) (fmap (h .) (unapply f))
Therefore,
unapply (fmap h . f) = fmap (h .) (unapply f)
### Right lemma
Forall x,
fmap (\g -> g x) (unapply (f . h))
= {constant application}
f (h x)
= {constant application}
fmap (\g -> g (h x)) (unapply f)
=
fmap (\g' -> g' x) (fmap (. h) (unapply f))
Therefore,
unapply (f . h) = fmap (. h) (unapply f)
### Nesting lemma
Forall x,
fmap (\g' -> g' x) (fmap (\g y -> g y y) (unapply (unapply . f)))
=
fmap (\g -> g x x) (unapply (unapply . f))
=
fmap (\g' -> g' x) (fmap (\g -> g x) (unapply (unapply . f)))
= {constant application}
fmap (\g' -> g' x) (unapply (f x))
= {constant application}
f x x
=
(\y -> f y y) x
= {constant application}
fmap (\g -> g x) (unapply (\y -> f y y))
Therefore,
fmap (\g y -> g y y) (unapply (unapply . f)) = unapply (\y -> f y y)
### Inner application lemma
Forall f,
fmap (\g' -> g' f) (unapply (\g -> fmap g u))
= {constant application}
fmap f u
=
fmap (\applyx -> applyx f) (fmap (\x g -> g x) u)
Therefore,
unapply (\g -> fmap g u) = fmap (\x g -> g x) u
### Join lemma
Forall x,
fmap (\g -> g x) (join (fmap unapply (unapply f)))
=
join (fmap (fmap (\g -> g x)) (fmap unapply (unapply f)))
=
join (fmap (\h -> fmap (\g -> g x) (unapply h)) (unapply f))
= {constant application}
join (fmap (\h -> h x) (unapply f))
= {constant application}
join (f x)
= {constant application}
fmap (\g -> g x) (unapply (join . f))
Therefore,
join (fmap unapply (unapply f)) = unapply (join . f)
## Strictness
f ⊥ = ⊥
⇔ {constant application}
fmap (\g -> g ⊥) (unapply f) = ⊥
⇔ {strictness of `fmap`}
unapply f = ⊥
⇔ {strictness of `fmap`}
fmap fix (unapply f) = ⊥
⇔ {definition of `ffix`}
ffix f = ⊥
## Sliding
ffix (fmap h . f)
= {definition of `ffix`}
fmap fix (unapply (fmap h . f))
= {left lemma}
fmap fix (fmap (h .) (unapply f))
=
fmap (\g -> fix (h . g)) (unapply f)
= {sliding (`fix`)}
fmap (\g -> h (fix (g . h))) (unapply f)
=
fmap h (fmap fix (fmap (. h) (unapply f)))
= {right lemma}
fmap h (fmap fix (unapply (f . h)))
= {definition of `ffix`}
fmap h (ffix (f . h))
## Nesting
ffix (\x -> ffix (\y -> f x y))
= {definition of `ffix`}
fmap fix (unapply (\x -> fmap fix (unapply (\y -> f x y))))
=
fmap fix (unapply (fmap fix . unapply . f))
= {left lemma}
fmap fix (fmap (fix .) (unapply (unapply . f)))
=
fmap (\g -> fix (\x -> fix (\y -> g x y))) (unapply (unapply . f))
= {nesting (`fix`)}
fmap (\g -> fix (\x -> g x x)) (unapply (unapply . f))
=
fmap fix (fmap (\g x -> g x x) (unapply (unapply . f)))
= {nesting lemma}
fmap fix (unapply (\x -> f x x))
= {definition of `ffix`}
ffix (\x -> f x x)
## Pure left shrinking
ffix (\x -> fmap (f x) u)
= {definition of `ffix`}
fmap fix (unapply (\x -> fmap (f x) u))
=
fmap fix (unapply ((\g -> fmap g u) . f))
= {right lemma}
fmap fix (fmap (. f) (unapply (\g -> fmap g u)))
= {inner application lemma}
fmap fix (fmap (. f) (fmap (\y g -> g y) u))
=
fmap (\y -> fix (\x -> f x y)) u
## Left shrinking
ffix (\x -> a >>= \y -> f x y)
= {definition of `ffix`}
fmap fix (unapply (\x -> a >>= \y -> f x y))
=
fmap fix (unapply ((a >>=) . f))
= {right lemma}
fmap fix (fmap (. f) (unapply (\g -> a >>= g)))
=
fmap fix (fmap (. f) (unapply (join . (\g -> fmap g a))))
= {join lemma}
fmap fix (fmap (. f) (join (fmap unapply (unapply (\g -> fmap g a)))))
= {inner application lemma}
fmap fix (fmap (. f) (join (fmap unapply (fmap (\y g -> g y) a))))
=
join (fmap (fmap fix . fmap (. f) . unapply . (\y g -> g y)) a)
=
a >>= \y -> fmap fix (fmap (. f) (unapply (\g -> g y)))
= {right lemma}
a >>= \y -> fmap fix (unapply ((\g -> g y) . f))
=
a >>= \y -> fmap fix (unapply (\x -> f x y))
= {definition of `ffix`}
a >>= \y -> ffix (\x -> f x y)
# Efficiency
I should preface this section by saying that I haven't actually done
any benchmarking or profiling.
Unfortunately, the Unapply solution seems to be slightly slower than
directly implementing FunctorFix, for two reasons. First, with
Unapply, the call to ffix is split into two pieces, first constructing
the resultant data structure and then mapping over that to calculate
fixed points. Especially since unapply may be recursive and might not
be inlined, this can result in intermediate data structures and
slowness. This is easily fixed. Instead of implementing unapply
directly, we can add a new method to the class,
unapplyMap :: ((a -> b) -> c) -> (a -> f b) -> f c
defined by
unapplyMap f = fmap f . unapply
unapply = unapplyMap id
Finally, we just implement unapplyMap directly instead of using
unapply and use unapplyMap in the definition of ffix.
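A sketch of what the extended list instance might then look like (my own guess, together with the corresponding ffix):

instance Unapply [] where
    unapply = unapplyMap id
    unapplyMap k f = case f (error "Strict function passed to unapplyMap") of
        []    -> []
        (_:_) -> k (head . f) : unapplyMap k (tail . f)

ffix :: Unapply f => (a -> f a) -> f a
ffix = unapplyMap fix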
The second performance problem is more subtle. It comes from the fact
that the current implementation of mfix for sum types is *speculative*
in a sense. While the Unapply instance for [] given above is perfectly
valid, it operates in two steps. In the first step, it calls f on
bottom to determine whether the result is a cons node or nil, and in
the second step, it extracts the appropriate components. The standard
library implementation, in contrast, just initially assumes that f
will return a cons node, calling `fix (head . f)`. If that results in
[], it will back off and return [], but otherwise it can just extract
the correct head immediately. To look at concrete instances, compare:
-- Equation 4.3 in Erkok's thesis, unoptimized
instance FunctorFix Maybe where
    ffix f = case f (error "Strict function passed to ffix") of
        Nothing -> Nothing
        Just _  -> Just (fix (unJust . f))
      where
        unJust (Just x) = x
-- Equation 4.2 in Erkok's thesis, optimized
instance FunctorFix Maybe where
    ffix f = fix (f . unJust)
      where
        unJust (Just x) = x
-- Reformulation of Equation 4.2 that shows the equivalence of the two approaches
instance FunctorFix Maybe where
    ffix f = case f (fix (unJust . f)) {- = fix (f . unJust) -} of
        Nothing -> Nothing
        Just x  -> Just (x {- = unJust (f (fix (unJust . f))) = fix (unJust . f) -})
      where
        unJust (Just x) = x
Essentially, the optimized implementation is still passing something
to f and checking the result, but it chooses what to pass to f in a
clever way so that, in the Just case, it can be reused.
This optimization is nice and useful, but it isn't composable. Even
though it follows the same pattern of implementation, I don't see a
way to directly use this optimization in the implementation for (:+:).
It crucially relies on replacing `fix (project . f)` with `fix (f .
project)`, and the latter simply does not typecheck when `project`
does not return a single value.
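For reference, a sketch of the two-step FunctorFix instance for GHC.Generics' (:+:) alluded to above (my reconstruction; the instance itself is not spelled out in the thread):

instance (FunctorFix f, FunctorFix g) => FunctorFix (f :+: g) where
    ffix h = case h (error "Strict function passed to ffix") of
        L1 _ -> L1 (ffix (unL . h))
        R1 _ -> R1 (ffix (unR . h))
      where
        unL (L1 x) = x
        unR (R1 x) = x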
I don't see any good way to integrate this optimization into Unapply.
To do so we'd need to know about the feedback inherent in a fixpoint
to know to pass something useful into f when figuring out the shape of
the result. The simplest solution would be to keep ffix in the type
class and implement it independently of unapply whenever possible
(i.e., everywhere but in the instance for (:.:)), but that seems ugly
to me.
On Tue, Sep 5, 2017 at 6:35 PM, David Feuer wrote:

Is it possible to implement a law-abiding (or useful) unapply for IO?
That's a practically important MonadFix instance.
On Sep 6, 2017 12:42 AM, "Jonathan S" wrote:

Oh. Hm. That would be an obvious complication. I think that there is
no fundamental reason it should be impossible, but I doubt there is
any way to do so in current Haskell. The problem is that unapply
returns a function that can be called multiple times, but the MVar trick
that the existing fixIO code uses only works once - it produces a single
delayed value. To allow multiple values to be passed in, you'd have to
either traverse the in-memory representation of the result and make
copies for all the inputs (ugly, slow, and not even clearly possible)
or somehow dig into the internals of IO and evaluate the function
without its side effects by caching the old state, which also seems
impossible.
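For reference, the MVar trick in question looks roughly like this (a simplified sketch of fixIO, not the exact library source):

import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import System.IO.Unsafe (unsafeInterleaveIO)

fixIO :: (a -> IO a) -> IO a
fixIO k = do
    m <- newEmptyMVar
    ans <- unsafeInterleaveIO (takeMVar m)  -- the single delayed value
    result <- k ans
    putMVar m result                        -- the MVar is filled exactly once
    return result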
To be clear, a linearly typed version of Unapply would work just fine:
class UnapplyLinear f where
    unapply :: (a -> f b) -> f (a -o b)

instance UnapplyLinear IO where
    unapply k = do
        m <- newEmptyMVar
        input <- unsafeInterleaveIO (readMVar m)
        result <- k input
        return (\x -> unsafePerformIO (putMVar m x) `pseq` result)
However, for practicality's sake, this basically means that today,
switching entirely to Unapply is not a viable option. Unapply could
still be a subclass of FunctorFix, however, and we could still have
instance (Unapply f, FunctorFix g) => FunctorFix (f :.: g) where
    ffix f = Comp1 (unapplyMap ffix (unComp1 . f))
I don't know whether it is worth it.
On Tue, Sep 5, 2017 at 11:52 PM, David Feuer wrote:

Your Unapply class looks a bit like Distributive, but with f ~ ((->) r). I wonder if there's a connection there?
https://hackage.haskell.org/package/distributive-0.5.3/docs/Data-Distributiv...
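If that connection holds, unapply is just distribute specialized to the function functor; a sketch of the specialization (my illustration, assuming the distributive package):

import Data.Distributive (Distributive, distribute)

-- distribute :: (Distributive g, Functor f) => f (g a) -> g (f a);
-- instantiating the outer Functor at ((->) a) yields exactly unapply's type.
unapplyViaDistribute :: Distributive f => (a -> f b) -> f (a -> b)
unapplyViaDistribute = distribute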
I've structured this email with a bunch of sections because it is a bit overly long.
# A replacement for `FunctorFix` After thinking about this problem a bit more, I'm actually thinking that we might want a slightly stronger class (bikeshedding is welcome):
class Functor f => Unapply f where -- | This has a single law: -- > fmap (\g -> g x) (unapply f) == f x unapply :: (a -> f b) -> f (a -> b)
ffix :: Unapply f => (a -> f a) -> f a ffix = fmap fix . unapply
For efficiency, we'd probably want to add more methods to the class; I'll get back to that.
The semantics of unapply is to take a function that produces a container of some fixed "shape" and fill that shape with functions that extract the result at the given position. Since that isn't a clear description at all,
unapply (\x -> [f x, g x]) == [f, g]
The implementations of this class follow the same general pattern. The input function is evaluated at bottom to figure out the correct shape, and then that shape is filled with copies of the input function composed with projection functions. For example:
instance Unapply [] where unapply f = case f (error "Strict function passed to unapply") of [] -> [] (_:_) -> head . f : unapply (tail . f)
Notably, while the only interesting new instance I figured out for FunctorFix was for the sum of functors (:+:), I couldn't figure out a way to implement FunctorFix for the composition of two functors (:.:). Unapply, however, clearly is closed under functor composition:
instance (Unapply f, Unapply g) => Unapply (f :.: g) where unapply f = Comp1 (fmap unapply (unapply (unComp1 . f)))
# Proofs Now, this simple function with its one law is enough to derive *all* the laws we want for ffix (or even afix or mfix), using the fact that
forall x. fmap (\g -> g x) u = fmap (\g -> g x) v
implies u = v (up to the existance of `seq`, that is; I think things still work out with `seq` in the picture, but they get a lot messier).
## Lemmas ### Left lemma Forall x, fmap (\g -> g x) (unapply (fmap h . f)) = {constant application} fmap h (f x) = {constant application} fmap h (fmap (\g -> g x) (unapply f)) = fmap (\g -> h (g x)) (unapply f) = fmap (\g' -> g' x) (fmap (h .) (unapply f)) Therefore, unapply (fmap h . f) = fmap (h .) (unapply f)
### Right lemma Forall x, fmap (\g -> g x) (unapply (f . h)) = {constant application} f (h x) = {constant application} fmap (\g -> g (h x)) (unapply f) = fmap (\g' -> g' x) (fmap (. h) (unapply f)) Therefore, unapply (f . h) = fmap (. h) (unapply f)
### Nesting lemma Forall x, fmap (\g' -> g' x) (fmap (\g y -> g y y) (unapply (unapply . f))) = fmap (\g -> g x x) (unapply (unapply . f)) = fmap (\g' -> g' x) (fmap (\g -> g x) (unapply (unapply . f))) = {constant application} fmap (\g' -> g' x) (unapply (f x)) = {constant application} f x x = (\y -> f y y) x = {constant application} fmap (\g -> g x) (unapply (\y -> f y y)) Therefore, fmap (\g y -> g y y) (unapply (unapply . f)) = unapply (\y -> f y y)
### Inner application lemma Forall f, fmap (\g' -> g' f) (unapply (\g -> fmap g u)) = {constant application} fmap f u = fmap (\applyx -> applyx f) (fmap (\x g -> g x) u) Therefore, unapply (\g -> fmap g u) = fmap (\x g -> g x) u
### Join lemma Forall x, fmap (\g -> g x) (join (fmap unapply (unapply f))) = join (fmap (fmap (\g -> g x)) (fmap unapply (unapply f))) = join (fmap (\h -> fmap (\g -> g x) (unapply h)) (unapply f)) = {constant application} join (fmap (\h -> h x) (unapply f)) = {constant application} join (f x) = {constant application} fmap (\g -> g x) (unapply (join . f)) Therefore, join (fmap unapply (unapply f)) = unapply (join . f)
## Strictness f ⊥ = ⊥ ⇔ {constant application} fmap (\g -> g x) (unapply f) = ⊥ ⇔ {strictness of `fmap`} unapply f = ⊥ ⇔ {strictness of `fmap`} fmap fix (unapply f) = ⊥ ⇔ {definition of `ffix`} ffix f = ⊥
## Sliding ffix (fmap h . f) = {definition of `ffix`} fmap fix (unapply (fmap h . f)) = {left lemma} fmap fix (fmap (h .) (unapply f)) = fmap (\g -> fix (h . g)) (unapply f) = {sliding (`fix`)} fmap (\g -> h (fix (g . h))) (unapply f) = fmap h (fmap fix (fmap (. h) (unapply f))) = {right lemma} fmap h (fmap fix (unapply (f . h))) = {definition of `ffix`} fmap h (ffix (f . h))
## Nesting ffix (\x -> ffix (\y -> f x y)) = {definition of `ffix`} fmap fix (unapply (\x -> fmap fix (unapply (\y -> f x y)))) = fmap fix (unapply (fmap fix . unapply . f)) = {left lemma} fmap fix (fmap (fix .) (unapply (unapply . f))) = fmap (\g -> fix (\x -> fix (\y -> g x y))) (unapply (unapply . f)) = {nesting (`fix`)} fmap (\g -> fix (\x -> g x x)) (unapply (unapply . f)) = fmap fix (fmap (\g x -> g x x) (unapply (unapply . f))) = {nesting lemma} fmap fix (unapply (\x -> f x x)) = {definition of `ffix`} ffix (\x -> f x x)
## Pure left shrinking

    ffix (\x -> fmap (f x) u)
  = {definition of `ffix`}
    fmap fix (unapply (\x -> fmap (f x) u))
  = fmap fix (unapply ((\g -> fmap g u) . f))
  = {right lemma}
    fmap fix (fmap (. f) (unapply (\g -> fmap g u)))
  = {inner application lemma}
    fmap fix (fmap (. f) (fmap (\y g -> g y) u))
  = fmap (\y -> fix (\x -> f x y)) u
## Left shrinking

    ffix (\x -> a >>= \y -> f x y)
  = {definition of `ffix`}
    fmap fix (unapply (\x -> a >>= \y -> f x y))
  = fmap fix (unapply ((a >>=) . f))
  = {right lemma}
    fmap fix (fmap (. f) (unapply (\g -> a >>= g)))
  = fmap fix (fmap (. f) (unapply (join . (\g -> fmap g a))))
  = {join lemma}
    fmap fix (fmap (. f) (join (fmap unapply (unapply (\g -> fmap g a)))))
  = {inner application lemma}
    fmap fix (fmap (. f) (join (fmap unapply (fmap (\y g -> g y) a))))
  = join (fmap (fmap fix . fmap (. f) . unapply . (\y g -> g y)) a)
  = a >>= \y -> fmap fix (fmap (. f) (unapply (\g -> g y)))
  = {right lemma}
    a >>= \y -> fmap fix (unapply ((\g -> g y) . f))
  = a >>= \y -> fmap fix (unapply (\x -> f x y))
  = {definition of `ffix`}
    a >>= \y -> ffix (\x -> f x y)
# Efficiency

I should preface this section by saying that I haven't actually done any benchmarking or profiling.
Unfortunately, the Unapply solution seems to be slightly slower than directly implementing FunctorFix, for two reasons. First, with Unapply, the call to ffix is split into two pieces, first constructing the resultant data structure and then mapping over that to calculate fixed points. Especially since unapply may be recursive and might not be inlined, this can result in intermediate data structures and slowness. This is easily fixed. Instead of implementing unapply directly, we can add a new method to the class,
unapplyMap :: ((a -> b) -> c) -> (a -> f b) -> f c
defined by
unapplyMap f = fmap f . unapply
unapply      = unapplyMap id
Finally, we implement unapplyMap directly (instead of deriving it from unapply) and use unapplyMap in the definition of ffix.
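As a sketch of that arrangement (assuming the Unapply class from earlier in this message, with fix from Data.Function):

-- ffix in terms of unapplyMap: a single traversal instead of two.
ffix :: Unapply f => (a -> f a) -> f a
ffix = unapplyMap fix

-- A direct list instance of unapplyMap that never materialises the
-- intermediate structure of functions that unapply would build.
instance Unapply [] where
  unapplyMap k f = case f (error "Strict function passed to unapplyMap") of
    []    -> []
    (_:_) -> k (head . f) : unapplyMap k (tail . f)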
The second performance problem is more subtle. It comes from the fact that the Unapply-based implementation of mfix for sum types is *speculative* in a sense. While the Unapply instance for [] given above is perfectly valid, it operates in two steps: in the first step, it calls f on bottom to determine whether the result is a cons node or nil, and in the second step, it extracts the appropriate components. The standard library implementation, in contrast, initially assumes that f will return a cons node, calling `fix (f . head)`. If that results in [], it backs off and returns [], but otherwise it can extract the correct head immediately. To look at concrete instances, compare:
-- Equation 4.3 in Erkok's thesis, unoptimized
instance FunctorFix Maybe where
  ffix f = case f (error "Strict function passed to ffix") of
             Nothing -> Nothing
             Just _  -> Just (fix (unJust . f))
    where
      unJust (Just x) = x

-- Equation 4.2 in Erkok's thesis, optimized
instance FunctorFix Maybe where
  ffix f = fix (f . unJust)
    where
      unJust (Just x) = x

-- Reformulation of Equation 4.2 that shows the equivalence of the two approaches
instance FunctorFix Maybe where
  ffix f = case f (fix (unJust . f)) {- = fix (f . unJust) -} of
             Nothing -> Nothing
             Just x  -> Just (x {- = unJust (f (fix (unJust . f))) = fix (unJust . f) -})
Essentially, the optimized implementation is still passing something to f and checking the result, but it chooses what to pass to f in a clever way so that, in the Just case, it can be reused.
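For comparison, the list instance in base's Control.Monad.Fix follows the same optimized pattern (reproduced here from memory, so treat the exact layout as approximate):

instance MonadFix [] where
  mfix f = case fix (f . head) of
             []    -> []
             (x:_) -> x : mfix (tail . f)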
This optimization is nice and useful, but it isn't composable. Even though it follows the same pattern of implementation, I don't see a way to directly use this optimization in the implementation for (:+:). It crucially relies on replacing `fix (project . f)` with `fix (f . project)`, and the latter simply does not typecheck when `project` does not return a single value.
I don't see any good way to integrate this optimization into Unapply. To do so, we'd need to know about the feedback inherent in a fixed point in order to pass something useful into f when figuring out the shape of the result. The simplest solution would be to keep ffix in the type class and implement it independently of unapply wherever possible (i.e., everywhere but in the instance for (:.:)), but that seems ugly to me.
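Spelled out, that compromise could look roughly like this (sketch only; it assumes a Functor superclass for Unapply and fix from Data.Function):

class Functor f => Unapply f where
  unapplyMap :: ((a -> b) -> c) -> (a -> f b) -> f c

  ffix :: (a -> f a) -> f a
  ffix = unapplyMap fix
  -- Instances such as Maybe, [] and (:+:) would override ffix with the
  -- optimized definition discussed above; only (:.:) would have to fall
  -- back on the default.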
On Tue, Sep 5, 2017 at 6:35 PM, David Feuer wrote:

As long as we're going down this path, we should also consider ApplicativeFix. All the laws except left shrinking make immediate sense in that context. That surely has a law or two of its own. For example, I'd expect that
afix (\x -> a *> f x) = a *> afix f
I don't know if it has anything more interesting.
On Tue, Sep 5, 2017 at 6:11 PM, Wolfgang Jeltsch wrote:

Jonathan, thanks a lot for working this out. Impressive!
So we want the following laws for FunctorFix:
Pure left shrinking:
ffix (\x -> fmap (f x) g) = fmap (\y -> fix (\x -> f x y)) g
Sliding:
ffix (fmap h . f) = fmap h (ffix (f . h))
for strict h
Nesting:
ffix (\x -> ffix (\y -> f x y)) = ffix (\x -> f x x)
Levent Erkok’s thesis also mentions a strictness law for monadic fixed points, which is not mentioned in the documentation of Control.Monad.Fix. It goes as follows:
Strictness:
f ⊥ = ⊥ ⇔ mfix f = ⊥
Does this hold automatically, or did the designers of Control.Monad.Fix consider it inappropriate to require this?
All the best, Wolfgang
On Saturday, 02.09.2017, 14:08 -0500, Jonathan S wrote:
I think that in addition to nesting and sliding, we should have the following law:
ffix (\x -> fmap (f x) g) = fmap (\y -> fix (\x -> f x y)) g
I guess I'd call this the "pure left shrinking" law because it is the composition of left shrinking and purity:
    ffix (\x -> fmap (f x) g)
  = ffix (\x -> g >>= \y -> return (f x y))
  = {left shrinking}
    g >>= \y -> ffix (\x -> return (f x y))
  = {purity}
    g >>= \y -> return (fix (\x -> f x y))
  = fmap (\y -> fix (\x -> f x y)) g
This is powerful enough to prove the scope change law, but is significantly simpler:
    ffix (\~(a, b) -> fmap (\a' -> (a', h a' a b)) (f a))
  = ffix (\t -> fmap (\a' -> (a', h a' (fst t) (snd t))) (f (fst t)))
  = {nesting}
    ffix (\t1 -> ffix (\t2 -> fmap (\a' -> (a', h a' (fst t1) (snd t1))) (f (fst t2))))
  = ffix (\~(a, b) -> ffix (fmap (\a' -> (a', h a' a b)) . f . fst))
  = {sliding}
    ffix (\~(a, b) -> fmap (\a' -> (a', h a' a b)) (ffix (f . fst . (\a' -> (a', h a' a b)))))
  = ffix (\~(a, b) -> fmap (\a' -> (a', h a' a b)) (ffix f))
  = {pure left shrinking}
    fmap (\a' -> fix (\~(a, b) -> (a', h a' a b))) (ffix f)
Moreover, it seems necessary to prove that ffix interacts well with constant functions:
ffix (const a) = ffix (\_ -> fmap id a) = fmap (\y -> fix (\_ -> id y)) a = fmap id a = a
In addition, when the functor in question is in fact a monad, it implies purity:
ffix (return . f) = ffix (\x -> return (f x)) = ffix (\x -> fmap (\_ -> f x) (return ())) = fmap (\_ -> fix (\x -> f x)) (return ()) = return (fix f)
Sincerely, Jonathan
-- Chris Wong (https://lambda.xyz) "I had not the vaguest idea what this meant and when I could not remember the words, my tutor threw the book at my head, which did not stimulate my intellect in any way." -- Bertrand Russell

In my opinion, ApplicativeFix should have a restricted left shrinking axiom that is analogous to the restricted right shrinking axiom I proposed in another e-mail in this thread (but which, as it turned out, does not hold for several MonadFix instances). Restricted left shrinking would look as follows:

    afix (\ ~(_, y) -> liftA2 (,) f (g y)) = liftA2 (,) f (afix g)

I think this axiom is stronger than the one you suggested.

All the best,
Wolfgang

On Tuesday, 05.09.2017, 19:35 -0400, David Feuer wrote:
As long as we're going down this path, we should also consider ApplicativeFix. All the laws except left shrinking make immediate sense in that context. That surely has a law or two of its own. For example, I'd expect that
afix (\x -> a *> f x) = a *> afix f
I don't know if it has anything more interesting.

Hi! Let me summarize the outcomes of this discussion. The goal is to replace the current MonadFix class by classes FunctorFix, ApplicativeFix, and MonadFix declared as follows:
class Functor f => FunctorFix f where
  ffix :: (a -> f a) -> f a

class (Applicative f, FunctorFix f) => ApplicativeFix f where
  afix :: (a -> f a) -> f a

class (Monad m, ApplicativeFix m) => MonadFix m where
  mfix :: (a -> m a) -> m a
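For concreteness, the Maybe instances under this hierarchy could look as follows (a sketch for illustration only, reusing the optimized ffix for Maybe discussed earlier in the thread and fix from Data.Function):

instance FunctorFix Maybe where
  ffix f = fix (f . unJust)
    where unJust (Just x) = x

instance ApplicativeFix Maybe where
  afix = ffix   -- afix = ffix, as the laws require

instance MonadFix Maybe where
  mfix = afix   -- mfix = afix, as the laws require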
Note that the new MonadFix class differs from the previous one by its additional ApplicativeFix superclass constraint. For every instance of MonadFix, the equation mfix = afix should hold; for every instance of ApplicativeFix, the equation afix = ffix should hold.

All instances of FunctorFix should fulfill the following axioms:

Nesting:
    ffix (\ x -> ffix (\ y -> f x y)) = ffix (\ x -> f x x)

Sliding:
    ffix (fmap h . f) = fmap h (ffix (f . h))
    for strict h

Functorial left shrinking:
    ffix (\ x -> fmap (h x) a) = fmap (\ y -> fix (\ x -> h x y)) a

All instances of ApplicativeFix should additionally fulfill the following axioms:

Purity:
    ffix (pure . h) = pure (fix h)

Applicative left shrinking:
    afix (\ ~(_, y) -> liftA2 (,) a (g y)) = liftA2 (,) a (afix g)

All instances of MonadFix should additionally fulfill the following axiom:

Monadic left shrinking:
    mfix (\ x -> a >>= \ y -> f x y) = a >>= \ y -> mfix (\ x -> f x y)

Should I turn this into a formal library proposal?

All the best,
Wolfgang

On Monday, 11.09.2017, 03:08 +0300, Wolfgang Jeltsch wrote:
In my opinion, ApplicativeFix should have a restricted left shrinking axiom that is analogous to the restricted right shrinking axiom I proposed in another e-mail in this thread (but which does not hold for several MonadFix examples). Restricted Left Shrinking would look as follows:
afix (\ ~(_, y) -> liftA2 (,) f (g y)) = liftA2 (,) f (afix g)
I think this axiom is stronger than the one you suggested.
All the best, Wolfgang

On Saturday, 02.09.2017, 14:08 -0500, Jonathan S wrote:
I think that in addition to nesting and sliding, we should have the following law:
ffix (\x -> fmap (f x) g) = fmap (\y -> fix (\x -> f x y)) g
I guess I'd call this the "pure left shrinking" law because it is the composition of left shrinking and purity:
I wonder whether “pure left shrinking” is an appropriate name for this. The shrinking is on the left, but the purity is on the right. Note that in “pure right shrinking”, a derived property discussed in Erkok’s thesis, both the shrinking and the purity are on the right. All the best, Wolfgang

On Thursday, 07.09.2017, 04:54 +0300, Wolfgang Jeltsch wrote:
I wonder whether “pure left shrinking” is an appropriate name for this. The shrinking is on the left, but the purity is on the right. Note that in “pure right shrinking”, a derived property discussed in Erkok’s thesis, both the shrinking and the purity are on the right.
While we are at pure right shrinking, let me bring up another question: Why is there no general right shrinking axiom for MonadFix? Something like the following:

Right shrinking:

    mfix (\ ~(x, _) -> f x >>= \ y -> g y >>= \ z -> return (y, z)) >>= return . snd
      = mfix f >>= g

Can this be derived from the MonadFix axioms? Or are there reasonable MonadFix instances for which it does not hold?

All the best,
Wolfgang

I think you'll at least have to specify that g is lazy, because f may let its argument "leak" arbitrarily into the return value of the action it produces. But I don't have a clear sense of whether this is a good law otherwise.

Your right shrinking law is almost exactly the (impure) right shrinking law specified in Erkok's thesis on page 22, equation 2.22. The problem with this law, as shown on page 56, is that most of the MonadFix instances we care about do not follow the right shrinking law. In general (see Proposition 3.1.6 on page 27), if (>>=) is strict in its left argument then either the monad is trivial or right shrinking is not satisfied.
On Wed, Sep 6, 2017 at 9:21 PM, David Feuer wrote:

I think you'll at least have to specify that g is lazy, because f may let its argument "leak" arbitrarily into the return value of the action it produces. But I don't have a clear sense of whether this is a good law otherwise.

I see that a general right shrinking axiom would be a bad idea, as it would rule out many sensible instances of MonadFix. However, I think that it is very reasonable to have the following restricted right shrinking axiom:

    mfix (\ ~(x, _) -> liftM2 (,) (f x) g) = liftM2 (,) (mfix f) g

The important difference compared to general right shrinking is that the shape (or effect) of g does not depend on the output of f x.

Does this restricted right shrinking follow from the current MonadFix axioms? At the moment, it does not look to me as if it would.

An interesting fact about this restricted right shrinking is that it makes sense not only for all monads, but for all applicative functors.

All the best,
Wolfgang

On Thursday, 07.09.2017, 10:11 -0500, Jonathan S wrote:
Your right shrinking law is almost exactly the (impure) right shrinking law specified in Erkok's thesis on page 22, equation 2.22. The problem with this law, as shown on page 56, is that most of the MonadFix instances we care about do not follow the right shrinking law. In general (see Proposition 3.1.6 on page 27), if (>>=) is strict in its left argument then either the monad is trivial or right shrinking is not satisfied.

Hi Wolfgang:

Your variant of right-shrinking is an interesting one, and I do suspect it holds for monads like Maybe. But it definitely doesn't follow from the MonadFix axioms, since it is violated by at least the strict state monad and the IO monad. See the code below for a demo. It would be interesting to classify exactly under which conditions it holds, as it does seem weaker than right-shrinking, but perhaps not by much as you originally suspected.

Cheers,
-Levent.
import Data.IORef
import Control.Monad.State.Strict
lhs, rhs :: MonadFix m => (a -> m a) -> m b -> m (a, b)
lhs f g = mfix (\ ~(x, _) -> liftM2 (,) (f x) g)
rhs f g = liftM2 (,) (mfix f) g
checkST :: IO ()
checkST = do print $ (take 10 . fst . fst) $ runState (rhs f g) []
             print $ (take 10 . fst . fst) $ runState (lhs f g) []
  where f :: [Int] -> State [Int] [Int]
        f xs = do let xs' = 1:xs
                  put xs'
                  return xs'

        g :: State [Int] Int
        g = do s <- get
               case s of
                 [x] -> return x
                 _   -> return 1

checkIO :: IO ()
checkIO = do cl <- newIORef []
             print =<< (take 10 . fst) `fmap` rhs (f cl) (g cl)
             cr <- newIORef []
             print =<< (take 10 . fst) `fmap` lhs (f cr) (g cr)
  where f :: IORef [Int] -> [Int] -> IO [Int]
        f c xs = do let xs' = 1:xs
                    writeIORef c xs'
                    return xs'

        g :: IORef [Int] -> IO Int
        g c = do s <- readIORef c
                 case s of
                   [x] -> return x
                   _   -> return 1
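A trivial driver to run both checks (for example in GHCi or as a standalone program) could be:

main :: IO ()
main = checkST >> checkIO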
On Fri, Sep 8, 2017 at 5:27 PM, Wolfgang Jeltsch wrote:

I see that a general right shrinking axiom would be a bad idea as it would rule out many sensible instances of MonadFix. However, I think that it is very reasonable to have the following restricted right shrinking axiom:
mfix (\ ~(x, _) -> liftM2 (,) (f x) g) = liftM2 (,) (mfix f) g
The important difference compared to general right shrinking is that the shape (or effect) of g does not depend on the output of f x.
Does this restricted right shrinking follow from the current MonadFix axioms? At the moment, it does not look to me that it would.
An interesting fact about this restricted right shrinking is that it makes sense not only for all monads, but for all applicative functors.
All the best, Wolfgang

Hi, Levent!

I see the point: data can be communicated from the first argument of (>>=) to the second not only through the output of the first argument, but also by the first argument storing the data as part of some state and the second argument fetching it from there.

All the best,
Wolfgang
participants (5)

- Chris Wong
- David Feuer
- Jonathan S
- Levent Erkok
- Wolfgang Jeltsch