Wow – tricky stuff! I would never have thought of trying to optimise that program, but it’s fascinating that you get lots and lots of them from FRP.
add1rec = SF (\a -> let !b = a+1 in (b,add1rec))
add1 = SF (\a -> let !b = a+1 in (b,add1rec))
Simon
From: ghc-devs <ghc-devs-bounces@haskell.org>
On Behalf Of Sebastian Graf
Sent: 29 March 2020 15:34
To: Alexis King <lexi.lambda@gmail.com>
Cc: ghc-devs <ghc-devs@haskell.org>
Subject: Re: Fusing loops by specializing on functions with SpecConstr?
Hi Alexis,
I've been wondering the same things and have worked on it on and off. See my progress in
https://gitlab.haskell.org/ghc/ghc/issues/855#note_149482 and
https://gitlab.haskell.org/ghc/ghc/issues/915#note_241520.
The big problem with solving the higher-order specialisation problem through SpecConstr (which is what I did in my reports in #855) is indeed that it's hard to decide when two function arguments should be treated as the same. Note that matching on specific Ids binding functions is just an approximation using representational equality (on the Id's Unique) rather than some sort of more semantic equality. My latest endeavour into the matter in #915 from December
was using types as the representational entity and type class specialisation. I think I got ultimately blocked on https://gitlab.haskell.org/ghc/ghc/issues/17548,
but apparently I didn't document the problematic program.
Maybe my failure so far is that I want it to apply to and optimise all cases, even more complex stream pipelines, rather than just doing a better best-effort job.
Hope that helps. Anyway, I'm also really keen on nailing this! It's one of my high-risk, high-reward research topics. So if you need someone to collaborate/exchange ideas with, I'm happy to help!
All the best,
Sebastian
On Sun, 29 Mar 2020 at 10:39, Alexis King <lexi.lambda@gmail.com> wrote:
Hi all,
I have recently been toying with FRP, and I’ve noticed that
traditional formulations generate a lot of tiny loops that GHC does
a very poor job optimizing. Here’s a simplified example:
newtype SF a b = SF { runSF :: a -> (b, SF a b) }
add1_snd :: SF (String, Int) (String, Int)
add1_snd = second add1
  where
    add1 = SF $ \a -> let !b = a + 1 in (b, add1)
    second f = SF $ \(a, b) ->
      let !(c, f') = runSF f b
      in ((a, c), second f')
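For concreteness, here is a minimal driver one could use to run such a stream function over a list; the name `runList` is my own and not part of any library:

-- Run a stream function over a list, threading the continuation
-- returned at each step (purely illustrative helper).
runList :: SF a b -> [a] -> [b]
runList _  []       = []
runList sf (x : xs) = let (y, sf') = runSF sf x in y : runList sf' xs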
Here, `add1_snd` is defined in terms of two recursive bindings,
`add1` and `second`. Because they’re both recursive, GHC doesn’t
know what to do with them, and the optimized program still has two
separate recursive knots. But this is a missed optimization, as
`add1_snd` is equivalent to the following definition, which fuses
the two loops together and consequently has just one recursive knot:
add1_snd_fused :: SF (String, Int) (String, Int)
add1_snd_fused = SF $ \(a, b) ->
  let !c = b + 1
  in ((a, c), add1_snd_fused)
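(With the illustrative `runList` driver above, the two definitions agree, e.g.

runList add1_snd       [("x", 1), ("y", 2)] == [("x", 2), ("y", 3)]
runList add1_snd_fused [("x", 1), ("y", 2)] == [("x", 2), ("y", 3)]

so the difference is purely in how the recursion is structured.)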
How could GHC get from `add1_snd` to `add1_snd_fused`? In theory,
SpecConstr could do it! Suppose we specialize `second` at the call
pattern `second add1`:
{-# RULE "second/add1" second add1 = second_add1 #-}
second_add1 = SF $ \(a, b) ->
  let !(c, f') = runSF add1 b
  in ((a, c), second f')
This doesn’t immediately look like an improvement, but we’re
actually almost there. If we unroll `add1` once on the RHS of
`second_add1`, the simplifier will get us the rest of the way. We’ll
end up with
  let !b1 = b + 1
      !(c, f') = (b1, add1)
  in ((a, c), second f')
and after the simplifier substitutes `add1` for `f'`, the call
`second f'` becomes `second add1`, so the RULE will tie the knot
for us.
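To spell out where that leaves us (my sketch of the end state, not a GHC dump): once `add1` is substituted for `f'` and the RULE rewrites the resulting `second add1` back to `second_add1`, the specialisation closes over itself:

second_add1 = SF $ \(a, b) ->
  let !c = b + 1
  in ((a, c), second_add1)

which is `add1_snd_fused` up to naming.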
This may look like small potatoes in isolation, but real programs
can generate hundreds of these tiny, tiny loops, and fusing them
together would be a big win. The only problem is SpecConstr doesn’t
currently specialize on functions! The original paper, “Call-pattern
Specialisation for Haskell Programs,” mentions this as a possibility
in Section 6.2, but it points out that actually doing this in
practice would be pretty tricky:
> Specialising for function arguments is more slippery than for
> constructor arguments. In the example above the argument was a
> simple variable, but what if it was instead a lambda term? [...]
>
> The trouble is that lambda abstractions are much more fragile than
> constructor applications, in the sense that simple transformations
> may make two abstractions look different although they have the
> same value.
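As a standalone illustration of that fragility (my example, not the paper's): both calls below pass the same function to `apply`, but the argument terms differ syntactically, so a purely syntactic call-pattern match would fail to identify them:

-- Two semantically identical function arguments that differ syntactically
-- (eta-expanded lambda vs. an operator section), purely illustrative.
apply :: (Int -> Int) -> Int -> Int
apply f x = f x

g1, g2 :: Int -> Int
g1 = apply (\n -> n + 1)
g2 = apply (+ 1)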
Still, the difference this could make in a program of mine is so
large that I am interested in exploring it anyway. I am wondering if
anyone has investigated this possibility any further since the paper
was published, or if anyone knows of other use cases that would
benefit from this capability.
Thanks,
Alexis