
Hi Conal,
So, if I understand this right, you'd rather `exampleC` not be
eta-expanded (or the binding of `s` not be floated into the lambda)?
Or is it that you want CSE to find out that you always supply the same `t`
as the first argument, and to share the partial application and thus the
work of computing `s`?
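(For that second reading, I have in mind call sites roughly like the
following sketch; `g` is made up:)

    g :: Double -> Double -> Double -> Double
    g t x1 x2 = exampleC t x1 + exampleC t x2
    -- CSE would have to turn this into
    --   g t x1 x2 = let f = exampleC t in f x1 + f x2
    -- so that the `sin t` hidden in the partial application is computed
    -- only once (which again only helps if `exampleC` itself is not
    -- eta-expanded).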
If it's the former: GHC doesn't normally eta-expand here unless it has
found out that no sharing (of the work done to evaluate `s`) would be lost
through the eta-expansion.
That is the case, for example, when `exampleC` is always called with two
arguments, so that no binding of `s` is ever shared between calls.
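For instance (hypothetical call sites, just to illustrate the condition):

    -- Every call is saturated: eta-expansion loses no sharing here,
    -- so GHC is free to do it.
    allSaturated :: Double -> Double -> Double
    allSaturated x y = exampleC x y + exampleC y x

    -- A shared partial application: eta-expanding exampleC would redo
    -- the work of `sin t` at every application of `f`.
    shared :: Double -> Double
    shared t = let f = exampleC t in f 1.0 + f 2.0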
Could you maybe post a complete module/expression representative of all
uses of `exampleC`?
If it's the latter, I'm afraid I can't really help, but surely someone else
can.
Cheers,
Sebastian
On Tue, Jul 18, 2017 at 5:34 PM, Conal Elliott wrote:
I'm seeing what looks like repeated computation under a lambda with `-O` and `-O2`. The following definition:
    exampleC :: Double -> Double -> Double
    exampleC = \ t -> let s = sin t in \ x -> x + s
yields this Core:
    -- RHS size: {terms: 13, types: 6, coercions: 0}
    exampleC :: Double -> Double -> Double
    exampleC = \ (t_afI6 :: Double) (eta_B1 :: Double) ->
      case eta_B1 of _ { D# x_aj5c ->
      case t_afI6 of _ { D# x1_aj5l ->
      D# (+## x_aj5c (sinDouble# x1_aj5l))
      }
      }
The `sinDouble#` here depends only on `t_afI6` (`t`) but still appears under the binding of `eta_B1` (`x`).
I'm concerned because many of my uses of such functions involve computations dependent only on `t` (time) but with millions of uses (space) per `t`. (I'm working on a GHC Core plugin (compiling to categories), with one use generating graphics GPU code.)
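As a concrete (made-up) sketch of that usage pattern, with `frame` and `pixels` being hypothetical names:

    frame :: Double -> [Double] -> [Double]
    frame t pixels = map (exampleC t) pixels
    -- With the eta-expanded Core above, `sinDouble#` runs once per element
    -- of `pixels`; with my original definition, `sin t` would run once per
    -- call to `frame`.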
Does the optimization I expected somehow happen later in the compilation pipeline?
Are there Core-manipulating functions in GHC that I can use to make it happen earlier (via a `BuiltinRule`, i.e., Core-to-Core function)?
Thanks,
-- Conal