Indeed; I've opened D2335 [1] to re-enable -fregs-graph and add an
appropriate note to the users guide.
For the record, I have also struggled with register spilling issues in
the past. See, for instance, #10012, which describes a behavior that
arises from the C-- sinking pass's unwillingness to duplicate code
across branches. While in general it's good to avoid the code bloat that
such duplication implies, in the case shown in that ticket duplicating
the computation would produce significantly less code than spilling the
needed results does.
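To illustrate the trade-off, here is a rough Haskell-level sketch (a
hypothetical example, not the actual code from #10012): a cheap
expression used in more than one branch cannot be sunk into the
branches, so its value stays live across the branch point and may be
spilled, even though recomputing it in each branch would cost less code.

    -- Hypothetical sketch: 'k' is cheap to recompute, but because it is
    -- used in both branches the sinking pass will not duplicate its
    -- computation into them. The value therefore stays live across the
    -- case and may be spilled under register pressure, which costs more
    -- code than recomputing 'x * 2 + 1' in each branch would.
    f :: Int -> Int -> Int
    f x y =
      let k = x * 2 + 1
      in case y of
           0 -> k * k
           _ -> k + y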
> But I found a few interesting optimizations that llvm did. For example,
> there was a heap adjustment and check in the looping path which was
> redundant and was readjusted in the loop itself without use. LLVM either
> removed the redundant _adjustments_ in the loop or moved them out of the
> loop. But it did not remove the corresponding heap _checks_. That makes me
> wonder if the redundant heap checks can also be moved or removed. Perhaps
> we could do some sort of loop analysis at the CMM level itself to avoid or
> remove the redundant heap adjustments as well as the checks, or at least
> float them out of the cycle wherever possible. That sort of optimization can make a
> significant difference to my case at least. Since we are explicitly aware
> of the heap at the CMM level there may be an opportunity to do better than
> llvm if we optimize the generated CMM or the generation of CMM itself.
>
Very interesting, thanks for writing this down! Indeed, if these checks
really are redundant then we should try to avoid them. Do you have any
code you could share that demonstrates this?
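For concreteness, here is the sort of loop I have in mind (a
hypothetical example, not the code from your experiments): each
iteration allocates, so the Cmm for the loop carries a heap-pointer
adjustment and a heap check on every trip around it. When the
per-iteration allocation is a small, known constant like this, one could
imagine checking for several iterations' worth of space at once outside
the loop, or at least coalescing the checks.

    -- Hypothetical example: the tail-recursive loop allocates one cons
    -- cell per iteration, so its Cmm contains a Hp adjustment and an
    -- Hp/HpLim check on every iteration. A Cmm-level loop analysis
    -- could in principle float or amortise that check.
    countdown :: Int -> [Int]
    countdown n0 = go n0 []
      where
        go :: Int -> [Int] -> [Int]
        go 0 acc = acc
        go n acc = go (n - 1) (n : acc)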
It would be great to open Trac tickets to track some of the optimization
opportunities you have identified.
There is indeed a question of where we wish to focus our optimization
efforts. However, I think using LLVM exclusively would be a mistake.
LLVM is a rather large dependency that has historically been difficult
to track (this is why we now target only one LLVM release in a given GHC
release). Moreover, compiling via LLVM is significantly slower than
using our existing native code generator. There are a number of reasons
for this, some of which are fixable. For instance, we currently make no
effort to tell LLVM which passes are worth running and which we have
already handled; this should be fixed, but it will require a significant
investment by someone to determine how GHC's and LLVM's passes overlap,
how they interact, and generally which are helpful (see GHC #11295).
Furthermore, there are a few annoying impedance mismatches between Cmm
and LLVM's representation. This can be seen in our treatment of proc
points: when we need to take the address of a block within a function,
LLVM requires that we break the block out into a separate procedure,
hiding many optimization opportunities from the optimizer. This was discussed
further on this list earlier this year [2]. It would be great to
eliminate proc-point splitting but doing so will almost certainly
require cooperation from LLVM.
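As a small (hypothetical) illustration of where this bites: in the
function below, the code that consumes the result of the call to the
unknown function 'f' is a return continuation, i.e. a block whose
address must be taken, so under the LLVM backend it is split off into a
procedure of its own.

    -- Hypothetical example: the call to the unknown function 'f' needs a
    -- return address, namely the address of the block that adds 1 to the
    -- result. That continuation is a proc point, so proc-point splitting
    -- turns it into a separate procedure for the LLVM backend, and LLVM
    -- can no longer optimize across the call and its continuation.
    apply1 :: (Int -> Int) -> Int -> Int
    apply1 f x = f x + 1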