On 16 October 2016 at 14:03, Michal Terepeta <michal.terepeta@gmail.com> wrote:
Hi,

I've been looking into cleaning up the dataflow analysis for Cmm a bit.
In particular, I was experimenting with rewriting the current
`cmm/Hoopl/Dataflow` module:
- To include only the functionality for doing analysis (since GHC doesn’t seem
  to use the rewriting part).
  Benefits:
  - Code simplification (we could remove a lot of unused code).
  - Makes it clear what we’re actually using from Hoopl.
- To have an interface that works with transfer functions operating on a whole
  basic block (`Block CmmNode C C`).
  This means that it would be up to the user of the algorithm to traverse the
  whole block (a rough sketch of the interface I have in mind is just below).
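
Roughly, I'm thinking of an interface along the lines of the sketch below (this
is just to show the shape of the types; none of the names are final, and the
fixpoint loop itself is elided):

    module DataflowSketch where

    import Compiler.Hoopl (Block, C, DataflowLattice, FactBase, Graph)

    -- A forward transfer function over a whole block: given the block and the
    -- facts flowing into it, produce the facts flowing out to its successors.
    -- How the facts are pushed through the nodes *inside* the block is
    -- entirely up to the client of the analysis.
    newtype TransferFun node f =
      TransferFun (Block node C C -> FactBase f -> FactBase f)

    -- The driver itself would only run the fixpoint iteration over blocks.
    analyzeFwd
      :: DataflowLattice f    -- bottom element and join
      -> TransferFun node f   -- whole-block transfer function
      -> Graph node C C       -- graph to analyse (for Cmm: a graph of CmmNode)
      -> FactBase f           -- initial facts (e.g. for the entry label)
      -> FactBase f           -- facts at every block entry, at the fixpoint
    analyzeFwd _lattice _transfer _graph entryFacts =
      entryFacts              -- fixpoint loop elided in this sketch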

Ah! This is actually something I wanted to do but didn't get around to.  When I was working on the code generator I found that using Hoopl for rewriting was prohibitively slow, which is why we're not using it for anything right now.  I think that pulling out the basic-block transformation is possibly a way forward that would let us use Hoopl.

A lot of the code you're removing is my attempt at "optimising" the Hoopl dataflow algorithm to make it usable in GHC.  (I don't mind removing it; it was a failed experiment, really.)
 
  Benefits:
  - Further simplifications.
  - We could remove the `analyzeFwdBlocks` hack, which AFAICS is just a copy&paste
    of `analyzeFwd` that ignores the middle nodes (probably for the efficiency of
    analyses that only look at the blocks).

Aren't we using this in dataflowAnalFwdBlocks, which is used by procpointAnalysis?
 
Cheers
Simon

  - More flexible (e.g., the clients could know which block they’re processing;
    we could consider memoizing some per-block information, etc.). A sketch of
    what such block-level clients could look like is below.
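
To make the last two points a bit more concrete, here is a rough sketch of two
block-level clients. Nothing here is final: the `BlockTransfer` synonym matches
the sketch above, and I'm using `foldBlockNodesF` and the `NonLocal`/`IsMap`
functions from the hoopl package purely for illustration.

    {-# LANGUAGE RankNTypes #-}
    module DataflowClientSketch where

    import Compiler.Hoopl (Block, C, FactBase, NonLocal (..), foldBlockNodesF,
                           mapFromList, mapLookup)
    import Data.Maybe (fromMaybe)

    -- Assumed shape of the new interface: one transfer function per block.
    type BlockTransfer node f = Block node C C -> FactBase f -> FactBase f

    -- A "full" analysis: look up the fact flowing into this block, push it
    -- through every node using a per-node transfer supplied by the client,
    -- and propagate the result to all successors.
    fullTransfer
      :: NonLocal node
      => f                                 -- fact to use if nothing is known yet
      -> (forall e x. node e x -> f -> f)  -- client's per-node transfer
      -> BlockTransfer node f
    fullTransfer bot transferNode block facts =
      let inFact  = fromMaybe bot (mapLookup (entryLabel block) facts)
          outFact = foldBlockNodesF transferNode block inFact
      in mapFromList [ (lbl, outFact) | lbl <- successors block ]

    -- A procpoint-style analysis only cares about the block structure: it needs
    -- the incoming fact and the successors, and never touches the middle nodes.
    -- This is the case that currently needs the separate analyzeFwdBlocks copy
    -- of the driver.
    blocksOnlyTransfer :: NonLocal node => f -> BlockTransfer node f
    blocksOnlyTransfer bot block facts =
      let inFact = fromMaybe bot (mapLookup (entryLabel block) facts)
      in mapFromList [ (lbl, inFact) | lbl <- successors block ]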

What do you think about this?

I have a branch that implements the above. It introduces a second, parallel
implementation (a `cmm/Hoopl/Dataflow2` module), so that it’s possible to run
./validate and compare the results of the old implementation with the new one.
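
The cross-check itself could be as simple as something along these lines (again
just a sketch, not the actual code from the branch):

    module CrossCheckSketch where

    import Compiler.Hoopl (FactBase, mapToList)

    -- Run both implementations on the same graph and fail loudly (which
    -- ./validate would catch) if they ever disagree on the computed facts.
    crossCheck :: (Eq f, Show f) => FactBase f -> FactBase f -> FactBase f
    crossCheck oldFacts newFacts
      | mapToList oldFacts == mapToList newFacts = newFacts
      | otherwise =
          error $ "Dataflow2 disagrees with Dataflow: "
               ++ show (mapToList oldFacts) ++ " vs " ++ show (mapToList newFacts)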

Second question: assuming people are generally ok with the approach, how should
we merge this? Some ideas:
- Change the cmm/Hoopl/Dataflow module itself, along with the three analyses that
  use it, in one step.
- Introduce the Dataflow2 module first, then switch the analyses over, then
  remove any unused code that still depends on the old Dataflow module, and
  finally remove the old Dataflow module itself.
(Personally I’d prefer the second option, but I’m also ok with the first one.)

I’m happy to export the code to Phab if you prefer; I wasn’t sure what the
recommended workflow is for code that’s not ready for review…

Thanks,
Michal

