Hi,

I've been looking into cleaning up the state of dataflow analysis for Cmm.
In particular, I was experimenting with rewriting the current
`cmm/Hoopl/Dataflow` module:
- To include only the analysis functionality (GHC doesn't seem to use
  the rewriting part).
  Benefits:
  - Code simplification (we could remove a lot of unused code).
  - Makes it clear what we’re actually using from Hoopl.
- To have an interface that works with transfer functions operating on a whole
  basic block (`Block CmmNode C C`).
  This means that it would be up to the user of the algorithm to traverse the
  whole block.
  Benefits:
  - Further simplifications.
  - We could remove the `analyzeFwdBlocks` hack, which AFAICS is just a
    copy&paste of `analyzeFwd` that ignores the middle nodes (probably an
    efficiency tweak for analyses that only look at block boundaries).
  - More flexible (e.g., the clients could know which block they’re processing;
    we could consider memoizing some per block information, etc.).
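
To make the second point concrete, the block-oriented interface could look
roughly like the sketch below. This is only an illustration of the idea, not
the actual code from my branch; the names `BlockTransferFwd` and
`analyzeCmmFwd` are made up here:

```haskell
-- Hypothetical sketch: the transfer function consumes a whole basic
-- block (Block CmmNode C C) at a time, so it's up to the client to
-- traverse the middle nodes itself.
type BlockTransferFwd f = Block CmmNode C C -> f -> FactBase f

analyzeCmmFwd
  :: DataflowLattice f      -- join/bottom for the facts
  -> BlockTransferFwd f     -- applied once per block
  -> CmmGraph
  -> FactBase f             -- initial facts
  -> FactBase f             -- fact flowing into each block
analyzeCmmFwd = undefined   -- worklist iteration over blocks
```

A client would then walk the nodes of each block itself (e.g., with Hoopl's
`foldBlockNodesF` or a hand-written loop), and since it sees the whole block
at once it could, for instance, cache per-block information keyed by the
block's label.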

What do you think about this?

I have a branch that implements the above. It introduces a second, parallel
implementation (`cmm/Hoopl/Dataflow2` module), so that it's possible to run
./validate while comparing the results of the old implementation with the
new one.

Second question: how could we merge this (assuming that people are generally
ok with the approach)? Some ideas:
- Change cmm/Hoopl/Dataflow module itself along with the three analyses that use
  it in one step.
- Introduce the Dataflow2 module first, then switch the analyses over, then
  remove any unused code that still depends on the old Dataflow module, and
  finally remove the old Dataflow module itself.
(Personally, I'd prefer the second option, but I'm also fine with the first.)

I'm happy to export the code to Phab if you prefer - I wasn't sure what the
recommended workflow is for code that's not ready for review…

Thanks,
Michal