
#8638: Optimize by demoting "denormalized" Integers (i.e. J# -> S#)
--------------------------------------------+------------------------------
        Reporter:  hvr                      |            Owner:  hvr
            Type:  feature request          |           Status:  infoneeded
        Priority:  normal                   |        Milestone:  7.8.1
       Component:  libraries (other)        |          Version:  7.7
      Resolution:                           |         Keywords:  integer-gmp
Operating System:  Unknown/Multiple         |     Architecture:  Unknown/Multiple
 Type of failure:  Runtime performance bug  |        Test Case:
        Blocking:                           |       Difficulty:  Unknown
 Related Tickets:                           |       Blocked By:
--------------------------------------------+------------------------------

Ticket URL: http://ghc.haskell.org/trac/ghc/ticket/8638#comment:5

Comment (by hvr):

Replying to [comment:4 hvr]:
> I'm not sure yet why `mandel` allocates more, since I couldn't find any
> significant use of `Integer`s in the implementation. After some
> profiling I found out that replacing the implementation of `magnitude`
> inside `Mandel.diverge` with a more naive one had a ''huge'' effect on
> the allocation.
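For context, a minimal sketch of the two variants (`scaledMagnitude` and `naiveMagnitude` are illustrative names, and the first is only roughly how `Data.Complex.magnitude` is defined in base; the benchmark's actual code may differ):

{{{#!haskell
import Data.Complex (Complex ((:+)))

-- Roughly the base-library strategy: scale both components down by the
-- larger exponent before squaring (avoiding overflow in the squares),
-- then scale the square root back up.  The default `scaleFloat` method
-- round-trips through `decodeFloat`/`encodeFloat`.
scaledMagnitude :: RealFloat a => Complex a -> a
scaledMagnitude (x :+ y) =
    scaleFloat k (sqrt (sqr (scaleFloat mk x) + sqr (scaleFloat mk y)))
  where
    k     = max (exponent x) (exponent y)
    mk    = negate k
    sqr z = z * z

-- The "more naive" variant: no scaling, hence no decodeFloat traffic,
-- at the cost of possible overflow/underflow for extreme inputs.
naiveMagnitude :: RealFloat a => Complex a -> a
naiveMagnitude (x :+ y) = sqrt (x * x + y * y)
}}}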
Ok, found the culprit: `magnitude` uses `scaleFloat`, which in turn uses `decodeFloat`/`encodeFloat`. I removed the `smartJ#` I had put into `decodeFloat`, and the allocation delta for the `mandel` benchmark went back to more or less 0. Since the main purpose of `decodeFloat` seems to be its use in combination with `encodeFloat`, I guess there's actually little benefit in trying to demote the significand to `S#` anyway... I'll attach the new nofib report shortly.
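To make that reasoning concrete, here's a hedged sketch of `scaleFloat`'s default method (GHC's real definition additionally clamps the scale argument; that detail is elided here). The significand `m` that `decodeFloat` produces is consumed immediately by `encodeFloat`, so a fits-in-`Int#` check à la `smartJ#` on that `Integer` is pure overhead on this path:

{{{#!haskell
-- Sketch of the decodeFloat/encodeFloat round trip inside scaleFloat
-- (exponent clamping elided):
scaleFloat' :: RealFloat a => Int -> a -> a
scaleFloat' k x = encodeFloat m (n + k)
  where
    -- `m :: Integer` lives only between the two calls; demoting it to
    -- an `S#` here would be wasted work, since `encodeFloat` takes it
    -- apart again right away.
    (m, n) = decodeFloat x
}}}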