
Hi,

I started the GHC Performance Regression Collection Proposal[1] (rendered: [2]) a while ago, with the idea of having a trivially community-curated set of small[3] real-world examples with performance regressions. I may be at fault here for not describing this to the best of my abilities. So if there is interest, and this sounds like a useful idea, maybe we should still pursue this proposal?

Cheers,
moritz

[1]: https://github.com/ghc-proposals/ghc-proposals/pull/26
[2]: https://github.com/angerman/ghc-proposals/blob/prop/perf-regression/proposal...
[3]: for some definition of small
On Dec 5, 2016, at 6:31 PM, Simon Peyton Jones via ghc-devs wrote:

> If not, maybe we should create something? IMHO it sounds reasonable to have
> separate benchmarks for:
> - Performance of GHC itself.
> - Performance of the code generated by GHC.
I think that would be great, Michal. We have a small and unrepresentative sample in testsuite/tests/perf/compiler.
Simon
From: ghc-devs [mailto:ghc-devs-bounces@haskell.org] On Behalf Of Michal Terepeta
Sent: 04 December 2016 19:47
To: ghc-devs
Subject: Measuring performance of GHC

Hi everyone,
I've been running nofib a few times recently to see the effect of some changes
on compile time (not the runtime of the compiled program). And I've started
wondering how representative nofib is when it comes to measuring compile time
and compiler allocations? It seems that most of the nofib programs compile
really quickly...
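Lacking a dedicated suite, one crude way to check the effect of a change on compile time is a small timing harness around the compiler invocation. Here is a minimal sketch; the specific `ghc` flags and any module name are assumptions for illustration, not something from this thread:

```python
import shutil
import statistics
import subprocess
import time


def median_wall_time(cmd, runs=5):
    """Run cmd several times and return the median wall-clock time in seconds.

    The median is less noisy than a single run or the mean for this purpose.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)


if __name__ == "__main__":
    # Hypothetical usage, only if ghc happens to be on PATH, e.g.:
    #   median_wall_time(["ghc", "-O2", "-fforce-recomp", "Foo.hs"])
    # (-fforce-recomp avoids timing a no-op rebuild of an unchanged module.)
    if shutil.which("ghc"):
        print(median_wall_time(["ghc", "--version"]))
```

Running the same harness against two GHC builds on the same module gives a rough before/after comparison, though it measures wall time only, not compiler allocations.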
Is there some collection of modules/libraries/applications that was put
together for the purpose of benchmarking GHC itself that I just haven't
seen/found?
If not, maybe we should create something? IMHO it sounds reasonable to have
separate benchmarks for:
- Performance of GHC itself.
- Performance of the code generated by GHC.
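The two categories call for different measurements. As a sketch only (assuming `ghc` is on PATH, and using `-rtsopts` so the compiled program accepts RTS flags; the tiny module is a made-up example):

```shell
# Both GHC itself and the program it produces are Haskell executables, so
# both can report RTS statistics via "+RTS -s" (printed to stderr).
if ! command -v ghc >/dev/null 2>&1; then
  echo "ghc not found; skipping demonstration"
  exit 0
fi

echo 'main = print (sum [1..10000 :: Int])' > Main.hs

# 1. Performance of GHC itself: statistics for the *compilation*.
ghc -O2 -rtsopts -fforce-recomp Main.hs +RTS -s -RTS

# 2. Performance of the generated code: statistics for the *compiled program*.
./Main +RTS -s -RTS
```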
Thanks,
Michal
_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs