Nod nod.

amazonka-ec2 has a particularly painful module containing a couple of hundred type definitions and their associated instances. None of the types is enormous. There's an issue open on GitHub[1] where I've guessed at some better ways of splitting the types up to make GHC's life easier, but it'd be great if no such shenanigans were needed. It's a bit of a pathological case: 15kLoC of auto-generated code with lots of deriving clauses, but I still feel it should be possible to compile it in less than 2.8GB RSS.
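To give a feel for the shape of the problem (the types below are hypothetical, not the actual amazonka-ec2 definitions), the module is essentially a couple of hundred of these in a row, each with a handful of derived instances:

    {-# LANGUAGE DeriveGeneric #-}
    -- Sketch only: imagine ~200 declarations like these in one module.
    module Example where

    import GHC.Generics (Generic)

    data DescribeInstancesRequest = DescribeInstancesRequest
      { dirInstanceIds :: [String]
      , dirMaxResults  :: Maybe Int
      } deriving (Eq, Ord, Read, Show, Generic)

    data Reservation = Reservation
      { resReservationId :: String
      , resInstanceIds   :: [String]
      } deriving (Eq, Ord, Read, Show, Generic)

Each deriving clause makes GHC generate and optimise more Core, so the cost grows with the number of (type, instance) pairs rather than with the raw line count.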

Cheers,

David

On 4 Dec 2016 19:51, "Alan & Kim Zimmerman" <alan.zimm@gmail.com> wrote:
I agree.

I find that compilation of code with large data structures, such as code working with the GHC AST via the GHC API, gets pretty slow.

To the point where I have had to explicitly disable optimisation when building HaRe; otherwise the build takes too long.
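(For anyone wanting to do the same, one way is to drop the optimisation level in the .cabal file; the stanza below is an illustrative sketch, not HaRe's actual one:

    library
      -- -O0 keeps GHC's time and memory use down, at the cost of
      -- slower compiled code, which is fine for development builds
      ghc-options: -O0

The same flag can go in an executable or test-suite stanza as needed.)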

Alan


On Sun, Dec 4, 2016 at 9:47 PM, Michal Terepeta <michal.terepeta@gmail.com> wrote:
Hi everyone,

I've run nofib a few times recently to see the effect of some changes
on compile time (not the runtime of the compiled programs), and I've started
wondering how representative nofib is when it comes to measuring compile time
and compiler allocations. It seems that most of the nofib programs compile
really quickly...
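(As a side note, one way to get such numbers for a single module, since GHC accepts RTS options on its own command line; the module name is just a placeholder:

    # Measure GHC's own heap allocations and residency for one module.
    ghc -O -fforce-recomp -c SomeModule.hs +RTS -s -RTS

"bytes allocated in the heap" and "maximum residency" in the -s summary are what I mean by compiler allocations.)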

Is there some collection of modules/libraries/applications that was put
together for the purpose of benchmarking GHC itself, and I just haven't
seen/found it?

If not, maybe we should create something? IMHO it sounds reasonable to have
separate benchmarks for:
- Performance of GHC itself.
- Performance of the code generated by GHC.

Thanks,
Michal


_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs