
On 2016-12-08 17:04, Joachim Breitner wrote:
Hi,
On Thursday, 2016-12-08 at 01:03 -0500, Joachim Breitner wrote:
I am not sure how useful this is going to be:
+ Tests lots of common and important real-world libraries.
− Takes a lot of time to compile, includes CPP macros and C code.
(More details in the README linked above.)
another problem with the approach of taking modern real-world code: it uses a lot of non-boot libraries that are quite compiler-close and do low-level stuff (e.g. using Template Haskell, or stuff like that). If we add that to nofib, we'd have to maintain its compatibility with GHC as we continue developing GHC, probably using lots of CPP. This was less of an issue with the Haskell98 code in nofib.
But is there a way to test realistic modern code without running into this problem?
This may be a totally crazy idea, but has any thought been given to a "phone home"-type model? A very simplistic approach:

a) Before GHC compiles a file, it computes a hash of the file.
b) GHC has internal profiling "markers" in its compilation pipeline.
c) GHC sends those "markers" + hash to some semi-centralized, highly-available service somewhere under *.haskell.org.

The idea is that "hashes are equal" => "performance should be comparable". Ideally, it would probably be best to have the full source, but that may be a tougher sell, obviously. (Either way, it would obviously have to be opt-in.)

There are a few obvious problems with this, but an obvious win would be that it could be done on a massively *decentralized* scale. The most problematic part might be that it wouldn't be able to track things like "I changed $this_line and now it compiles twice as slowly".

Actually, now that I think about it: what if this were integrated into the Cabal infrastructure? If I could specify "upload-perf-numbers: True" in my .cabal file, any project on (e.g.) GitHub that wanted to opt in could do so, build using Travis, and voila!

What do you think? Totally crazy, or could it be workable?

Regards,
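For concreteness, the hash-plus-markers payload from steps a)–c) could be sketched roughly as below. Everything here is made up for illustration: GHC has no such facility, the `Marker` shape and `report` function are hypothetical, and FNV-1a merely stands in for whatever fingerprint the compiler would actually use.

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word64)

-- Fingerprint the module's source text: equal hashes mean the compiler
-- saw the same input, so performance numbers should be comparable.
-- FNV-1a is used here only because it is easy to write with base alone.
fnv1a :: String -> Word64
fnv1a = foldl step 0xcbf29ce484222325
  where
    step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3

-- A "marker": a named point in the compilation pipeline together with
-- a measurement (say, milliseconds spent in that phase).
data Marker = Marker { phase :: String, millis :: Double }
  deriving Show

-- The payload that would be phoned home: source hash + markers.
-- (In reality this would be serialized and POSTed to the service.)
report :: String -> [Marker] -> (Word64, [Marker])
report src markers = (fnv1a src, markers)

main :: IO ()
main = do
  let src     = "module Main where\nmain = putStrLn \"hi\"\n"
      markers = [Marker "parser" 1.2, Marker "simplifier" 8.4]
      (h, ms) = report src markers
  putStrLn ("hash: " ++ show h)
  mapM_ print ms
```

The point of hashing rather than uploading source is exactly the trade-off mentioned above: the service can aggregate timings across machines for identical inputs without anyone having to share their code.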