
On Sun, Jan 26, 2014 at 5:16 AM, Joachim Breitner wrote:
just to clarify: For what purpose do you want the nightlies? To check whether GHC validates cleanly, to compare performance numbers, or to get hold of up-to-date binary distributions?
In practice: all three. Developers want logs to see what went wrong. Users want consistent snapshot distributions to test against. Both are legitimate uses that such infrastructure covers.
For the first, I’d really really like to see something that runs before a change enters master, so that non-validating mistakes like http://git.haskell.org/ghc.git/commitdiff/b26e2f92c5c6f77fe361293a128da637e7... (without the corresponding change in http://git.haskell.org/ghc.git/commitdiff/59f491a933ec7380698b776e14c3753c2a...) do not reach master in the first place.
I’m happy to help set up such an infrastructure, including designing the precise workflow.
This is doable, but the question is: to what extent? There are literally dozens of build configurations that could break with any given patch, without others breaking:

* Profiling could break. Or profiling GHC itself (but not other, smaller things) could break.
* Dynamic linking could break.
* Rarer configurations could break, but only in some cases, e.g. threaded + profiling, or LLVM + profiling, or LLVM + dynamic linking, etc.
* Static linking for GHCi could break on platforms that now use dynamic linking by default (as we saw happen when I broke it).
* GHC may only expose certain faulty behavior at certain optimization levels, both in bootstrapping itself and in the tests - so maybe ./validate looks mostly OK, but -O2 is not.
* Bootstrapping the build with different compilers may break (i.e. an unintentional backwards-incompatible change is introduced in the stage1 build).
* Any of these could theoretically break depending on things like the host platform.
* The testsuite runs 'fast' by default. It would need to run slowly to potentially uncover more problems, but that greatly increases the runtime.
* Not all machines are equal, and some will take dramatically longer or shorter amounts of time to build (and subsequently to uncover these problems).

In my experience, all of the above are absolutely possible failure scenarios. Also, in practice, many of these problems either need an incredible amount of cross-communication to fix (between the bot runner and the developer) or require direct access to the machine in order to debug. Not everyone has that hardware, and not everyone will even be willing to give access, for legitimate reasons - some people have offered to run build bots, but behind corporate infrastructure at places like IBM. And with the amount of time many configurations require, the turnaround time for some changes could become incredibly large and frustrating.
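To make the combinatorics concrete, here is a back-of-the-envelope sketch. The axis values below are illustrative placeholders, not GHC's actual build flavours, but even this small hypothetical matrix multiplies out quickly per platform:

```shell
#!/bin/sh
# Enumerate an illustrative build-configuration matrix. The axis values
# are made up for the sake of the example; real GHC flavours differ.
ways="vanilla profiling threaded+profiling"
linking="static dynamic"
backend="ncg llvm"

count=0
for w in $ways; do
  for l in $linking; do
    for b in $backend; do
      count=$((count + 1))
      echo "config $count: $w / $l / $b"
    done
  done
done
echo "total: $count"
```

Three small axes already give 3 * 2 * 2 = 12 configurations, and that is before optimization levels, bootstrap compilers, and host platforms each multiply the total again.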
I think if we were to introduce pre-push validation, the only thing it could reasonably run would be ./validate and nothing else. And even then, on high-powered ARM platforms for example, this will still take literally *hours*, which is a significantly longer wait than most people are used to.
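For illustration, the simplest client-side form of this would be a git pre-push hook that refuses the push unless ./validate succeeds. This is only a sketch: the hook path (.git/hooks/pre-push) and exit-status convention are standard git behavior, but wiring it up this way is my assumption, not an existing GHC workflow. The body is written as a function so the validate command can be swapped out:

```shell
#!/bin/sh
# Sketch of a .git/hooks/pre-push hook body. A non-zero exit status
# from a pre-push hook makes git abort the push. The validate command
# is a parameter so the logic can be exercised without a GHC tree.
run_validate() {
  validate_cmd="${1:-./validate}"
  echo "pre-push: running $validate_cmd (this can take hours)..."
  if ! $validate_cmd; then
    echo "pre-push: $validate_cmd failed; refusing the push." >&2
    return 1
  fi
  echo "pre-push: $validate_cmd passed."
}
```

In a real hook you would simply end the script with `run_validate || exit 1`; git aborts the push whenever the hook exits non-zero.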
For the second and third, a build farm like the builders would of course be great. I actually once got an Igloo Snowball from Linaro for that purpose, but never finished setting it up properly. So once the builders are revived, I’d like to finally do that.
If you're willing to contribute ARM builders, both Ben Gamari and I would be very happy to have you do so (he and I are the only people actively doing lots of ARM work, and frankly, Ben is doing most of it).
Greetings, Joachim
-- Joachim “nomeata” Breitner mail@joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata@joachim-breitner.de • GPG-Key: 0x4743206C Debian Developer: nomeata@debian.org
_______________________________________________ ghc-devs mailing list ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs
-- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/