
Going through some validate.sh results, I found compilation errors due to missing libraries, like this one:

=====> stm052(normal) 4088 of 4108 [0, 21, 0]
cd ../../libraries/stm/tests && 'C:/msys64/home/Gintas/ghc/bindisttest/install dir/bin/ghc.exe' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-warn-tabs -fno-ghci-history -o stm052 stm052.hs -package stm >stm052.comp.stderr 2>&1
Compile failed (status 256) errors were:
stm052.hs:10:8:
    Could not find module ‘System.Random’
    Use -v to see a list of the files searched for.

I was surprised to see that these are not listed in the test summary at the end of the test run, but only counted towards the "X had missing libraries" row. That setup makes it really easy to miss them, and I can't think of a good reason to sweep such tests under the rug; a broken test is a failing test. How about at least listing them in the list of failed tests at the end? At least in this case the error does not seem to be due to a missing external dependency (which probably would not be a great idea anyway...).

The test does pass if I remove the "-no-user-package-db" argument. What was the intention behind that flag? Does packaging somehow work differently on Linux? (I'm currently testing on Windows.)
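My guess is that random is only registered in the user package database on this machine, and "-no-user-package-db" then hides it from the compile. Something along these lines should confirm where it is registered (a rough sketch, not verified on this setup; use the ghc-pkg that belongs to the bindisttest compiler):

    # Check which package databases have 'random' registered.
    ghc-pkg list random            # every database ghc-pkg knows about
    ghc-pkg --global list random   # only the installation-wide (global) database
    ghc-pkg --user list random     # only the per-user database

If random only shows up under the user database, that would at least explain why dropping the flag makes the test compile.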
On a related note, how about separating test failures from failing performance tests ("stat too good" / "stat not good enough")? The latter are important, but they seem to be much more prone to failing without good reason. Some color coding of the test runner output would also help.

--
Gintautas Miliauskas