
On Thu, Apr 1, 2010 at 3:52 PM, Thomas Tuegel wrote:
I propose to build a test suite as its own executable, but to avoid the problem of granularity by producing an output file detailing the success or failure of individual tests and any relevant error messages. The format of the file would be standardized through library routines I propose to write; these routines would run tests with HUnit or QuickCheck and process them into a common format. Cabal, or any other utility, could read this file to determine the state of the test suite. Perhaps Cabal could even warn the user about installing packages with failing tests.
There are a few frameworks that provide limited degrees of this functionality. I've recently added support to test-framework for gathering results into an XML format that complies with at least some (maybe all?) JUnit XML parsers. I specifically targeted JUnit XML so it would be easy to use existing continuous integration systems as-is, but the format is not tailored to Haskell tests. It would be nice, for example, to see how many successful QuickCheck inputs were run; and the concepts of packages and classes had to be munged to fit Haskell modules and test groupings. I need to clean up the code and get it over to Max for review before it'll be widely available, but that's just a matter of finding the time (possibly next week).
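For reference, JUnit-style XML looks roughly like the sketch below; the suite, class, and test names here are made up for illustration, and mapping Haskell modules and test groups onto "classname" is exactly the munging described above:

```xml
<testsuite name="Foo.Tests" tests="2" failures="1" errors="0">
  <testcase classname="Foo.Tests" name="testBar" time="0.002"/>
  <testcase classname="Foo.Tests" name="testBaz" time="0.001">
    <failure message="Falsifiable after 3 tests">counterexample here</failure>
  </testcase>
</testsuite>
```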
module Main where

import Foo
import Test.QuickCheck
import Distribution.Test -- This module is part of the project I propose

main = runTests
    [ ("testBar", wrap testBar)   -- (name, test)
    , ("testBaz", wrap testBaz)
    ]
'runTests' and 'wrap' would be provided by 'Distribution.Test'. 'wrap' would standardize the output of test routines. For QuickCheck tests, it would probably look like:
This is very similar to what test-framework (and other libs.) are doing -- it's well worth looking into them.
wrap :: Testable a => a -> IO (Bool, String)
where the Bool indicates success and the String can be an error message the test produced. 'runTests' would take the list of tests, format their results, and write the output to a file:
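As a minimal sketch of what 'runTests' might do, here is a base-only version that writes one line per test result; the file name and the tab-separated format are placeholders of my own, not part of the proposal:

```haskell
module Main where

import System.IO (IOMode (WriteMode), hPutStrLn, withFile)

-- Hypothetical sketch of 'runTests': run each named test action and
-- record one "name<TAB>PASS/FAIL<TAB>message" line per result.
runTests :: [(String, IO (Bool, String))] -> IO ()
runTests tests =
  withFile "test-results.txt" WriteMode $ \h ->
    mapM_ (report h) tests
  where
    report h (name, action) = do
      (ok, msg) <- action
      hPutStrLn h (name ++ "\t" ++ (if ok then "PASS" else "FAIL") ++ "\t" ++ msg)

main :: IO ()
main = runTests
  [ ("alwaysPasses", return (True, ""))
  , ("alwaysFails",  return (False, "expected 2, got 3"))
  ]
```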
Keep in mind that there are at least two ways a test can fail -- through errors or false assertions, and it's useful to distinguish between those. As indicated above, I think this bit of the problem has been largely solved -- at the least, there has been a lot of work on designing test frameworks for most languages, and we should be able to take advantage of that here.
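To illustrate that distinction, a 'wrap' for plain Bool tests could catch exceptions separately from false results. This sketch uses only base and handles just the Bool case; the real 'wrap' would need to dispatch on QuickCheck's Testable class:

```haskell
module Main where

import Control.Exception (SomeException, evaluate, try)

-- Hypothetical sketch: distinguish a test that *errors* (throws) from
-- one that runs fine but returns False.  The proposed 'wrap' is
-- Testable a => a -> IO (Bool, String); this only covers a ~ Bool.
wrapBool :: Bool -> IO (Bool, String)
wrapBool t = do
  r <- try (evaluate t) :: IO (Either SomeException Bool)
  return $ case r of
    Left e      -> (False, "error: " ++ show e)  -- exception during the test
    Right True  -> (True,  "")
    Right False -> (False, "assertion failed")   -- ran fine, but was False

main :: IO ()
main = do
  wrapBool (1 + 1 == 2)   >>= print
  wrapBool (error "boom") >>= print
```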
The test suite would be included in the package description file with a stanza such as:
Test
    main-is: Test.hs
    build-depends: foo, QuickCheck, Cabal
I've been thinking about this as well, and I like this general idea, but I'm not (yet) convinced it's the best. That's probably just because I'm cautious though :)
This would take all the same options as an 'Executable' stanza, but would tell Cabal to run this executable when './Setup test' is invoked. This of course requires Cabal to support building executables that depend on the library in the same package. Since version 1.8, Cabal supposedly supports this, but my experiments indicate the support is a little broken. (GHC is invoked with the '-package-id' option, but Cabal only gives it the package name. Fixing this would naturally be on the agenda for this project.)
At this point, the package author need only run:
$ ./Setup configure
$ ./Setup build
$ ./Setup test
My general feeling has been that Setup is being discouraged in favor of using 'cabal <foo>', but I don't have any solid evidence for that (and I could very well be wrong!). The two do slightly different things, so I think it's wise to figure out which idiom is most likely to be used and work with that. --Rogan