ANNOUNCE: Chell: A quiet test runner (low-output alternative to test-framework)

Hi John, I am wondering if you have seen the hspec package? [1] It seems to solve all the problems you are addressing with chell, including that it silences HUnit output. We are using it for all the Yesod tests now. Thanks, Greg Weber [1]: http://hackage.haskell.org/packages/archive/hspec/0.6.1/doc/html/Test-Hspec....

I have, but it's not quite what I'm looking for:
- I don't want to silence HUnit's output; I just don't want anything
to show on the console when a test *passes*. Showing output on a
failure is good.
- I'm not interested in BDD. Not to say it's not useful, but it
doesn't match my style of testing (which uses mostly pass/fail
assertions and properties).
On Thu, Aug 11, 2011 at 07:18, Greg Weber wrote: [...]

It silences HUnit's output, but will tell you what happens when there is a
failure, which I think is what you want. There are a few available output
formatters if you don't like the default output, or you can write your own
output formatter.
BDD is really a red herring. Instead of using function names to name tests
you can use strings, which are inherently more descriptive. In chell you
already have `assertions "numbers"`; in hspec it would be `it "numbers"`.
The preferred style is to remove `test test_Numbers` and the `test_Numbers`
definition, which are redundant in this case, and instead place that inline
where you define the suite, although that is optional.
So I really can't tell any difference between "BDD" and "pass/fail
assertions". You still just use assertions in hspec.
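The point about string names can be sketched in plain Haskell (an illustrative toy, not the real chell or hspec API): the test's name is ordinary data, declared inline where the suite is built, and a reporter can use it directly.

```haskell
-- a test here is just a name paired with a pass/fail result
type Test = (String, Bool)

-- string-named tests, declared inline where the suite is defined
suite :: [Test]
suite =
  [ ("can check for equality", 1 == (1 :: Int))
  , ("can compare order",      2 >  (1 :: Int))
  ]

-- report only the names of failing tests
failures :: [Test] -> [String]
failures ts = [ name | (name, ok) <- ts, not ok ]
```

Here `failures suite` is `[]`, since both checks pass; a failing entry's name would be listed instead.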
On Thu, Aug 11, 2011 at 7:36 AM, John Millikin wrote: [...]

On Thu, Aug 11, 2011 at 07:52, Greg Weber wrote: [...]
I'm a bit confused. From what I can tell, HUnit does not output *anything* just from running a test -- the result has to be printed manually. What are you silencing?

I am confused also, as to both what output you don't like that motivated
chell and what exactly hspec silences :) Suffice to say I am able to get a
small relevant error message on failure with hspec. I am adding the hspec
maintainer to this e-mail; he can answer any of your questions.
On Thu, Aug 11, 2011 at 8:03 AM, John Millikin wrote: [...]

On Thu, Aug 11, 2011 at 08:17, Greg Weber wrote: [...]
The output I didn't like wasn't coming from HUnit, it was coming from the test aggregator I used (test-framework). It prints one line per test case run, whether it passed or failed. That means every time I ran my test suite, it would print *thousands* of lines to the terminal. Any failure immediately scrolled up and out of sight, so I'd have to either Ctrl-C and hunt it down, or wait for the final report when all the tests had finished running. Chell does the same thing as test-framework (aggregates tests into suites, runs them, reports results), but does so quietly. It only reports failed and aborted tests.
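The quiet behaviour described here can be modelled in a few lines of plain Haskell (a toy sketch of the idea, not Chell's actual implementation): run every check, stay silent on success, and print a line only for failed or aborted tests.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- a toy quiet runner: run each named check and print a line only on
-- failure or abort, returning the number of tests that did not pass
runQuiet :: [(String, Bool)] -> IO Int
runQuiet tests = do
  results <- mapM runOne tests
  return (length (filter not results))
  where
    runOne (name, check) = do
      result <- try (evaluate check) :: IO (Either SomeException Bool)
      case result of
        Right True  -> return True   -- pass: print nothing at all
        Right False -> putStrLn ("FAIL:  " ++ name) >> return False
        Left err    -> putStrLn ("ABORT: " ++ name ++ ": " ++ show err)
                         >> return False
```

A suite of thousands of passing tests produces no terminal output at all this way, so any failure line is immediately visible instead of scrolling out of sight.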

Is this different than the "--hide-successes" flag for test-framework? Looks
like it was added a few months back:
https://github.com/batterseapower/test-framework/commit/afd7eeced9a4777293af...
-n
On Thu, Aug 11, 2011 at 8:21 AM, John Millikin wrote: [...]

Possible -- I ran into dependency conflicts between
t-f/t-f-q/quickcheck when trying to migrate to test-framework 0.4, so
I clamped all my test subprojects to 0.3.
On Thu, Aug 11, 2011 at 09:09, Nathan Howell wrote: [...]

As Greg pointed out, hspec does have an option to output just the failed tests. I looked at the example on the Chell project home page and converted the example tests into these hspec-style specs:
    import Test.Hspec (Specs, descriptions, describe, it)
    import Test.Hspec.Runner (hHspecWithFormat)
    import Test.Hspec.Formatters (failed_examples)
    import Test.Hspec.HUnit
    import Test.HUnit
    import System.IO (stdout)

    -- some functions to test
    equal = (==)
    greater = (>)
    equalWithin = undefined
    equalLines = (==)

    specs :: IO Specs
    specs = descriptions
      [ describe "number comparison module"
          [ it "can check for equality"
              (assertBool "1 should equal 1" $ equal 1 1)
          , it "can compare order"
              (assertBool "2 should be greater than 1" $ greater 2 1)
          , it "can compare equality with floating point numbers"
              (assertBool "1.0001 should be close enough to 1.0" $ equalWithin 1.0001 1.0 0.01)
          ]
      , describe "text comparison module"
          [ it "can compare strings for equality"
              (let str1 = "foo\nbar\nbaz" :: String
                   str2 = "foo\nbar\nqux" :: String
               in assertBool "foo\\nbar\\nbaz shouldn't equal foo\\nbar\\nqux" $ equalLines str1 str2)
          ]
      ]

    main = hHspecWithFormat (failed_examples True) stdout specs

And when run, got the following output in red text since it's only reporting failures:

    x can compare equality with floating point numbers FAILED [1]
    x can compare strings for equality FAILED [2]

    1) number comparison module can compare equality with floating point numbers FAILED
    Prelude.undefined

    2) text comparison module can compare strings for equality FAILED
    foo\nbar\nbaz shouldn't equal foo\nbar\nqux

    Finished in 0.0000 seconds

    4 examples, 2 failures

You can provide your own formatter if that's not what you'd like to see. You don't have to use the HUnit assertion text either; you could use the following function to make your specs even more like your Chell example, at the cost of losing the extra output description:

    assert = assertBool ""
Hspec uses HUnit TestCases and assertions but also supports QuickCheck properties almost exactly the same way Chell does. The hspec project homepage (https://github.com/trystan/hspec) has more examples, including the specs for hspec itself.
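The properties mentioned here are, at bottom, just predicates checked over many inputs. A toy stand-in for what QuickCheck automates with random generation (an illustrative sketch only, not the QuickCheck API):

```haskell
-- check a predicate over a fixed list of sample inputs
-- (QuickCheck instead generates the inputs randomly)
holdsFor :: (a -> Bool) -> [a] -> Bool
holdsFor prop = all prop

-- example property: reversing a list twice gives back the original
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs
```

For example, `holdsFor prop_reverseTwice [[], [1], [2,1,3]]` evaluates to `True`; a property that fails for any sample input makes `holdsFor` return `False`.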
Trystan Spangler
From: "John Millikin" [...]
participants (4)
- Greg Weber
- John Millikin
- Nathan Howell
- trystan.s@comcast.net