Re: Any remaining test patches?

On 18 May 2011 15:01, Thomas Tuegel
On Wed, May 18, 2011 at 4:34 AM, Johan Tibell wrote:
Sorry for having to leave so abruptly yesterday. Could someone please summarize the discussion? Are we going with GetOpt?
We didn't reach a decision; primarily, I think, because I had to leave before Duncan really had a chance to review the patch. (There really wasn't any discussion after you had to leave.) But, my understanding of the discussion before you left is that Duncan would prefer a custom solution, rather than GetOpt. (Duncan, please correct me if that isn't the case).
Ok, I've looked at the Distribution.TestSuite API and had a go at designing something along the lines of what I was thinking. I'll try and explain this design and compare three designs: Thomas's current patch, my prototype and the test-framework test interface.

So the design in the current patch is nice and short:

    class Testable t where
        name           :: t -> String
        options        :: [OptDescr (t -> t)]
        defaultOptions :: t -> IO [String]
        run            :: t -> IO Result

    data Result = Pass | Fail ErrorMessage | Error ErrorMessage
      deriving (Read, Show, Eq)

    data Test = forall t. Testable t => Test t

The test suite will then expose a value of type [Test]. So the interesting aspects are:

* the API uses a type class and an existential wrapper to hide the specific type of each test;
* the default options are obtained using IO;
* it uses GetOpt to accumulate key/value style string options into the test instance, which can then be run. (Presumably at least the default options must be added before the test can be run.)

Here's the equivalent bit of my design (the TestResult is the same):

    data TestInstance = TestInstance {
        run            :: IO TestResult,
        name           :: String,
        concurrentSafe :: Bool,
        expectedFail   :: Bool,
        options        :: [OptionDescr],
        setOption      :: String -> String -> Either String TestInstance
      }

Minor difference: extra flags for indicating whether the test is safe for concurrent execution (e.g. if it's pure or has only local side effects) and whether the test case is in fact expected to fail (so numbers of expected vs unexpected failures can be reported).

The first major difference is that we do the "OO" bit differently. Instead of using a type class and an existential wrapper, we use a record. These two styles are pretty similar, but the record avoids the use of existential types. Since the test agents using this interface never know, or need to know, the actual type of the test (they will always use Test above and manipulate it via the type class interface), there is no advantage to using a type class.

The option setting is rather similar in both designs. In both you end up using a function of type String -> String -> Test -> Test to accumulate a key/value option pair into the test instance. This lets the user add any number of options before running the test. The slight difference in style is down to the different OO encoding (class based vs record based).

The major difference with the options is of course how they are validated and described (which I have not yet detailed). GetOpt does not actually provide any validation of options; that is, for an option --foo=bar, there is no parsing/validation of the string "bar". GetOpt just parses the overall command line string ["--foo=bar", ...]. This setOption function gives the possibility of validating the value string and producing an error message.

Note that I don't have an IO action to get any default options. In this design it must always be possible to run the test with no options supplied. Internally, if a test does need some options specified then it just has to fill them in with its internal defaults.

So, all the options are just (String, String) key/value pairs and we can validate them. Obviously we also need some discoverability of what options are available, and some information to construct a sensible user interface (either command line, config file, web UI or IDE GUI):

    data OptionDescr = OptionDescr {
        optionName        :: String,
        optionDescription :: String,
        optionType        :: OptionType,
        optionDefault     :: Maybe String
      }

The name and description are much like in GetOpt.
The optional default is purely informative, for user interfaces that are able to display defaults (e.g. a web form). The OptionType is to give a user interface some extra info on what is expected, so that it can present an appropriate interface and do some up-front validation. Of course the real validation is done via setOption, or, for things that can only really be validated by doing IO (like reading files), deferred to when the test is actually run. So this list is just informative:

    data OptionType
      = OptionFile {
          optionFileMustExist  :: Bool,
          optionFileIsDir      :: Bool,
          optionFileExtensions :: [String]
        }
      | OptionString {
          optionStringMultiline :: Bool
        }
      | OptionNumber {
          optionNumberIsInt  :: Bool,
          optionNumberBounds :: (Maybe String, Maybe String)
        }
      | OptionBool
      | OptionEnum [String]
      | OptionSet [String]
      | OptionRngSeed

You can imagine how each of these could be used as hints to make a more helpful web form or GUI interface. They could also let a command line UI do some up-front validation too. The most general case is OptionString, for when no more specific alternative is appropriate.

Now, I also looked again at what test-framework does, and it has some interesting things and some things that are unnecessary for our purposes. There are two things there that I think we should consider. The first is the grouping of tests into related bundles, or in general a hierarchy:

    data Test = forall i r t. Testlike i r t => Test TestName t
              | TestGroup TestName [Test]
              | PlusTestOptions TestOptions Test

So we either have an individual test or a group of tests (which themselves can contain groups etc). I'm sure this is nice for displaying in a UI what tests can be run and for displaying the results of a run. Imagine an interactive UI that lets the user select a group of tests to be run, or an HTML report showing results by group. We could do something like this:

    data Tests = Test TestInstance
               | TestGroup TestName [Tests]

The PlusTestOptions is interesting. As I understand this, it is really from the point of view of the author of the test suite: they are providing additional default test options for this test or group of tests. I don't think this is necessary from the point of view of the test agent; it does not need to know this. The above could be lifted into a structure without PlusTestOptions simply by pre-applying the test options to that subtree. On the other hand, perhaps it is useful for an agent presenting a user interface to be able to know that a whole group of tests all take the same set of options. So perhaps we should have:

    data Tests = Test TestInstance
               | TestGroup TestName [Tests]
               | ExtraTestOptions [OptionDescr] Tests

This means that all of the tests below an ExtraTestOptions take the given set of options. Individual tests would then usually have an empty options set, or perhaps just add one or two for a special case. This grouping of test options should enable better user interfaces.

Another interesting part of test-framework is what you get when you run a test:

    runTest :: t -> IO (i :~> r, IO ())

So the main point here is that it gives us progress/logging while the test is being run. The i :~> r thing is basically just a lazy list with values of type i spat out along the way and finally ending in a result of type r. The way it's done in test-framework is to have a pure i :~> r lazy list and a separate IO () action that does the real work. The IO action performs the test and pokes the results into a Concurrent.Chan.
The i :~> r lazy list is extracted from the Chan using getChanContents. Honestly, I don't know why Max did it this way; a much more direct approach, it seems to me, is:

    data TestProgress = Finished TestResult
                      | Progress String (IO TestProgress)

So it's not a lazy list, it's a list with explicit IO to get the tail. The test-framework approach could be lifted into this fairly easily, I think.

Note that this kind of progress reporting is actually quite important. You cannot have each test logging information to stdout if you're going to run any tests concurrently. But it is quite nice to see progress (e.g. from QuickCheck) for the tests that are currently running, which is exactly what test-framework does using a 'top'-like console interface (you can imagine GUI ones too).

Comments, thoughts on any/all of this appreciated.

Duncan
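As an illustration of how a test agent would consume this shape (a sketch, not from the thread; it assumes the TestProgress and TestResult types above), the agent just keeps forcing the IO tail and handles each progress message as it arrives:

    -- Drive a progress-reporting test to completion, passing each
    -- intermediate message to a handler (console, GUI, log file, ...).
    collectProgress :: (String -> IO ()) -> IO TestProgress -> IO TestResult
    collectProgress report step = do
        p <- step
        case p of
          Finished result   -> return result
          Progress msg next -> report msg >> collectProgress report next

A concurrent runner can give each running test its own handler, so per-test progress never interleaves on stdout.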

On 21 May 2011 15:20, Duncan Coutts
So we either have an individual test or a group of tests (which themselves can contain groups etc). I'm sure this is nice for displaying in a UI what tests can be run and displaying results of a run. Imagine an interactive UI that lets the user select a group of tests to be run, or an HTML report showing results by group.
We could do something like this:
    data Tests = Test TestInstance
               | TestGroup TestName [Tests]
Actually, we need more than this. We need to be able to do IO to enumerate the tests in the group.

Consider the ghc test suite. It is an interesting and reasonably large-scale example. I think we ought to make sure that our test suite interface enables us to wrap the current ghc testsuite without having to do a major rewrite. The ghc test suite is implemented as a whole bunch of files arranged into directories, with a few test-specific scripts in a few places. So it matches the hierarchy notion well enough, but to list the tests in each group (in each dir, in the ghc testsuite case) means doing IO to list the files in the directory and pull out the ones that are recognised as test cases.

The ghc example also makes me think that we want a bit more declarative info that could be presented in a web/GUI interface. I think we want optional descriptions on each group of tests, or on individual tests. Many of ghc's tests have a notion of "way": you can run the same test with the normal, ghci, profiled, dynamic etc ways. Is there anything we can do to expose that information? I don't think we want to make individual test instances parameterised, but perhaps we can do something slightly more detailed than TestGroup to indicate that some test instances are related to each other in certain ways. For the ghc "way" example, suppose we annotated each test with its way. A UI could then allow things like filtering on these attributes. For ghc's testsuite the use case is to let users do things like only run tests the ghci way, or exclude the opt way. The current ghc driver does things like grouping failing tests so that the test name is listed only once, and each failing way is listed together, e.g.:

    T5636 (ghci, opt, prof)

So what if, in addition to this system of test options (inputs), we had a similar declarative system for describing test attributes? What might it look like, and how could test agents expose it so that users can make use of it?

Duncan
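One way to picture the IO-enumeration requirement (a sketch of mine, not agreed API: TestGroupIO and makeTestFromFile are placeholder names, and TestName is assumed to be a String) is an extra alternative whose children are produced by an IO action:

    import System.Directory (getDirectoryContents)
    import System.FilePath ((</>), takeExtension)

    data Tests = Test TestInstance
               | TestGroup TestName [Tests]
               | TestGroupIO TestName (IO [Tests])

    -- A ghc-testsuite-style group: one test per recognised file in a
    -- directory, discovered only when the agent asks for the children.
    directoryGroup :: FilePath -> Tests
    directoryGroup dir = TestGroupIO dir $ do
        entries <- getDirectoryContents dir
        return [ Test (makeTestFromFile (dir </> f))
               | f <- entries, takeExtension f == ".hs" ]

A test agent would run the IO action once, when (and if) the user expands or selects that part of the hierarchy.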

On Sat, May 21, 2011 at 7:36 PM, Duncan Coutts
Actually, we need more than this. We need to be able to do IO to enumerate the tests in the group.
Consider the ghc test suite. It is an interesting and reasonably large-scale example. I think we ought to make sure that our test suite interface enables us to wrap the current ghc testsuite without having to do a major rewrite.
The ghc test suite is implemented as a whole bunch of files arranged into directories with a few test-specific scripts in a few places. So it matches the hierarchy notion well enough, but to list the tests in each group (in each dir in the ghc testsuite case) means doing IO to list the files in the directory and pull out the ones that are recognised as test cases.
The ghc example also makes me think that we want a bit more declarative info that could be presented in a web/gui interface. I think we want optional descriptions on each group of tests, or on individual tests. Many of ghc's tests have a notion of "way", that is you can run the same test with normal, ghci, profiled, dynamic etc ways.
Is there anything we can do to expose that information? I don't think we want to make individual test instances parameterised, but perhaps we can do something slightly more detailed than TestGroup to indicate that some test instances are related to each other in certain ways. For the ghc "way" example, suppose we annotated each test with its way. A UI could then allow things like filtering on these attributes. For ghc's testsuite the use case is to let users do things like only run tests the ghci way, or exclude the opt way. In the ghc test suite the current driver does things like grouping failing tests so that the test name is listed only once, and each failing way is listed together, e.g.:
T5636 (ghci, opt, prof)
So what if in addition to this system of test options (inputs) we had a similar declarative system for describing test attributes. What might it look like and how could test agents expose this so that users can make use of it?
I'm worried about generalizing from a single instance (i.e. GHC). If we'd like to add some flexibility we could allow tests to carry arbitrary tags:

    class TestInstance a where
        tags :: a -> [String]

Test agents could use tags to group tests in different ways. We could add support for tags in Cabal's test runner so that they can be used to select which tests to run. For example,

    cabal test --test-tags="smoke -flaky"

could run all non-flaky smoke tests.

Johan
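To make the selection semantics concrete (my sketch; the '-' prefix for exclusion follows Johan's example, the rest is assumption), a runner could interpret the tag expression like this:

    import Data.List (isPrefixOf, partition)

    -- Does a test's tag set satisfy a spec like ["smoke", "-flaky"]?
    -- Plain tags are required; '-'-prefixed tags must be absent.
    matchesTags :: [String] -> [String] -> Bool
    matchesTags spec testTags =
        all (`elem` testTags) required && all (`notElem` testTags) excluded
      where
        (negated, required) = partition ("-" `isPrefixOf`) spec
        excluded            = map (drop 1) negated

    -- matchesTags ["smoke", "-flaky"] ["smoke", "network"] == True
    -- matchesTags ["smoke", "-flaky"] ["smoke", "flaky"]   == False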

On Mon, May 23, 2011 at 03:48:05PM +0200, Johan Tibell wrote:
On Sat, May 21, 2011 at 7:36 PM, Duncan Coutts
T5636 (ghci, opt, prof)
So what if in addition to this system of test options (inputs) we had a similar declarative system for describing test attributes. What might it look like and how could test agents expose this so that users can make use of it?
I'm worried about generalizing from a single instance (i.e. GHC).
If we'd like to add some flexibility we could allow tests to carry arbitrary tags:
    class TestInstance a where
        tags :: a -> [String]
Just because GHC's testsuite distinguishes between ways and tests, I'm not sure that Cabal's testsuite framework needs to do so. Ways could just be encoded into the hierarchy, e.g. GHC's testsuite could include the tests

    typechecker.T5636.ghci
    typechecker.T5636.opt
    typechecker.T5636.prof

rather than the test typechecker.T5636 with 3 tagged variants. Perhaps having tags /as well/ would make sense, so typechecker.T5636.ghci would be tagged ["typechecker", "ghci"] and it would then be possible to run 'all tests tagged "ghci"' or 'all tests tagged "typechecker"'.

Thanks
Ian
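Ian's scheme also means the tag set can simply fall out of the hierarchical name; a sketch (mine) of that derivation:

    -- Split a dot-separated test name into its path components, which
    -- can double as tags: tagsFromName "typechecker.T5636.ghci" gives
    -- ["typechecker", "T5636", "ghci"].
    tagsFromName :: String -> [String]
    tagsFromName name = case break (== '.') name of
        (part, [])     -> [part]
        (part, _:rest) -> part : tagsFromName rest

With that, 'run all tests tagged "ghci"' comes for free from the name hierarchy, and explicit tags are only needed for cross-cutting attributes.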

On Mon, 2011-05-23 at 15:14 +0100, Ian Lynagh wrote:
If we'd like to add some flexibility we could allow tests to carry arbitrary tags:
    class TestInstance a where
        tags :: a -> [String]

Just because GHC's testsuite distinguishes between ways and tests, I'm not sure if Cabal's testsuite framework needs to do so. Ways could just be encoded into the hierarchy, e.g. GHC's testsuite could include the tests

    typechecker.T5636.ghci
    typechecker.T5636.opt
    typechecker.T5636.prof

rather than the test typechecker.T5636 with 3 tagged variants.
Perhaps having tags /as well/ would make sense, so typechecker.T5636.ghci would be tagged ["typechecker", "ghci"] and it would then be possible to run 'all tests tagged "ghci"' or 'all tests tagged "typechecker"'.
Right, I was thinking of the latter: each way is a separate test, but they're tagged so you know they're related in some way.

Duncan

On Mon, May 23, 2011 at 8:48 AM, Johan Tibell
I'm worried about generalizing from a single instance (i.e. GHC).
If we'd like to add some flexibility we could allow tests to carry arbitrary tags:
    class TestInstance a where
        tags :: a -> [String]
Test agents could use tags to group tests in different ways. We could add support for tags in Cabal's test runner so that they can be used to select which tests to run. For example,
cabal test --test-tags="smoke -flaky"
could run all non-flaky smoke tests.
I think this is a great, inexpensive way to add a lot of flexibility to the test system. Anyway, it seems tags are all the rage these days! (Could this be a marketing point? "Cabal Test: Now with Tags!")

-- Thomas Tuegel

On Sat, May 21, 2011 at 4:20 PM, Duncan Coutts
Here's the equivalent bit of my design (the TestResult is the same):
    data TestInstance = TestInstance {
        run            :: IO TestResult,
        name           :: String,
        concurrentSafe :: Bool,
        expectedFail   :: Bool,
        options        :: [OptionDescr],
        setOption      :: String -> String -> Either String TestInstance
      }
I cannot think of a straightforward way to implement setOption in the above design. One would have to "store" options in the run closure. A type class approach would allow the test framework to use extra fields in the record that implements the type class to store the options, e.g.

    class TestInstance a where
        setOption :: String -> String -> a -> Either String a
        run       :: a -> IO TestResult

    data MyTestType = MyTestType {
        options  :: [MyOptionType],  -- Test framework specific
        property :: Property         -- Test framework specific
      }

    instance TestInstance MyTestType where
        setOption k v t = case parse k v of
            Left _    -> Left "Failed to parse option"
            Right opt -> Right $ t { options = opt : options t }
        run t = runProperty (property t) (options t)

    -- | Convert a specific test type in this test framework to the
    -- test type wrapper.
    test prop = MyTestType [] prop
Minor difference: extra flags for indicating whether the test is safe for concurrent execution (e.g. if it's pure or has only local side effects) and whether the test case is in fact expected to fail (so numbers of expected vs unexpected failures can be reported).
I prefer "exclusive" to "concurrentSafe", as there might be tests that are concurrency safe but should still be run in isolation. Not a big difference in practice though. Do we really need expectedFail, it feels quite GHC specific and there are other options, like commenting out the test or using a tags mechanism (see my reply to your other email). Here are some other attributes me might want to consider: * size - How long is this test expected to take? You might want to run all small and medium size tests on every commit but reserve large and huge tests for before a release. * timeout - The maximum amount of time the test is expected to run. The test agent should kill the test after this time has passed. Timeout could get a default value based on size. Test agents should probably apply some sort of timeout even if we don't let users specify it on a per test basis. Johan

On Mon, 2011-05-23 at 15:39 +0200, Johan Tibell wrote:
On Sat, May 21, 2011 at 4:20 PM, Duncan Coutts wrote:
Here's the equivalent bit of my design (the TestResult is the same):
    data TestInstance = TestInstance {
        run            :: IO TestResult,
        name           :: String,
        concurrentSafe :: Bool,
        expectedFail   :: Bool,
        options        :: [OptionDescr],
        setOption      :: String -> String -> Either String TestInstance
      }
I cannot think of a straightforward way to implement setOption in the above design. One would have to "store" options in the run closure.
    data MyTestType = MyTestType (IO Bool) FooOption

    myTestType :: MyTestType -> TestInstance
    myTestType = myTestType' defaultFooOption
      where
        myTestType' foo = emptyTestInstance {
            run       = fmap convertResult (runMyTest foo),
            options   = [optionDescr "foo" (OptionString False)],
            setOption = \name val -> case name of
                "foo" -> Right (myTestType' (parseAsFooOption val))
                _     -> Left ("unknown option: " ++ name)
          }

The function myTestType' is the TestInstance closure with all the private parameters/fields exposed. Hurrah for abstraction via lambdas. BTW, this is not exotic. It's a standard "OO in FP" abstraction technique that we don't use enough.
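From the agent's side the same pattern composes nicely (a sketch of mine, relying only on the record fields above): each setOption call hands back a fresh TestInstance, so applying a batch of options is a fold in the Either monad:

    import Control.Monad (foldM)

    -- Bake a list of key/value options into a test, stopping at the
    -- first one the test rejects.
    applyOptions :: TestInstance -> [(String, String)] -> Either String TestInstance
    applyOptions = foldM (\t (k, v) -> setOption t k v)

    -- e.g.  either putStrLn (\t -> run t >>= print)
    --              (applyOptions someTest [("foo", "42")])
    -- where someTest :: TestInstance is a placeholder.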
A type class approach would allow the test framework to use extra fields in the record that implements the type class to store the options e.g.
It's more or less the same except that using lambdas/closures means we do not need an existential type wrapper. It's H98. Note also that only the framework implementers need to provide this interface so we will not confuse casual users with this OO style.
I prefer "exclusive" to "concurrentSafe", as there might be tests that are concurrency safe but should still be run in isolation. Not a big difference in practice though.
Do we really need expectedFail? It feels quite GHC-specific, and there are other options, like commenting out the test or using a tags mechanism (see my reply to your other email).
Those were just suggestions; I'm not totally wedded to them. So for the first one: what property are we asking test authors to declare? Whether the test can be run concurrently with others, or whether it must be run in isolation? I think we actually mean the same thing here, just expressing it as a positive (safe to run this test concurrently with others) or as a negative (must run this test exclusively, not when any others are running). We just need to pick something that is clear and document it properly.
Here are some other attributes we might want to consider:
* size - How long is this test expected to take? You might want to run all small and medium size tests on every commit but reserve large and huge tests for before a release.
* timeout - The maximum amount of time the test is expected to run. The test agent should kill the test after this time has passed. Timeout could get a default value based on size. Test agents should probably apply some sort of timeout even if we don't let users specify it on a per test basis.
So in your other email you suggest a simple attribute system where we use a set of named tags, with no meanings that a generic test agent will know about, just to be used as a way for users to filter tests. Then here you've got a few suggestions for attributes with particular meanings to the test agent. Perhaps that kind of combination is enough, and we don't need anything to declare any kind of meaning. I think it's probably worth thinking about this part a bit more though.

Duncan

On Mon, May 23, 2011 at 4:19 PM, Duncan Coutts
So in your other email you suggest a simple attribute system where we use a set of named tags, but with no meanings that a generic test agent will know about, just to be used as way for users to filter on tests.
Then here you've got a few suggestions for attributes with particular meanings to the test agent. Perhaps that kind of combination is enough, and we don't need anything to declare any kind of meaning. I think it's probably worth thinking about this part a bit more though.
I think widely used attributes such as 'size' should be fields in the record, while more specific ones such as 'css-tests' should go into tags. If some attribute in tags turns out to be of general enough interest, it could be promoted to a record field in a future API version.

Johan

On Sat, May 21, 2011 at 9:20 AM, Duncan Coutts
    data TestInstance = TestInstance {
        run            :: IO TestResult,
        name           :: String,
        concurrentSafe :: Bool,
        expectedFail   :: Bool,
        options        :: [OptionDescr],
        setOption      :: String -> String -> Either String TestInstance
      }
Note that I don't have an IO action to get any default options. In this design it must always be possible to run the test with no options supplied. Internally, if a test does need some options specified then it just has to fill them in with its internal defaults.
    data OptionDescr = OptionDescr {
        optionName        :: String,
        optionDescription :: String,
        optionType        :: OptionType,
        optionDefault     :: Maybe String
      }
The reason I used an IO action to get at the default options was to allow extraction of the random seed that was going to be used, e.g., by QuickCheck tests, so the test would be reproducible. Since this design doesn't use IO anywhere, is it still possible for the test agent to record the random seed used? That is, if optionType == OptionRngSeed, can optionDefault be anything other than Nothing?
    data OptionType
      = OptionFile {
          optionFileMustExist  :: Bool,
          optionFileIsDir      :: Bool,
          optionFileExtensions :: [String]
        }
      | OptionString {
          optionStringMultiline :: Bool
        }
      | OptionNumber {
          optionNumberIsInt  :: Bool,
          optionNumberBounds :: (Maybe String, Maybe String)
        }
      | OptionBool
      | OptionEnum [String]
      | OptionSet [String]
      | OptionRngSeed
You can imagine how each of these could be used as hints to make a more helpful web form or GUI interface. They could also let a command line UI do some up-front validation too.
The most general case is OptionString for when no more specific alternative is appropriate.
Perhaps we could add a field to the OptionString constructor to take a human-readable description of what is expected? I wouldn't expect a test framework to necessarily accept just any string, for example; I can see this option type being used for custom types that don't really fit any of the predefined types.
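Concretely, that might look like this (a sketch; the new field name is hypothetical):

    data OptionType
      = OptionString {
          optionStringMultiline :: Bool,
          optionStringExpects   :: String  -- e.g. "a Haskell module name"
        }
      -- (other constructors as in Duncan's sketch)

so a UI can at least tell the user what shape of string the framework will accept, even though the real validation still happens in setOption.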
Honestly, I don't know why Max did it this way; a much more direct approach, it seems to me, is:
    data TestProgress = Finished TestResult
                      | Progress String (IO TestProgress)
So it's not a lazy list, it's a list with explicit IO to get the tail. The test-framework approach could be lifted into this fairly easily I think.
I really like this approach. When I wrote my interface, I wanted to allow tests to return partial progress, but I couldn't devise a way to do it without Max's Concurrent-based approach (and I wasn't sure that was appropriate for Cabal).

-- Thomas Tuegel

On 21 May 2011 15:20, Duncan Coutts
    data Test = forall i r t. Testlike i r t => Test TestName t
              | TestGroup TestName [Test]
              | PlusTestOptions TestOptions Test
FYI, I recently added another alternative:

              | BuildTest (IO Test)

The main purpose is to support a combinator

    mutuallyExclusive :: Test -> Test

that only allows one of the child tests in a Test to run at once (see https://github.com/batterseapower/test-framework/blob/master/core/Test/Frame...). This is just a convenience that means that the user doesn't have to explicitly sequence in this IO action themselves.
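For illustration, usage might look like this (my sketch; it assumes mutuallyExclusive is exported from Test.Framework alongside testGroup and defaultMain, and the child test lists are placeholders):

    import Test.Framework (Test, testGroup, defaultMain, mutuallyExclusive)

    main :: IO ()
    main = defaultMain
        [ mutuallyExclusive (testGroup "database" databaseTests)  -- one at a time
        , testGroup "pure" pureTests  -- free to run concurrently
        ]

    databaseTests, pureTests :: [Test]
    databaseTests = []  -- placeholders
    pureTests     = []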
The way it's done in test-framework is to have a pure i :~> r lazy list and a separate IO () action that does the real work. The IO action performs the test and pokes the results into a Concurrent.Chan. The i :~> r lazy list is extracted from the Chan using getChanContents.
Honestly, I don't know why Max did it this way
I would not copy this part of the design -- it is the component I am least happy with. As you say, it is a rather strange interface, and your proposal seems much clearer. Cheers, Max

On Sat, May 21, 2011 at 9:20 AM, Duncan Coutts
    data TestInstance = TestInstance {
        run            :: IO TestResult,
        name           :: String,
        concurrentSafe :: Bool,
        expectedFail   :: Bool,
        options        :: [OptionDescr],
        setOption      :: String -> String -> Either String TestInstance
      }
I've been working on this new interface and a test runner for it, and I've made a discovery about this property of "exclusivity" (alternately, "concurrent safety") that I think I should share.

Exclusivity is not a property of individual tests, but of test groups. If you think carefully about the (admittedly pathological) case of a test suite with only one test case, you see that it doesn't matter whether it is marked as exclusive or not: it will, vacuously, be run both concurrently and sequentially with all the other tests. Exclusivity only exists when there is more than one test.

I am working on a prototype interface that is essentially a replica of what Duncan outlined here, but I have moved the "concurrentSafe" field to the TestGroup constructor. Not only is this "more correct", but it buys a lot of expressiveness.

With exclusivity as a property of individual tests, test runners must simply run all the unsafe tests in sequence. In reality, though, it is likely that there will be groups of tests which can be run concurrently. For example, suppose we have four tests which conflict with each other in pairs, i.e., tests 1 and 2 cannot be run at the same time and neither can tests 3 and 4: we can still run tests 1 and 3 in parallel and then tests 2 and 4 in parallel. With "concurrentSafe" as a field of TestGroup, this dependency can be expressed and test runners can maximize concurrency.

-- Thomas Tuegel
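Rendered as data (my sketch; the thread didn't pin down constructors, and test1..test4 are assumed to exist with type TestInstance), the pairwise example looks like this:

    data TestTree = Leaf TestInstance | Branch TestGroup

    data TestGroup = TestGroup {
        groupName      :: String,
        concurrentSafe :: Bool,  -- may the children run alongside each other?
        groupTests     :: [TestTree]
      }

    pairwise :: TestGroup
    pairwise = TestGroup "AllTests" True  -- the two pairs may overlap in time
        [ Branch (TestGroup "PairA" False [Leaf test1, Leaf test2])  -- 1, then 2
        , Branch (TestGroup "PairB" False [Leaf test3, Leaf test4])  -- 3, then 4
        ]

A runner walking this tree may start PairA and PairB together but must run each pair's members one after the other, which is exactly the "1 and 3 in parallel, then 2 and 4 in parallel" schedule described above.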

On Tue, Jun 28, 2011 at 7:28 PM, Thomas Tuegel
I've been working on this new interface and a test runner for it, and I've made a discovery about this property of "exclusivity" (alternately, "concurrent safety") that I think I should share.
Exclusivity is not a property of individual tests, but of test groups. If you think carefully about the (admittedly pathological) case of a test suite with only one test case, you see that it doesn't matter whether it is marked as exclusive or not: it will be run both concurrently and sequentially with all the other tests. Exclusivity only exists when there is more than one test.
I don't think this is necessarily the case. If you put concurrentSafe on a test you'd then partition the tests like so

    (safe, unsafe) = partition isConcurrentSafe tests

and then run all of the safe tests (possibly in parallel) followed by all the unsafe tests one by one.

Johan
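Spelled out (my sketch, using the async package; isConcurrentSafe could simply be the concurrentSafe record field from Duncan's design):

    import Data.List (partition)
    import Control.Concurrent.Async (mapConcurrently)

    -- Johan's scheme: all the safe tests at once, then the unsafe
    -- ones strictly in sequence.
    runAll :: [TestInstance] -> IO [TestResult]
    runAll tests = do
        let (safe, unsafe) = partition isConcurrentSafe tests
        safeResults   <- mapConcurrently run safe
        unsafeResults <- mapM run unsafe
        return (safeResults ++ unsafeResults)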

On Wed, Jun 29, 2011 at 2:38 AM, Johan Tibell
On Tue, Jun 28, 2011 at 7:28 PM, Thomas Tuegel wrote:
Exclusivity is not a property of individual tests, but of test groups. If you think carefully about the (admittedly pathological) case of a test suite with only one test case, you see that it doesn't matter whether it is marked as exclusive or not: it will be run both concurrently and sequentially with all the other tests. Exclusivity only exists when there is more than one test.
I don't think this is necessarily the case. If you put concurrentSafe on a test you'd then partition tests like so
(safe, unsafe) = partition isConcurrentSafe tests
and then run all of the safe tests (possibly in parallel) and then all the unsafe tests one by one.
Your suggestion is guaranteed to be safe, but someone will look at the test interface and think the following is acceptable: partition the safe and unsafe tests; run the safe tests all in parallel and, at the same time, run the unsafe tests one by one. This hypothetical programmer might rationalize this reading by saying "If it isn't safe to run concurrentSafe tests concurrently with _any_ other test, even an unsafe one, then concurrentSafe tests aren't very safe, are they?" This isn't our interpretation, so there will have to be an officially sanctioned parallelization scheme in the documentation.

I don't dispute that a simple bit of documentation fixes the ambiguity (as long as future test runner programmers read the docs very carefully), but I would prefer to modify the data structure. If groups are marked concurrentSafe instead of tests, there is absolutely no ambiguity about which tests can safely be run concurrently. As an example, the parallelization scheme you came up with would look like this:

    TestGroup
      { groupName      = "AllTests"
      , concurrentSafe = False
      , groupTests     =
          [ TestGroup
              { groupName      = "SafeTests"
              , concurrentSafe = True
              , groupTests     = [...]
              }
          , TestGroup
              { groupName      = "UnsafeTests"
              , concurrentSafe = False
              , groupTests     = [...]
              }
          ]
      }

whereas my alternative interpretation would be:

    TestGroup
      { groupName      = "AllTests"
      , concurrentSafe = True
      , groupTests     = [ ... ]
      }

(Only the concurrentSafe line changes.) With concurrentSafe at the group level, the difference is clear. More importantly, this gives package and test framework authors more power to express how their tests can be run without increasing the complexity of test runners.

-- Thomas Tuegel
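A sketch (mine, using the async package and the TestTree/TestGroup rendering from earlier in the thread) of the runner this buys us: group-level concurrentSafe decides, at every level, whether the children run together or in order.

    import Control.Concurrent.Async (mapConcurrently)

    runTree :: TestTree -> IO [TestResult]
    runTree (Leaf t)   = fmap (: []) (run t)
    runTree (Branch g)
        | concurrentSafe g = fmap concat (mapConcurrently runTree (groupTests g))
        | otherwise        = fmap concat (mapM runTree (groupTests g))

The runner stays a half-dozen lines, yet schedules like "PairA and PairB in parallel, each pair internally sequential" fall out of the data rather than out of special cases in the runner.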
participants (5): Duncan Coutts, Ian Lynagh, Johan Tibell, Max Bolingbroke, Thomas Tuegel