
On Thursday 28 August 2003 12:57, Malcolm Wallace wrote:
> The Hat solution is to trace everything, and then use a specialised
> query over the trace to narrow it to just the points you are interested
> in.

That sounds risky with programs that process megabytes of data. It isn't always possible to test with small data sets, e.g. when different algorithms are used depending on the size of the problem.

> At the moment, the hat-observe browser is the nearest to what you want -
> like HOOD, it permits you to see the arguments and results of a named
> function call, but additionally you can restrict the output

I am mostly interested in intermediate values.
> Another idea is to permit real Haskell expressions to post-process the
> result of the trace query, rather like meta-programming. So your second

That sounds like a very flexible approach. Could one perhaps do this *while* the trace is being constructed, in a lazy-evaluation fashion, such that unwanted trace data is never generated and stored?
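The lazy-filtering idea can be illustrated with ordinary list laziness. This is only a toy sketch: the TraceEvent type and the stream below are hypothetical stand-ins, not the real Hat trace format. The point is that a query over a lazily produced stream forces only the events it actually demands, so unwanted trace data is never constructed.

```haskell
-- Hypothetical trace event: function name, arguments, result.
data TraceEvent = Call String [Int] Int
  deriving (Eq, Show)

-- An infinite, lazily generated stream of trace events.
trace :: [TraceEvent]
trace = [ Call "square" [n] (n * n) | n <- [1 ..] ]

-- A query evaluated lazily: only the events needed to satisfy
-- 'take 3' are ever built; the rest of the stream stays unevaluated.
interesting :: [TraceEvent]
interesting = take 3 (filter big trace)
  where big (Call _ _ r) = r > 100

main :: IO ()
main = mapM_ print interesting
```

Whether Hat's actual trace-writing machinery could be driven this way is a separate question, since its trace is written to file as the program runs; the sketch only shows the in-language mechanism.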
> QuickCheck and Hat can be made to work together nicely. There is a
> version of QuickCheck in development which works by first running the
> ordinary program with lots of random test data. If a failure is found,
> it prunes the test case to the minimal failing case, and passes that
> minimal case to a Hat-enabled version of the program, which can then be
> used to investigate the cause of the failure.
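For readers unfamiliar with the workflow Malcolm describes, a sketch of the first half (random testing plus pruning to a minimal case) might look like the following, using the shrinking support that the in-development QuickCheck provides. The buggy function and property here are invented for illustration:

```haskell
import Test.QuickCheck

-- A deliberately buggy reverse: silently gives up on longer lists.
badReverse :: [Int] -> [Int]
badReverse xs
  | length xs > 3 = xs        -- the bug
  | otherwise     = reverse xs

-- The property under test.
prop_reverse :: [Int] -> Bool
prop_reverse xs = badReverse xs == reverse xs

main :: IO ()
main = quickCheck prop_reverse
-- QuickCheck finds a random counterexample and shrinks it towards a
-- minimal failing list; that minimal case is what would then be fed
-- to the Hat-enabled build of the program for tracing.
```

The second half of the pipeline (re-running the minimal case under Hat) is just an ordinary hat-compiled run of the same program on that input.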
That sounds very useful.

Konrad.
--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------