
On 27 June 2014 00:22, Richard A. O'Keefe
wrote: For one thing, tests can be automatically generated from a grammar. The first time I demonstrated this to a class of students, I said "now, here I've got this program that I wrote a few years ago and I know it works, this is just demonstrating how you can generate tests from a grammar." And then the very first generated test revealed a bug. At that point I was Enlightened.
Sure, that is awesome; it's the same kind of thing as using QuickCheck, in a way... but do you consider this TDD?
No, OF COURSE NOT. I never said it was. All I was doing was saying that generating tests from a grammar is easy, so don't ever believe "this might not be the real grammar, so it would be too much wasted work testing from it."
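The idea of generating tests from a grammar can be sketched in a few lines. This is a hypothetical toy, not any particular tool: the grammar, the generator, and the use of Python's own eval as the "parser under test" are all illustrative.

```python
import random

# A toy grammar for arithmetic expressions: each nonterminal maps to a
# list of alternative productions.  (Illustrative format, not any real
# tool's input language.)
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"], ["term", "-", "expr"]],
    "term": [["factor"], ["factor", "*", "term"]],
    "factor": [["number"], ["(", "expr", ")"]],
    "number": [["0"], ["1"], ["42"]],
}

def generate(symbol, rng, depth=0):
    """Expand one symbol by picking productions at random.  Past a
    depth limit, always take the first (non-recursive) alternative so
    the expansion terminates."""
    if symbol not in GRAMMAR:
        return symbol                      # terminal: emit as-is
    alternatives = GRAMMAR[symbol]
    alt = alternatives[0] if depth > 6 else rng.choice(alternatives)
    return "".join(generate(s, rng, depth + 1) for s in alt)

def generated_tests(n, seed=0):
    """Produce n random sentences of the grammar, reproducibly."""
    rng = random.Random(seed)
    return [generate("expr", rng) for _ in range(n)]

for case in generated_tests(5):
    # Every generated string is, by construction, in the language of
    # the grammar, so the parser under test must accept it.  Here we
    # just let Python's eval stand in for that parser.
    eval(case)
```

Each sentence comes with a built-in oracle ("must parse"), which is exactly why the very first generated case can expose a bug: the generator explores corners of the grammar a human test writer never thinks to try.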
I have always seen TDD as "writing tests first", whereas here you generate them once you have the grammar... that feels different to me, but as I said before, I'm realizing my definition was wrong.
Having a GRAMMAR is one thing. Having a PARSER is another. Suppose I say "write me some tests for function f". You will say back, if you have any sense: "I cannot possibly do that until I have some idea what f is supposed to do." Not the most rabid advocate of TDD would claim that you can write tests in a knowledge vacuum. The strongest claim I've seen is that the process of constructing the specification and the process of constructing the tests can be so interwoven that they are the same activity, but even then you have to have *some* idea of what you want before you start.
But don't you think this particular step can be achieved in your head? Maybe not for complex things, but at least it works for me in simple cases...
Everything works for simple cases. The problem is that some of the things we think are simple aren't. Programmers tend to be an optimistic lot, and we tend not to think of all the things that can go wrong. (Like someone I once worked with who used to teach students NOT to check the result of malloc() because on today's machines they would never run out of memory. I kid you not.)

For example, there was one specification I implemented where there was a signed integer parameter, and the original designers neither said what was supposed to happen if it was negative, nor said that it was an error or undefined or anything. I reckon they just never thought about it.

Here's another one. There is a standard for a certain programming language which states that (1) the timestamp datatype uses UTC and (2) timestamps can be calculated for thousands of years in the future. This requires a computer that can predict the choices of a human committee far into the future, as you discover as soon as you try to construct test cases and realise that you don't actually know what addSeconds(new TimeStamp(), 10*1000*1000) should be, because your leap second table doesn't extend into the future.
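The leap-second trap above can be made concrete. The sketch below is hypothetical: utc_plus_seconds is a made-up helper, and the table entries and horizon are illustrative (the real table is maintained by the IERS and extended only a few months at a time).

```python
import datetime

# Illustrative leap-second table: date a leap second was inserted.
LEAP_SECONDS = {
    datetime.date(2015, 6, 30),
    datetime.date(2016, 12, 31),
}
# The committee has announced nothing beyond this date.
TABLE_HORIZON = datetime.date(2017, 6, 28)

def utc_plus_seconds(start, seconds):
    """Add a duration to a UTC datetime, refusing to answer when the
    result depends on leap seconds nobody has decided yet."""
    naive_result = start + datetime.timedelta(seconds=seconds)
    if naive_result.date() > TABLE_HORIZON:
        raise ValueError("result lies beyond the known leap-second "
                         "table; the correct UTC answer is not yet "
                         "decidable")
    # Compensate for every known leap second crossed by the interval.
    leaps = sum(1 for d in LEAP_SECONDS
                if start.date() <= d < naive_result.date())
    return naive_result - datetime.timedelta(seconds=leaps)

# addSeconds(new TimeStamp(), 10*1000*1000): ten million seconds is
# about 116 days, easily crossing the table horizon.
start = datetime.datetime(2017, 5, 1)
try:
    utc_plus_seconds(start, 10 * 1000 * 1000)
except ValueError as e:
    print("cannot compute:", e)
```

The point of the exception is the point of the anecdote: within the table the function has a single correct answer you can test against, but past the horizon no test oracle can exist, because the answer depends on future committee decisions.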
Once you deliver a piece of work, it should be proven that it works... Maybe doing it earlier helps, but that's more about fighting procrastination than anything else, because I'm sure you hate writing tests as much as I do ;-)
You will find that most traditional books about testing (including Boris Beizer's, for example) say that the earlier a bug goes into a system, the longer it stays and the harder it is to get it out. That's not a *proof* that doing testing as early as you can will save you effort, but it's suggestive. And one thing is certain: if you think about testing early, you'll aim for a design that *can* be tested.