
On 02/09/2013 10:50 AM, Johan Holmquist wrote:
I guess I fall more to the "reason about code" side of the scale than to the "testing the code" side. Testing seems to induce false hopes of finding all defects, even to the point where the tester is blamed for not finding a bug rather than the developer for introducing it.
Oh, I'm definitely also on that side, but you have to do the best you can with the tools you have :).
[Bardur]
It's definitely a valid point, but isn't that an argument *for* testing for performance regressions rather than *against* compiler optimizations?
We could test for regressions and pass. Then we upgrade to a new version of the compiler and the test no longer passes. And vice versa. Maybe that's your point too. :)
Indeed :).
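To make that concern concrete, here is a minimal sketch of the kind of wall-clock regression test we're talking about (the function name and the 0.5s budget are invented for illustration). Whether it passes can flip simply because a new compiler version optimizes the hot path differently:

    import Control.Exception (evaluate)
    import Control.Monad (when)
    import Data.Time.Clock (getCurrentTime, diffUTCTime)
    import System.Exit (exitFailure)

    -- Hypothetical function under test; stands in for whatever hot path
    -- the optimizer may or may not speed up.
    expensiveComputation :: Int -> Int
    expensiveComputation n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = let acc' = acc + i * i in acc' `seq` go acc' (i + 1)

    main :: IO ()
    main = do
      start <- getCurrentTime
      _ <- evaluate (expensiveComputation 1000000)
      end <- getCurrentTime
      let elapsed = realToFrac (diffUTCTime end start) :: Double
          budget  = 0.5  -- seconds; an arbitrary threshold for illustration
      when (elapsed > budget) $ do
        putStrLn ("Regression: took " ++ show elapsed ++ "s, budget was "
                  ++ show budget ++ "s")
        exitFailure

In practice you'd reach for something like criterion and compare against stored baselines, but the brittleness is the same: the threshold encodes assumptions about what the optimizer did for you.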
[Iustin]
Surely there will be a canary period, parallel running of the old and new system, etc.?
Is that common? I have not seen it and I do think my workplace is a rather typical one.
I don't know about "common", but I've seen it done a few times. However, it's mostly been in situations where major subsystems have been rewritten and you _really_ want to make sure things still work as they should in production. Sometimes you can get away with just making the new-and-shiny code path a configure-time option and keeping the old-and-beaten code path. (Tends to be messy code-wise until you can excise the old code path, but what're you gonna do?)
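For what it's worth, here's a bare-bones sketch of that kind of switch (the names and the environment variable are invented for illustration; a real project might use a Cabal flag or CPP at configure time instead):

    import Data.List (foldl')
    import System.Environment (lookupEnv)

    data CodePath = OldPath | NewPath

    -- Hypothetical runtime switch; a configure-time option would bake the
    -- choice in at build time instead.
    selectPath :: IO CodePath
    selectPath = do
      v <- lookupEnv "USE_NEW_CODE_PATH"
      return (if v == Just "1" then NewPath else OldPath)

    -- Old-and-beaten implementation, kept around until the rewrite is trusted.
    processOld :: [Int] -> Int
    processOld = foldr (+) 0

    -- New-and-shiny implementation.
    processNew :: [Int] -> Int
    processNew = foldl' (+) 0

    main :: IO ()
    main = do
      path <- selectPath
      let xs = [1 .. 100000] :: [Int]
      print (case path of
               OldPath -> processOld xs
               NewPath -> processNew xs)

Running both paths and comparing their output is then one cheap way to get the "parallel running" mentioned above, before you excise the old code.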
Also, would we really want to preserve the old "bad" code just because it happened to trigger some optimization?
These things depend a lot on the situation at hand -- if it's something 99% of your users will hit, then yes, probably... until you can figure out why the new-and-shiny code *doesn't* get optimized appropriately.
Don't get me wrong, I am all for compiler optimizations and the benefits they bring, as well as for testing. I am just highlighting some potential downsides.
It's all tradeoffs :).

Regards,