
On 4/06/2013, at 4:22 PM, Rustom Mody wrote:
On Tue, Jun 4, 2013 at 7:35 AM, Richard A. O'Keefe wrote:
On 3/06/2013, at 6:58 PM, Carter Schonwald wrote:
If the Int type had either of these semantics by default, many performance-sensitive libraries would suddenly have substantially less compelling performance. Every single operation that was branchless before would gain a branch on *every* operation... this would be BAD.
Actually, the x86 can be configured to trap integer overflows, so on that not entirely unpopular platform, there need be NO extra branches.
Well yes and no. See http://software.intel.com/en-us/forums/topic/306156
I made a mistake, for which I apologise. There were two things I wanted the x86 to trap, several years ago, and I found that one of them *could* be trapped and the other could not. The one that couldn't was integer overflow.

I do note that the page cited answers a *different* question, namely "does the Intel COMPILER support integer overflow trapping?" The question I answered wrongly was "does the Intel HARDWARE support integer overflow trapping (by raising an exception on integer overflow if a bit is set in a certain control register)?"

Having apologised for my error, I close with the observation that Jacob Navia, developer of lcc-win32 (he started with the LCC compiler but added serious x86-specific optimisation and other Windows goodness), claims that sticking JO after signed integer operations adds very little to run time, because the branch is predicted very well by the branch prediction hardware, since it is almost never taken.
In Discipline of Programming (in 1976!) Dijkstra exactly described this problem, and squarely put the blame on poorly engineered machines. He introduced 3 concepts/terms:

  UM   : unbounded machine
  SLM  : sufficiently large machine
  HSLM : hopefully sufficiently large machine
Dijkstra was a Burroughs Research Fellow, and the B6700 was a textbook example of an HSLM. I couldn't believe how primitive other systems were after using that. The signed-integer-overflow-trapping C compiler I mentioned was a MIPS one (MIPS distinguishing between ADD and ADDU, &c).