
On Thu, Mar 17, 2011 at 2:35 PM, wren ng thornton wrote:
Another question on particulars. When dealing with natural numbers, we run into the problem of defining subtraction. There are a few reasonable definitions:
(1) If the result would drop below zero then throw an overflow error;
(2) Use saturating subtraction, i.e. if the result would drop below zero then return zero;
(3) Use type hackery to disallow performing subtraction when the result would drop below zero, e.g. by requiring a proof that the left argument is not less than the right.
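As a rough sketch of the first two options (using a unary Nat type here purely for illustration; option 3 would need GADTs or full dependent types, so it's omitted):

```haskell
data Nat = Z | S Nat
  deriving (Show, Eq)

-- (1) Partial subtraction: underflow is a runtime error.
subErr :: Nat -> Nat -> Nat
subErr n     Z     = n
subErr (S n) (S m) = subErr n m
subErr Z     (S _) = error "subErr: result would drop below zero"

-- (2) Saturating ("truncated") subtraction: underflow yields zero.
subSat :: Nat -> Nat -> Nat
subSat n     Z     = n
subSat (S n) (S m) = subSat n m
subSat Z     (S _) = Z
```

The two agree wherever subErr is defined; subSat simply extends it by clamping at zero.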
Dependently typed languages tend to use the second definition because it gives a pure total function (unlike the first) and because it's less obnoxious to use than the third. In Haskell the third would be even more obnoxious.
So my question is, mathematically speaking, is there any reason to prefer the second option over the first or third? Purity aside, I mean; do the natural numbers with saturating subtraction form any interesting mathematical object, or is it just what we're stuck with?
In "What About the Natural Numbers", Colin Runciman argues for
definition 2, but he doesn't provide much analysis beyond noting that
it gives you
length (drop n xs) = length xs - n
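A quick way to see that law in action (saturating subtraction modelled on Int with max here, as a stand-in for natural-number subtraction; the names are mine, not Runciman's):

```haskell
-- Saturating subtraction on Int, standing in for truncated
-- subtraction on the naturals.
monus :: Int -> Int -> Int
monus a b = max 0 (a - b)

-- Runciman's law: length (drop n xs) = length xs `monus` n.
-- With ordinary (-) it fails when n > length xs (the right side
-- goes negative); with monus it holds for every n >= 0.
lawHolds :: Int -> [a] -> Bool
lawHolds n xs = length (drop n xs) == length xs `monus` n
```

For example, lawHolds 5 "abc" is True, whereas length "abc" - 5 would be -2 while length (drop 5 "abc") is 0.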
He also argues for setting x / 0 = x for strict naturals, or x / 0 = infinity for lazy naturals.
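With lazy unary naturals, the x / 0 = infinity convention falls out of defining division as repeated subtraction; a minimal sketch (the Nat type and helper names are assumptions for illustration, not from the paper):

```haskell
data Nat = Z | S Nat

-- Saturating subtraction.
monus :: Nat -> Nat -> Nat
monus n     Z     = n
monus (S n) (S m) = monus n m
monus Z     _     = Z

-- Strict less-than on naturals.
lt :: Nat -> Nat -> Bool
lt Z     (S _) = True
lt (S n) (S m) = lt n m
lt _     Z     = False

-- Division by repeated subtraction. When y = Z the guard never
-- fires, so divN x Z lazily unfolds to S (S (S ...)): the
-- infinite natural, with no special case needed.
divN :: Nat -> Nat -> Nat
divN x y
  | x `lt` y  = Z
  | otherwise = S (divN (x `monus` y) y)
```

A strict Nat would just loop here, which is one way to read the paper's suggestion of x / 0 = x for the strict case as a separate, deliberate choice.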
There are use cases for both conventions in the paper.
--
Dave Menendez