
If you are implementing lazy naturals, I wonder if defining 0 - 1 to
be infinity makes mathematical sense. It's like arithmetic mod
infinity....
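
For concreteness, here is a minimal sketch of lazy Peano naturals in
Haskell where dropping below zero produces the infinite natural (the
names Nat, sub, and infinity are mine, purely for illustration):

    -- Lazy Peano naturals: under non-strict evaluation the "infinite"
    -- natural is an ordinary value.
    data Nat = Z | S Nat

    infinity :: Nat
    infinity = S infinity  -- the lazy fixpoint S (S (S ...))

    -- Subtraction where dropping below zero yields infinity.
    sub :: Nat -> Nat -> Nat
    sub n     Z     = n
    sub Z     (S _) = infinity  -- 0 - 1 = infinity, as suggested above
    sub (S n) (S m) = sub n m

Note that sub infinity n also comes back as infinity for any finite n,
so infinity absorbs further subtraction, which is at least internally
consistent.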
Luke
On Thu, Mar 17, 2011 at 12:35 PM, wren ng thornton wrote:
Another question on particulars. When dealing with natural numbers, we run into the problem of defining subtraction. There are a few reasonable definitions (a Haskell sketch of all three follows the list):
(1) If the result would drop below zero then throw an underflow error;
(2) Use saturating subtraction, i.e. if the result would drop below zero then return zero;
(3) Use type hackery to disallow performing subtraction when the result would drop below zero, e.g. by requiring a proof that the left argument is not less than the right.
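
In Haskell terms the three options might look as follows. This is just
a sketch: only subPartial's behaviour reflects an actual library fact
(GHC's Natural throws an arithmetic underflow exception from (-)),
while monus and subChecked are illustrative names of mine:

    import Numeric.Natural (Natural)

    -- (1) Partial subtraction: Natural's (-) throws Underflow when
    --     the result would drop below zero.
    subPartial :: Natural -> Natural -> Natural
    subPartial = (-)

    -- (2) Saturating subtraction (often called "monus"): pure, total.
    monus :: Natural -> Natural -> Natural
    monus a b = if a >= b then a - b else 0

    -- (3) Evidence-demanding subtraction. Haskell cannot cheaply
    --     demand a proof that a >= b, so Maybe stands in for the
    --     evidence; a dependently typed version would take the proof
    --     itself as an argument.
    subChecked :: Natural -> Natural -> Maybe Natural
    subChecked a b
      | a >= b    = Just (a - b)
      | otherwise = Nothing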
Dependently typed languages tend to use the second definition because it gives a pure total function (unlike the first) and because it's less obnoxious to use than the third. In Haskell the third would be even more obnoxious.
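
Transcribed onto Peano naturals in Haskell, the structural definition
that dependently typed standard libraries tend to use for option (2)
(e.g. Agda's _∸_) looks like this; Nat is an illustrative type of
mine, and totality falls out of the recursion rather than out of a
comparison:

    data Nat = Z | S Nat

    -- Truncated subtraction by structural recursion: total by
    -- construction, no ordering test needed.
    monus :: Nat -> Nat -> Nat
    monus m     Z     = m
    monus Z     (S _) = Z
    monus (S m) (S n) = monus m n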
So my question is, mathematically speaking, is there any reason to prefer the second option over the first or third? Purity aside, I mean: do the natural numbers with saturating subtraction form any interesting mathematical object, or is it just what we're stuck with?
In a similar vein: given a finite set (e.g., the natural numbers with some finite upper bound), and assuming we've already decided not to use modular arithmetic, is there any mathematical reason to prefer saturating addition and/or multiplication over throwing an overflow error?
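
For comparison, a sketch of saturating addition on a fixed-width type
(satAdd is my name; the test exploits the fact that unsigned addition
wraps, so the wrapped sum is smaller than either operand exactly when
overflow occurred):

    import Data.Word (Word8)

    -- Saturating addition: clamp at maxBound instead of wrapping
    -- (modular) or throwing.
    satAdd :: Word8 -> Word8 -> Word8
    satAdd a b = let s = a + b
                 in if s < a then maxBound else s

So satAdd 200 100 gives 255, where modular addition would give 44 and
a throwing version would raise an overflow error.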
--
Live well,
~wren

_______________________________________________
Agda mailing list
Agda@lists.chalmers.se
https://lists.chalmers.se/mailman/listinfo/agda