Defining subtraction for naturals

Another question on particulars. When dealing with natural numbers, we run into the problem of defining subtraction. There are a few reasonable definitions:

(1) If the result would drop below zero then throw an overflow error;
(2) Use saturating subtraction, i.e. if the result would drop below zero then return zero;
(3) Use type hackery to disallow performing subtraction when the result would drop below zero, e.g. by requiring a proof that the left argument is not less than the right.

Dependently typed languages tend to use the second definition because it gives a pure total function (unlike the first) and because it's less obnoxious to use than the third. In Haskell the third would be even more obnoxious.

So my question is, mathematically speaking, is there any reason to prefer the second option over the first or third? Purity aside, I mean; do the natural numbers with saturating subtraction form any interesting mathematical object, or is it just what we're stuck with?

In a similar vein: given a finite set (e.g., the natural numbers with some finite upper bound), and assuming we've already decided not to use modular arithmetic, is there any mathematical reason to prefer saturating addition and/or multiplication instead of throwing an overflow error?

-- Live well, ~wren
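For concreteness, options (1) and (2) might look something like this in Haskell; a minimal sketch using Integer as a stand-in for a natural number type, with the names subErr and monus invented for illustration:

    -- Option (1): partial subtraction that errors if the result
    -- would drop below zero.
    subErr :: Integer -> Integer -> Integer
    subErr m n
      | m >= n    = m - n
      | otherwise = error "natural subtraction: result would be negative"

    -- Option (2): saturating ("monus") subtraction, truncated at zero.
    monus :: Integer -> Integer -> Integer
    monus m n = max 0 (m - n)

Option (3) instead changes the type of subtraction so that it also demands evidence that the left argument is at least the right one; what form that evidence should take is exactly what several of the replies below discuss.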

If you are implementing lazy naturals, I wonder if defining 0 - 1 to be infinity makes mathematical sense. It's like arithmetic mod infinity....

Luke
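A minimal sketch of what this suggestion might look like with lazy Peano naturals (the Nat type and minus below are illustrative, not from any existing library): because the successor constructor is lazy, infinity = Succ infinity is a productive value, and borrowing past zero can simply return it.

    -- Lazy Peano naturals; illustrative only.
    data Nat = Zero | Succ Nat

    infinity :: Nat
    infinity = Succ infinity

    -- Subtraction where dropping below zero yields infinity,
    -- in the spirit of "arithmetic mod infinity".
    minus :: Nat -> Nat -> Nat
    minus m        Zero     = m
    minus Zero     (Succ _) = infinity
    minus (Succ m) (Succ n) = minus m n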

On 3/17/11 2:50 PM, Luke Palmer wrote:
If you are implementing lazy naturals, I wonder if defining 0 - 1 to be infinity makes mathematical sense. It's like arithmetic mod infinity....
Actually, I'm specifically implementing strict naturals :)

There are a number of libraries for lazy naturals out there already, but for my current project I need the efficiency of Int/Integer, without the errors or defensive programming that come from using them when we mean Nat/Natural. So far I haven't actually needed arithmetic operations (Enum, Eq, and Ord have been enough), but I figure I should square that away before splitting the code off for a public release.

Since the whole thing is done as newtypes, I can always make the Num instances the "right" ones; people who obsess over performance can manually unwrap them to perform raw operations they know/hope will succeed, and then revalidate the invariants when they're done with the arithmetic section. I just want to make sure the instance I give has the nice mathematical properties it could/should.

-- Live well, ~wren
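A rough sketch of the kind of newtype wrapper described above (the names are invented for illustration; this is not the actual library):

    -- Illustrative only: a strict natural as a checked wrapper around Integer.
    newtype Natural = Natural Integer
      deriving (Eq, Ord, Show)

    -- Smart constructor enforcing the non-negativity invariant.
    toNatural :: Integer -> Maybe Natural
    toNatural n = if n >= 0 then Just (Natural n) else Nothing

    -- Escape hatch for performance-obsessed callers, who unwrap,
    -- do raw arithmetic, and revalidate afterwards.
    fromNatural :: Natural -> Integer
    fromNatural (Natural n) = n

    -- One candidate for the "right" subtraction: saturate at zero.
    minusNatural :: Natural -> Natural -> Natural
    minusNatural (Natural m) (Natural n) = Natural (max 0 (m - n))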

On Thu, 17 Mar 2011, wren ng thornton wrote:
(1) If the result would drop below zero then throw an overflow error;
(2) Use saturating subtraction, i.e. if the result would drop below zero then return zero;
(3) Use type hackery to disallow performing subtraction when the result would drop below zero, e.g. by requiring a proof that the left argument is not less than the right.
My practical experience with the non-negative package is: I started with saturating subtraction as in (2), but it turned out that I hardly needed that definition in further code. Thus I switched to

(4) Use a symmetric subtraction that returns the minimum of the two operands, their order, and their absolute difference.

Precise definitions and reasons are in the 'split' function: http://hackage.haskell.org/packages/archive/non-negative/0.1/doc/html/Numeri...
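A hedged sketch of such a symmetric subtraction; the shape follows the description above and is not necessarily the exact signature of the non-negative package's split:

    -- Symmetric subtraction: the smaller operand, whether the first
    -- argument was the smaller one, and the absolute difference.
    -- Illustrative sketch over Integer-as-natural.
    split :: Integer -> Integer -> (Integer, Bool, Integer)
    split m n
      | m <= n    = (m, True,  n - m)
      | otherwise = (n, False, m - n)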

On Thu, Mar 17, 2011 at 2:35 PM, wren ng thornton wrote:
So my question is, mathematically speaking, is there any reason to prefer the second option over the first or third? Purity aside, I mean; do the natural numbers with saturating subtraction form any interesting mathematical object, or is it just what we're stuck with?
In "What About the Natural Numbers", Colin Runciman argues for
definition 2, but he doesn't provide much analysis beyond noting that
it gives you
length (drop n xs) = length xs - n
He also argues for setting x / 0 = x for strict naturals or x / 0 =
infinity for lazy naturals.
There are some use cases for both in the paper.
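With saturating subtraction that law holds for every n, including n greater than the length of xs, since both sides bottom out at zero. A small property-style sketch (monus here is an assumed helper, not notation from the paper):

    -- Saturating subtraction on Int, standing in for naturals.
    monus :: Int -> Int -> Int
    monus m n = max 0 (m - n)

    -- Runciman's law, which only holds unconditionally when (-)
    -- is read as saturating subtraction.
    prop_dropLength :: Int -> [Int] -> Bool
    prop_dropLength n xs =
      n < 0 || length (drop n xs) == length xs `monus` n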
--
Dave Menendez

On 2011-03-17 19:35, wren ng thornton wrote:
Dependently typed languages tend to use the second definition because it gives a pure total function (unlike the first) and because it's less obnoxious to use than the third. In Haskell the third would be even more obnoxious.
There is a fourth option: a comparison view. See "The view from the left" by Conor McBride and James McKinna (Section 6). (This option is similar to what Henning Thielemann suggested.)

-- /NAD

On 17 Mar 2011, at 18:35, wren ng thornton wrote:
Another question on particulars. When dealing with natural numbers, we run into the problem of defining subtraction. There are a few reasonable definitions:
No there aren't.

Conor

On 3/17/11 5:12 PM, Conor McBride wrote:
On 17 Mar 2011, at 18:35, wren ng thornton wrote:
Another question on particulars. When dealing with natural numbers, we run into the problem of defining subtraction. There are a few reasonable definitions:
No there aren't.
How about "pragmatically efficacious"?

-- Live well, ~wren

On Thu, Mar 17, 2011 at 6:41 PM, wren ng thornton wrote:
How about "pragmatically efficacious"?
If you must, you could always fall back on an encoding similar to the one that Brent used when introducing virtual species: http://byorgey.wordpress.com/2010/11/24/species-subtraction-made-simple/ along with some setoid that normalizes them for comparison, or you could just switch to type-level Integers.

-Edward
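Roughly, the pair-of-naturals idea alluded to here is to represent an "integer" as a formal difference of two naturals and compare up to the obvious setoid equivalence. A sketch only, with invented names (not Brent's actual construction):

    -- A formal difference a - b of two naturals (Integer assumed
    -- non-negative here); illustrative only.
    data Diff = Integer :-: Integer

    -- Setoid equality: a - b and c - d denote the same integer
    -- exactly when a + d == c + b.
    instance Eq Diff where
      (a :-: b) == (c :-: d) = a + d == c + b

    -- Subtraction of naturals is now total: it lands in Diff.
    minusD :: Integer -> Integer -> Diff
    minusD = (:-:)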

On Thursday 17 March 2011 6:41:33 PM wren ng thornton wrote:
How about "pragmatically efficacious"?
Well...
(3) Use type hackery to disallow performing subtraction when the result would drop below zero, e.g. by requiring a proof that the left argument is not less than the right.
As far as this goes, one has to ask where we'd get this proof. It's unlikely that we'd just have one already, so the most likely answer is that we'd have to compute the proof. But the way to compute the proof of whether we're allowed to do subtraction is to *do subtraction*. So, if we don't want saturating subtraction, arguably we don't want subtraction at all, but a prior *comparison* of our two numbers, which will tell us one of:

    compare m n:
      m = n + k + 1    (m is greater, by k + 1)
      m = n            (they are equal)
      m + k + 1 = n    (n is greater, by k + 1)

in which case we already have the information we want.

I think this article is relevant: http://existentialtype.wordpress.com/2011/03/15/boolean-blindness/

Inasmuch as we shouldn't be throwing away all propositional information by crushing to a boolean, we also shouldn't be throwing away information that we will later have to recompute by deciding the wrong proposition.

-- Dan
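In a dependently typed language the three cases above carry k as genuine evidence; in plain Haskell the same idea degenerates to a comparison that returns the difference as data rather than discarding it. A sketch with invented names:

    -- A comparison that keeps the information an Ordering discards:
    -- the difference k relating the two operands.
    data Compared
      = LessBy    Integer   -- m + (k + 1) = n
      | Equal               -- m = n
      | GreaterBy Integer   -- m = n + (k + 1)
      deriving Show

    compareNat :: Integer -> Integer -> Compared
    compareNat m n = case compare m n of
      LT -> LessBy    (n - m - 1)
      EQ -> Equal
      GT -> GreaterBy (m - n - 1)

    -- Subtraction now just reads the answer off the comparison
    -- instead of recomputing it.
    minusNat :: Integer -> Integer -> Maybe Integer
    minusNat m n = case compareNat m n of
      GreaterBy k -> Just (k + 1)
      Equal       -> Just 0
      LessBy _    -> Nothing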
participants (8)

- Conor McBride
- Dan Doel
- David Menendez
- Edward Kmett
- Henning Thielemann
- Luke Palmer
- Nils Anders Danielsson
- wren ng thornton