Burning more bridges (mostly numeric ones)

The Burning Bridges thread got a lot done, but it seemed to miss a few
things and didn't even touch the Numeric classes. The Numeric classes
should be fixed at some point, and sooner is better than later. However,
it would be a large change and would go nicely with a major version bump
in base; 5 is coming up soon. Proposals, ordered from relatively
controversial to insanely so (at least IMO):

1. Replace (.) and id in the Prelude with the versions from
Control.Category. This is a small change and has close to the same effect
as the Foldable/Traversable change. The key difference is that this is a
much smaller change, and there is little current use for the versions from
Control.Category. However, historically they have seen use with the
lens-ish libraries, and AFAICT they are the reason the lenses in `lens`
are "backwards", or at least called so by some.

1.2 Use Semigroupoid for (.) and Ob for id instead. Personally, I really
like this idea, but I think it would be much more controversial.

2. Move Semigroup into Prelude.

2.1 Make Monoid depend on Semigroup (a sketch of the intended hierarchy
follows below).

3. Do something with the Numeric classes. This isn't so much a proposal as
a request for discussion from people more experienced than me, but I still
think a general idea would be useful, if people think that doing
*anything* is a good idea.

3.1 Split each numeric operation into its own class. Say no to 3.2 and yes
here for no hierarchy in them / ConstraintKinds / empty classes. Pros:
EDSLs, convenience. Cons: would be major breakage; would need
ConstraintKinds/empty classes to have a hierarchy.

3.2 Hierarchy. The classes are TBD; this is here for a straw poll.
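For proposals 2 and 2.1, here is a minimal standalone sketch of the
intended class hierarchy. The module name and list instance are only
illustrative, and NoImplicitPrelude is there just so the sketch compiles
alongside the existing Data.Monoid definitions:

{-# LANGUAGE NoImplicitPrelude #-}
module SemigroupSketch where

import Prelude (foldr, (++))

class Semigroup a where
  (<>) :: a -> a -> a                 -- associative operation

class Semigroup a => Monoid a where
  mempty  :: a                        -- identity for (<>)
  mappend :: a -> a -> a              -- retained for compatibility
  mappend = (<>)
  mconcat :: [a] -> a
  mconcat = foldr mappend mempty

instance Semigroup [a] where
  (<>) = (++)

instance Monoid [a] where
  mempty = []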

Darn, there's another Carter on this list!!! (welcome!)
These are some good points to push on, but we're *two weeks* before GHC
7.8 is tentatively due for release!
Also, 3 is *too big* to be included in this thread; the ones before it are
worth several threads on their own. I humbly ask all subsequent
respondents to focus on #'s 1 and 2.
Fixing the numeric components of the Prelude will actually require some
innovation in the way we can even organize / structure type classes, if we
really wish to map the standard pen-and-paper algebraic structures to
their computational analogues in a Prelude-friendly way. I've got many
good reasons to care about improving this piece of base, including the
fact that I'm spending (a professionally inadvisable) large amount of time
figuring out how to improve the entire numerical computing substrate for
Haskell. And I'm leaning towards figuring out the numeric prelude that
needs to be *correct and good* and then pushing for a subset thereof to
get into base. This is one of those areas where "committee" doesn't
matter: the design has to work. It has to be usable. And I don't think
there's currently any strong "here's the right design" choice. Also,
whatever new design lands in GHC's base de facto determines the next
Haskell standard (ish).
That said, I think after the split-base work lands, doing surgery on the
default numerical classes becomes more tenable.
cheers :)
On Sun, Feb 23, 2014 at 10:13 PM, Carter Charbonneau
The Burning Bridges thread got a lot done, but it seemed to miss a few things and didn't even touch the Numeric classes. The Numeric classes should be fixed at some point, and sooner is better than later. However, it would be a large change and would go nicely with a major version bump in base; 5 is coming up soon. Proposals, ordered from relatively controversial to insanely so (at least IMO):
1. Replace (.) and id in the Prelude with the versions from Control.Category. This is a small change and has close to the same effect as the Foldable/Traversable change. The key difference is that this is a much smaller change, and there is little current use for the versions from Control.Category. However, historically they have seen use with the lens-ish libraries, and AFAICT they are the reason the lenses in `lens` are "backwards", or at least called so by some.
1.2 Use Semigroupoid for (.) and Ob for id instead. Personally, I really like this idea, but I think it would be much more controversial.
2. Move Semigroup into Prelude.
2.1 Make Monoid depend on Semigroup.
3. Do something with the Numeric classes. This isn't so much a proposal as a request for discussion from people more experienced than me, but I still think a general idea would be useful, if people think that doing *anything* is a good idea.
3.1 Split each numeric operation into its own class. Say no to 3.2 and yes here for no hierarchy in them / ConstraintKinds / empty classes. Pros: EDSLs, convenience. Cons: would be major breakage; would need ConstraintKinds/empty classes to have a hierarchy.
3.2 Hierarchy. The classes are TBD; this is here for a straw poll.

Carter Charbonneau wrote
2. Move Semigroup into Prelude
2.1 Make Monoid depend on Semigroup.
NonEmpty seems to be frequently reimplemented, particularly by beginners. Including Semigroup in Prelude would save all this duplication.
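For reference, a minimal sketch of the NonEmpty type that keeps getting
reimplemented (the shape follows Data.List.NonEmpty from the semigroups
package; the helper names below are just for illustration):

-- A list that is guaranteed to contain at least one element.
data NonEmpty a = a :| [a]
infixr 5 :|

-- The associative append that would back its Semigroup instance.
appendNE :: NonEmpty a -> NonEmpty a -> NonEmpty a
appendNE (x :| xs) (y :| ys) = x :| (xs ++ y : ys)

-- Total versions of the partial list functions beginners usually want.
headNE :: NonEmpty a -> a
headNE (x :| _) = x

toListNE :: NonEmpty a -> [a]
toListNE (x :| xs) = x : xs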

On #3: The library numeric-prelude achieves many of these goals (plus a
bunch more). If the experiences of using numeric-prelude are positive,
then using this or a subset of it as the standard numeric prelude might
resolve these goals easily.
http://hackage.haskell.org/package/numeric-prelude
-Corey O'Connor
coreyoconnor@gmail.com
http://corebotllc.com/
On Mon, Feb 24, 2014 at 5:13 AM, harry
Carter Charbonneau wrote
2. Move Semigroup into Prelude
2.1 Make Monoid depend on Semigroup.
NonEmpty seems to be frequently reimplemented, particularly by beginners. Including Semigroup in Prelude would save all this duplication.

Let's not talk about this while people are buried with 7.8 release
engineering, please :)
There are certainly good ideas in Henning's (amazing) work.
HOWEVER, one thing people often forget is that there's more than one valid
computational formalization of a given mathematical concept (which itself
can often have a multitude of equivalent definitions).
And that's even ignoring the fact that the haddocks are full of readable
qualified names like
"class C a => C a where"
(the first C being a link to
http://hackage.haskell.org/package/numeric-prelude-0.4.1/docs/Algebra-Algebr...)
:)
On Fri, Mar 21, 2014 at 1:08 PM, Corey O'Connor
On #3: The library numeric-prelude achieves many of these goals (Plus a bunch more). If the experiences of using numeric-prelude are positive then using this or a subset of this as the standard numeric prelude might resolve these goals easily.
http://hackage.haskell.org/package/numeric-prelude
-Corey O'Connor coreyoconnor@gmail.com http://corebotllc.com/
On Mon, Feb 24, 2014 at 5:13 AM, harry wrote:
Carter Charbonneau wrote:
2. Move Semigroup into Prelude
2.1 Make Monoid depend on Semigroup.
NonEmpty seems to be frequently reimplemented, particularly by beginners. Including Semigroup in Prelude would save all this duplication.

On 21.03.2014 18:22, Carter Schonwald wrote:
Let's not talk about this while people are buried with 7.8 release
engineering, please :)
There are certainly good ideas in Henning's (amazing) work. HOWEVER, one
thing people often forget is that there's more than one valid
computational formalization of a given mathematical concept (which itself
can often have a multitude of equivalent definitions).
And that's even ignoring the fact that the haddocks are full of readable
qualified names like "class C a => C a where" (the first C being a link to
http://hackage.haskell.org/package/numeric-prelude-0.4.1/docs/Algebra-Algebr...)
:)

I tried three times to make Haddock show qualifications, but it is really
frustrating. Last time I tried I despaired of GHC's data structures.
Somewhere the qualification must have been recognized by GHC, but it is
thrown away early and hard to restore.

On 21 March 2014 20:10, Henning Thielemann wrote:
I tried three times to make Haddock show qualifications, but it is really
frustrating. Last time I tried I despaired of GHC's data structures.
Somewhere the qualification must have been recognized by GHC, but it is
thrown away early and hard to restore.

I actually implemented qualified names in haddock a long time ago
(http://projects.haskell.org/pipermail/haddock/2010-August/000649.html)
exactly because of numeric-prelude.
Here is a screenshot:
http://projects.haskell.org/pipermail/haddock/attachments/20100828/e91f52de/...
However, I don't think that patch would still work, but it might give you
some hints on how to do it.

On 22.03.2014 08:22, Tobias Brandt wrote:
On 21 March 2014 20:10, Henning Thielemann wrote:
I tried three times to make Haddock show qualifications, but it is really
frustrating. Last time I tried I despaired of GHC's data structures.
Somewhere the qualification must have been recognized by GHC, but it is
thrown away early and hard to restore.
I actually implemented qualified names in haddock a long time ago
(http://projects.haskell.org/pipermail/haddock/2010-August/000649.html)
exactly because of numeric-prelude. Here is a screenshot:
http://projects.haskell.org/pipermail/haddock/attachments/20100828/e91f52de/...
However, I don't think that patch would still work, but it might give you
some hints on how to do it.

I know, but as far as I remember it was not completely satisfying. I
preferred abbreviated qualifications, e.g. Field.C instead of
Algebra.Field.C, and there were differences between imports from modules
of the same package and imports from external modules.
http://trac.haskell.org/haddock/ticket/22

On 22 March 2014 08:28, Henning Thielemann wrote:
I know, but as far as I remember it was not completely satisfying. I
preferred abbreviated qualifications, e.g. Field.C instead of
Algebra.Field.C, and there were differences between imports from modules
of the same package and imports from external modules.

There are four modes of qualification in my patch:
* None: as now
* Full: everything is fully qualified
* Local: only imported names are fully qualified, like in the screenshot
* Relative: like local, but prefixes in the same hierarchy are stripped.
Algebra.Field.C would become Field.C when shown in the documentation for
Algebra.VectorSpace.
The last one probably comes closest to what you want. Preserving the
original qualification (as written in the source code) would probably be
perfect, but that's already thrown away when we get to it in haddock.
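To make the four modes concrete, here is a tiny hypothetical sketch of the
choice the patch exposes (illustration only; this is not Haddock's actual
option type or flag names):

data Qualification
  = NoQual        -- names shown unqualified, as now
  | FullQual      -- every name fully qualified
  | LocalQual     -- only imported names are fully qualified
  | RelativeQual  -- like LocalQual, but shared module prefixes are stripped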

On Fri, Mar 21, 2014 at 1:08 PM, Corey O'Connor
On #3: The library numeric-prelude achieves many of these goals (Plus a bunch more). If the experiences of using numeric-prelude are positive then using this or a subset of this as the standard numeric prelude might resolve these goals easily.
One of my major complaints against numeric-prelude is the same as my major
complaint against every other such project I've seen put forward: they
completely ignore semirings and related structures.

Semirings are utterly ubiquitous and this insistence that every notion of
addition comes equipped with subtraction is ridiculous. In my work I deal
with semirings and semimodules on a daily basis, whereas rings/modules
show up far less often, let alone fields/vectorspaces. When not dealing
with semirings, the other structures I work with are similarly general
(e.g., semigroups, quasigroups,...). But this entire area of algebra is
completely overlooked by those libraries which start at abelian groups and
then run headlong for normed Euclidean vector spaces.

My main complaint against the Num class is that it already assumes too
much structure. So developing the hierarchy even further up than Num does
little to help me.

-- Live well, ~wren
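For readers unfamiliar with the terminology, here is a minimal sketch of
the kind of structure wren is describing. The class encoding and operator
names are invented for illustration, but the laws in the comments are the
standard semiring laws:

-- A semiring: two monoids on the same carrier, with (.*.) distributing
-- over (.+.), and no requirement of subtraction anywhere.
class Semiring a where
  zero  :: a
  one   :: a
  (.+.) :: a -> a -> a   -- commutative, associative, identity 'zero'
  (.*.) :: a -> a -> a   -- associative, identity 'one', distributes over (.+.)

instance Semiring Integer where
  zero  = 0
  one   = 1
  (.+.) = (+)
  (.*.) = (*)

-- The (min,+) "tropical" semiring over distances, used e.g. in shortest
-- path algorithms; it has no sensible subtraction at all.
newtype Tropical = Tropical Double deriving (Eq, Show)

instance Semiring Tropical where
  zero = Tropical (1 / 0)                        -- +infinity
  one  = Tropical 0
  Tropical x .+. Tropical y = Tropical (min x y)
  Tropical x .*. Tropical y = Tropical (x + y)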

agreed, hence my earlier remarks
:)
On Sat, Mar 22, 2014 at 5:09 PM, wren romano
On Fri, Mar 21, 2014 at 1:08 PM, Corey O'Connor wrote:
On #3: The library numeric-prelude achieves many of these goals (plus a
bunch more). If the experiences of using numeric-prelude are positive then
using this or a subset of this as the standard numeric prelude might
resolve these goals easily.
One of my major complaints against numeric-prelude is the same as my major complaint against every other such project I've seen put forward: they completely ignore semirings and related structures.
Semirings are utterly ubiquitous and this insistence that every notion of addition comes equipped with subtraction is ridiculous. In my work I deal with semirings and semimodules on a daily basis, whereas rings/modules show up far less often, let alone fields/vectorspaces. When not dealing with semirings, the other structures I work with are similarly general (e.g., semigroups, quasigroups,...). But this entire area of algebra is completely overlooked by those libraries which start at abelian groups and then run headlong for normed Euclidean vector spaces.
My main complaint against the Num class is that it already assumes too much structure. So developing the hierarchy even further up than Num does little to help me.
-- Live well, ~wren

My complaint with numeric-prelude is that it doesn't (and arguably can't)
fix the problems that for me make Haskell borderline usable for actual
engineering work involving actual "normal" numbers, and genuinely somewhat
unusable for teaching. Off the top of my head:

* The lack of implicit conversions (except for the weird defaulting of
literals), which means that I am constantly writing `fromIntegral` and
`realToFrac` in places where there is only one reasonable choice of type
conversion, and occasionally having things just malfunction because I
didn't quite understand what these conversion functions would give me as a
result.

Prelude> 3 + 3.5
6.5
Prelude> let x = 3
Prelude> x + 3.5

<interactive>:4:5:
    No instance for (Fractional Integer) arising from the literal `3.5'
    Possible fix: add an instance declaration for (Fractional Integer)
    In the second argument of `(+)', namely `3.5'
    In the expression: x + 3.5
    In an equation for `it': it = x + 3.5
Prelude>

I mean, seriously? We expect newbies to just roll with this kind of thing?
Even worse, the same sort of thing happens when trying to add a
`Data.Word.Word` to an `Integer`. This is a totally safe conversion if you
just let the result be `Integer`.

* The inability of Haskell to handle unary negation sanely, which means
that I and newbies alike are constantly having to figure things out and
parenthesize. From my observations of students, this is a huge barrier to
Haskell adoption: people who can't write 3 + -5 just give up on a
language. (I love the current error message here, "cannot mix `+' [infixl
6] and prefix `-' [infixl 6] in the same infix expression", which is about
as self-diagnosing of a language failing as any error message I've ever
seen.)

* The multiplicity of exponentiation functions, one of which looks exactly
like C's XOR operator, which I've watched trip up newbies a bunch of
times. (Indeed, NumericPrelude seems to have added more of these,
including the IMHO poorly-named (^-), which has nothing to do with numeric
negation as far as I can tell. See "unary negation" above.)

* The incredible awkwardness of hex/octal/binary input handling, which
requires digging a function with an odd and awkward return convention
(`readHex`) out of an oddly-chosen module (or rolling my own) in order to
read a hex value. (Output would be just as bad except for `Text.Printf` as
a safety hatch.) Lord knows what you're supposed to do if your number
might have a C-style base specifier on the front, other than the obvious
ugly brute-force thing.

* Defaulting numeric types with "-Wall" on producing scary warnings.

Prelude> 3 + 3

<interactive>:2:3: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Num a0) arising from a use of `+'
    In the expression: 3 + 3
    In an equation for `it': it = 3 + 3

<interactive>:2:3: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Num a0) arising from a use of `+' at <interactive>:2:3
      (Show a0) arising from a use of `print' at <interactive>:2:1-5
    In the expression: 3 + 3
    In an equation for `it': it = 3 + 3
6

and similarly for 3.0 + 3.0. If you can't even write simple addition
without turning off or ignoring warnings, well, I dunno. Something. Oh,
and try to get rid of those warnings. The only ways I know are
`3 + 3 :: Integer` or `(3 :: Integer) + 3`, both of which make code read
like a bad joke.

Of course, if you write everything to take specific integral or floating
types rather than `Integral` or `RealFloat` or `Num` this problem mostly
goes away. So everyone does, turning potentially general code into
needlessly over-specific code.

Not sure I'm done, but running out of steam. But yeah, while I'm fine with
fancy algebraic stuff getting fixed, I'd also like to see simple
grade-school-style arithmetic work sanely. That would let me teach Haskell
more easily as well as letting me write better, clearer, more correct
Haskell for that majority of my real-world problems that involve
grade-school numbers.

--Bart
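For concreteness, the Word-plus-Integer case above needs an explicit
conversion today; this is just the status quo spelled out, not a proposal
for new API:

import Data.Word (Word)

-- Widening a Word to Integer is always safe, but we still have to name
-- the conversion at every use site.
addWordInteger :: Word -> Integer -> Integer
addWordInteger w i = fromIntegral w + i

main :: IO ()
main = print (addWordInteger 7 (-10))   -- prints -3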

On 23 March 2014 11:58, Bart Massey
My complaint with numeric-prelude is that it doesn't (and arguably can't) fix the problems that for me make Haskell borderline usable for actual engineering work involving actual "normal" numbers, and genuinely somewhat unusable for teaching: Off the top of my head:
* The lack of implicit conversions (except for the weird defaulting of literals), which means that I am constantly writing `fromIntegral` and `realToFrac` in places where there is only one reasonable choice of type conversion, and occasionally having things just malfunction because I didn't quite understand what these conversion functions would give me as a result.
Prelude> 3 + 3.5
6.5
Prelude> let x = 3
Prelude> x + 3.5

<interactive>:4:5:
    No instance for (Fractional Integer) arising from the literal `3.5'
    Possible fix: add an instance declaration for (Fractional Integer)
    In the second argument of `(+)', namely `3.5'
    In the expression: x + 3.5
    In an equation for `it': it = x + 3.5
Prelude>
I mean, seriously? We expect newbies to just roll with this kind of thing?
Isn't this more of a ghci issue than a Haskell issue? I actually think it's good behaviour that - in actual code, not defaulted numerals in ghci - we need to explicitly convert between types rather than have a so-far-Integer-value automagically convert to Double.
Even worse, the same sort of thing happens when trying to add a `Data.Word.Word` to an `Integer`. This is a totally safe conversion if you just let the result be `Integer`.
What would the type of such an operation be? I think we'd need some kind of new typeclass to denote the "base value", which would make the actual type signature much more hairy.
* The inability of Haskell to handle unary negation sanely, which means that I and newbies alike are constantly having to figure things out and parenthesize. From my observations of students, this is a huge barrier to Haskell adoption: people who can't write 3 + -5 just give up on a language. (I love the current error message here, "cannot mix `+' [infixl 6] and prefix `-' [infixl 6] in the same infix expression", which is about as self-diagnosing of a language failing as any error message I've ever seen.)
This is arguably the fault of mathematics for overloading the - operator :p
* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-) which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.)
* The incredible awkwardness of hex/octal/binary input handling, which requires digging a function with an odd and awkward return convention (`readHex`) out of an oddly-chosen module (or rolling my own) in order to read a hex value. (Output would be just as bad except for `Text.Printf` as a safety hatch.) Lord knows what you're supposed to do if your number might have a C-style base specifier on the front, other than the obvious ugly brute-force thing?
* Defaulting numeric types with "-Wall" on producing scary warnings.
Prelude> 3 + 3
<interactive>:2:3: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Num a0) arising from a use of `+'
    In the expression: 3 + 3
    In an equation for `it': it = 3 + 3

<interactive>:2:3: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Num a0) arising from a use of `+' at <interactive>:2:3
      (Show a0) arising from a use of `print' at <interactive>:2:1-5
    In the expression: 3 + 3
    In an equation for `it': it = 3 + 3
6
and similarly for 3.0 + 3.0. If you can't even write simple addition without turning off or ignoring warnings, well, I dunno. Something. Oh, and try to get rid of those warnings. The only ways I know are `3 + 3 :: Integer` or `(3 :: Integer) + 3`, both of which make code read like a bad joke.
So above you didn't like how ghci defaulted to types too early, now you're complaining that it's _not_ defaulting? Or just that it gives you a warning that it's doing the defaulting?
Of course, if you write everything to take specific integral or floating types rather than `Integral` or `RealFloat` or `Num` this problem mostly goes away. So everyone does, turning potentially general code into needlessly over-specific code.
Not sure I'm done, but running out of steam. But yeah, while I'm fine with fancy algebraic stuff getting fixed, I'd also like to see simple grade-school-style arithmetic work sanely. That would let me teach Haskell more easily as well as letting me write better, clearer, more correct Haskell for that majority of my real-world problems that involve grade-school numbers.
--Bart
-- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com http://IvanMiljenovic.wordpress.com

On Sat, Mar 22, 2014 at 8:58 PM, Bart Massey
* The lack of implicit conversions (except for the weird defaulting of literals), which means that I am constantly writing `fromIntegral` and `realToFrac` in places where there is only one reasonable choice of type conversion, and occasionally having things just malfunction because I didn't quite understand what these conversion functions would give me as a result.
Prelude> 3 + 3.5
6.5
Prelude> let x = 3
Prelude> x + 3.5

<interactive>:4:5:
    No instance for (Fractional Integer) arising from the literal `3.5'
    Possible fix: add an instance declaration for (Fractional Integer)
    In the second argument of `(+)', namely `3.5'
    In the expression: x + 3.5
    In an equation for `it': it = x + 3.5
Prelude>
I mean, seriously? We expect newbies to just roll with this kind of thing?
Actually, we don't expect them to roll with it any more.
>>> let x = 3
>>> x + 3.5
6.5

ghci turns on NoMonomorphismRestriction in newer GHCs. To me that issue is
largely independent of changes to the numeric hierarchy, though I confess
I'd largely tuned out this thread and am just now skimming backwards a
bit. That said, implicit conversions are something where personally I feel
Haskell does take the right approach. It is the only language I have
access to where I can reason about the semantics of (+) sanely and extend
the set of types.
Even worse, the same sort of thing happens when trying to add a `Data.Word.Word` to an `Integer`. This is a totally safe conversion if you just let the result be `Integer`.
The problem arises when you allow for users to extend the set of numeric
types like Haskell does. We have a richer menagerie of exotic numerical
types than any other language, explicitly because of our adherence to a
stricter discipline and moving the coercion down to the literal rather
than up to every function application. Because of that, type inference
works better. It can flow both forward and backwards through (+), whereas
the approach you advocate is strictly less powerful. You have to give up
overloading of numeric literals, and in essence this gives up on the
flexibility of the numerical tower to handle open sets of new numerical
types. As someone who works with compensated arithmetic, automatic
differentiation, arbitrary precision floating point, interval arithmetic,
Taylor models, and all sorts of other numerical types in Haskell,
basically you're almost asking me to give up all the things that work in
this language to go back to a Scheme-style fixed numerical tower.
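To make the "coercion lives in the literal" point concrete, here is a
small self-contained example of the kind of user-defined numeric type
being described; the literal 3 below elaborates to fromInteger 3 at the
new type, so no implicit widening rule is needed (Dual is just a stand-in
here for the automatic-differentiation case):

-- Dual numbers: a value paired with its derivative, the building block of
-- forward-mode automatic differentiation.
data Dual = Dual Double Double deriving Show

instance Num Dual where
  Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
  Dual x dx * Dual y dy = Dual (x * y) (x * dy + dx * y)
  negate (Dual x dx)    = Dual (negate x) (negate dx)
  abs    (Dual x dx)    = Dual (abs x) (dx * signum x)
  signum (Dual x _)     = Dual (signum x) 0
  fromInteger n         = Dual (fromInteger n) 0   -- literals land here

-- d/dx (x^2 + 3x) at x = 5 is 13; the literal 3 becomes a Dual.
example :: Dual
example = let x = Dual 5 1 in x * x + 3 * x       -- Dual 40.0 13.0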
* The inability of Haskell to handle unary negation sanely, which means that I and newbies alike are constantly having to figure things out and parenthesize. From my observations of students, this is a huge barrier to Haskell adoption: people who can't write 3 + -5 just give up on a language. (I love the current error message here, "cannot mix `+' [infixl 6] and prefix `-' [infixl 6] in the same infix expression", which is about as self-diagnosing of a language failing as any error message I've ever seen.)
That is probably fixable by getting creative in the language grammar. I note it particularly because our Haskell-like language Ermine here at work gets it right. ;)
* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-) which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.)
It is unfortunate, but there really is a distinction being made.

* The incredible awkwardness of hex/octal/binary input handling, which
requires digging a function with an odd and awkward return convention
(`readHex`) out of an oddly-chosen module (or rolling my own) in order to
read a hex value. (Output would be just as bad except for `Text.Printf` as
a safety hatch.) Lord knows what you're supposed to do if your number
might have a C-style base specifier on the front, other than the obvious
ugly brute-force thing?
A lot of people these days turn to lens for that:
>>> :m + Numeric.Lens Control.Lens
>>> "7b" ^? hex
Just 123
>>> hex # 123
"7b"
* Defaulting numeric types with "-Wall" on producing scary warnings.
Prelude> 3 + 3
<interactive>:2:3: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Num a0) arising from a use of `+'
    In the expression: 3 + 3
    In an equation for `it': it = 3 + 3

<interactive>:2:3: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Num a0) arising from a use of `+' at <interactive>:2:3
      (Show a0) arising from a use of `print' at <interactive>:2:1-5
    In the expression: 3 + 3
    In an equation for `it': it = 3 + 3
6
and similarly for 3.0 + 3.0. If you can't even write simple addition without turning off or ignoring warnings, well, I dunno. Something. Oh, and try to get rid of those warnings. The only ways I know are `3 + 3 :: Integer` or `(3 :: Integer) + 3`, both of which make code read like a bad joke.
This is no longer a problem at ghci due to NMR:
>>> 3 + 3
6
You can of course use

:set -Wall -fno-warn-type-defaults

instead of -Wall for cases like

3 == 3

where the type doesn't get picked.
-Edward

On Sun, Mar 23, 2014 at 12:11 AM, Edward Kmett
On Sat, Mar 22, 2014 at 8:58 PM, Bart Massey
wrote:

>>> let x = 3
>>> x + 3.5
6.5
ghci turns on NoMonomorphismRestriction in newer GHCs.
Nice. AFAICT newer means 7.8, which I haven't tried yet. Major improvement.
Even worse, the same sort of thing happens when trying to add a `Data.Word.Word` to an `Integer`. This is a totally safe conversion if you just let the result be `Integer`.
Because of that type inference works better. It can flow both forward and backwards through (+), whereas the approach you advocate is strictly less powerful. You have to give up overloading of numeric literals, and in essence this gives up on the flexibility of the numerical tower to handle open sets of new numerical types.
You obviously know far more about this than me, but I'm not seeing it? AFAICT all I am asking for is numeric subtyping using the normal typeclass mechanism, but with some kind of preference rules that get the subtyping right in "normal" cases? I can certainly agree that I don't want to go toward C's morass of "widening to unsigned" (???) or explicitly-typed numeric literals. I just want a set of type rules that agrees with grade-school mathematics most of the time. I'm sure I'm missing something, and it really is that hard, but if so it makes me sad.
* The inability of Haskell to handle unary negation sanely, which means that I and newbies alike are constantly having to figure things out and parenthesize. From my observations of students, this is a huge barrier to Haskell adoption: people who can't write 3 + -5 just give up on a language. (I love the current error message here, "cannot mix `+' [infixl 6] and prefix `-' [infixl 6] in the same infix expression", which is about as self-diagnosing of a language failing as any error message I've ever seen.)
That is probably fixable by getting creative in the language grammar. I note it particularly because our Haskell like language Ermine here at work gets it right. ;)
Almost every PL I've seen with infix arithmetic gets it right. It's trivial for any operator-precedence parser, and not too hard for other common kinds. In general it would be nice if Haskell allowed arbitrary prefix and postfix unary operators, but I'd settle for a special case for unary minus.
* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-) which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.)
It is unfortunate, but there really is a distinction being made.
I get that. I even get that static-typing exponentiation is hard. (You should see how we did it in Nickle (http://nickle.org) -- not because it's good but because it calls out a lot of the problems.) What I don't get is why the names seem so terrible to me, nor why the typechecker can't do more to help reduce the number of needed operators, ideally to one. It might mean extra conversion operators around exponentiation once in a while, I guess?
* The incredible awkwardness of hex/octal/binary input handling, which requires digging a function with an odd and awkward return convention (`readHex`) out of an oddly-chosen module (or rolling my own) in order to read a hex value. (Output would be just as bad except for `Text.Printf` as a safety hatch.) Lord knows what you're supposed to do if your number might have a C-style base specifier on the front, other than the obvious ugly brute-force thing?
A lot of people these days turn to lens for that:
>>> :m + Numeric.Lens Control.Lens
>>> "7b" ^? hex
Just 123
>>> hex # 123
"7b"
Nice. Once this really becomes standard, I guess we'll be somewhere.
* Defaulting numeric types with "-Wall" on producing scary warnings.
This is no longer a problem at ghci due to NMR:
>>> 3 + 3
6
Nice. I'm not sure what to make of something like this (with ghci -Wall -XNoMonomorphismRestriction)
Prelude> let discard :: Integral a => a -> (); discard _ = ()
Prelude> discard 3

<interactive>:5:1: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Integral a0) arising from a use of `discard' at <interactive>:5:1-7
      (Num a0) arising from the literal `3' at <interactive>:5:9
    In the expression: discard 3
    In an equation for `it': it = discard 3

<interactive>:5:1: Warning:
    Defaulting the following constraint(s) to type `Integer'
      (Integral a0) arising from a use of `discard' at <interactive>:5:1-7
      (Num a0) arising from the literal `3' at <interactive>:5:9
    In the expression: discard 3
    In an equation for `it': it = discard 3
()

I get the feeling I just shouldn't worry about it: there's not much to be
done, as this is just a limitation of the static type system and not
really Haskell's fault. (Although I wonder why the conversion is warned
twice? :-)
You can of course use
:set -Wall -fno-warn-type-defaults
instead of -Wall for cases like
3 == 3
where the type doesn't get picked.
Of course. I have no idea what warnings I might be turning off that would actually be useful? Anyway, thanks much for the response! --Bart

On Sun, Mar 23, 2014 at 4:50 AM, Bart Massey
Even worse, the same sort of thing happens when trying to add a `Data.Word.Word` to an `Integer`. This is a totally safe conversion if you just let the result be `Integer`.
Because of that type inference works better. It can flow both forward and backwards through (+), whereas the approach you advocate is strictly less powerful. You have to give up overloading of numeric literals, and in essence this gives up on the flexibility of the numerical tower to handle open sets of new numerical types.
You obviously know far more about this than me, but I'm not seeing it? AFAICT all I am asking for is numeric subtyping using the normal typeclass mechanism, but with some kind of preference rules that get the subtyping right in "normal" cases? I can certainly agree that I don't want to go toward C's morass of "widening to unsigned" (???) or explicitly-typed numeric literals. I just want a set of type rules that agrees with grade-school mathematics most of the time. I'm sure I'm missing something, and it really is that hard, but if so it makes me sad.
Let's consider what directions type information flows through (+).

class Foo a where
  (+) :: a -> a -> a

Given that kind of class you get

(+) :: Foo a => a -> a -> a

Now, given the result type of an expression you know the types of both of
its arguments. If either argument is determined, you know the type of the
other argument and the result as well. Given any one of the arguments or
the result type, you know the other two types.

Consider now the kind of constraint you'd need for widening.

class Foo a b c | a b -> c where
  (+) :: a -> b -> c

Now, given both a and b you can know the type of c. But given just the
type of c, you know nothing about the type of either of its arguments.
a + b :: Int used to tell you that a and b both had to be Int, but now
you'd get nothing! Worse, given the type of one argument, you don't get to
know the type of the other argument or the result.

We went from getting 2 other types from 1 in all directions to getting 1
type from 2 others, in only 1 of 3 cases. The very cases you complained
about, where you needed defaulting, now happen virtually everywhere!
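A compilable sketch of the widening-style class described above, just to
watch the inference loss happen; the class name and instance set are
invented for illustration:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
module AddSketch where

import Prelude hiding ((+))
import qualified Prelude

-- Widening addition: both argument types determine the result type, but
-- the result type says nothing about the arguments.
class Add a b c | a b -> c where
  (+) :: a -> b -> c

instance Add Int     Int     Int     where x + y = x Prelude.+ y
instance Add Int     Integer Integer where x + y = fromIntegral x Prelude.+ y
instance Add Integer Int     Integer where x + y = x Prelude.+ fromIntegral y
instance Add Integer Integer Integer where x + y = x Prelude.+ y

-- Fine: both argument types are known, so the instance and result follow.
ok :: Integer
ok = (3 :: Int) + (4 :: Integer)

-- Not fine: knowing only the result type leaves the literals ambiguous,
-- so this would need annotations on both arguments.
-- bad :: Int
-- bad = 3 + 4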
* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-) which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.) It is unfortunate, but there really is a distinction being made.
I get that. I even get that static-typing exponentiation is hard. (You should see how we did it in Nickle (http://nickle.org) --not because it's good but because it calls out a lot of the problems.) What I don't get is why the names seem so terrible to me, nor why the typechecker can't do more to help reduce the number of needed operators, ideally to one. It might mean extra conversion operators around exponentiation once in a while, I guess?
I do think there were at least two genuinely bad ideas in the design of
(^), (^^) and (**) originally. Notably the choice in (^) and (^^) to
overload on the power's Integral type. More often than not this simply
leads to an ambiguous choice for simple cases like x^2.

Even if that had been monomorphized, and MPTCs existed when they were
defined, AND we wanted to use fundeps in the numerical tower, you'd still
need at least two operators. You can't overload cleanly between (^) and
(**) because instance selection would overlap. Both of their right sides
unify:

(^)  :: ... => a -> Int -> a
(**) :: ... => a -> a -> a
You can of course use
:set -Wall -fno-warn-type-defaults
Of course. I have no idea what warnings
I might be turning off that would actually be useful?
The warnings that turns off are just the ones like your discard and the 3 == 3 where it had to turn to defaulting for an under-determined type. -Edward

On Sun, Mar 23, 2014 at 01:50:43AM -0700, Bart Massey wrote:
>>> :m + Numeric.Lens Control.Lens
>>> "7b" ^? hex
Just 123
>>> hex # 123
"7b"
Nice. Once this really becomes standard, I guess we'll be somewhere.
Unless there's some way I'm unaware of to statically verify that the
string indeed represents a valid hex encoding, this is still not a
complete solution.

Tom

On 23.03.2014 19:16, Tom Ellis wrote:
On Sun, Mar 23, 2014 at 01:50:43AM -0700, Bart Massey wrote:
>>> :m + Numeric.Lens Control.Lens
>>> "7b" ^? hex
Just 123
>>> hex # 123
"7b"
Nice. Once this really becomes standard, I guess we'll be somewhere.
Unless there's some way I'm unaware of to statically verify that the string indeed represents a valid hex encoding, then this is still not a complete solution.
Since the original complaint was about parsing, there must be a way to
fail. The first line ("7b" ^? hex) seems to allow failure. For literal hex
input we have 0x7b. However, I don't see a problem with readHex:

Prelude> case Numeric.readHex "7b" of [(n,"")] -> print n; _ -> putStrLn "could not parse hex number"
123

On Sun, Mar 23, 2014 at 07:30:07PM +0100, Henning Thielemann wrote:
On 23.03.2014 19:16, Tom Ellis wrote:
Unless there's some way I'm unaware of to statically verify that the string indeed represents a valid hex encoding, then this is still not a complete solution.
Since the original complaint was about parsing, there must be a way to fail. The first line ("7b" ^? hex) seems to allow failure.
For literal hex input we have 0x7b.
Oh, I wasn't reading carefully enough!

On 23.03.2014 09:50, Bart Massey wrote:
* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-) which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.)
It is unfortunate, but there really is a distinction being made.
I get that. I even get that static-typing exponentiation is hard. (You should see how we did it in Nickle (http://nickle.org) --not because it's good but because it calls out a lot of the problems.) What I don't get is why the names seem so terrible to me, nor why the typechecker can't do more to help reduce the number of needed operators, ideally to one. It might mean extra conversion operators around exponentiation once in a while, I guess?
I think the power functions of Haskell are the best we can do, and
mathematics is to blame for having only one notation for different power
functions. http://www.haskell.org/haskellwiki/Power_function

I like to compare it to division. In school we first learnt natural
numbers and that division cannot always be performed within natural
numbers; instead we have division with remainder. In contrast to that, we
can always divide rational numbers (except division by zero). In Haskell
this is nicely captured by two different functions, div and (/). The same
way, I find it sensible to distinguish power functions.

I found the infix operator names (^^) and (**) not very intuitive and
defined (^-) and (^/) in NumericPrelude, in order to show that the first
one allows negative exponents and the second one allows fractional
exponents. Unfortunately the first one looks like a power function with
negated exponent, and I had no better idea for an identifier so far.
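For readers following along, these are the actual Prelude types of the
three power functions being discussed, next to the div/(/) analogy:

(^)  :: (Num a, Integral b)        => a -> b -> a   -- non-negative integral exponent
(^^) :: (Fractional a, Integral b) => a -> b -> a   -- any integral exponent
(**) :: Floating a                 => a -> a -> a   -- arbitrary floating-point exponent

div :: Integral a   => a -> a -> a   -- integer division (pairs with mod)
(/) :: Fractional a => a -> a -> a   -- exact division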

On 23.03.2014 19:25, Henning Thielemann wrote:
I found the infix operator names (^^) and (**) not very intuitive and defined (^-) and (^/) in NumericPrelude, in order to show, that the first one allows negative exponents and the second one allows fractional exponents. Unfortunately the first one looks like power function with negated exponent and I had no better idea for an identifier so far.
Ah, so x ^- 2 (or is it x ^- 2.0) is not the square root of x? Maybe you
should change this identifier indeed!

Maybe use ^+- instead. This looks a bit confusing, but ^- is outright
misleading.

Cheers,
Andreas

--
Andreas Abel  <><  You are the beloved human being.
Department of Computer Science and Engineering
Chalmers and Gothenburg University, Sweden
andreas.abel@gu.se
http://www2.tcs.ifi.lmu.de/~abel/

On 24.03.2014 10:58, Andreas Abel wrote:
On 23.03.2014 19:25, Henning Thielemann wrote:
I found the infix operator names (^^) and (**) not very intuitive and defined (^-) and (^/) in NumericPrelude, in order to show, that the first one allows negative exponents and the second one allows fractional exponents. Unfortunately the first one looks like power function with negated exponent and I had no better idea for an identifier so far.
Ah, so x ^- 2 (or is it x ^- 2.0) is not the square root of x? Maybe you should change this identifier indeed!
If at all, it could be misinterpreted as 1/x^2. Square root would be x^/0.5.
Maybe use ^+- instead.
Nice idea! Analogously I could define (^*/).

On Sun, Mar 23, 2014 at 3:11 AM, Edward Kmett
That said, implicit conversions are something where personally I feel Haskell does take the right approach. It is the only language I have access to where I can reason about the semantics of (+) sanely and extend the set of types.
Ditto. It can sometimes be unfortunate that we don't get the subset
inclusions for free, but in my experience the few extra annotations
required are a small price to pay for all the bugs caught by not doing so,
when doing numerical work! Not to mention that Int does not really have a
subset inclusion into Double, and so this implicit coercion should be
forbidden even if we did allow implicitly coercing Int to Integer, etc.
* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-) which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.)
It is unfortunate, but there really is a distinction being made.
I'd go even farther and say that this is another case where Haskell is
doing the right thing. There's a fundamental distinction going on here.
The only complaints I have here are:

1) Because the type of the second argument to (^) and (^^) is polymorphic,
numeric literals introduce warnings about defaulting. This is the one
place where needing to make coercions explicit really does get on my
nerves.

2) (^) does not restrict the second argument to non-negative values and
thus can throw runtime errors, when it really should be a type error.

3) (**) should probably be split to distinguish exponentials and
rational/real-valued powers.

To see a clear example of how (^) and (^^) are fundamentally different
operators, consider their instantiation for square matrices. We can use
(^) whenever the scalars of the matrix form a semiring; whereas using (^^)
requires that we be able to invert the matrix, which means the scalars
must form a field[1].

[1] I'm not sure off hand if we can weaken the requirements to just be a
division ring or (strongly) von Neumann regular ring; but certainly my
implementation assumes it's a field.

-- Live well, ~wren
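To make the matrix example concrete, here is a minimal sketch of a
(^)-style power that touches the scalars only through the Num operations,
which is exactly why it makes sense over semiring-like scalars. The
list-of-rows representation and the positive-exponent restriction are
simplifications of this sketch, not a claim about wren's implementation:

import Data.List (transpose)

-- Square matrices as lists of rows; no size checking in this sketch.
newtype Matrix a = Matrix [[a]] deriving Show

-- Matrix multiplication uses only (+) and (*) on the scalars.
mmul :: Num a => Matrix a -> Matrix a -> Matrix a
mmul (Matrix xs) (Matrix ys) =
  Matrix [ [ sum (zipWith (*) row col) | col <- transpose ys ] | row <- xs ]

-- (^)-style power for positive exponents: no division or negation needed,
-- so a semiring constraint would suffice if the Prelude had one.
mpow :: Num a => Matrix a -> Int -> Matrix a
mpow m n
  | n <= 0    = error "mpow: this sketch handles positive exponents only"
  | n == 1    = m
  | even n    = let h = mpow m (n `div` 2) in mmul h h
  | otherwise = mmul m (mpow m (n - 1))

-- A (^^)-style power with negative exponents would additionally need a
-- matrix inverse, i.e. field-like (Fractional) scalars.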

On Tue, Mar 25, 2014 at 6:54 PM, wren romano
* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-) which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.)
It is unfortunate, but there really is a distinction being made.
I'd go even farther and say that this is another case where Haskell is doing the right thing. There's a fundamental distinction going on here. The only complaints I have here are:
1) Because the type of the second argument to (^) and (^^) is polymorphic, numeric literals introduce warnings about defaulting. This is the one place where needing to make coercions explicit really does get on my nerves.
This situation really should not be a warning. It's an Integral constraint
on a type in contravariant position, which by itself can only be ambiguous
if it's given something built from numeric literals and basic arithmetic.
Having an Integral instance means a type supports conversions to and from
Integer, and I would be very dubious about a type where (fromInteger .
toInteger) was not equivalent to id. I really can't see how defaulting an
Integral constraint alone to Integer is anything other than safe and
sensible.

Unfortunately the same type may have other class constraints for which
defaulting to Integer would be ridiculous, and if memory serves me the way
defaults work doesn't distinguish between the clearly sensible-to-default
"6 ^ 12" and the equally not sensible "3 ^ read userInput". In fact, I'd
prefer that any type with a Read constraint not default at all, ever.
That's just asking for trouble. On the other hand, a warning probably
makes sense when defaulting a type with e.g. a Show constraint, or Num
(without Integral as well).

I'm not sure what exact rules would make the most sense here, but I don't
think the current defaulting mechanism would support anything more
sophisticated anyhow, so it's not a library issue.
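For reference, the standard defaulting rule being described can be seen in
a module like the following; the module and function names are invented,
and whether the second case ought to warn is exactly the open question:

module DefaultingExample where

default (Integer, Double)   -- also the implicit default when omitted

-- The sensible-to-default case: the exponent's type is constrained only
-- by Integral/Num, and defaulting quietly picks Integer.
sixToTheTwelfth :: Integer
sixToTheTwelfth = 6 ^ 12

-- The questionable case: the exponent's type carries both Integral (from
-- ^) and Read (from read) constraints, and the standard rules still
-- default it to Integer, silently deciding how userInput is parsed.
powerFromInput :: String -> Integer
powerFromInput userInput = 3 ^ read userInput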
2) (^) does not restrict the second argument to non-negative values and thus can throw runtime errors, when it really should be a type error.
...which would require a Natural type similar to Integer, and probably the corresponding type classes as well, which among other things entails adding a superclass to Num containing fromNatural (and, ideally, containing (+) and (*) as well). That's a pretty big bridge to burn, albeit a very temptingly flammable one. - C.
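As a rough sketch of the direction, the non-negative-exponent power really
does become total once the exponent type is Natural (Numeric.Natural
exists in today's base; at the time of this thread it, and any
fromNatural-bearing superclass of Num, would have had to be added first):

import Numeric.Natural (Natural)

-- A power whose exponent type makes negative powers unrepresentable, so
-- the runtime error turns into a type error at the call site.
powNat :: Num a => a -> Natural -> a
powNat _ 0 = 1
powNat x n = x * powNat x (n - 1)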

On 26.03.2014 16:10, Casey McCann wrote:
This situation really should not be a warning. It's an Integral constraint on a type in contravariant position, which by itself can only be ambiguous if it's given something built from numeric literals and basic arithmetic. Having an Integral instance means a type supports conversions to and from Integer, and I would be very dubious about a type where (fromInteger . toInteger) was not equivalent to id. I really can't see how defaulting an Integral constraint alone to Integer is anything other than safe and sensible.
In NumericPrelude I decided to fix the exponent type to Integer.
1. This solves the defaulting problem.
2. Most of the time in my code the exponent is constant and is 2.
3. It broke a dependency cycle between the numeric classes.
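That choice gives roughly this shape of signature; the sketch below just
uses Num, whereas the real NumericPrelude version is defined against its
own class hierarchy:

import Prelude hiding ((^))

-- A power operator with its exponent fixed to Integer, so no defaulting
-- is ever needed at use sites like x ^ 2.
(^) :: Num a => a -> Integer -> a
x ^ n
  | n < 0     = error "(^): negative exponent"
  | n == 0    = 1
  | otherwise = x * x ^ (n - 1)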
participants (13):
- Andreas Abel
- Bart Massey
- Carter Charbonneau
- Carter Schonwald
- Casey McCann
- Corey O'Connor
- Edward Kmett
- harry
- Henning Thielemann
- Ivan Lazar Miljenovic
- Tobias Brandt
- Tom Ellis
- wren romano