Re: Revamping the numeric classes

Other people have been making great points for me. (I particularly liked the example of Dollars as a type with addition but not multiplication.) One point that has not been made: given a class setup like

    class Additive a where
        (+)    :: a -> a -> a
        (-)    :: a -> a -> a
        negate :: a -> a
        zero   :: a

    class Multiplicative a where
        (*) :: a -> a -> a
        one :: a

    class (Additive a, Multiplicative a) => Num a where
        fromInteger :: Integer -> a

then naive users can continue to use (Num a) in contexts, and the same programs will continue to work.[1]

(A question in the above context is whether the literal '0' should be interpreted as 'fromInteger (0::Integer)' or as 'zero'. Opinions?)

On Wed, Feb 07, 2001 at 06:27:02PM +1300, Brian Boutel wrote:
* Haskell equality is a defined operation, not a primitive, and may not be decidable. It does not always define equivalence classes, because a==a may be Bottom, so what's the problem? It would be a problem, though, to have to explain to a beginner why they can't print the result of a computation.
Why doesn't your argument show that all types should be instances of Eq and Show? Why are numeric types special?

Best,
Dylan Thurston

Footnotes:
[1] Except for the lack of abs and signum, which should be in some other class. I have to think about their semantics before I can say where they belong.
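A minimal sketch of the Dollars example under the class split above (the Dollars type and its instance are illustrative, not from the thread; it assumes the Additive class from the previous message shadows the Prelude's operators):

    -- Adding dollar amounts makes sense; multiplying two dollar
    -- amounts does not, so there is no Multiplicative instance.
    newtype Dollars = Dollars Integer deriving (Eq, Show)

    instance Additive Dollars where
        Dollars x + Dollars y = Dollars (x + y)  -- inner (+) is Integer addition
        Dollars x - Dollars y = Dollars (x - y)
        negate (Dollars x)    = Dollars (negate x)
        zero                  = Dollars 0

    -- 'Dollars 3 + Dollars 4' type-checks;
    -- 'Dollars 3 * Dollars 4' is rejected, since Dollars is not Multiplicative.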

On Wed, Feb 07, 2001 at 01:57:41PM -0500, Dylan Thurston wrote:
... One point that has not been made: given a class setup like <deleted> then naive users can continue to use (Num a) in contexts, and the same programs will continue to work.
I take that back. Instance declarations would change, so this isn't a very conservative change. (Users would have to make instance declarations for Additive, Multiplicative, and Num where before they just made a declaration for Num. Of course, they don't have to write any more code.) Best, Dylan Thurston

Dylan Thurston wrote:
Why doesn't your argument show that all types should be instances of Eq and Show? Why are numeric types special?
Why do you think it does? I certainly don't think so.

The point about Eq was that an objection was raised to Num being a subclass of Eq because, for some numeric types, equality is undecidable. I suggested that Haskell equality could be undecidable, so (==) on those types could reflect the real situation. One would expect that it could do so in a natural way, producing a value of True or False when possible, and diverging otherwise. Thus no convincing argument has been given for removing Eq as a superclass of Num.

In general, if you fine-grain the class hierarchy too much, the picture gets very complicated. If you need to define separate subclasses of Num for those types which have both Eq and Show, those that only have Eq, those that only have Show and those that have neither, not to mention those that have Ord as well as Eq and those that don't, and then for all the other distinctions that will be suggested, my guess is that Haskell will become the preserve of a few mathematicians and everyone else will give up in disgust. Then the likely result is that no-one will be interested in maintaining and developing Haskell and it will die.

--brian

Dylan Thurston writes:
 | (A question in the above context is whether the literal '0' should
 | be interpreted as 'fromInteger (0::Integer)' or as 'zero'.
 | Opinions?)

Opinions? Be careful what you wish for. ;-)

In a similar discussion last year, I was making wistful noises about subtyping, and one of Marcin's questions http://www.mail-archive.com/haskell-cafe@haskell.org/msg00125.html was whether the numeric literal 10 should have type Int8 (2's complement octet) or Word8 (unsigned octet). At the time I couldn't give a wholly satisfactory answer. Since then I've read the oft-cited paper "On Understanding Types, Data Abstraction, and Polymorphism" (Cardelli & Wegner, ACM Computing Surveys, Dec 1985), which suggests a nice answer: give the numeric literal 10 the range type 10..10, which is defined implicitly and is a subtype of both -128..127 (Int8) and 0..255 (Word8).

The differences in arithmetic on certain important range types could be represented by multiple primitive functions (or perhaps foreign functions, through the FFI):

    primAdd   :: Integer -> Integer -> Integer  -- arbitrary precision
    primAdd8s :: Int8    -> Int8    -> Int8     -- overflow at -129, 128
    primAdd8u :: Word8   -> Word8   -> Word8    -- overflow at -1, 256
    -- etc.

    instance Additive Integer where
        zero = 0
        (+)  = primAdd

...with similar instances for the integer subrange types which may overflow. These other instances would belong outside the standard Prelude, so that the ambiguity questions don't trouble people (such as beginners) who don't care about the space and time advantages of fixed precision integers.

Subtyping offers an alternative approach to handling arithmetic overflows:

  - Use only arbitrary precision arithmetic.
  - When a calculated result *really* needs to be packed into a fixed precision format, project it (or treat it down, etc., whatever's your preferred name), so that overflows are represented as Nothing.

For references to other uses of class Subtype see:
    http://www.mail-archive.com/haskell@haskell.org/msg07303.html
For a reference to some unification-driven rewrites, see:
    http://www.mail-archive.com/haskell@haskell.org/msg07327.html

Marcin 'Qrczak' Kowalczyk writes:
 | Assuming that Ints can be implicitly converted to Doubles, is the
 | function
 |     f :: Int -> Int -> Double -> Double
 |     f x y z = x + y + z
 | ambiguous? Because there are two interpretations:
 |     f x y z = realToFrac x + realToFrac y + z
 |     f x y z = realToFrac (x + y) + z
 |
 | Making this and similar cases ambiguous means inserting lots of
 | explicit type signatures to disambiguate subexpressions.
 |
 | Again, arbitrarily choosing one of the alternatives basing on some
 | set of weighting rules is dangerous,

I don't think the following disambiguation is too arbitrary:

    x + y + z                   -- as above
    --> (x + y) + z             -- left-associativity of (+)
    --> realToFrac (x + y) + z  -- injection (or treating up) done
                                -- conservatively, i.e. only where needed

Regards,
Tom
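A minimal sketch of the projection step described above (toInt8 is a hypothetical helper, not part of any proposal in the thread; Int8 is assumed to come from Data.Int in today's module layout):

    import Data.Int (Int8)

    -- Compute in Integer, and only pack down at the boundary;
    -- an overflowing result becomes Nothing instead of wrapping.
    toInt8 :: Integer -> Maybe Int8
    toInt8 n
        | -128 <= n && n <= 127 = Just (fromInteger n)
        | otherwise             = Nothing

For example, toInt8 (120 + 10) is Nothing, where fixed-precision Int8 addition would silently wrap around to -126.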

On Thu, 8 Feb 2001, Tom Pledger wrote:
nice answer: give the numeric literal 10 the range type 10..10, which is defined implicitly and is a subtype of both -128..127 (Int8) and 0..255 (Word8).
What are the inferred types for

    f = map (\x -> x+10)
    g l = l ++ f l

? I hope I can use them as [Int] -> [Int].
x + y + z -- as above
--> (x + y) + z -- left-associativity of (+)
--> realToFrac (x + y) + z -- injection (or treating up) done conservatively, i.e. only where needed
What does it mean "where needed"? Type inference does not proceed inside-out.

What about this?

    h f = f (1::Int) == (2::Int)

Can I apply f to a function of type Int->Double? If no, then it's a pity, because I could inline it (the comparison would be done on Doubles). If yes, then what is the inferred type for h? Note that Int->Double is not a subtype of Int->Int, so if h :: (Int->Int)->Bool, then I can't imagine how h can be applied to something :: Int->Double.

-- Marcin 'Qrczak' Kowalczyk

First, a general remark which has nothing to do with Num.

PLEASE WATCH YOUR DESTINATION ADDRESSES

People regularly send their postings to haskell-cafe with several private receiver addresses, which is a bit annoying when you click "reply all"...

Brian Boutel after Dylan Thurston:
Why doesn't your argument show that all types should be instances of Eq and Show? Why are numeric types special?
Why do you think it does? I certainly don't think so.
The point about Eq was that an objection was raised to Num being a subclass of Eq because, for some numeric types, equality is undecidable. I suggested that Haskell equality could be undecidable, so (==) on those types could reflect the real situation. One would expect that it could do so in a natural way, producing a value of True or False when possible, and diverging otherwise. Thus no convincing argument has been given for removing Eq as a superclass of Num.
In general, if you fine-grain the class hierarchy too much, the picture gets very complicated. If you need to define separate subclasses of Num for those types which have both Eq and Show, those that only have Eq, those that only have Show and those that have neither, not to mention those that have Ord as well as Eq and those that don't, and then for all the other distinctions that will be suggested, my guess is that Haskell will become the preserve of a few mathematicians and everyone else will give up in disgust. Then the likely result is that no-one will be interested in maintaining and developing Haskell and it will die.
Strange, but from the objectives mentioned in the last part of this posting (even if a little demagogic [insert smiley here if you wish]) I draw the opposite conclusions. The fact that the number of cases is quite large suggests that Eq, Show and arithmetic should be treated as *orthogonal* issues, and treated independently.

If somebody needs Show for his favourite data type, he is free to arrange this himself. I repeat what I have already said: I work with functional objects as mathematical entities. I want to add parametric surfaces, to rotate trajectories. Also, to handle gracefully and legibly, for those simpletons who call themselves 'theoretical physicists', the arithmetic of un-truncated lazy streams representing power series, or infinitely dimensional differential algebra elements. Perhaps those are not convincing arguments for Brian Boutel. They are certainly so for me. Num, with this forced marriage of (+) and (*), violates the principle of orthogonality. Eq and Show constraints make it worse.

===

And, last, but very high on my check-list: the implicit coercion of numeric constants: 3.14 -=->> (fromDouble 3.14) etc. is sick. (Or was; I still didn't install the last version of GHC, and with Hugs it is bad.) The decision is taken by the compiler internally, and it doesn't care at all about the fact that in my prelude I have eliminated the Num class and redefined fromDouble, fromInt, etc.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Dylan Thurston terminates his previous posting about Num with:
Footnotes: [1] Except for the lack of abs and signum, which should be in some other class. I have to think about their semantics before I can say where they belong.
Now, signum and abs seem to be quite distinct beasts. Signum seems to require Ord (and a generic zero...).

Abs, from the mathematical point of view, constitutes a *norm*. Now, frankly, I haven't the slightest idea how to cast this concept into the Haskell class hierarchy in a sufficiently general way...

I'll tell you anyway that if you try to "sanitize" the numeric classes, if you separate additive structures and the multiplication, if you finally define abstract Vectors over some field of scalars, and if you demand the existence of a generic normalization for your vectors, then *most probably* you will need multiparametric classes with dependencies.

Jerzy Karczmarczuk
Caen, France

On Thu, Feb 08, 2001 at 11:24:49AM +0000, Jerzy Karczmarczuk wrote:
First, a general remark which has nothing to do with Num.
PLEASE WATCH YOUR DESTINATION ADDRESSES People regularly send their postings to haskell-cafe with several private receiver addresses, which is a bit annoying when you click "reply all"...
Yes, apologies. The way the lists do the headers makes it very easy to reply to individuals, and hard to reply to the list.
And, last, but very high on my check-list:
The implicit coercion of numeric constants: 3.14 -=->> (fromDouble 3.14) etc. is sick. (Or was; I still didn't install the last version of GHC, and with Hugs it is bad). The decision is taken by the compiler internally, and it doesn't care at all about the fact that in my prelude I have eliminated the Num class and redefined fromDouble, fromInt, etc.
Can't you just put "default ()" at the top of each module? I suppose you still have the problem that a numeric literal "5" means "Prelude.fromInteger 5". Can't you define your types to be instances of Prelude.Num, with no operations defined except Prelude.fromInteger?
Dylan Thurston terminates his previous posting about Num with:
Footnotes: [1] Except for the lack of abs and signum, which should be in some other class. I have to think about their semantics before I can say where they belong.
Now, signum and abs seem to be quite distinct beasts. Signum seems to require Ord (and a generic zero...).
Abs, from the mathematical point of view, constitutes a *norm*. Now, frankly, I haven't the slightest idea how to cast this concept into the Haskell class hierarchy in a sufficiently general way...
This was one thing I liked with the Haskell hierarchy: the observation that "signum" of real numbers is very much like "argument" of complex numbers. abs and signum in Haskell satisfy an implicit law:

    abs x * signum x = x  [1]

So signum can be defined anywhere you can define abs (except that it's not a continuous function, so is not terribly well-defined). A default definition for signum x might read

    signum x = let a = abs x in if (a == 0) then 0 else x / abs x

(Possibly signum is the wrong name. What is the standard name for this operation for, e.g., matrices?)

[Er, on second thoughts, it's not as well-defined as I thought. abs x needs to be in a field for the definition above to work.]
I'll tell you anyway that if you try to "sanitize" the numeric classes, if you separate additive structures and the multiplication, if you finally define abstract Vectors over some field of scalars, and if you demand the existence of a generic normalization for your vectors, then *most probably* you will need multiparametric classes with dependencies.
Multiparametric classes, certainly (for Vectors, at least). Fortunately, they will be in Haskell 2 with high probability. I'm not convinced about dependencies yet.
Jerzy Karczmarczuk Caen, France
Best,
Dylan Thurston

Footnotes:
[1] I'm not sure what I mean by "=" there, since I do not believe these should be forced to be instances of Eq. For clearer cases, consider the various Monad laws, e.g.,

    join . join = join . fmap join

(Hope I got that right.) What does "=" mean there? Some sort of denotational equality, I suppose.

Thu, 08 Feb 2001 11:24:49 +0000, Jerzy Karczmarczuk
The implicit coercion of numeric constants: 3.14 -=->> (fromDouble 3.14) etc. is sick.
What do you propose instead? (BTW, it's fromRational, to keep arbitrarily large precision.)
Now, signum and abs seem to be quite distinct beasts. Signum seems to require Ord (and a generic zero...).
Signum doesn't require Ord. signum z = z / abs z for complex numbers.

--
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/    ^^ SYGNATURA ZASTĘPCZA QRCZAK

On Thu, Feb 08, 2001 at 08:30:31PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
Signum doesn't require Ord. signum z = z / abs z for complex numbers.
I'd be careful here.
\begin{code}
signum 0 = 0
signum z = z / abs z
\end{code}
This is, perhaps, neither precise nor general enough.
The signum/abs pair seem to represent direction and magnitude.
According to the line of reasoning in some of the earlier posts in this
flamewar, the following constraints:
(1) z = signum z <*> abs z where <*> is appropriately defined
(2) abs $ signum z = 1
should be enforced, if possible, by the type system. This suggests
that for any type having a vector space structure over Fractional
(or whatever the hierarchy you're brewing up uses for rings with
a division partial function on them) that the result type of signum
lives in a more restricted universe, perhaps even one with a different
structure (operations defined on it, set of elements) than the argument
type, and it seems more than possible to parametrize it on the argument
type. The abs is in fact a norm, and the signum projects V^n -> V^n / V.
Attempts to define these things on Gaussian integers, p-adic numbers,
polynomial rings, and rational points on elliptic curves will quickly
reveal limitations of the stock class hierarchy.
Now, whether it's actually desirable to scare newcomers to the language
into math phobia, wetting their pants, and running screaming with
subtleties like this suggests perhaps that one or more "alternative
Preludes" may be desirable to have. There is a standard Prelude, why not
a nonstandard one or two? We have the source. The needs of the geek do
not outweigh the needs of the many. Hence, we can cook up a few Preludes
or so on our own, and certainly if we can tinker enough to spam the list
with counterexamples and suggestions of what we'd like the Prelude to
have, we can compile up a Prelude for ourselves with our "suggested
changes" included and perhaps one day knock together something which can
actually be used and has been tested, no?
The Standard Prelude serves its purpose well and accommodates the
largest cross-section of users. Perhaps a Geek Prelude could
accommodate the few of us who do need these sorts of shenanigans.
Cheers,
Bill
--

Marcin 'Qrczak' Kowalczyk writes:
 | On Thu, 8 Feb 2001, Tom Pledger wrote:
 |
 | > nice answer: give the numeric literal 10 the range type 10..10,
 | > which is defined implicitly and is a subtype of both -128..127
 | > (Int8) and 0..255 (Word8).
 |
 | What are the inferred types for
 |     f = map (\x -> x+10)
 |     g l = l ++ f l
 | ? I hope I can use them as [Int] -> [Int].

    f, g :: (Subtype a b, Subtype 10..10 b, Num b) => [a] -> [b]

Yes, because of the substitution {Int/a, Int/b}.

 | > x + y + z                   -- as above
 | > --> (x + y) + z             -- left-associativity of (+)
 | > --> realToFrac (x + y) + z  -- injection (or treating up) done
 | >                             -- conservatively, i.e. only where needed
 |
 | What does it mean "where needed"? Type inference does not proceed
 | inside-out.

In the expression

    (x + y) + z

we know from the explicit type signature (in your question that I was responding to) that x,y::Int and z::Double. Type inference does not need to treat x or y up, because it can take the first (+) to be Int addition. However, it must treat the result (x + y) up to the most specific supertype which can be added to a Double.

 | What about this?
 |     h f = f (1::Int) == (2::Int)
 | Can I apply f

h?

 | to a function of type Int->Double?

Yes.

 | If no, then it's a pity, because I could inline it (the comparison
 | would be done on Doubles). If yes, then what is the inferred type
 | for h? Note that Int->Double is not a subtype of Int->Int, so if h
 | :: (Int->Int)->Bool, then I can't imagine how h can be applied to
 | something :: Int->Double.

There's no explicit type signature for the result of applying f to (1::Int), so...

    h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool

That can be inferred by following the structure of the term. Function terms do seem prone to an accumulation of deferred subtype constraints.

Regards,
Tom

William Lee Irwin III wrote:
The Standard Prelude serves its purpose well and accommodates the largest cross-section of users. Perhaps a Geek Prelude could accommodate the few of us who do need these sorts of shenanigans.
Amen. --brian

Marcin 'Qrczak' Kowalczyk wrote:
JK> Now, signum and abs seem to be quite distinct beasts. Signum seems
JK> to require Ord (and a generic zero...).
Signum doesn't require Ord. signum z = z / abs z for complex numbers.
Thank you, I know. And I ignore it. Calling "signum" the result of a vector normalization (on the Gauss plane in this case) is something I don't really appreciate, and I wonder why this definition infiltrated the Prelude. Just because it conforms to the "normal" definition of signum for reals?

Again, a violation of the orthogonality principle. Needing division just to define signum. And of course a completely different approach to define the signum of integers. Or of polynomials...

Jerzy Karczmarczuk

Brian Boutel wrote:
William Lee Irwin III wrote:
The Standard Prelude serves its purpose well and accommodates the largest cross-section of users. Perhaps a Geek Prelude could accommodate the few of us who do need these sorts of shenanigans.
Amen.
Aha. And we will have The Prole, normal users who can live with incomplete, sometimes contradictory math, and The Inner Party of those who know The Truth? Would you agree that your children be taught at primary school some dubious matter because "they won't need the real stuff"?

I would agree to having a minimal standard Prelude which is incomplete. But it should be sane, should avoid confusion of categories and useless/harmful dependencies. Methodologically and pedagogically it seems a bit risky. Technically it may be awkward. It will require the compiler and the standard libraries to be almost completely independent of each other. This is not the case now.

BTW, what is a shenanigan? Is it by definition something consumed by Geeks? Is the usage of Vector Spaces restricted to those few Geeks who can't live without shenanigans?

Jerzy Karczmarczuk

PS. For some time I have followed the discussion on some newsgroups dealing with computer graphics, imagery, game programming, etc. I noticed a curious, strong influence of people who shout loudly:

"Math?! You don't need it really. Don't waste your time on it! Don't waste your time on cute algorithms, they will be slow as hell. Learn assembler, "C", MMX instructions, learn DirectX APIs, forget these silly geometric speculations. Behave *normally*, as a *normal* computer user, not as a speculative mathematician!"

And I noticed that REGULARLY, 1 - 4 times a week, some freshmen ask over and over again such questions:

1. How to rotate a vector in 3D?
2. How to zoom an image?
3. What is a quaternion, and why do some people hate them so much?
4. How to compute a trajectory if I know the force acting on the object?

To summarize: people who don't use and don't need math always feel entitled to discourage others from giving it adequate importance. It is not they who will suffer from a badly constructed math layer of a language, or from badly taught math concepts, so they don't care too much.

William Lee Irwin III wrote:
The Standard Prelude serves its purpose well and accommodates the largest cross-section of users. Perhaps a Geek Prelude could accommodate the few of us who do need these sorts of shenanigans.
I, of course, intend to use the Geek Prelude(s) myself. =)

On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
Aha. And we will have The Prole, normal users who can live with incomplete, sometimes contradictory math, and The Inner Party of those who know The Truth? Would you agree that your children be taught at primary school some dubious matter because "they won't need the real stuff"?
This is, perhaps, the best argument against my pseudo-proposal. I'm not against resolving things that are outright inconsistent or otherwise demonstrably bad, but the simplifications made to prevent the (rather large) mathphobic segment of the population from wetting their pants probably shouldn't be done away with to add more generality for the advanced users. We can write our own preludes anyway. On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
I would agree to having a minimal standard Prelude which is incomplete. But it should be sane, should avoid confusion of categories and useless/harmful dependencies.
At the risk of turning this into "me too", I'm in agreement here. On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
Methodologically and pedagogically it seems a bit risky. Technically it may be awkward. It will require the compiler and the standard libraries to be almost completely independent of each other. This is not the case now.
I'm seeing a bit of this now, and the error messages GHC spits out are hilarious! e.g.

    My brain just exploded.
    I can't handle pattern bindings for existentially-quantified constructors.

and

    Couldn't match `Bool' against `Bool'
        Expected type: Bool
        Inferred type: Bool

They're not quite Easter eggs, but they're quite a bit of fun. I might have to look into seeing what sort of things I might have to alter in GHC in order to resolve nasty situations like these.

I can't speak to the methodological and pedagogical aspects of it. I just have a vague idea that explaining why something isn't an instance of GradedAlgebra or DifferentialRing to freshmen or the otherwise mathematically disinclined isn't a task compiler and/or library implementors care to deal with.
BTW, what is a shenanigan? Is it by definition something consumed by Geeks? Is the usage of Vector Spaces restricted to those few Geeks who can't live without shenanigans?
Yes! And I can't live without them. I had a few shenanigans at the math bar last night while I was trying to pick up a free module, but she wanted a normed ring before getting down to a basis. I guess that's what I get for going to an algebra bar. I should really have gone to a topology bar instead if I was looking for something kinkier. =)

Perhaps "Geek Prelude" isn't a good name for it. Feel free to suggest alternatives. Of course, there's nothing to prevent the non-geek among us from using them if they care to. If I by some miracle produce something which actually works, I'll leave it untitled. And yes, I agree everyone needs VectorSpace.

On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
For some time I have followed the discussion on some newsgroups dealing with computer graphics, imagery, game programming, etc. I noticed a curious, strong influence of people who shout loudly:
"Math?! You don't need it really. Don't waste your time on it! Don't waste your time on cute algorithms, they will be slow as hell. Learn assembler, "C", MMX instructions, learn DirectX APIs, forget this silly geometric speculations. Behave *normally*, as a *normal* computer user, not as a speculative mathematician!"
And I noticed that REGULARLY, 1 - 4 times a week, some freshmen ask over and over again such questions:

1. How to rotate a vector in 3D?
2. How to zoom an image?
3. What is a quaternion, and why do some people hate them so much?
4. How to compute a trajectory if I know the force acting on the object?
To date I've been highly unsuccessful in convincing anyone in this (the predominant) camp otherwise. People do need math, they just refuse to believe it regardless of how strong the evidence is. I spent my undergrad preaching the gospel of "CS is math" and nobody listened. I don't know how they get anything done. On Fri, Feb 09, 2001 at 11:26:39AM +0000, Jerzy Karczmarczuk wrote:
To summarize: people who don't use and don't need math always feel entitled to discourage others from giving it adequate importance. It is not they who will suffer from a badly constructed math layer of a language, or from badly taught math concepts, so they don't care too much.
How can I counter-summarize? It's true. I suppose I'm saying that the design goals of a Standard Prelude are outright against being so general that it's capable of representing as many mathematical structures as possible. Of course, as it stands, it's not beyond reproach.

Cheers,
Bill

--
A mathematician is a system for turning coffee into theorems.    -- Paul Erdős
A comathematician is a system for turning theorems into coffee.  -- Tim Poston

Fri, 9 Feb 2001 17:29:09 +1300, Tom Pledger
(x + y) + z
we know from the explicit type signature (in your question that I was responding to) that x,y::Int and z::Double. Type inference does not need to treat x or y up, because it can take the first (+) to be Int addition. However, it must treat the result (x + y) up to the most specific supertype which can be added to a Double.
Approach it differently. z is Double, (x+y) is added to it, so (x+y) must have type Double. This means that x and y must have type Double. This is OK, because they are Ints now, which can be converted to Double. Why is your approach better than mine?
| h f = f (1::Int) == (2::Int) | Can I apply f
h?
Sure, sorry.
h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool
This type is ambiguous: the type variable b is needed in the context but not present in the type itself, so it can never be determined from the usage of h.
That can be inferred by following the structure of the term. Function terms do seem prone to an accumulation of deferred subtype constraints.
When function application generates a constraint, the language gets ambiguous as hell. Applications are found everywhere through the program! Very often the type of the argument or result of an internal application does not appear in the type of the whole function being defined, which makes it ambiguous.

Not to mention that there would be *LOTS* of these constraints. Application is used everywhere. It's important to have its typing rule simple and cheap. Generating a constraint for every application is not an option.

--
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/    ^^ SYGNATURA ZASTĘPCZA QRCZAK

Fri, 09 Feb 2001 10:52:39 +0000, Jerzy Karczmarczuk
Again, a violation of the orthogonality principle. Needing division just to define signum. And of course a completely different approach to define the signum of integers. Or of polynomials...
So what? That's why it's a class method and not a plain function with a single definition. Multiplication of matrices is implemented differently than multiplication of integers. Why don't you call it a violation of the orthogonality principle (whatever it is)?

--
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/    ^^ SYGNATURA ZASTĘPCZA QRCZAK

Fri, 09 Feb 2001 10:52:39 +0000, Jerzy Karczmarczuk writes:
Again, a violation of the orthogonality principle. Needing division just to define signum. And of course a completely different approach to define the signum of integers. Or of polynomials...
On Fri, Feb 09, 2001 at 07:19:21PM +0000, Marcin 'Qrczak' Kowalczyk wrote:
So what? That's why it's a class method and not a plain function with a single definition.
Multiplication of matrices is implemented differently than multiplication of integers. Why don't you call it a violation of the orthogonality principle (whatever it is)?
Matrix rings actually manage to expose the inappropriateness of signum and abs' definitions and relationships to Num very well:

    class (Eq a, Show a) => Num a where
        (+), (-), (*) :: a -> a -> a
        negate        :: a -> a
        abs, signum   :: a -> a
        fromInteger   :: Integer -> a
        fromInt       :: Int -> a  -- partain: Glasgow extension

Pure arithmetic ((+), (-), (*), negate) works just fine. But there are no good injections to use for fromInteger or fromInt, the type of abs is wrong if it's going to be a norm, and it's not clear that signum makes much sense. So we have two totally inappropriate operations (fromInteger and fromInt), one operation which has the wrong type (abs), and an operation which doesn't have a well-defined meaning (signum) on matrices.

If we want people doing graphics or linear algebraic computations to be able to go about their business with their code looking like ordinary arithmetic, this is, perhaps, a real concern. I believe that these applications are widespread enough to be concerned about how the library design affects their aesthetics.

Cheers,
Bill

--
<craving> Weak coffee is only fit for lemmas.

On Fri, Feb 09, 2001 at 12:55:12PM -0800, William Lee Irwin III wrote:
class (Eq a, Show a) => Num a where
    (+), (-), (*) :: a -> a -> a
    negate        :: a -> a
    abs, signum   :: a -> a
    fromInteger   :: Integer -> a
    fromInt       :: Int -> a  -- partain: Glasgow extension
... So we have two totally inappropriate operations (fromInteger and fromInt), ...
I beg to differ on this point. One could provide a default implementation for fromInt(eger) as follows, assuming a 'zero' and 'one', which do obviously fit (they are the additive and multiplicative units):

    fromInteger n | n < 0 = negate (fromInteger (-n))
    fromInteger n         = foldl (+) zero (genericReplicate n one)

(Of course, one could use the algorithm in integer exponentiation to make this efficient.)

Best,
Dylan Thurston
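A sketch of the efficient variant alluded to, using the same halving trick as fast integer exponentiation (it assumes the Additive and Multiplicative classes proposed earlier in the thread, with zero, one, (+) and negate in scope from them):

    -- O(log n) additions rather than O(n).
    fromInteger :: (Additive a, Multiplicative a) => Integer -> a
    fromInteger n
        | n < 0     = negate (fromInteger (negate n))
        | n == 0    = zero
        | even n    = double (fromInteger (n `div` 2))
        | otherwise = double (fromInteger (n `div` 2)) + one
      where double x = x + x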

Marcin Kowalczyk pretends not to understand:
JK:
Again, a violation of the orthogonality principle. Needing division just to define signum. And of course a completely different approach to define the signum of integers. Or of polynomials...
So what? That's why it's a class method and not a plain function with a single definition.
Multiplication of matrices is implemented differently than multiplication of integers. Why don't you call it a violation of the orthogonality principle (whatever it is)?
1. The orthogonality principle has - in principle - nothing to do with the implementation. Separating a complicated structure into independent, or "orthogonal", concepts is a basic invention of the human mind, spanning from Montesquieu's principle of the independence of the three political powers, down to syntactic issues in the design of a programming language. If you eliminate as far as possible the "interfacing" between concepts, the integration of the whole is easier. Spurious dependencies are always harmful.

2. This has been a major driving force in the construction of mathematical entities for centuries. What do you really NEED for your proof? What is the mathematical category where a given concept can be defined, where a theorem holds, etc.?

3. The example of matrices is inadequate (to say it mildly). The monoid rules hold in both cases, e.g. the associativity. So, I might call both operations "multiplication", although one is commutative and the other one is not.

==

In a later posting you say:
If (+) can be implicitly lifted to functions, then why not signum? Note that I would lift neither signum nor (+). I don't feel the need. ...
I not only feel the need, but I feel that this is important: that the additive structure in the codomain is inherited by functions. In a more specific context: the fact that linear functionals over a vector space form also a vector space is simply *fundamental* for quantum mechanics, for crystallography, etc. You don't need to be a Royal Abstractor to see this.

Jerzy Karczmarczuk
Caen, France

In a later posting Marcin Kowalczyk says:
If (+) can be implicitly lifted to functions, then why not signum? Note that I would lift neither signum nor (+). I don't feel the need. ...
On Mon, Feb 12, 2001 at 09:33:03AM +0000, Jerzy Karczmarczuk wrote:
I not only feel the need, but I feel that this is important: that the additive structure in the codomain is inherited by functions. In a more specific context: the fact that linear functionals over a vector space form also a vector space is simply *fundamental* for quantum mechanics, for crystallography, etc. You don't need to be a Royal Abstractor to see this.
I see this in a somewhat different light, though I'm in general agreement. What I'd like to do is to be able to effectively model module structures in the type system, and furthermore be able to simultaneously impose distinct module structures on a particular type. For instance, complex n-vectors are simultaneously C-modules and R-modules, and an arbitrary commutative ring R is at once a Z-module and an R-module. Linear functionals, which seem like common beasts (try a partially applied inner product), live in the mathematical structure Hom_R(M,R), which is once again an R-module and, perhaps, by inheriting structure on R, an R'-module for various R'.

So how does this affect Prelude design? Examining a small bit of code could be helpful:

    -- The group must be Abelian. I suppose anyone could think of this.
    class (AdditiveGroup g, Ring r) => LeftModule g r where
        (&) :: r -> g -> g

    instance AdditiveGroup g => LeftModule g Integer where
        n & x | n == 0 = zero
              | n < 0  = negate ((-n) & x)
              | n > 0  = x + (n-1) & x

and we naturally acquire the sort of structure we're looking for. But this only shows a possible outcome, and doesn't motivate the implementation. What _will_ motivate the implementation is the sort of impact this has on various sorts of code:

(1) The fact that R is an AdditiveGroup immediately makes it a Z-module, so we have mixed-mode arithmetic by a different means from the usual implicit coercion.

(2) This sort of business handles vectors quite handily.

(3) The following tidbit of code immediately handles curried inner products:

    instance (AdditiveGroup group, Ring ring)
            => LeftModule (group -> ring) ring where
        r & g = \g' -> r & g g'

(4) Why would we want to curry inner products? I envision:

    type SurfaceAPoles foo = SomeGraph (SomeVector foo)

and then

    surface :: SurfaceAPoles bar

    innerprod v `fmap` normalsOf faces where faces = facesOf surface

(5) Why would we want to do arithmetic on these beasts now that we think we might need them at all? If we're doing things like determining the light reflected off of the various surfaces, we will want to scale and add together the various beasties. Deferring the innerprod operation so we can do this is inelegant and perhaps inflexible compared to:

    lightSources :: [(SomeVector foo -> Intensity foo, Position)]
    lightSources = getLightSources boundingSomething

    reflection = sum $ map (\(f,p) -> getSourceWeight p * f) lightSources
    reflection `fmap` normalsOf faces where faces = facesOf surface

and now in the lightSources perhaps ambient light can be represented very conveniently, or at least the function type serves to abstract out the manner in which the orientation of a surface determines the amount of light reflected off it. (My apologies for whatever inaccuracies are happening with the optics here, it's quite far removed from my direct experience.)

Furthermore, within things like small interpreters, it is perhaps convenient to represent the semantic values of various expressions by function types. If one should care to define arithmetic on vectors and vector functions in the interpreted language, support in the source language allows a more direct approach. This would arise within solid modelling and graphics once again, as little languages are often used to describe objects, images, and the like.

How can we anticipate all the possible usages of pretty-looking vector and matrix algebra? I suspect graphics isn't the only place where linear algebra could arise.
All sorts of differential equation models of physical phenomena, Markov models of state transition systems, even economic models at some point require linear algebra in their computational methods. It's something I at least regard as a fairly fundamental and important aspect of computation. And to me, that means that the full power of the language should be applied toward beautifying, simplifying, and otherwise enabling linear algebraic computations. Cheers, Bill P.S.: Please forgive the harangue-like nature of the post, it's the best I could do at 3AM.

On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:
I not only feel the need, but I feel that this is important: that the additive structure in the codomain is inherited by functions.
It could support only the basic arithmetic. It would not automatically lift an expression which uses (>) and if. It would be inconsistent to provide a shortcut for a specific case, where generally it must be explicitly lifted anyway. Note that it does make sense to lift (>) and if, only the type system does not permit it implicitly because a type is fixed to Bool. Lifting is so easy to do manually that I would definitely not constrain the whole Prelude class system only to have convenient lifting of basic arithmetic. When it happens that an instance of an otherwise sane class for functions makes sense, then OK, but nothing more. -- Marcin 'Qrczak' Kowalczyk
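(To make "easy to do manually" concrete, here is a sketch of pointwise lifting combinators; these are ordinary Haskell, written here for illustration and not tied to any proposal in the thread:)

    -- Pointwise lifting of constants, unary and binary functions.
    lift0 :: b -> (a -> b)
    lift0 = const

    lift1 :: (b -> c) -> (a -> b) -> (a -> c)
    lift1 = (.)

    lift2 :: (b -> c -> d) -> (a -> b) -> (a -> c) -> (a -> d)
    lift2 f g h = \x -> f (g x) (h x)

For example, lift2 (+) sin cos :: Double -> Double adds two functions pointwise, with no class machinery at all.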

Marcin Kowalczyk wrote:
Jerzy Karczmarczuk wrote:
I not only feel the need, but I feel that this is important: that the additive structure in the codomain is inherited by functions.
It could support only the basic arithmetic. It would not automatically lift an expression which uses (>) and if. It would be inconsistent to provide a shortcut for a specific case, where generally it must be explicitly lifted anyway. Note that it does make sense to lift (>) and if, only the type system does not permit it implicitly because a type is fixed to Bool.
Lifting is so easy to do manually that I would definitely not constrain the whole Prelude class system only to have convenient lifting of basic arithmetic. When it happens that an instance of an otherwise sane class for functions makes sense, then OK, but nothing more.
Sorry for quoting in extenso the full posting just to say: I haven't the slightest idea what you are talking about -- but I want to avoid partial quotations and the misunderstandings resulting thereof.

I don't want any automatic lifting, nor to *constrain* the Prelude classes. I want to be *able* to define mathematical operations upon objects which by their intrinsic nature permit so!

My goodness, I suspect really that despite the plenty of opinions you express every day on this list, you didn't really try to program something in Haskell IN A MATHEMATICALLY NON-TRIVIAL CONTEXT. I defined a hundred times some special functions to add lists or records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes (.*); I have in principle nothing against it, but this operator is used elsewhere, in other languages, CAML and Matlab; I use (*>)). I am fed up with ad hoc solutions, knowing that correct mathematical hierarchies permit one to inherit plenty of subsumptions, e.g. the fact that x+x exists implies 2*x.

Thank you for reminding me that manual lifting is easy. In fact, everything is easy. Type-checking as well. Let's go back to assembler.

Jerzy Karczmarczuk

On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:
I want to be *able* to define mathematical operations upon objects which by their intrinsic nature permit so!
You can't do it in Haskell as it stands now, no matter what the Prelude would be. For example I would say that with the definition

    abs x = if x >= 0 then x else -x

it's obvious how to obtain abs :: ([Int]->Int) -> ([Int]->Int): apply the definition pointwise. But it will never work in Haskell, unless we changed the type rules for if and the type of the result of (>=).

You are asking for letting

    abs x = max x (-x)

work on functions. OK, in this particular case it can be made to work by making appropriate instances, but that's because this is a special case where all intermediate types are appropriately polymorphic. This technique cannot work in general, as the previous example shows. So IMHO it's better not to try to pretend that functions can be implicitly lifted. Better to provide as convenient as possible a way of manually lifting arbitrary functions, so it doesn't matter if they have a fixed Integer in the result or not. You are asking for an impossible thing.
I defined hundred times some special functions to add lists or records, to multiply a tree by a scalar (btw.: Jón Fairbarn proposes (.*), I have in principle nothing against, but these operators is used elsewhere, in other languages, CAML and Matlab; I use (*>) ).
Please show a concrete proposal how Prelude classes could be improved. -- Marcin 'Qrczak' Kowalczyk
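For the record, a sketch of the "appropriate instances" route mentioned above, in plain Haskell 98 (the Eq and Show instances are stubs that Num's superclasses force on us, which is itself one of the complaints in this thread):

    instance Show (a -> b) where
        showsPrec _ _ = showString "<function>"

    instance Eq (a -> b) where
        _ == _ = error "(==) is not defined on functions"

    instance Num b => Num (a -> b) where
        f + g       = \x -> f x + g x
        f - g       = \x -> f x - g x
        f * g       = \x -> f x * g x
        negate f    = negate . f
        abs f       = abs . f
        signum f    = signum . f
        fromInteger = const . fromInteger

With these instances, (sin + cos) :: Double -> Double and abs (subtract 1) type-check and behave pointwise; but the point above stands: a definition routed through if and (>=) at type Bool can never be lifted by any such instance.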

Marcin Kowalczyk continues:
On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:
I want to be *able* to define mathematical operations upon objects which by their intrinsic nature permit so!
You can't do it in Haskell as it stands now, no matter what the Prelude would be.
For example I would say that with the definition

    abs x = if x >= 0 then x else -x

it's obvious how to obtain abs :: ([Int]->Int) -> ([Int]->Int): apply the definition pointwise.
But it will never work in Haskell, unless we changed the type rules for if and the type of the result of (>=).
You are asking for letting abs x = max x (-x) work on functions. OK, in this particular case it can be made to work ....
Why don't you try from time to time to attempt to understand what other people want? And wait, say, 2 hours before responding?

I DON'T WANT max TO WORK ON FUNCTIONS. I never did. I will soon need that (because I am writing a graphical package where max serves to intersect implicit graphical objects), but for very specific functions which represent textures, NOT in general. I repeat for the last time that I want to have those operations which are *implied* by the mathematical properties. And anyway, if you replace x>=0 by x>=zero with an appropriate zero, this should work as well. I want only that the Prelude avoid spurious dependencies. This is the way I program in Clean, where there is no Num, and (+), (*), zero, abs, etc. constitute classes by themselves. So, when you say:
You are asking for an impossible thing.
My impression is that what is impossible is your way of interpreting/understanding the statements (and/or desiderata) of other people.
I defined a hundred times some special functions to add lists or records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes (.*); I have in principle nothing against it, but this operator is used elsewhere, in other languages, CAML and Matlab; I use (*>)).
Please show a concrete proposal how Prelude classes could be improved.
(Why do you precede your query with this citation? What do you have to say here about the syntax proposed by Jón Fairbairn, or whatever??)

I am a Haskell USER. I have no ambition to save the world. The "proposal" was presented in 1995 in Nijmegen (FP in education). Actually, it wasn't; I concentrated on lazy power series etc., and the math-oriented prelude was mentioned casually. Jeroen Fokker presented similar ideas, implemented differently. If you have nothing else to do (but only in this case!) you may find the modified prelude, called math.hs, for Hugs (which needs a modified prelude.hs exporting primitives) in

    http://users.info.unicaen.fr/~karczma/humat/

This is NOT a "public proposal" and I *don't want* your public comments on it. If you want to be nice, show me some of *your* Haskell programs.

Jerzy Karczmarczuk
Caen, France

Mon, 12 Feb 2001 10:58:40 +1300, Tom Pledger
| Approach it differently. z is Double, (x+y) is added to it, so | (x+y) must have type Double.
That's a restriction I'd like to avoid. Instead: ...so the most specific common supertype of Double and (x+y)'s type must support addition.
In general there is no such thing as (x+y)'s type considered separately from this usage. The use of (x+y) as one of the arguments of this addition influences the type determined for it. Suppose x and y are lambda-bound variables: then you don't know their types yet. Currently this addition determines their types: they must be the same as the type of z.

With your rules the type of

    \x y -> x + y

is not

    (some context) => a -> a -> a

but

    (some context) => a -> b -> c

It leads to horrible ambiguities unless the context is able to determine some types exactly (which is currently true only for fundeps).
| Why is your approach better than mine?
It used a definition of (+) which was a closer fit for the types of x and y.
But it used a worse definition of the outer (+): mine was Double -> Double -> Double, and yours was Int -> Double -> Double with the implicit conversion of Int to Double.
Yes, I rashly glossed over the importance of having well-defined most specific common supertype (MSCS) and least specific common subtype (LSCS) operators in a subtype lattice.
They are not always defined. Suppose the following holds:

    Word32 `Subtype` Double
    Word32 `Subtype` Integer
    Int32  `Subtype` Double
    Int32  `Subtype` Integer

What is the MSCS of Word32 and Int32? What is the LSCS of Double and Integer?
Anyway, since neither of us is about to have a change of mind, and nobody else is showing an interest in this branch of the discussion, it appears that the most constructive thing for me to do is return to try-to-keep-quiet-about-subtyping-until-I've-done-it-in-THIH mode.
IMHO it's impossible to do.

--
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/    ^^ SYGNATURA ZASTĘPCZA QRCZAK

On Mon, Feb 12, 2001 at 04:40:06PM +0000, Jerzy Karczmarczuk wrote:
This is the way I program in Clean, where there is no Num, and (+), (*), zero, abs, etc. constitute classes by themselves. ...
I've heard Clean mentioned before in this context, but I haven't found the Clean numeric class system described yet. Can you send me a pointer to their class system, or just give me a description? Does each operation really have its own class? That seems slightly silly. Are the (/) and 'recip' equivalents independent, and independent of (*) as well? Best, Dylan Thurston

On 09-Feb-2001 William Lee Irwin III wrote:
 | Matrix rings actually manage to expose the inappropriateness of signum
 | and abs' definitions and relationships to Num very well:
 |
 |     class (Eq a, Show a) => Num a where
 |         (+), (-), (*) :: a -> a -> a
 |         negate        :: a -> a
 |         abs, signum   :: a -> a
 |         fromInteger   :: Integer -> a
 |         fromInt       :: Int -> a  -- partain: Glasgow extension
 |
 | Pure arithmetic ((+), (-), (*), negate) works just fine.
 |
 | But there are no good injections to use for fromInteger or fromInt,
 | the type of abs is wrong if it's going to be a norm, and it's not
 | clear that signum makes much sense.

For fromInteger, fromInt, and abs, the result should be a scalar matrix. For the two coercions, I don't think there would be much controversy about this. I agree that it would be nice if abs could return a scalar, but this requires multiparameter classes, so we have to make do with a scalar matrix. We already have this problem with complex numbers: it might be nice if the result of abs were real.

signum does make sense. You want abs and signum to obey these laws:

    x == abs x * signum x
    abs (signum x) == (if abs x == 0 then 0 else 1)

Thus, having fixed an appropriate matrix norm, signum is a normalization function, just as with reals and complexes.

If we make the leap to multiparameter classes, I think this is the signature we want:

    class (Eq a, Show a) => Num a b | a -> b where
        (+), (-), (*) :: a -> a -> a
        negate        :: a -> a
        abs           :: a -> b
        signum        :: a -> a
        scale         :: b -> a -> a
        fromInteger   :: Integer -> a
        fromInt       :: Int -> a

Here, b is the type of norms of a. Instead of the first law above, we have

    x == scale (abs x) (signum x)

All this, of course, is independent of whether we want a more proper algebraic class hierarchy, with (+) introduced by Monoid, negate and (-) by Group, etc.

Cheers,
--Joe

Joseph H. Fasel, Ph.D.            email: jhf@lanl.gov
Technology Modeling and Analysis  phone: +1 505 667 7158
University of California          fax:   +1 505 667 2960
Los Alamos National Laboratory    post:  TSA-7 MS F609; Los Alamos, NM 87545
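To make the shape of this concrete, a compilable sketch under assumed names (NormedRing instead of Num, so it does not clash with the real Prelude; the Matrix type and Frobenius norm are illustrative choices, and GHC's multiparameter class and functional dependency extensions are required):

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

    class NormedRing a b | a -> b where
        norm  :: a -> b      -- plays the role of abs above
        unit  :: a -> a      -- plays the role of signum above
        scale :: b -> a -> a

    newtype Matrix = Matrix [[Double]]

    frobenius :: Matrix -> Double
    frobenius (Matrix rows) = sqrt (sum [x*x | row <- rows, x <- row])

    instance NormedRing Matrix Double where
        norm = frobenius
        scale s (Matrix rows) = Matrix (map (map (s *)) rows)
        unit m = scale (recip (norm m)) m
            -- so x == scale (norm x) (unit x) whenever norm x /= 0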

On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
For fromInteger, fromInt, and abs, the result should be a scalar matrix. For the two coercions, I don't think there would be much controversy about this. I agree that it would be nice if abs could return a scalar, but this requires multiparameter classes, so we have to make do with a scalar matrix.
I'm not a big fan of this approach. I'd like to see at least some attempt to statically type dimensionality going on, and that flies in the face of it. Worse yet, coercing integers to matrices is likely to be a programmer error. On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
signum does make sense. You want abs and signum to obey these laws:
    x == abs x * signum x
    abs (signum x) == (if abs x == 0 then 0 else 1)
Thus, having fixed an appropriate matrix norm, signum is a normalization function, just as with reals and complexes.
This works fine for matrices of reals; for matrices of integers and polynomials over integers and the like, it breaks down quite quickly. It's unclear that in domains like that the norm would be meaningful (in the sense of something we might want to compute) or that it would have a type that meshes well with a class hierarchy we might want to design. Matrices over Z/nZ for various n and Galois fields, and perhaps various other unordered algebraically incomplete rings, explode this further still.
If we make the leap to multiparameter classes, I think this is the signature we want:
Well, nothing is going to satisfy everyone. It's pretty reasonable, though. Cheers, Bill

On 12-Feb-2001 William Lee Irwin III wrote:
 | On Mon, Feb 12, 2001 at 02:13:38PM -0700, Joe Fasel wrote:
 |> signum does make sense. You want abs and signum to obey these laws:
 |>
 |>     x == abs x * signum x
 |>     abs (signum x) == (if abs x == 0 then 0 else 1)
 |>
 |> Thus, having fixed an appropriate matrix norm, signum is a
 |> normalization function, just as with reals and complexes.
 |
 | This works fine for matrices of reals; for matrices of integers and
 | polynomials over integers and the like, it breaks down quite quickly.
 | It's unclear that in domains like that the norm would be meaningful
 | (in the sense of something we might want to compute) or that it would
 | have a type that meshes well with a class hierarchy we might want to
 | design. Matrices over Z/nZ for various n and Galois fields, and
 | perhaps various other unordered algebraically incomplete rings,
 | explode this further still.

Fair enough. So, the real question is not whether signum makes sense, but whether abs does. I guess the answer is that it does for matrix rings over division rings.

Cheers,
--Joe

Joseph H. Fasel, Ph.D.            email: jhf@lanl.gov
Technology Modeling and Analysis  phone: +1 505 667 7158
University of California          fax:   +1 505 667 2960
Los Alamos National Laboratory    post:  TSA-7 MS F609; Los Alamos, NM 87545

[incomprehensible (not necessarily wrong!) stuff about polynomials, rings, modules over Z and complaints about the current prelude nuked]

--- Marcin 'Qrczak' Kowalczyk writes ---
Please show a concrete proposal how Prelude classes could be improved.
--- Jerzy Karczmarczuk replies ---
I am a Haskell USER. I have no ambition to save the world. The "proposal" was presented in 1995 in Nijmegen (FP in education). Actually, it wasn't; I concentrated on lazy power series etc., and the math-oriented prelude was mentioned casually. Jeroen Fokker presented similar ideas, implemented differently.
I'm afraid all this discussion reminds me of the one we had a year or two ago. At that time the mathematically inclined side was led by Sergei, who to his credit developed the Basic Algebra Proposal, which I don't understand, but many people seemed to be happy about at the time.

And then of course nothing happened, because no Haskell implementor has bitten the bullet and implemented the proposal. This is understandable, as supporting Sergei's proposal seems to be a lot of work, most of which would be incompatible with current implementations. And no-one wants to maintain *two* Haskell compilers within one.

Even if this discussion continues and another brave soul develops another algebra proposal, I am prepared to bet with both of you one year's supply of Ben and Jerry's (not Jerzy :)!) ice cream that nothing will continue to happen on the implementors' side. It is simply too much work for an *untested* (in practice, for teaching etc.) alternative prelude.

So instead of wasting time, why don't you guys ask the implementors to provide a flag '-IDontWantYourStinkingPrelude' which would give you a bare-metal compiler with no predefined types, functions, classes, no derived instances, no fancy stuff, and build and test your proposals with it? I guess the RULES pragma (in GHC) could be abused to allow access to the primitive operations (on Ints), but you are still likely to lose much of the elegance, conciseness and perhaps even some efficiency of Haskell (e.g. list comprehensions). Still, this should allow us to gain experience in what sort of support is essential for providing alternative prelude(s).

Once we have learnt how to decouple the prelude from the compiler, and gained experience with alternative preludes, implementors would have no excuse not to provide the possibility (unless it turns out to be completely impossible or impractical, in which case we have learnt something genuinely useful).

So, Marcin (as you are one of the GHC implementors), how much work would it be to disable the disputed Prelude stuff within the compiler, and what would be lost?

Laszlo

[Disclaimer: Just my 10 wons. This message is not in disagreement or agreement with any of the previous messages]
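(A rough approximation of such a flag already existed at the time: GHC's -fno-implicit-prelude stops the implicit import of the Prelude, though the built-in syntax still refers to the standard classes, so it is weaker than the bare-metal compiler asked for above. A sketch of a module opting out, where MyPrelude is a hypothetical replacement module:)

    {-# OPTIONS -fno-implicit-prelude #-}
    module Texture where

    -- Nothing from the standard Prelude is in scope implicitly here;
    -- all classes and operators come from the replacement prelude.
    import MyPrelude (Additive(..), Multiplicative(..))

    double :: Additive a => a -> a
    double x = x + x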
participants (9)
- Brian Boutel
- Dylan Thurston
- Jerzy Karczmarczuk
- Joe Fasel
- Laszlo Nemeth
- Marcin 'Qrczak' Kowalczyk
- qrczak@knm.org.pl
- Tom Pledger
- William Lee Irwin III