Function Precedence

Hi. If f x = x and g y = y, then f g x returns an error because f takes only one argument. Why can't we have function application implemented outwardly (inside-out)? So f g x would be evaluated with g x first, followed by its return value passed to f, instead of putting g x in brackets. Cheers, Paul

PR Stanley wrote:
Hi. If f x = x and g y = y, then f g x returns an error because f takes only one argument. Why can't we have function application implemented outwardly (inside-out)?
Why should it be so?
So f g x would be evaluated with g x first, followed by its return value passed to f, instead of putting g x in brackets.
You can get the same behavior with f . g $ x if you dislike brackets. -- Dr. Janis Voigtlaender http://wwwtcs.inf.tu-dresden.de/~voigt/ mailto:voigt@tcs.inf.tu-dresden.de

On 01/04/2008, PR Stanley
Hi. If f x = x and g y = y, then f g x returns an error because f takes only one argument. Why can't we have function application implemented outwardly (inside-out)? So f g x would be evaluated with g x first, followed by its return value passed to f, instead of putting g x in brackets.
Think about this: map (+1) [1..10] -- what should it do? How about f 1 2 3? Should that be f (1 (2 3)), or ((f 1) 2) 3? Jeremy
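As a concrete illustration, a minimal GHC-checkable sketch of the two readings (the binding names xs1 and xs2 are not from the thread):

  -- Application associates to the left, so these two parse identically:
  xs1, xs2 :: [Int]
  xs1 = map (+1) [1..10]
  xs2 = (map (+1)) [1..10]

  -- Under the proposed inside-out rule, map (+1) [1..10] would instead
  -- mean map ((+1) [1..10]), which does not type-check: (+1) expects a
  -- number, not a list.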

PR Stanley wrote:
Why can't we have function application implemented outwardly (inside-out)?
No reason we can't. We could. We just don't. People have spent some time thinking and experimenting and have decided this way round is more convenient. It's certainly possible to disagree. Jules

2008/4/1, Jules Bean
PR Stanley wrote:
Why can't we have function application implemented outwardly (inside-out)?
No reason we can't.
We could.
We just don't.
People have spent some time thinking and experimenting and have decided this way round is more convenient. It's certainly possible to disagree.
I bet this "time and thinking" involved currying. For instance, with:

  f :: Int -> Int -> Int
  f a b = a + b + 3

let's explore the two possibilities:

(1) f 4 2 <=> (f 4) 2 -- no parentheses needed
(2) f 4 2 <=> f (4 2) -- parentheses needed to get (f 4) 2

Curried functions are pervasive, so (1) just saves us more brackets than (2) does.
f g x returns an error because f takes only one argument.
Do not forget that *every* function takes only one argument. The trick is that the result may also be a function. Therefore, f g 5 <=> id id 5 <=> (id id) 5 <=> id 5 <=> 5 indeed runs smoothly (just checked in the OCaml toplevel; thanks to Jerzy for pointing this out). Loup

Paul Stanley writes:
Hi. If f x = x and g y = y, then f g x returns an error because f takes only one argument. Why can't we have function application implemented outwardly (inside-out)? ... etc.
Paul, There were already some answers, but it seems that people did not react to the statement that f g x fails. It doesn't; in normal order everything should go smoothly: f g 5 returns 5, since f g 5 = (f g) 5 = g 5, unless I am terribly mistaken... Where did you see an error? Jerzy Karczmarczuk
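A quick check of Jerzy's point in Haskell (a minimal sketch; any GHCi session will do):

  f x = x
  g y = y

  -- In GHCi:
  --   f g 5  ==>  5
  -- because f g 5 parses as (f g) 5, f g is just g, and g 5 is 5.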

On 1 Apr 2008, at 12:40, PR Stanley wrote:
Why can't we have function application implemented outwardly (inside-out)? So f g x would be evaluated with g x first, followed by its return value passed to f, instead of putting g x in brackets.
It seems to me it may come from an alteration of math conventions: normally (x) = x, and function application is written as f(x), except for a few traditional names, like for example sin x. So if one reasons that f(x) can be simplified to f x, then f g x becomes short for f(g)(x) = (f(g))(x). It is just a convention. In math, particularly in algebra, one sometimes writes "f of x" as x f or (x)f, so that one does not have to reverse the order, for example in diagrams. Hans Aberg

On Tue, 1 Apr 2008, Hans Aberg wrote:
On 1 Apr 2008, at 12:40, PR Stanley wrote:
Why can't we have function application implemented outwardly (inside-out)? So f g x would be evaluated with g x first, followed by its return value passed to f, instead of putting g x in brackets.
It seems to me it may come from an alteration of math conventions: normally (x) = x, and function application is written as f(x), except for a few traditional names, like for example sin x. So if one reasons that f(x) can be simplified to f x, then f g x becomes short for f(g)(x) = (f(g))(x).
In functional analysis you write, e.g., D f(x), meaning (D f)(x) and not D(f(x)), so I wouldn't say there is any fixed convention for the precedence of function application in mathematics. What is more, in functional analysis it is common to omit the parentheses around operator arguments, and since there are a lot of standard functions like 'sin', I wouldn't say that using argument parentheses is more common than omitting them. (Btw., in good old ZX Spectrum BASIC it was also allowed to omit argument parentheses.)

On 2 Apr 2008, at 11:22, Henning Thielemann wrote:
It seems to me it may come from an alteration of math conventions: normally (x) = x, and function application is written as f(x), except for a few traditional names, like for example sin x. So if one reasons that f(x) can be simplified to f x, then f g x becomes short for f(g)(x) = (f(g))(x).
In functional analysis you write e.g. D f(x) meaning (D f)(x) not D (f(x)), so I wouldn't say there is any convention of precedence of function application in mathematics.
When I take a quick look into Hörmander's book on distributions, he writes (D f)(phi), and not D f(phi). So there might be a difference between math that leans towards the pure side and math that leans towards the applied side.
Even more, in functional analysis it is common to omit the parentheses around operator arguments, and since there are a lot of standard functions like 'sin', ...
I think that in RTL one does that as well: x tau, instead of (x) tau.
...I wouldn't say that using argument parentheses is more common than omitting them. (Btw., in good old ZX Spectrum BASIC it was also allowed to omit argument parentheses.)
Math usage is probably in the minority these days. As I noted, looking into books on axiomatic set theory, one constructs tuples so that (x) = x. So it seems possible, although for function application f(z) seems to be the normal notation. But one should also be able to write (f+g)(x). - This does not work in Haskell, because Num requires an instance of Eq and Show. Hans

Hans Aberg writes: ...
But one should also be able to write (f+g)(x). - This does not work in Haskell, because Num requires an instance of Eq and Show.
So, declare them, even if they are vacuous. I did it several times, I am still alive, so no need to say "this does not work". Jerzy Karczmarczuk
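For concreteness, a minimal sketch of what Jerzy describes: vacuous Eq and Show instances (the Num class of the time required them as superclasses) plus a pointwise Num instance for functions. This is an illustration, not a recommended library design; the pragma is included to stay on the safe side:

  {-# LANGUAGE FlexibleInstances #-}

  -- vacuous instances, only there to satisfy Num's superclass constraints
  instance Show (a -> b) where
    show _ = "<function>"
  instance Eq (a -> b) where
    _ == _ = error "(==): functions have no decidable equality"

  -- pointwise arithmetic on functions
  instance Num b => Num (a -> b) where
    f + g         = \x -> f x + g x
    f * g         = \x -> f x * g x
    negate f      = negate . f
    abs f         = abs . f
    signum f      = signum . f
    fromInteger n = const (fromInteger n)

  -- Now (f + g) x means f x + g x; for example, (sin + cos) 0 gives 1.0,
  -- and the literal 2 denotes const 2 via fromInteger.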

On 2 Apr 2008, at 13:51, jerzy.karczmarczuk@info.unicaen.fr wrote:
But one should also be able to write (f+g)(x). - This does not work in Haskell, because Num requires an instance of Eq and Show.
So, declare them, even if they are vacuous. I did it several times, I am still alive, so no need to say "this does not work".
That is possible, of course - I did that, too. But it means that the syntax and semantics do not work together; an invitation to pitfalls. So this ought to be avoided, except if there are no other workarounds. It would be better to write a new Prelude. :-) Hans

Hans Aberg comments my remark to his observation:
But one should also be able to write (f+g)(x). - This does not work in Haskell, because Num requires an instance of Eq and Show.
So, declare them, even if they are vacuous. I did it several times, I am still alive, so no need to say "this does not work".
That is possible, of course - I did that, too. But it means that the syntax and semantics do not work together; an invitation to pitfalls. So this ought to be avoided, except if there are no other workarounds.
I am more tolerant. The question - for me - is not an interplay between syntax and semantics; syntax here is irrelevant, and the fact that (+) is a popular infix operator plays no role. The calamity comes from the fact that it is not possible to write serious and "natural" instances of Eq and Show for functions, and that, for God knows what reasons, the Num class demands them! This requirement is not rational, although it is intuitive. But I violated it several times, when I needed arithmetic for lazy infinite objects... So I can't say that this should be avoided. I don't see "obvious" pitfalls therein.
It would be better to write a new Prelude. :-)
Oh, yes, our common dream... Jerzy Karczmarczuk

On 2 Apr 2008, at 14:27, jerzy.karczmarczuk@info.unicaen.fr wrote:
That is possible, of course - I did that, too. But it means that the syntax and semantics do not work together; an invitation to pitfalls. So this ought to be avoided, except if there are no other workarounds.
I am more tolerant.
The pragmatic question is whether to program in Haskell or to make a new language. I am interested in the latter, but realistically nobody will single-handedly be able to produce the programming capacity that the now mature Haskell has.
The question - for me - is not an interplay between syntax and semantics, ...
That interplay between notation and notions is very important in math: if they do not flow together, one will not be able to describe very complex logical structures.
...syntax here is irrelevant, the fact that (+) is a popular infix operator plays no role. The calamity comes from the fact that it is not possible to write serious and "natural" instances of Eq and Show for functions, ...
A correct Eq would require a theorem prover. Show could be implemented by writing out the function closures, but I think the reason it is not there is that it would create overhead in compiled code.
...and that for God knows which reasons, the Num instance demands them ! This requirement is not rational, although intuitive.
Probably pragmatics. More general implementations were not considered at the time.
But I violated it several times, when I needed arithmetic for lazy infinite objects... So, I can't say that this should be avoided. I don't see "obvious" pitfalls therein.
The pitfall is when somebody who does not know the code well tries to use it. Define a library with

  false = True
  true  = False

Perfectly logical, but it will be thwarted by people's expectations.
It would be better to write a new Prelude. :-)
Oh, yes, our common dream...
Such changes will require a new standard. Haskell seems rather fixed, so it will perhaps then happen in a new language. :-) Hans

On Wed, 2 Apr 2008, Hans Aberg wrote:
Show could be implemented by writing out the function closures, but I think the reason it is not there is that it would create overhead in compiled code.
It would also not give referentially transparent answers, because the same function can be implemented in different ways: http://www.haskell.org/haskellwiki/Show_instance_for_functions

On 3 Apr 2008, at 08:07, Henning Thielemann wrote:
Show could be implemented by writing out the function closures, but I think the reason it is not there is that it would create overhead in compiled code.
It would also not give referentially transparent answers, because the same function can be implemented in different ways: http://www.haskell.org/haskellwiki/Show_instance_for_functions
You can define scalars as constant functions, making the set of functions into a ring, and then implicit multiplication would not work. The way I view implicit multiplication, though, is as an abbreviation of the explicit multiplication. So from that point of view, it is no stranger than other notational simplifications that may or may not take place, for example the use of parentheses. So if one defines scalars as constants, one will have to accept that implicit multiplication cannot take place. But that should not be a problem in Haskell, as it does not admit that anyway: one knows that x (y) is always function application. Anyway, in math, the context may change. So sometimes it may be useful to let a number r denote a constant function, but if r is in a ring R, then it may be useful to let it denote the function that is multiplication by r. Now, in a computer language, the problem is to avoid setting one such possibility in stone at a fundamental level so that it excludes the other variations. Hans
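The two readings of a ring element r mentioned above, written out as plain Haskell (the names constOf and mulBy are invented for this sketch):

  -- r read as a constant function
  constOf :: r -> (a -> r)
  constOf r = const r

  -- r read as "multiplication by r"
  mulBy :: Num r => r -> (r -> r)
  mulBy r = (r *)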

On 2 Apr 2008, at 14:27, jerzy.karczmarczuk@info.unicaen.fr wrote:
It would be better to write a new Prelude. :-)
Oh, yes, our common dream...
One may not need to write a wholly new Prelude; something like:

  module NewPrelude where
  import Prelude hiding (Num, (+))

  class AdditiveSemiMonoid a where
    (+) :: a -> a -> a
  ...
  class (Eq a, Show a, AdditiveSemiMonoid a) => Num a where
    ... -- the rest of the Prelude's Num, minus (+)
  -- Stuff of the Prelude using Num.

Then import NewPrelude instead, and

  instance AdditiveSemiMonoid b => AdditiveSemiMonoid (a -> b) where
    f + g = \x -> f x + g x

or something. Hans

2008/4/2, Hans Aberg
On 2 Apr 2008, at 14:27, jerzy.karczmarczuk@info.unicaen.fr wrote:
It would be better to write a new Prelude. :-)
Oh, yes, our common dream...
One may not need to write a wholly new Prelude, by something like:
module NewPrelude where
import Prelude hiding (Num, (+))
class AdditiveSemiMonoid a where (+) :: a -> a -> a
Err, why *semi* monoid? Wouldn't plain "monoid" be accurate? <rant> While we're at it, what about adding even more classes, like "group" or "ring"? Algebra in a whole class hierarchy. :-) </rant> Loup

On 2 Apr 2008, at 16:20, Loup Vaillant wrote:
class AdditiveSemiMonoid a where (+) :: a -> a -> a
Err, why *semi* monoid? Wouldn't plain "monoid" be accurate?
A monoid has a unit:

  class AdditiveSemiMonoid a => AdditiveMonoid a where
    o :: a

The semimonoid is also called a semigroup, I think.
<rant> While we're at it, what about adding even more classes, like "group" or "ring"? Algebra in a whole class hierarchy. :-)
Only ambition required :-). It is probably easier to make a copy of Prelude.hs to, say, NewPrelude.hs, and modify it directly by inserting intermediate classes. Hans

On Apr 2, 2008, at 10:27 , Hans Aberg wrote:
On 2 Apr 2008, at 16:20, Loup Vaillant wrote:
<rant> While we're at it, what about adding even more classes, like "group" or "ring"? Algebra in a whole class hierarchy. :-)
Only ambition required :-).
http://www.haskell.org/haskellwiki/Mathematical_prelude_discussion --- go nuts. -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

On 2 Apr 2008, at 16:30, Brandon S. Allbery KF8NH wrote:
While we're at it, what about adding even more classes, like "group" or "ring"? Algebra in a whole class hierarchy. :-)
Only ambition required :-).
http://www.haskell.org/haskellwiki/Mathematical_prelude_discussion --- go nuts.
There is a Math Prelude, but perhaps one can simplify and divide it into a part that refines the current Prelude and stuff built on top of a refined Prelude. The problem with the current one is that, for example, Num claims (+) and insists on Eq and Show, and there is no way to get rid of those requirements. So inserting some classes, like AdditiveSemiMonoid, would be less ambitious than writing a whole new algebra-oriented Prelude. Perhaps a better chance of success. Hans

On 2 Apr 2008, at 16:20, Loup Vaillant wrote:
class AdditiveSemiMonoid a where (+) :: a -> a -> a
Err, why *semi* monoid? Wouldn't plain "monoid" be accurate?
I found an example where it is crucial that the monoid has a unit: when given a monad m, one can also define an m-algebra by giving a structure map (works in 'hugs -98'):

  class Monad m => MAlgebra m a where
    smap :: m a -> a

Now, the set of lists on a set A is just the free monoid with base A; the list monad identifies the free monoids, i.e., the lists. So this then gives the Haskell interpretation

  class Monoid a where
    unit  :: a
    (***) :: a -> a -> a

  instance Monoid a => MAlgebra [] a where
    smap []     = unit
    smap (x:xs) = x *** smap xs

Here, I use (***) so as not to clash with the Prelude (*). But the function "product" here shows up as the structure map of a multiplicative monoid. Similarly, "sum" is the structure map of the additive monoids. And (++) is just the monoid multiplication of the free monoids. Hans
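For reference, a GHC-checkable version of the above (MultiParamTypeClasses and FlexibleInstances are needed; the class is renamed Monoid' here to avoid clashing with the standard Monoid, and the Integer instance is just for the demonstration):

  {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

  class Monad m => MAlgebra m a where
    smap :: m a -> a

  class Monoid' a where
    unit  :: a
    (***) :: a -> a -> a

  instance Monoid' a => MAlgebra [] a where
    smap []     = unit
    smap (x:xs) = x *** smap xs

  -- the multiplicative monoid on Integer: smap then behaves like 'product'
  instance Monoid' Integer where
    unit  = 1
    (***) = (*)

  main :: IO ()
  main = print (smap [1 .. 5 :: Integer])  -- 120, i.e. product [1..5]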

On Wed, 2 Apr 2008, Hans Aberg wrote:
But one should also be able to write (f+g)(x). - This does not work in Haskell, because Num requires an instance of Eq and Show.
You could define these instances with undefined function implementations anyway. But even in a cleaner type hierarchy like that of NumericPrelude you should not define this instance, because it would open new surprising sources of errors: http://www.haskell.org/haskellwiki/Num_instance_for_functions

On 3 Apr 2008, at 07:59, Henning Thielemann wrote:
But one should also be able to write (f+g)(x). - This does not work in Haskell, because Num requires an instance of Eq and Show.
You could define these instances with undefined function implementations anyway. But even in a cleaner type hierarchy like that of NumericPrelude you should not define this instance, because it would open new surprising sources of errors: http://www.haskell.org/haskellwiki/Num_instance_for_functions
This problem is not caused by defining f+g, but by defining numerals as constants. In some contexts it is natural to let the identity be written as 1, and then 2 = 2*1 is not a constant. With this definition, a (unitary) ring may be identified with an additive category with only one object. In mathematical terms, the set of functions is a (math) module ("generalized vectorspace"), not a ring. Anyway, Num is a class for unifying some common computer numerical types, not for doing algebra. If its (+) is derived from an additive monoid (or magma) class, then defining f+g will not interfere with Num. Hans

Hans Aberg wrote:
This problem is not caused by defining f+g, but by defining numerals as constants.
Yup. So the current (Num thing) is basically:

1. The type thing is a ring
2. ... with signs and absolute values
3. ... along with a natural homomorphism from Z into thing
4. ... and with Eq and Show.

If one wanted to be perfectly formally correct, then each of 2-4 could be split out of Num. For example, 2 doesn't make sense for polynomials or n by n square matrices. 4 doesn't make sense for functions. 3 doesn't make sense for square matrices of dimension greater than 1. And this quirk about 2(x+y) can be seen as an argument for not wanting it in the case of functions, either. I'm not sure I find the argument terribly compelling, but it is there anyway. On the other hand, I spend enough time already trying to explain Num, Fractional, Floating, RealFrac, ... to new Haskell programmers. I'm not sure it's an advantage if someone must learn the meaning of an additive commutative semigroup in order to understand the type signatures inferred from code that does basic math in Haskell. At least in the U.S., very few computer science students take an algebra course before getting undergraduate degrees.
In mathematical terms, the set of functions is a (math) module ("generalized vectorspace"), not a ring.
Well, I agree that functions are modules; but it's hard to agree that they are not rings. After all, it's not too difficult to verify the ring axioms.
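A minimal sketch of the split Chris describes, with class names invented for this illustration (the real Prelude keeps all of this inside Num):

  module SplitNum where
  import Prelude hiding ((+), (*), negate, abs, signum, fromInteger)

  -- 1. rings
  class Ring a where
    (+), (*) :: a -> a -> a
    negate   :: a -> a

  -- 2. signs and absolute values
  class Ring a => Signed a where
    abs, signum :: a -> a

  -- 3. the natural homomorphism from Z
  class Ring a => FromZ a where
    fromInteger :: Integer -> a

  -- 4. Eq and Show remain the separate classes they already are;
  --    a type only picks up the pieces that make sense for it.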

Hello Chris, Thursday, April 3, 2008, 6:07:53 PM, you wrote:
On the other hand, I spend enough time already trying to explain Num, Fractional, Floating, RealFrac, ... to new Haskell programmers. I'm not sure it's an advantage if someone must learn the meaning of an additive commutative semigroup in order to understand the type signatures inferred from code that does basic math in Haskell. At least in the U.S., very few computer science students take an algebra course before getting undergraduate degrees.
we already need to learn Category Theory to say "Hello, World" :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Chris Smith writes:
... So the current (Num thing) is basically:
1. The type thing is a ring
2. ... with signs and absolute values
3. ... along with a natural homomorphism from Z into thing
4. ... and with Eq and Show.
If one wanted to be perfectly formally correct, then each of 2-4 could be split out of Num. For example, 2 doesn't make sense for polynomials or n by n square matrices. 4 doesn't make sense for functions. 3 doesn't make sense for square matrices of dimension greater than 1. And, this quirk about 2(x+y) can be seen as an argument for not wanting it in the case of functions, either. I'm not sure I find the argument terribly compelling, but it is there anyway.
Why "formally correct"?? This is *your* vision. I might say that fromInteger makes sense for square matrices, mapping to a multiple of the unity matrix. 4 makes sense for functions on finite domains. The full Ring property requires a unit, which might not be there "formally". The "absolute value" might be a buzzword for <<norm>> which *can* be defined for polynomials. In short, Num is a conventional hybrid, and trying to squeeze it into *any* math formal structure is asking for trouble.
On the other hand, I spend enough time already trying to explain Num, Fractional, Floating, RealFrac, ... to new Haskell programmers. ... At least in the U.S., very few computer science students take an algebra course before getting undergraduate degrees.
Not only in US.
In mathematical terms, the set of functions is a (math) module ("generalized vectorspace"), not a ring.
Well, I agree that functions are modules; but it's hard to agree that they are not rings. After all, it's not too difficult to verify the ring axioms.
Oh. What is the function multiplication? You say: (f*g)(x) = f x * g x?? I agree. But if you say that the multiplication IS the *composition* (for example for the popular implementation of Peano/Church numerals), then I will also agree. So there isn't any obvious instance for the function multiplication. (The composition works even if neither "x" nor "f x" belongs to any ring.) BTW, functions "are" or are not members of modules/vector spaces depending on the codomain structure, so this requires a constrained instance specification. Fractional, Floating, etc. are also horrible. Why does the square root need to be Floating? It can belong to the algebraic number domain. We can continue this ad infinitum. I cared very much about this stuff some 10 years ago; I gave up... I would like, sure, to have the possibility to derive in Haskell some mathematical subsumptions, e.g., that the ring of integers modulo N becomes a field if N is prime. Or that every additive group is a module over the integers. But it seems that we are very far from such a dream. Jerzy Karczmarczuk
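The two candidate multiplications contrasted above, written out as plain Haskell (the names pointwiseMul and composeMul are invented for this sketch):

  -- pointwise multiplication: needs a Num codomain
  pointwiseMul :: Num b => (a -> b) -> (a -> b) -> (a -> b)
  pointwiseMul f g = \x -> f x * g x

  -- "multiplication is composition", Church-numeral style:
  -- needs no ring structure on either x or f x
  composeMul :: (b -> c) -> (a -> b) -> (a -> c)
  composeMul = (.)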

On 3 Apr 2008, at 16:07, Chris Smith wrote:
This problem is not caused by defining f+g, but by defining numerals as constants.
Yup. So the current (Num thing) is basically:
1. The type thing is a ring
2. ... with signs and absolute values
3. ... along with a natural homomorphism from Z into thing
4. ... and with Eq and Show.
If one wanted to be perfectly formally correct, then each of 2-4 could be split out of Num. For example, 2 doesn't make sense for polynomials or n by n square matrices.
Or ordinals.
4 doesn't make sense for functions. 3 doesn't make sense for square matrices of dimension greater than 1. And, this quirk about 2(x+y) can be seen as an argument for not wanting it in the case of functions, either. I'm not sure I find the argument terribly compelling, but it is there anyway.
On the other hand, I spend enough time already trying to explain Num, Fractional, Floating, RealFrac, ... to new Haskell programmers. I'm not sure it's an advantage if someone must learn the meaning of an additive commutative semigroup in order to understand the type signatures inferred from code that does basic math in Haskell. At least in the U.S., very few computer science students take an algebra course before getting undergraduate degrees.
It is probably not worth changing Num, as code already depends on it. But inserting some extra classes, and letting Num derive from them, makes it possible to use operators such as (+) without tying them to Eq, Show, etc.
In mathematical terms, the set of functions is a (math) module ("generalized vectorspace"), not a ring.
Well, I agree that functions are modules; but it's hard to agree that they are not rings. After all, it's not too difficult to verify the ring axioms.
It is the set of functions that may be a module or a ring, and what it is depends on how you define it. - It may be different in different contexts. And there is a different question of finding unambiguous notation. How would an explicit constant like 2 be defined in Haskell as a constant so that 2(sin) becomes possible? Isn't it OK to write (const 2)? Hans

On 2008-04-03, Chris Smith
Hans Aberg wrote:
This problem is not caused by defining f+g, but by defining numerals as constants.
Yup. So the current (Num thing) is basically:
1. The type thing is a ring
2. ... with signs and absolute values
3. ... along with a natural homomorphism from Z into thing
4. ... and with Eq and Show.
If one wanted to be perfectly formally correct, then each of 2-4 could be split out of Num. For example, 2 doesn't make sense for polynomials or n by n square matrices. 4 doesn't make sense for functions. 3 doesn't make sense for square matrices of dimension greater than 1. And, this quirk about 2(x+y) can be seen as an argument for not wanting it in the case of functions, either. I'm not sure I find the argument terribly compelling, but it is there anyway.
Just a nit, but 3 seems to make perfect sense for square matrices -- n gets mapped onto n * I_d for any dimension d:

  fromInteger (n*m) == fromInteger n * fromInteger m
  fromInteger (n+m) == fromInteger n + fromInteger m

-- Aaron Denney -><-
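A small sketch of Aaron's point for 2x2 integer matrices (the type M2 and its Num instance are invented for this example; abs and signum are left out, matching point 2 above):

  -- 2x2 integer matrices, row by row
  data M2 = M2 Integer Integer Integer Integer
    deriving (Eq, Show)

  instance Num M2 where
    M2 a b c d + M2 a' b' c' d' = M2 (a + a') (b + b') (c + c') (d + d')
    M2 a b c d * M2 a' b' c' d' =
      M2 (a*a' + b*c') (a*b' + b*d') (c*a' + d*c') (c*b' + d*d')
    negate (M2 a b c d) = M2 (negate a) (negate b) (negate c) (negate d)
    abs    = error "abs: no canonical choice for matrices"
    signum = error "signum: no canonical choice for matrices"
    fromInteger n = M2 n 0 0 n   -- n times the identity matrix

  -- In GHCi, both homomorphism laws hold, e.g.:
  --   (fromInteger (3*4) :: M2) == fromInteger 3 * fromInteger 4
  --   (fromInteger (3+4) :: M2) == fromInteger 3 + fromInteger 4
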
participants (12)
- Aaron Denney
- Brandon S. Allbery KF8NH
- Bulat Ziganshin
- Chris Smith
- Hans Aberg
- Henning Thielemann
- Janis Voigtlaender
- Jeremy Apthorp
- jerzy.karczmarczuk@info.unicaen.fr
- Jules Bean
- Loup Vaillant
- PR Stanley