
I have a question about why Haskell insists on being a purely functional language. Is there some feature that is only possible with purity? Perhaps programs become much easier to reason about and prove correct? Or perhaps purity lets laziness safely cache values? Could anyone give me some more information about why Haskell needs to be pure? Thanks a lot. Shen, Yu-Teh

Hi Shen,
I have a question about why Haskell insists on being a purely functional language. Is there some feature that is only possible with purity?
Laziness. It is very difficult to have a lazy language that is not pure. Laziness and purity together give you equational reasoning, aggressive compiler transformations, fewer obscure bugs, better compositionality, etc. Thanks, Neil
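A small sketch of what Neil means: because evaluation is demand-driven and side-effect-free, an infinite definition is a perfectly ordinary value, and consuming only part of it is safe.

```haskell
-- Purity plus laziness: infinite structures are fine as long as we
-- only ever demand a finite part of them.
naturals :: [Integer]
naturals = [1 ..]

main :: IO ()
main = print (take 5 naturals)  -- forces only the first five elements
```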

jerzy.karczmarczuk@info.unicaen.fr wrote:
Niko Korhonen writes:
...
Although it could be argued that laziness is the cause of some very obscure bugs... <g> Niko
Example, PLEASE.
[1..] == [1..] , for assumed operational semantics of one's own axiomatic semantics. Bugs are only a misunderstanding away. -- (c) this sig last receiving data processing entity. Inspect headers for past copyright information. All rights reserved. Unauthorised copying, hiring, renting, public performance and/or broadcasting of this signature prohibited.

On Jan 10, 2008 3:36 PM, Achim Schneider
jerzy.karczmarczuk@info.unicaen.fr wrote:
Niko Korhonen writes:
...
Although it could be argued that laziness is the cause of some very obscure bugs... <g> Niko
Example, PLEASE.
[1..] == [1..]
, for assumed operational semantics of one's own axiomatic semantics. Bugs are only a misunderstanding away.
It has nothing to do with laziness, but with applying an algebraic function, (==), to a codata structure (a stream). If Haskell kept laziness but enforced the separation of data and codata, such code wouldn't compile. Lazy lists and streams are never a problem in themselves, but you can't (generically) fold codata.
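Daniel's distinction can be illustrated directly (a sketch; Haskell itself does not enforce a data/codata split, so the "bad" comparison merely diverges at runtime rather than being rejected):

```haskell
ones :: [Int]
ones = 1 : ones  -- codata: an infinite stream

main :: IO ()
main = do
  -- Productive consumers of codata are fine:
  print (take 3 ones)             -- only forces three cells
  print (takeWhile (< 4) [1 ..])  -- stops as soon as the predicate fails
  -- But a generic fold over the whole structure, such as (==),
  -- would never return:
  -- print (ones == ones)         -- diverges; don't run this
```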
Best regards, Daniel Yokomizo

"Daniel Yokomizo" wrote:
It has nothing to do with laziness, but with using an algebraic function (==) with a codata structure (stream). If Haskell kept laziness but enforced separation of data and codata such code wouldn't compile. Lazy lists or streams never are a problem, but you can't (generically) fold codata.
Exactly. Denotationally it has nothing to do with laziness, but axiomatically it has. Because it looks like mathematical notation, one can easily be lulled into believing it reduces like mathematics, too.

On 10 Jan 2008, at 7:55 AM, Achim Schneider wrote:
Exactly. Denotationally it has nothing to do with laziness, but axiomatically it has. Because it looks like mathematical notation, one can easily be lulled into believing it reduces like mathematics, too.
What kind of mathematics? I don't know of any mathematics where algebraic simplifications are employed without proof of the underlying equations (in some denotational model). jcc

Jonathan Cast wrote:
What kind of mathematics? I don't know of any mathematics where algebraic simplifications are employed without proof of the underlying equations (in some denotational model).
Mathematics as, as my professor put it, "solving by staring".

On 11 Jan 2008, at 5:13 AM, Achim Schneider wrote:
Mathematics as, as my professor put it, "Solving by staring".
Professor of what? I would have been flunked for such an approach. jcc

Jonathan Cast wrote:
Professor of what? I would have been flunked for such an approach.
Professor as in "gives lectures"; he's a Dipl.-Ing., and also as in "has a job for life", although not formally. I think his main point in telling it is to stop people from blindly expanding and reducing terms.

Achim Schneider answers my question to somebody else (Niko Korhonen):
Although it could be argued that laziness is the cause of some very obscure bugs... <g> Niko
Example, PLEASE.
[1..] == [1..]
Whatever more you may say, this is neither obscure nor a bug. I am still waiting for a relevant example. But I don't insist too much... Jerzy Karczmarczuk

jerzy.karczmarczuk@info.unicaen.fr wrote:
[1..] == [1..]
Whatever you may say more, this is neither obscure nor a bug. I still wait for a relevant example. But I don't insist too much...
It's not an example of a bug, but of a cause.

Achim Schneider:
jerzy.karczmarczuk asks what's wrong with:
[1..] == [1..]
Whatever you may say more, this is neither obscure nor a bug. I still wait for a relevant example. But I don't insist too much...
It's not an example of a bug, but of a cause.
A cause of WHAT?? This is a perfect, nice, runaway computation, which agrees with all the dogmas of my religion.

Now, if somebody says that some people find it difficult to grasp the essence of laziness, and THIS is a source of bugs, I may believe it. But not laziness itself. (For some people the (==) operator seems to be a permanent source of bugs.) The difference between fold and the stricter fold' won't convince me either. People who want to use laziness must simply read the documentation...

On the other hand, what Don Stewart pointed out, the inconsistency between enumFrom and enumFromTo, and the difference in behaviour when one passes from Int to Integer, now, that is another story, slightly worrying... Thanks. Perhaps another example is more relevant: the space-time tradeoffs in the "optimized" version of the powerset generator... Jerzy Karczmarczuk
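The fold vs. stricter fold' difference Jerzy alludes to can be sketched concretely (foldl builds a long chain of thunks before collapsing it; Data.List.foldl' forces the accumulator at each step):

```haskell
import Data.List (foldl')

main :: IO ()
main = do
  -- Both compute the same sum, but foldl first builds a million-deep
  -- chain of thunks (((0+1)+2)+...) before collapsing it, while the
  -- stricter foldl' forces the accumulator at every step and runs in
  -- constant space.
  print (foldl  (+) 0 [1 .. 1000000 :: Integer])
  print (foldl' (+) 0 [1 .. 1000000 :: Integer])
```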

jerzy.karczmarczuk@info.unicaen.fr wrote:
A cause of WHAT?? This is a perfect, nice, *runaway* computation, which agrees with all *dogmas* of my religion.
The essence of laziness is to do the least work necessary to cause the desired effect, which here is to see that the set of natural numbers equals the set of natural numbers, which, axiomatically, is always computable in O(1) by equality-by-identity.

The essence of non-strictness, though, is another kind of story. Like a golem plowing half the country until you remember that you placed him a bit absent-mindedly in your backyard and said "plow"; it still won't plow mountains. The essence of strictness is easy, though: get stuck on a stone, fall over, and keep moving until you break.

And people just think axiomatically and then translate their understanding more or less blindly into code, even if they can't name the axiom(s). Or, as I already mentioned:

| , for assumed operational semantics of one's own axiomatic semantics.
| Bugs are only a misunderstanding away.

Achim Schneider wrote:
[1..] == [1..]
[some discussion about the nontermination of this expression]
The essence of laziness is to do the least work necessary to cause the desired effect, which is to see that the set of natural numbers equals the set of natural numbers, which, axiomatically, is always computable in O(1) by equality by identity.
This would make sense if Haskell had built-in equality and (==) were part of the formal semantics of Haskell, which it isn't: (==) is a library function like every other library function. How could the language, or a system implementing the language, decide whether this or any other library function returns True without actually running it? Haskell is a programming language, not a theorem prover. Tillmann
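To Tillmann's point, the list instance is plain recursive library code. Roughly (a sketch of the standard definition, written as a standalone function here):

```haskell
-- Roughly what the Prelude's `instance Eq a => Eq [a]` does:
eqList :: Eq a => [a] -> [a] -> Bool
eqList []       []       = True
eqList (x : xs) (y : ys) = x == y && eqList xs ys
eqList _        _        = False

main :: IO ()
main = do
  print (eqList [1, 2, 3 :: Int] [1, 2, 3])
  print (eqList [1, 2] [1, 2, 3 :: Int])
  -- eqList [1 ..] [1 ..] would recurse forever, exactly like (==).
```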

Tillmann Rendel wrote:
This would make sense if Haskell had built-in equality and (==) were part of the formal semantics of Haskell, which it isn't. (==) is a library function like every other library function. How could the language or a system implementing the language decide whether this or any other library function returns True without actually running it?
The list instance of Eq might, for example, know something about the structure of the lists and be smart enough not to get caught in the recursion of x = 1:1:x and y = 1:1:1:y, so it could successfully decide x == y to be True in six comparisons.
Haskell is a programming language, not a theorem prover.
Yes, I know. That's why I wrote axiomatic and operational semantics, not denotational; I didn't want to start a bar fight.

On Fri, 2008-01-11 at 01:12 +0100, Achim Schneider wrote:
The list instance for Eq might eg. know something about the structure of the lists and be smart enough not to get caught in the recursion of x = 1:1:x and y = 1:1:1:y so it could successfully compare x == y to True in six compares.
So let's imagine:

  ones  = 1 : ones
  ones' = repeat 1  where repeat n = n : repeat n

So you're suggesting that:

  ones  == ones  = True
but
  ones' == ones' = _|_

Well if that were the case then it is distinguishing two equal values and hence breaking referential transparency. We can fairly trivially prove that ones and ones' are equal, so (==) is not allowed to distinguish them. Fortunately it is impossible to write such a (==), at least using primitives within the language. Duncan
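Duncan's two definitions, runnable (a sketch; the local helper is renamed repeat' to avoid shadowing the Prelude's repeat): every observation expressible inside the language agrees on them, which is why no lawful (==) may separate them.

```haskell
ones :: [Int]
ones = 1 : ones

ones' :: [Int]
ones' = repeat' 1 where repeat' n = n : repeat' n

main :: IO ()
main = do
  -- Any finite observation gives the same answer for both lists:
  print (take 5 ones  == take 5 ones')
  print (ones !! 100  == ones' !! 100)
```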

Thank you Duncan, you took the words out of my mouth. :)
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

On Fri, 11 Jan 2008 09:16:12 +0200, Lennart Augustsson
So let's imagine:
ones = 1 : ones
ones' = repeat 1 where repeat n = n : repeat n
So you're suggesting that:
ones == ones = True but ones' == ones' = _|_
Well if that were the case then it is distinguishing two equal values and hence breaking referential transparency. We can fairly trivially prove that ones and ones' are equal so == is not allowed to distinguish them. Fortunately it is impossible to write == above, at least using primitives within the language.
If one can prove ones == ones = True with some method, why can't that method be made to work on ones'? Are you thinking of repeat (f x), by any chance?

Achim Schneider wrote:
The list instance for Eq might eg. know something about the structure of the lists and be smart enough not to get caught in the recursion of x = 1:1:x and y = 1:1:1:y so it could successfully compare x == y to True in six compares.
This would not be "something about the structure of lists"; this would be "something about the structure of thunks". Thunks are not supposed to be observable. If you augment the language to make thunks observable and comparable, you will break referential transparency. Jules

G'day all. Quoting jerzy.karczmarczuk@info.unicaen.fr:
Perhaps another example is more relevant, the tradeoffs space-time in the "optimized" version of the powerset generator...
Yeah, I was going to bring up that topic. I've been doing lazy programming for 15 or so years, and the only "bugs" that I can think of are:

1. Indirect "black holes" that are not expressible in a strict language. You generally have to be doing something bizarre for this to occur, and it doesn't take long before you can accurately predict when they constitute a likely risk.

2. Space leaks. This is the one thing that really bites hard, because a space leak is a global property of a lazy program and hence can't be reasoned about locally.

Cheers, Andrew Bromage

On Jan 11, 2008 12:09 AM,
1. Indirect "black holes" that are not expressible in a strict language. You generally have to be doing something bizarre for this to occur, and it doesn't take too long before you can accurately predict when they constitute a likely risk.
What do you mean by "black hole" here? Luke

G'day all.
Quoting Luke Palmer
What do you mean by "black hole" here?
A "black hole" is what happens when evaluation of something recursively depends on its own evaluation in such a way that no useful work can be done. An example is:

  let omega = omega + 1 in omega

In previous GHCs, that triggered an error message which used the phrase "black hole". Now, I think it just consumes the heap and/or stack. On my 2005-era Hugs, it causes a seg fault.

Most examples of black holes aren't so blatant. In all cases they are genuine bugs in the program (i.e. the program is incorrect), but programs containing bugs like this are not even expressible in a strict language. So I claim this is a class of (admittedly rare) bug that is only a problem because of laziness. Cheers, Andrew Bromage
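Andrew's example can be made runnable (a sketch assuming GHC, whose runtime detects the self-dependent thunk and throws a NonTermination exception, shown as <<loop>>; other implementations may loop or crash instead):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A black hole: evaluating omega demands omega itself.
omega :: Int
omega = omega + 1

main :: IO ()
main = do
  r <- try (evaluate omega) :: IO (Either SomeException Int)
  case r of
    Left e  -> putStrLn ("caught: " ++ show e)  -- on GHC, typically <<loop>>
    Right n -> print n
```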

ajb@spamcop.net writes:
A "black hole" is what happens when evaluation of something recursively depends on its own evaluation in such a way that no useful work can be done. An example is:
let omega = omega + 1 in omega
In previous GHCs, that triggered an error which used the phrase "black hole". Now, I think it just consumes the heap and/or stack. On my 2005-era Hugs, it causes a seg fault.
Strange, because black holes are visible to the runtime. Clean would say: "Warning, cycle in spine detected", and stop. Jerzy Karczmarczuk

PS. Perhaps Simon decided to think more literally about the black hole as it is in the theory of relativity? You know, when you lower a probe into it and watch from a faraway, safe distance, it takes an infinite amount of time for the probe to reach the event horizon...

Some people seem to think that == is an equality predicate. This is a big source of confusion for them; until they realize that == is just another function returning Bool, they will make claims like [1..] == [1..] having an unnatural result. The == function is only vaguely related to the equality predicate in that it is meant to be a computable approximation of semantic equality (but since it's overloaded it can be anything, of course). -- Lennart
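Since (==) is overloaded, an instance is free to implement any computable approximation it likes. An illustrative sketch (the CI type is made up for this example):

```haskell
import Data.Char (toLower)

-- A deliberately non-structural equality: case-insensitive strings.
newtype CI = CI String

instance Eq CI where
  CI a == CI b = map toLower a == map toLower b

main :: IO ()
main = print (CI "Haskell" == CI "hASKELL")
```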

Am Freitag, 11. Januar 2008 08:11 schrieb Lennart Augustsson:
Some people seem to think that == is an equality predicate. This is a big source of confusion for them; until they realize that == is just another function returning Bool they will make claims like [1..]==[1..] having an unnatural result.
The == function is only vaguely related to the equality predicate in that it is meant to be a computable approximation of semantic equality (but since it's overloaded it can be anything, of course).
-- Lennart
But class methods are expected to fulfill some axioms. I'd suppose that (==) should be an equivalence relation. Of course, this is not implementable because of infinite data structures, but one could relax the axioms so that (==) is allowed to return _|_ instead of the expected value. Differentiating between data and codata would of course be the better solution.

However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn't adhere to any meaningful axiom set for Eq, so I think this behavior should be changed. Think of a set implementation which uses (==) to compare set elements for equality. The NaN behavior would break this implementation, since it would allow sets which contain NaN multiple times. Best wishes, Wolfgang
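The behavior Wolfgang describes is easy to observe: 0/0 at type Double is an IEEE NaN, and (==) on Double is the IEEE comparison.

```haskell
nan :: Double
nan = 0 / 0

main :: IO ()
main = do
  print (isNaN nan)   -- 0/0 really is NaN
  print (nan == nan)  -- the IEEE test: a NaN is not equal to itself
```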

Wolfgang Jeltsch
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn't adhere to any meaningful axiom set for Eq. So I think that this behavior should be changed. Think of a set implementation which uses (==) to compare set elements for equality. The NaN behavior would break this implementation since it would allow for sets which contain NaN multiple times.
You forget that the intention of NaN is to deny membership in any set of numbers. -- Dipl.-Math. Wilhelm Bernhard Kloke Institut fuer Arbeitsphysiologie an der Universitaet Dortmund Ardeystrasse 67, D-44139 Dortmund, Tel. 0231-1084-257 PGP: http://vestein.arb-phys.uni-dortmund.de/~wb/mypublic.key

Am Freitag, 11. Januar 2008 10:54 schrieb Wilhelm B. Kloke:
Wolfgang Jeltsch schrieb:
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn't adhere to any meaningful axiom set for Eq. So I think that this behavior should be changed. Think of a set implementation which uses (==) to compare set elements for equality. The NaN behavior would break this implementation since it would allow for sets which contain NaN multiple times.
You forget, that the intention of NaN is denial of membership of any set of numbers.
This doesn’t matter. The Set data type I’m talking about would not know about NaN and would therefore allow multiple NaNs in a set. Best wishes, Wolfgang

On Fri, 11 Jan 2008 13:29:35 +0200, Wolfgang Jeltsch
Am Freitag, 11. Januar 2008 10:54 schrieb Wilhelm B. Kloke:
Wolfgang Jeltsch schrieb:
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn't adhere to any meaningful axiom set for Eq. So I think that this behavior should be changed. Think of a set implementation which uses (==) to compare set elements for equality. The NaN behavior would break this implementation since it would allow for sets which contain NaN multiple times.
You forget, that the intention of NaN is denial of membership of any set of numbers.
This doesn't matter. The Set data type I'm talking about would not know about NaN and would therefore allow multiple NaNs in a set.
This is a good thing, because one can define the natural numbers with such sets :-)

However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking.
In my opinion it is better than yielding True: 0/0 doesn't make sense, so it can't be compared to anything else that doesn't make sense.
Whether == should yield False at all is another matter. It may be better to yield some kind of bottom (undefined?), but then compatibility with IEEE 754 might be an issue, and hence with external libraries like BLAS, GMP, ...

Wolfgang Jeltsch
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn’t adhere to any meaningful axiom set for Eq.
Tough luck, but that's how floating point works, and what the numericalists know, and possibly even love (although I have my doubts). Sanitizing this behavior would make Haskell less usable for real-world numerical problems. As a compromise, what about an option to make NaN (and presumably the infinities) cause an immediate exception? (And, cetero censeo, exceptions for Int overflow as well.) -k -- If I haven't seen further, it is by standing in the footprints of giants

Am Freitag, 11. Januar 2008 11:33 schrieben Sie:
Wolfgang Jeltsch
writes: However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn’t adhere to any meaningful axiom set for Eq.
Tough luck, but that's how floating point works, and what the numericalists know, and possibly even love (although I have my doubts). Sanitizing this behavior would make Haskell less usable for real-world numerical problems.
The IEEE floating-point equality test has to yield False when comparing NaN with NaN. Haskell's (==) has to yield True or undefined when comparing a value with itself. So Haskell's (==) simply has to be different from the IEEE floating-point equality test. What about providing a separate function for the latter?
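One hypothetical way to keep both, sketched with made-up names (Haskell provides neither function under these names): leave the raw IEEE test available and add a separate equality that is reflexive even on NaN.

```haskell
-- Hypothetical names, for illustration only.
ieeeEq :: Double -> Double -> Bool
ieeeEq = (==)  -- today's (==): the raw IEEE test, so NaN `ieeeEq` NaN is False

totalEq :: Double -> Double -> Bool
totalEq x y = (isNaN x && isNaN y) || x == y  -- reflexive even on NaN

main :: IO ()
main = do
  print (ieeeEq  (0 / 0) (0 / 0))
  print (totalEq (0 / 0) (0 / 0))
```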
As a compromise, what about an option to make NaN (and presumably the infinities) cause an immediate exception? (And, cetero censeo, exceptions for Int overflow as well.)
This would be far better (and far more Haskell-like).
Best wishes, Wolfgang

Wolfgang Jeltsch wrote:
The IEEE floating point equivalence test has to yield false when comparing NaN with NaN. Haskell’s (==) has to yield True or undefined when comparing a value with itself. So Haskell’s (==) just has to be different from the IEEE floating point equivalence test. What about providing a separate function for the latter?
I wonder where the requirement on (==) you mention above is specified. I can't find it in the report, but maybe I just overlooked it. OTOH, the report does say: "The Ord class is used for totally ordered datatypes". IEEE comparisons are not a total ordering, so in principle they can't be used in the Ord methods.

Also, comparing IEEE-like numbers for equality is practically always a mistake (the cases where it isn't are exceedingly rare), so the Eq instance for Float and Double doesn't actually provide any meaningful functionality.

Anyway, having "correct" but inefficient implementations of the Eq and Ord methods for floating-point numbers sounds like a good idea to me, provided that the fast comparisons are still available. Personally, I'd be fine with not having those instances at all, but that's just me, I guess.
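Roman's point about Ord is easy to demonstrate: with a NaN argument, all of <, ==, and > deny, yet compare must still return one of LT/EQ/GT (a sketch; which constructor GHC happens to pick for NaN is an implementation accident, not something to rely on).

```haskell
nan :: Double
nan = 0 / 0

main :: IO ()
main = do
  print (nan < 1)   -- IEEE comparison: False
  print (nan == 1)  -- IEEE comparison: False
  print (nan > 1)   -- IEEE comparison: False
  -- 'compare' cannot say "incomparable"; it must pick a constructor anyway:
  print (compare nan 1)
</imports>
```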
As a compromise, what about an option to make NaN (and presumably the infinities) cause an immediate exception? (And, cetero censeo, exceptions for Int overflow as well.)
This would be far better (and far more Haskell-like).
No, it would be far more Haskell-like not to use Float and Double (nor Int, for that matter) for things they shouldn't be used for. If all you want are fractions, use Rational. Float and Double are (or should be) only for high-performance computations, and people implementing those ought to know what they are doing. If they don't, they'll have much bigger problems than NaNs not being equal to themselves. BTW, some algorithms depend on silent NaNs and many depend on infinity. As an aside, I'd recommend http://citeseer.ist.psu.edu/goldberg91what.html as an introduction to some of the problems with floating point. The paper also talks about some uses for silent NaNs. Roman

Ketil Malde:
Wolfgang Jeltsch:
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn’t adhere to any meaningful axiom set for Eq.
Tough luck, but that's how floating point works, and what the numericalists know, and possibly even love (although I have my doubts). Sanitizing this behavior would make Haskell less usable for real-world numerical problems.
As a compromise, what about an option to make NaN (and presumably the infinities) cause an immediate exception? (And, cetero censeo, exceptions for Int overflow as well.)
People, you are monsters. First, despite the *common, well known* truth that Haskell is not Mathematics, this illusion seems to be extremely persistent! Haskell is a victim - no, some users are victims of its success as a formal language, not just as a coding tool... They *want* to have Eq as they imagine the equality, including the comparison between incomparable. This is BTW a long standing philosophical problem. For centuries some speculative guys tried to analyse such "assertions" as God == God, or death==death. Or myself==myself. Of course, even if they produced some cute conclusions, they had no whatsoever sense for the others. Now we have the modern variants of it: NaN == NaN, bottom == bottom ... Of course, there are differences, since NaN is, no - ARE well defined *objects*. In IEEE there may be several NaNs, if the exponent is e_max+1, then *any* significand (mantissa for the dinosaurs) is good for a NaN. ++ Then, I see here, and on some other lists some tendency to speculate on the numerics by people who really don't need it, and don't use it. The bombing of NaN *might* be a profound compilation option, but for people who really do numerical work, this is a blessing NOT to have it. - Zero (or minimum, etc.) finders don't explode on your face when the automaton gets out of the range because of the instabilities. - Vectorized computations which produce plenty of good numbers and sometimes diverge, do not invalidate all work. - Ignoring Int overflow is a cheap way of having `mod` (MAXINT+1). Useful for many purposes. - In such vector/matrix packages as Matlab, where arrays may represent geometric objects, NaNs mean: "no coordinates here, empty". Simple and useful. etc. So, don't "sanitize" anything, unless you know what you are really doing! I would suggest to Wolfgang Jeltsch a little more of reserve before making sharp categorical proposals concerning the fl. point computations (and also acknowledge that NaN is not a unique entity). 
It is easy to propose - in the name of "purity" - to massacre existing structures; several religious sects and political doctrines were born in such a way. The result was usually horrible... The Num hierarchy in Haskell is bad, we know it, especially for people who do some more formal mathematics. There are more interesting problems to solve than organising a crusade against IEEE, "illegalizing" the Ord instance for numbers, etc. Jerzy Karczmarczuk

On Fri, 11 Jan 2008 14:21:45 +0200,
Ketil Malde:
Wolfgang Jeltsch:
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn’t adhere to any meaningful axiom set for Eq.
Tough luck, but that's how floating point works, and what the numericalists know, and possibly even love (although I have my doubts). Sanitizing this behavior would make Haskell less usable for real-world numerical problems.
As a compromise, what about an option to make NaN (and presumably the infinities) cause an immediate exception? (And, cetero censeo, exceptions for Int overflow as well.)
People, you are monsters. First, despite the *common, well known* truth that Haskell is not Mathematics, this illusion seems to be extremely persistent! Haskell is a victim - no, some users are victims of its success as a formal language, not just as a coding tool... They *want* to have Eq as they imagine the equality, including the comparison between incomparables. This is BTW a long-standing philosophical problem. For centuries some speculative guys tried to analyse such "assertions" as God == God, or death == death. Or myself == myself. Of course, even if they produced some cute conclusions, those made no sense whatsoever for the others. Now we have the modern variants of it: NaN == NaN, bottom == bottom ...
Well, Haskell has this "referential transparency" thing which says that a function is a function and you will never be able to build anything else :-)
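[The behaviour being debated is easy to observe directly; a minimal sketch in plain Haskell, no extensions:]

```haskell
-- IEEE comparison semantics as exposed by GHC's Double:
-- NaN compares unequal to everything, including itself.
main :: IO ()
main = do
  let nan = 0 / 0 :: Double
  print (nan == nan)  -- False: Eq reflexivity fails for NaN
  print (isNaN nan)   -- True: the Prelude's intended NaN test
```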

Cristian Baboi writes after my long political speech on numerics:
Well, Haskell has this "referential transparency" thing which says that a function is a function and you will never be able to build anything else :-)
What do you want to say/claim/suggest/propose/deny?? All these speculations about NaN==NaN and other disgraceful exceptions don't convey anything which would really break referential transparency, unless you outlaw yourself. To travesty a US military saying: if 'em things are not comparable, don't! We can formalize this attitude, e.g. by extending/modifying the Num (and the Eq and Ord) classes to take into account the specificity of non-standard arithmetic, but in general this contains plenty of traps. It will be done eventually, but perhaps not tomorrow... Ketil Malde reacts to:
The bombing of NaN *might* be a profound compilation option, but for people who really do numerical work, this is a blessing NOT to have it.
I don't understand this. Are you worried users will edit the language pragmas in your code, and complain about NaN errors?
I do not worry about anything. I signal that the philosophy of crashing the program as soon as an exceptional number is created (or even used), is not the only one possible.
Back when I *was* using the abbreviation FP for 'Floating Point', I often got NaNs due to programming errors. Given NaN's nature of contaminating subsequent results, getting an exception at the first occurrence would aid in tracking it down.
- Ignoring Int overflow is a cheap way of having `mod` (MAXINT+1). Useful for many purposes.
...and ditto for this. The usefulness of one case in no way justifies sabotaging all other cases.
As I said, the exceptional treatment of exceptional values might be
a compilation option, as it was in some Fortrans I used long time ago.
You want your programs to react hysterically to all difficulties in math,
since you are afraid of propagating NaNs, etc. And *then* you will have to
sit down and correct your code. If there is no exception, your program
may run happily, and produce rubbish, yes?
I am more conservative. If you are afraid of rubbish, protect your code
by appropriate checks *before* the errors occur. What you call a "sabotage"
I call the programmer's negligence. You say that the usage of NaN as "empty"
is a <

jerzy.karczmarczuk@info.unicaen.fr writes:
The difference between you (and/or Wolfgang J.) and myself is that I enjoy more my freedom, even if I have to pay with a little more work. You want to enforce rigid reactions of the system. You should be free to do it on *your* machine, not on mine.
You are putting words in my mouth! I do not want to enforce rigid reactions on the system, I want the option to enforce them on my programs.
As I said, the exceptional treatment of exceptional values might be a compilation option, as it was in some Fortrans I used long time ago.
*I* proposed a compile-time option, *you* responded that it is a "blessing NOT to have it".
You want your programs to react histerically to all difficulties in math, since you are afraid of propagating NaNs, etc.
If you consider halting execution to be a hysterical reaction, arithmetic errors to be all difficulties in math, and wishing for accurate error messages to be afraid, I guess the answer is 'yes'.
And *then* you will have to sit down and correct your code. If there is no exception, your program may run happily, and produce rubbish, yes?
Yes.
I am more conservative. If you are afraid of rubbish, protect your code by appropriate checks *before* the errors occur.
I've written a bit of checking code in my time. The problem is that you quickly end up with more checking code than 'real' code, that the checking code paths are rarely exercised, and thus even more bug-prone than the rest of the code, and that usually there is little you can sensibly do other than halt execution anyway. The nice thing about checking for integer overflow and arithmetic exceptions is that they can be added with no cost in code size or complexity, and (at least on some architectures) no cost in performance - perhaps even improvements, for signaling NaNs. My Haskell programs tend to be rather Spartan, and I think this makes them more readable, and thus more likely to actually be correct.
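[The "check before the error occurs" style under discussion can be made concrete; a sketch with hypothetical names (`safeSqrt`, `safeDiv` are illustrations, not library functions):]

```haskell
-- Guarding computations by hand instead of relying on NaN
-- propagation or runtime exceptions. The checking code quickly
-- rivals the "real" code in size, which is Ketil's point.
safeSqrt :: Double -> Either String Double
safeSqrt x
  | x < 0     = Left ("sqrt of negative number: " ++ show x)
  | otherwise = Right (sqrt x)

safeDiv :: Double -> Double -> Either String Double
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x / y)

main :: IO ()
main = do
  print (safeSqrt 4)   -- Right 2.0
  print (safeDiv 1 0)  -- Left "division by zero"
```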
What you call a "sabotage"
I.e. the insistence on wrap-around Ints (a minority use case!) and quiet NaNs as the One True Way, disallowing all other approaches.
I call the programmer's negligence.
I'll plead guilty to the charge of being negligent and not checking intermediate results for errors and overflows. But the only reason for using Int (and not Integer), and arguably floating point, is performance. Wrapping everything in checks would be laborious, and I am fairly certain that performance would suffer by magnitudes. So yes, I am lazy, I use Int and cross my fingers. (I'm not alone; I've read a fair bit of Haskell code (starting with the Prelude), I must just have been unfortunate to miss all the code written by the industrious and serious people who actually check their Int operations...) -k -- If I haven't seen further, it is by standing in the footprints of giants
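[The Int-versus-Integer trade-off Ketil describes is easy to see; a small sketch, assuming GHC's 64-bit Int:]

```haskell
-- Int silently wraps on overflow; Integer is exact but slower.
factorial :: Integer -> Integer
factorial n = product [1..n]

main :: IO ()
main = do
  print (product [1..25] :: Int)  -- wrapped: a wrong, but fast, answer
  print (factorial 25)            -- exact: 15511210043330985984000000
```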

jerzy.karczmarczuk@info.unicaen.fr writes:
People, you are monsters.
Well, bring on the torches and the pitchforks (although the image in my mind is more like a mob carrying lenses and bananas).
no, some users are victims of its success as a formal language, not just as a coding tool
I think Haskell's theoretical basis is part of its success as a coding tool.
... They *want* to have Eq as they imagine the equality, including the comparison between incomparable.
In an ideal world, yes, but I think the monster to which you respond was fairly clear on being 'practical' here?
The bombing of NaN *might* be a profound compilation option, but for people who really do numerical work, this is a blessing NOT to have it.
I don't understand this. Are you worried users will edit the language pragmas in your code, and complain about NaN errors? Back when I *was* using the abbreviation FP for 'Floating Point', I often got NaNs due to programming errors. Given NaN's nature of contaminating subsequent results, getting an exception at the first occurrence would aid in tracking it down.
- Ignoring Int overflow is a cheap way of having `mod` (MAXINT+1). Useful for many purposes.
...and ditto for this. The usefulness of one case in no way justifies sabotaging all other cases. I'd wager Ints see wider use in settings where the silent 'mod' is harmful than where this effect is desired. Again, the current behavior causes errors that are very hard to track down. IMHO, these are two very different types, and I'm slightly baffled that this fact is not reflected in Haskell.
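[The silent 'mod' behaviour both sides refer to, with GHC's wrap-around Int (the Haskell report leaves overflow behaviour implementation-defined):]

```haskell
-- GHC's Int wraps modulo 2^wordsize on overflow, which is Jerzy's
-- cheap `mod` (MAXINT+1) and Ketil's hard-to-track-down error.
main :: IO ()
main = do
  print (maxBound :: Int)        -- 9223372036854775807 on 64-bit GHC
  print ((maxBound :: Int) + 1)  -- wraps silently to minBound
  print (minBound :: Int)
```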

Ketil Malde
The bombing of NaN *might* be a profound compilation option, but for people who really do numerical work, this is a blessing NOT to have it.
I'll expand a bit on this, after having checked with Wikipedia. Please correct me (and it) if I'm wrong, but:
1) Intel CPUs generate exceptions, not NaNs (unless a NaN is already involved), so NaNs are introduced by choice in the run-time system.
2) IEEE 754 supports both 'signaling' and 'quiet' NaNs, so the standard does not bless either behavior in this regard.
And, in Haskell, I'd consider using NaNs for missing values slightly abusive of the system; this is just a poor man's way of spelling "Maybe Double".
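[Spelling "missing value" as Maybe Double instead of NaN, as Ketil suggests, might look like this (`mean` is a hypothetical example function):]

```haskell
-- Making absence explicit in the type instead of smuggling it
-- through as NaN. The caller is forced to handle the empty case.
mean :: [Double] -> Maybe Double
mean [] = Nothing
mean xs = Just (sum xs / fromIntegral (length xs))

main :: IO ()
main = do
  print (mean [1, 2, 3])  -- Just 2.0
  print (mean [])         -- Nothing
```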

Am Freitag, 11. Januar 2008 13:21 schrieb jerzy.karczmarczuk@info.unicaen.fr:
Ketil Malde:
Wolfgang Jeltsch:
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn’t adhere to any meaningful axiom set for Eq.
Tough luck, but that's how floating point works, and what the numericalists know, and possibly even love (although I have my doubts). Sanitizing this behavior would make Haskell less usable for real-world numerical problems.
As a compromise, what about an option to make NaN (and presumably the infinities) cause an immediate exception? (And, cetero censeo, exceptions for Int overflow as well.)
People, you are monsters. First, despite the *common, well known* truth that Haskell is not Mathematics, this illusion seems to be extremely persistent! Haskell is a victim -
I was arguing from a software technology point of view: if (==) yields False when comparing a value with itself, this can break code (like a set implementation) which relies on certain guarantees. The fact that Haskell allows to define (==) rather arbitrarily doesn’t mean that the use of arbitrary definitions of (==) is what we want. It’s just that the language is too weak to express axioms. My impression is that staying close to math is good from a software technology point of view. And it has the advantage of less confusion for the user.
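[Wolfgang's point about broken guarantees is not hypothetical; with GHC's Data.Set (from the containers package), which assumes a lawful Ord instance, a NaN element becomes unfindable:]

```haskell
import qualified Data.Set as Set

-- Data.Set relies on a total order. NaN violates it: compare nan x
-- is never EQ, so the element just inserted can no longer be found.
main :: IO ()
main = do
  let nan = 0 / 0 :: Double
      s   = Set.fromList [nan]
  print (Set.size s)        -- 1: the element is in there
  print (Set.member nan s)  -- False: yet lookup cannot find it
```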
[…]
Then, I see here, and on some other lists some tendency to speculate on the numerics by people who really don't need it, and don't use it.
Even if I don’t use numerics, I do care about the Haskell language as a whole.
[…]
I would suggest to Wolfgang Jeltsch a little more of reserve before making sharp categorical proposals concerning the fl. point computations (and also acknowledge that NaN is not a unique entity). It is easy to propose - in the name of "purity" - to massacre existing structures; several religious sects and political doctrines were born in such a way. The result was usually horrible...
I don’t see what’s so extreme about suggesting that IEEE floating point comparison should maybe be a separate operator. And I think it is really inappropriate to compare this to horrible sects and doctrines. Do you really want to argue that someone who insists on a rather clean language is dangerous? Then more or less everyone on this list would be dangerous - from a C programmer’s point of view.
The Num hierarchy in Haskell is bad, we know it, especially for people who do some more formal mathematics. There are more interesting problems to solve than organising a crusade against IEEE, "illegalizing" the Ord instance for numbers, etc.
Please don’t suggest that “illegalizing” some Ord instance is similar to killing people out of religious motives.
Jerzy Karczmarczuk
Best wishes, Wolfgang

Wolfgang Jeltsch protests (all this is about pathologies of the floating point computations in Haskell, of course...):
Please don’t suggest that “illegalizing” some Ord instance is similar to killing people out of religious motives.
Did I? Where?? This is silly... I admit that I over-reacted to Ketil; I didn't notice that he proposed the "bombing reaction" as an *option*, not as the Only True way of dealing with NaNs. I am sorry. Although Ketil then seems to be saying that raising the exception *is* the correct approach, while I think it is not, being too brutal. OK, probably the best for people who are learning the language; no good for professionals, and impossible for some library procedures. But *you* seem to have decided that any "weakness" in treating exceptional values is absolutely wrong and sinful. The analogy with religious issues is weak; there is simply a thought pattern of "purists" who want to break the existing imperfections in the name of their visions. You demonstrate this kind of absolute thinking through the usage of the clause "as a whole":
Even if I don’t use numerics, I do care about the Haskell language as a whole.
Your name is Legion... But I think that Lennart is right. *DO* something positive. In other terms, instead of destroying an existing church, gather some followers and make your own. Seriously. No type Double, but something possessing variants:
* Float2 (regular)
* UnderF (non-normalizable, underflow fl.p.)
* PositiveInfinity | NegativeInfinity
* NaN Mantissa [where Mantissa is a restricted numerical thingie; their number is the no. of floats between 0.5 and 1]
etc., with appropriate arithmetic operators. And if you are unhappy with: blah > blah = False, but: compare blah blah giving GT, please tell us what you think SHOULD be the answer. I say that those operators are being used outside their allowed domain. Any answer is bad. Exception? Maybe... Better: leave it... But anyway, I don't buy this:
if (==) yields false when comparing a value with itself, this can break code (like a set implementation) which relies on certain guarantees.
In pathological cases you are not really allowed to use the word "itself" as casually as you would like to. I agree that
The fact that Haskell allows to define (==) rather arbitrarily doesn’t mean that the use of arbitrary definitions of (==) is what we want.
The problem of equality/identity of things lies partially in the fact that it is a human construction. Nature does not "compare things for equality". So, the arbitrariness is unavoidable.
My impression is that staying close to math is good from a software technology point of view. And it has the advantage of less confusion for the user.
What does it mean "close to math"? How close? Does math raise exceptions upon the division by zero? Does *MATH* answer the question what is: (0/0)==(0/0) ? Nope! David Roundy says ...
Prelude> x/x
NaN
The "true" answer here is that x/x == 1.0 (not 0 or +Infinity), but there's no way for the computer to know this, so it's NaN.
His comment uses the word "true" rather loosely, even in quotes. Who is to say what "truth" means here? We cannot oblige the computer to interpret the division operator as we think of it, in math terms! Wherever you look, you will see some red lines not to cross. If the physical processes which implement real arithmetic are far from true real math, trying to purify a programming language to make it more math-oriented serves nobody. Or the Devil... We are discussing simple things, but I have already seen a guy who claimed that all computer logic is inherently wrong because of Gödel's theorem... This is a never-ending discussion.
I don’t see what’s so extreme about suggesting that IEEE floating point comparison should maybe be a seperate operator.
You were more picky than that. Here, I would only repeat that this - for me - is not just a question of operators, but of the *type* of numbers.
And I think, it is really inappropriate to compare this to horrible sects and doctrines.
Ugh... Again. Sorry if you took it personally. But a doctrine is just that, a system which is meant to be authoritative, and if applied, then without discussion. Usually based on the personal visions of one or a few people. It doesn't need to be negative nor horrible, although it often is. http://en.wikipedia.org/wiki/Doctrine I mentioned a particular variant of doctrinal thinking: the *purification* of the existing. It doesn't need to be dangerous, but often is... ==== An anecdote. At the time of Konrad Duden (Vater der deutschen Rechtschreibung, 1829–1911, as you probably know much better than I...) there was a tendency to "purify" the German language, full of foreign words, roots, forms, etc. There is a story about one such word, the shortwriting: "stenography". Stenografie. Who needs Greek? And a good Germanic word was proposed: Kurzschrift. Yes! Please - (if you are not German; they don't need to do that) - repeat a few times: Kurzschrift, Kurzschrift, Kurzschrift, Kurzschrift ... You will hear the Walkyrien galloping through the skies, the huge shadow of Siegfried or some other Lohengrin will lie over the murky fields, etc. *Absolutely* Germanic! But... This word comes from "curtus", "short" in Latin, and the verb "scribere", "scriptus", Latin again... In this way, the German purists translated Greek into Latin. This is the story; I hope nobody will shout that I have something against a particular European nation; I know this story from a decent Berliner, who told it to me while eating a Hamburger. Or perhaps it was vice versa. Too much beer... Jerzy Karczmarczuk
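[The variant number type Jerzy sketches earlier in this message could look roughly like this (all names hypothetical, arithmetic deliberately simplified; a real version would, e.g., return the signed infinity for x/0 with x /= 0):]

```haskell
import Data.Word (Word64)

-- An explicit floating-point sum type: NaN and the infinities are
-- ordinary constructors, so arithmetic propagates them visibly
-- instead of through IEEE bit patterns.
data FP
  = Regular Double     -- Jerzy's "Float2"
  | Underflow Double   -- his "UnderF", the denormals
  | PosInfinity
  | NegInfinity
  | NaN Word64         -- his "NaN Mantissa": a payload
  deriving (Eq, Show)

fpDiv :: FP -> FP -> FP
fpDiv (Regular _) (Regular 0) = NaN 0   -- division by zero, collapsed here
fpDiv (Regular x) (Regular y) = Regular (x / y)
fpDiv (NaN p)     _           = NaN p   -- NaNs propagate, visibly
fpDiv _           (NaN p)     = NaN p
fpDiv _           _           = NaN 0   -- remaining cases elided

main :: IO ()
main = do
  print (fpDiv (Regular 1) (Regular 2))  -- Regular 0.5
  print (fpDiv (Regular 0) (Regular 0))  -- NaN 0
```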

jerzy.karczmarczuk@info.unicaen.fr wrote:
Wolfgang Jeltsch wrote:
My impression is that staying close to math is good from a software technology point of view. And it has the advantage of less confusion for the user.
What does it mean "close to math"? How close? Does math raise exceptions upon the division by zero? Does *MATH* answer the question what is: (0/0)==(0/0) ? Nope!
Exactly. So why try to give an answer in Haskell? MATH says: the expression 0/0 is undefined, thus comparing (0/0)==(0/0) is undefined, too. I would expect Haskell to say the same. Cheers Ben

Ben Franksen writes:
jerzy.karczmarczuk@info.unicaen.fr wrote: ...
Does *MATH* answer the question what is: (0/0)==(0/0) ? Nope!
Exactly. So why try to give an answer in Haskell? MATH says: the expression 0/0 is undefined, thus comparing (0/0)==(0/0) is undefined, too. I would expect Haskell to say the same.
I don't know whether you are serious, or you are pulling my leg... Let's suppose that it is serious. When math says that something is undefined, in my little brain I understand that there is no answer. NO answer. Is this the "undefined" you want to have? The bottom, non-termination? Now, this is obviously the *worst* possible reaction of the system; the IEEE indefinite is much better, since at least you know what happened. Would you propose non-termination as the outcome of all "errors", such as a negative argument to the real sqrt, etc.? Well, as you wish... But don't write medical software, please... Jerzy Karczmarczuk

jerzy.karczmarczuk@info.unicaen.fr wrote:
When math says that something is undefined, in my little brain I understand that there is no answer. NO answer.
Math doesn't say that something is undefined, but tells you that you did something that's illegal, i.e. impossible, in the system you're working with. Trying to divide by zero is like trying to break out of a mental asylum with a banana: it's not a good tool to fight your way free, and using it will get you back into the asylum, by system-inherent laws. The mathematically right, and only really sane, way to handle such things is not to do them at all, which would change Haskell's semantics rather drastically. There just is no unsafePerformMath :: Math a -> a. -- (c) this sig last receiving data processing entity. Inspect headers for past copyright information. All rights reserved. Unauthorised copying, hiring, renting, public performance and/or broadcasting of this signature prohibited.

Achim Schneider writes:
jerzy.karczmarczuk@info.unicaen.fr wrote:
When math says that something is undefined, in my little brain I understand that there is no answer.
Math doesn't say that something is undefined, but tells you that you did something that's illegal, i.e. impossible, in the system you're working with.
Yeah, sure. Thank God, we have on this list a fellow who gets direct telephone calls from Her Majesty the Queen Mathematics. And knows better. Go tell all the people who use the word 'undefined' in math that they are stupid. At least, more stupid than you. http://mathworld.wolfram.com/Undefined.html http://en.wikipedia.org/wiki/Defined_and_undefined http://mathforum.org/library/drmath/view/53336.html http://encarta.msn.com/encyclopedia_761568582/Calculus_(mathematics).html etc. Jerzy Karczmarczuk

jerzy.karczmarczuk@info.unicaen.fr wrote:
Achim Schneider writes:
jerzy.karczmarczuk@info.unicaen.fr wrote:
When math says that something is undefined, in my little brain I understand that there is no answer.
Math doesn't say that something is undefined, but tells you that you did something that's illegal, i.e. impossible, in the system you're working with.
Yeah, sure.
Thank God, we have on this list a fellow who gets direct telephone calls from Her Majesty the Queen Mathematics. And knows better. Go tell all the people who use the word 'undefined' in math that they are stupid. At least, more stupid than you.
That's funny, you just proved that I am more stupid than myself, as I use the term for things that are illegal or impossible in the system I'm working with. It's not that I posted it because I didn't agree with what you said.

On Jan 15, 2008 12:29 AM,
Ben Franksen writes:
jerzy.karczmarczuk@info.unicaen.fr wrote: ...
Does *MATH* answer the question what is: (0/0)==(0/0) ? Nope!
Exactly. So why try to give an answer in Haskell? MATH says: the expression 0/0 is undefined, thus comparing (0/0)==(0/0) is undefined, too. I would expect Haskell to say the same.
I don't know whether you are serious, or you are pulling my leg... Let's suppose that it is serious.
When math says that something is undefined, in my little brain I understand that there is no answer. NO answer.
I'm not sure if I'm agreeing or disagreeing with you here. Depends on exactly what you mean by no answer. When I was a TA for calculus 2, students were presented with something like the following problem. Find the limit:
    lim (sin x / x)
    x->0
And proceeding, they would reduce this to sin 0 / 0, then 0 / 0, look in the book that said "0/0 is undefined", and write "undefined" on the paper as if that were the answer. For the calculus-uninclined (a vast minority on this list, I figure), the answer is 1. In that case, undefined meant "you did the problem wrong", and the error was precisely when they wrote 0 on the bottom of the division sign. To me, when math says "undefined"... let me stop there. Math doesn't say "undefined". Instead, when mathematicians say "undefined", they usually mean not that the question has no answer, but that the question you're asking doesn't even make sense as a question. For example, division is only defined when the denominator is not 0. Asking what the answer to "0/0" is is like asking "what happens when I move the king across the board in chess?". That operation doesn't even make sense with those arguments. However, Haskell is not blessed (cursed?) with dependent types, so it is forced to answer such questions even though they don't make sense. But looking to math (at least with a classical interpretation of numbers) is just not going to work. Personally, I loathe the existence of NaN and +-Infinity in floating point types. Someone here made an enlightening point encouraging us to think of floating point numbers not as numbers but as intervals. That makes me slightly more okay with the infinities, but NaN still bugs me. I'd prefer an error: Haskell's way of saying "you did the problem wrong". That would be most useful in the kinds of things I do with floating point numbers (games). But there are probably other areas where the IEEE semantics are more useful. Well...
that paragraph was just a long-winded way of saying "I have an opinion, but I don't think it's very important...". Luke
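[Luke's calculus example can be replayed numerically; naive substitution yields NaN while the limit is visibly 1 (plain Haskell, no extensions):]

```haskell
-- lim_{x->0} sin x / x = 1, but substituting x = 0 gives 0/0 = NaN.
main :: IO ()
main = do
  print (sin 0 / 0 :: Double)    -- NaN: the students' "answer"
  mapM_ (\x -> print (sin x / x))
        [0.1, 0.01, 0.001 :: Double]  -- values approaching 1
```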

Luke Palmer dialog with myself:
On Jan 15, 2008 12:29 AM,
wrote:
When math says that something is undefined, in my little brain I understand that there is no answer.
I'm not sure if I'm agreeing or disagreeing with you here. Depends on exactly what you mean by no answer.
Yes, this is a doctrinal problem. Since *any* concrete reaction, e.g. an error message, is a kind of answer, the only way - unusable as it is - of not providing one is to fail to terminate...
When I was a TA for calculus 2, students were presented with something like the following problem: Find the limit: lim (sin x / x) x->0 And proceeding, they would reduce this to: sin 0 / 0 0 / 0 Look in the book that said "0/0 is undefined", and write "undefined" on the paper as if that were the answer. For the calculus uninclined (I figure that is a vast minority on this list), the answer is 1. In that case, undefined meant "you did the problem wrong". And the error was precisely when they wrote 0 on the bottom of the division sign.
What I see as faulty here is that your students were not given, or could not grasp, the only "serious" word in the problem statement, the word *limit*. In engineering, physics, etc., sometimes math is conveyed a bit approximately, swallowing some touchy points and relying on intuition. This is unavoidable, a question of time. I taught math to physics students a bit, and there is no free lunch here. But a true mathematician lecturing to genuine math students *here* makes things clear as crystal. And what I learnt then was exactly that: undefined means no answer. I may be wrong, but it is possible that in such a "state of spirit" of the Creators, the Haskell <<undefined>> object arose into "being". Whether you say that:
Instead, when mathematicians say "undefined", they usually mean not that the question has no answer, but that the question you're asking doesn't even make sense as a question. -
or use such words as "illegal", "forbidden", etc., is a question of philosophy more than math. At least: semantics. For Leibniz (or his eternal foe, Spinoza) it could even belong to ethics... We, or whoever, might speculate until the solution of the non-termination problem whether math forbids anything, or what the sense of a question is, or the sense of the sense. The mathematicians I crossed in my life (plenty of them; same building...), at least some of them, *did* say that undefined means no answer. Your example with the King on the chessboard follows the doctrine professed by Achim S., "forbidding" something. But this word, "legality", etc., is a juridical term, something not so meaningful in math. OK, you are forbidden to try 0/0. But you DO. So what? You claim that math doesn't say "undefined", mathematicians do, etc. Now, does MATH say "Hey! there is no sense in what you are doing!"? Math as an abstract domain does not have the notion of sense "built in" either. You say: "you are out of the system/out of sense". I say: "you get no answer". I believe that my standpoint is more operational.
Personally, I loathe the existence of NaN and +-Infinity in floating point types.
I conclude that you live far from the numeric world. But I was raised as a physicist, and without them the implementation of several algorithms would be extremely difficult.
Someone here made an enlightinging point encouraging us to think of floating point numbers not as numbers but as intervals. That makes me slightly more okay with the infinities, but NaN still bugs me. I'd prefer an error: Haskell's way of saying "you did the problem wrong". That would be most useful in the kinds of things I do with floating point numbers (games). But there are probably other areas where the IEEE semantics are more useful.
Once more, last time... The interval interpretation is a bit pulled out of thin air. It might be thought of, as the last-bit rounding has something to do with "fuzzy" arithmetic, but I don't think that this is the issue. I believe that if we wanted to be as close to math as we would be happy with, the only way is to extend the typing of numbers. There are numbers, and there are NaNs and infinities. PERFECTLY decent objects, with concrete representations (or families thereof). The arithmetic ops may generate those types, and if applied to them, the disease propagates. Simple as that. For many, many numerical library procedures which use automatic search through some domains, for vectorized computations, etc., bombing is almost as bad as non-termination. It gratuitously provides you with a waste of time. Absolutely unacceptable for engineering reasons. In games it might happen as well, when, e.g., a collision handling module finds a pathological geometrical situation, with a singularity so close that the math bombs. Then a possible solution is to output /some/ random value and not break the game. In other words, accept NaNs and infinities, and do with them what your program requires. YOU take the responsibility of raising an exception, not the system. Jerzy Karczmarczuk
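[The "accept NaNs and decide yourself" style Jerzy describes for vectorized work can be sketched as follows; a toy run where one point diverges:]

```haskell
-- A vectorized computation where some points diverge: instead of
-- the whole run aborting, NaN results are filtered out afterwards,
-- and the caller decides what to do about them.
main :: IO ()
main = do
  let results = map sqrt [4, -1, 9 :: Double]  -- sqrt (-1) is NaN
      good    = filter (not . isNaN) results
  print results  -- the bad point is marked, not fatal
  print good     -- [2.0,3.0]: only the usable values remain
```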

jerzy.karczmarczuk@info.unicaen.fr wrote:
Your example with the King on the chessboard follows the doctrine professed by Achim S., "forbidding" something. But this word, "legality", etc., is a juridical term, something not so meaningful in math. OK, you are forbidden to try 0/0. But you DO. So what? You claim that math doesn't say "undefined", mathematicians do, etc. Now, does MATH say "Hey! there is no sense in what you are doing!"? Math as an abstract domain does not have the notion of sense "built in" either. You say: "you are out of the system/out of sense". I say: "you get no answer". I believe that my standpoint is more operational.
I believe it's basically the same point. Legal, btw, is meant along the lines of "it is not allowed for an apple to reinterpret gravity and fly into earth's orbit". Natural, not human-made law.

Yes, this is a doctrinal problem. Since *any* concrete reaction, e.g. an error message, is a kind of answer, the only way - unusable as it is - of not providing one is to fail to terminate...
You can just disallow the call, using the type system. Not that it's always easy or practical either, mind you. Stefan
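[Stefan's suggestion of disallowing the call via the type system could look like this smart-constructor sketch (`NonZero`, `nonZero`, `safeDiv` are hypothetical names):]

```haskell
-- A divisor that is zero simply cannot be constructed, so the
-- division function never faces the undefined case at all.
newtype NonZero = NonZero Double

nonZero :: Double -> Maybe NonZero
nonZero 0 = Nothing
nonZero x = Just (NonZero x)

safeDiv :: Double -> NonZero -> Double
safeDiv x (NonZero y) = x / y

main :: IO ()
main = do
  print (fmap (safeDiv 1) (nonZero 2))  -- Just 0.5
  print (fmap (safeDiv 1) (nonZero 0))  -- Nothing: rejected up front
```

As Stefan notes, this is not always practical: the check has merely moved to the construction site, but at least the type records that it happened.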

jerzy.karczmarczuk@info.unicaen.fr wrote:
Personally, I loathe the existence of NaN and +-Infinity in floating point types.
I conclude that you live far from the numeric world. But I was raised as physicist, and without them, the implementation of several algorithms would be extremely difficult.
Hm. Can you give an example? (A web link, maybe to a paper or some such, would be enough.) I am asking because I am used to calculating with infinity from integration theory, where the rationale is similar: the formulation of most of the basic theorems would lose a lot of its elegance if the 'infinite case' had to be handled explicitly. Cheers Ben

jerzy.karczmarczuk@info.unicaen.fr wrote:
Ben Franksen writes:
jerzy.karczmarczuk@info.unicaen.fr wrote: ...
Does *MATH* answer the question what is: (0/0)==(0/0) ? Nope!
Exactly. So why try to give an answer in Haskell? MATH says: the expression 0/0 is undefined, thus comparing (0/0)==(0/0) is undefined, too. I would expect Haskell to say the same.
I don't know whether you are serious, or you are pulling my leg... Let's suppose that it is serious.
When math says that something is undefined, in my little brain I understand that there is no answer. NO answer.
Oh, come on. You know very well what I am talking about. Defined or undefined is always relative to a set of axioms/laws that we usually take for granted. Division by zero is left undefined (in the sense of 'not defined') for a reason: there is no way to define it without breaking some laws. E.g. a one-point compactification of the real line is a nice thing, but it breaks total (global) ordering (and certain algebraic laws, IIRC). I know of no mathematical theory that includes a NaN.
Is this the "undefined" you want to have? The bottom non-termination?
In practice, it will be an exception.
Now, this is obviously the *worst* possible reaction of the system, the IEEE indefinite is much better, at least you know what had happened.
It depends. An unsigned Infinity is ok if you are willing to concede that ordering is local only and not total. I.e. to every number there is an open interval enclosing it on which the order is total, but it is not (necessarily) total globally. Two (signed) infinities are ok if you are willing to give up more laws.
Would you propose the non-termination as the issue of all "errors", such as negative argument to the real sqrt, etc?
What do you suppose is the answer to head [] ? What makes you think this is a less defined value, compared to, say, 0/0?
Well, as you wish... But don't write medical software, please...
For medical software, somewhere at the end of the calculation there will be some effect on the outside world, if only to display the calculated value. Every pure calculation will eventually be called from the IO monad, where we can catch the exception. What is the advantage of letting the pure calculation continue when it is almost sure that the result will be NaN (= Not a NaN ;) anyway? In the end it won't make a big difference whether the 'undefined' result is due to a (caught) exception or due to a NaN result. I think the exception is cleaner, though. Cheers Ben
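A minimal sketch of catching such an exception at the IO boundary. Integer division by zero is used here because, unlike 0/0 on Double (which quietly yields NaN), it actually throws:

```haskell
import Control.Exception (ArithException, evaluate, try)

-- A pure computation that blows up raises its exception only when the
-- value is forced; the IO layer that eventually consumes it can catch it.
main :: IO ()
main = do
  r <- try (evaluate (1 `div` 0 :: Integer))
  case r :: Either ArithException Integer of
    Left e  -> putStrLn ("caught: " ++ show e)
    Right v -> print v
```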

On Jan 11, 2008 9:27 AM, Wolfgang Jeltsch
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn't adhere to any meaningful axiom set for Eq. So I think that this behavior should be changed. Think of a set implementation which uses (==) to compare set elements for equality. The NaN behavior would break this implementation since it would allow for sets which contain NaN multiple times.
Here's another thing that makes me want to throw up.

Prelude> let nan :: Double = 0/0
Prelude> compare nan nan
GT
Prelude> nan > nan
False

Luke

If you talk to anyone who uses floating point numbers for real they would find (0/0)==(0/0) yielding False perfectly natural. It disobeys some axioms that Eq instances don't fulfill anyway, but changing it would make a lot of people surprised too. In general, the floating point instances break almost all axioms that you might think exist for numbers.
-- Lennart
On Jan 11, 2008 1:27 AM, Wolfgang Jeltsch wrote:
On Friday, 11 January 2008 08:11, Lennart Augustsson wrote:
Some people seem to think that == is an equality predicate. This is a big source of confusion for them; until they realize that == is just another function returning Bool they will make claims like [1..]==[1..] having an unnatural result. The == function is only vaguely related to the equality predicate in that it is meant to be a computable approximation of semantic equality (but since it's overloaded it can be anything, of course).
-- Lennart
But class methods are expected to fulfill some axioms. I'd suppose that (==) should be an equivalence relation. Of course, this is not implementable because of infinite data structures. But one could relax the axioms such that it's allowed for (==) to return _|_ instead of the expected value. Differentiating between data and codata would of course be the better solution.
However, the fact that (0 / 0) == (0 / 0) yields False is quite shocking. It doesn't adhere to any meaningful axiom set for Eq. So I think that this behavior should be changed. Think of a set implementation which uses (==) to compare set elements for equality. The NaN behavior would break this implementation since it would allow for sets which contain NaN multiple times.
Best wishes, Wolfgang _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
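To make the breakage concrete, nub from Data.List deduplicates with (==), much as the set implementation Wolfgang describes would; a small sketch:

```haskell
import Data.List (nub)

-- nub removes duplicates using (==). Since NaN == NaN is False, both
-- copies of NaN survive -- exactly the multiple-NaN set Wolfgang warns about.
main :: IO ()
main = do
  let nan = 0 / 0 :: Double
  print (nub [nan, 1, nan])           -- NaN kept twice
  print (length (nub [nan, 1, nan]))  -- 3, where a proper set would have 2
```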

On Fri, 11 Jan 2008 09:11:52 +0200, Lennart Augustsson
Some people seem to think that == is an equality predicate. This is a big source of confusion for them; until they realize that == is just another function returning Bool they will make claims like [1..]==[1..] having an unnatural result. The == function is only vaguely related to the equality predicate in that it is meant to be a computable approximation of semantic equality (but since it's overloaded it can be anything, of course).
I think that confusion came from the fact that the type of == is (Eq a) => a -> a -> Bool and not (Bla a) => a -> a -> Binary, and from not realizing it's just an overloaded polymorphic function. ________ Information from NOD32 ________ This message was checked by NOD32 Antivirus System for Linux Mail Servers. part000.txt - is OK http://www.eset.com

On Fri, 11 Jan 2008 09:11:52 +0200, Lennart Augustsson
Some people seem to think that == is an equality predicate. This is a big source of confusion for them; until they realize that == is just another function returning Bool they will make claims like [1..]==[1..] having an unnatural result. The == function is only vaguely related to the equality predicate in that it is meant to be a computable approximation of semantic equality (but since it's overloaded it can be anything, of course).
So let's imagine:
ones = 1 : ones
ones' = repeat 1 where repeat n = n : repeat n
(==) :: Eq a => a -> a -> Bool

-- what is (y (y)) by the way?
-- how about (y id)?

y f = f (y f)

ones :: Num a => [a]
ones = y (1 :)

repeat :: a -> [a]
repeat = \n -> y (n:)

ones' :: Num a => [a]
ones' = repeat 1 = (\n -> y (n:)) 1 = y (1 :)

To be able to test them for equality, we must have Eq a. So, the reason we cannot test them for equality is that we cannot test y (a :) == y (a :) where a == a is testable.
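The y above is spelled fix in Data.Function; a runnable sketch of these definitions (repeat renamed to avoid the Prelude clash):

```haskell
-- The post's y combinator; the standard library calls it 'fix'.
-- Laziness is what lets this definition terminate when only a finite
-- prefix of the result is demanded.
y :: (a -> a) -> a
y f = f (y f)

ones :: [Integer]
ones = y (1 :)

repeat' :: a -> [a]            -- renamed to avoid clashing with Prelude.repeat
repeat' n = y (n :)

main :: IO ()
main = print (take 5 ones, take 3 (repeat' 'x'))  -- ([1,1,1,1,1],"xxx")
```

Taking a finite prefix works fine; it is only a full (==) over the infinite structures that never returns.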

"Cristian Baboi"
On Fri, 11 Jan 2008 09:11:52 +0200, Lennart Augustsson
wrote: Some people seem to think that == is an equality predicate. This is a big source of confusion for them; until they realize that == is just another function returning Bool they will make claims like [1..]==[1..] having an unnatural result. The == function is only vaguely related to the equality predicate in that it is meant to be a computable approximation of semantic equality (but since it's overloaded it can be anything, of course).
So let's imagine:
ones = 1 : ones
ones' = repeat 1 where repeat n = n : repeat n
(==) :: Eq a => a -> a -> Bool
-- what is (y (y) ) by the way ? -- how about ( y id ) ?
y f = f (y f).
ones :: Num a => [a] ones = y (1 :)
repeat :: a -> [a] repeat = \n -> y (n:)
ones' :: Num a => [a] ones' = repeat 1 = (\n->y(n:)) 1 = y (1 : )
To be able to test them for equality, we must have Eq a.
So, the reason we cannot test them for equality is that we cannot test y (a : ) == y (a : ) where a == a is testable.
Yes, thanks. I actually do think that many things would be easier if every recursion were translated to its fixpoint, making the term tree completely finite and keeping y internal, as it's arcane, black magic. -- (c) this sig last receiving data processing entity. Inspect headers for past copyright information. All rights reserved. Unauthorised copying, hiring, renting, public performance and/or broadcasting of this signature prohibited.

The thing is that y already is a *builtin* function in Haskell.

That would give you a language with a semantics I don't want to touch. Sometimes useful, yes, but far too intensional for my taste.
-- Lennart
On Jan 11, 2008 5:59 AM, Achim Schneider
Yes, thanks. I actually do think that many things would be easier if every recursion would be translated to its fixpoint, making the term tree completely finite and defining y internal, as it's arcane, black magic.
-- (c) this sig last receiving data processing entity. Inspect headers for past copyright information. All rights reserved. Unauthorised copying, hiring, renting, public performance and/or broadcasting of this signature prohibited.
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

Can someone explain (in simple terms) what is meant by existential and universal types. Preferably illustrating it in terms of logic rather than lambda calculus. There's plenty of stuff out there on it....but most of it seems double dutch (no offense to the dutch intended).

On 11/01/2008, Nicholls, Mark
Preferably illustrating it in terms of logic rather than lambda calculus.
There's plenty of stuff out there on it....but most of it seems double dutch (no offense to the dutch intended).
I think the preferred idiom, considering the notation, is that "it's all Greek to me" ;-) D -- Dougal Stanton dougal@dougalstanton.net // http://www.dougalstanton.net

2008/1/11 Nicholls, Mark
Can someone explain (in simple terms) what is meant by existential and universal types.
Preferably illustrating it in terms of logic rather than lambda calculus.
Well, I don't know about logic. While they are certainly related to existential and universal types in logic, I don't really see a way to explain them in terms of that. Universal types are easy, you use them every day in Haskell. Take for example id:
id :: a -> a
Or better illustrated (using ghc extension):
id :: forall a. a -> a
That means that for any type a I pick, I can get a value of type a -> a from id. If you wrap an existential type up in a constructor, not much changes:
newtype ID = ID (forall a. a -> a)
ID can hold any value of type forall a. a -> a; i.e. it can hold any value which exhibits the property that it can give me a value of type a -> a for any type a I choose. In this case the only things ID can hold are id and _|_, because id is the only function that has that type. Here's how I might use it:
applyID :: ID -> (Int,String) -> (Int,String)
applyID (ID f) (i,s) = (f i, f s)
Note how I can use f, which is a universal type, on both an Int and a String in the same place. You can also put typeclass constraints on universals. Take the universal type forall a. Num a => a -> a. Among functions that have this type are:
add1 :: forall a. Num a => a -> a
add1 x = x + 1

id' :: forall a. Num a => a -> a
id' = id -- it doesn't have to use the constraint if it doesn't want to
Wrapping this up in a constructor:
newtype NUM = NUM (forall a. Num a => a -> a)
We can create values:
NUM add1 :: NUM
NUM id :: NUM
And use them:
applyNUM :: NUM -> (Int, Double) -> (Int, Double)
applyNUM (NUM f) (i,d) = (f i, f d)
Existential types are dual, but you need to use constructors to use them. I'll write them using GADTs, because I think they're a lot clearer that way:

data NUM' where
  NUM' :: Num a => a -> NUM'

Look at the type of the constructor NUM'. It has a universal type, meaning whatever type a you pick (as long as it is a Num), you can create a NUM' value with it. So the type contained inside the NUM' constructor is called existential (note that NUM' itself is just a regular ADT; NUM' is not existential). So when you have:
negNUM' :: NUM' -> NUM'
negNUM' (NUM' n) = NUM' (negate n)
Here the argument could have been constructed using any numeric type n, so we know very little about it. The only thing we know is that it is of some type a, and that type has a Num instance. So we can perform operations to it which work for any Num type, such as negate, but not things that only work for particular Num types, such as div.
doubleNUM' :: NUM' -> NUM'
doubleNUM' (NUM' n) = NUM' (n + n)
We can add it to itself, but note:
addNUM' :: NUM' -> NUM' -> NUM'
addNUM' (NUM' a) (NUM' b) = NUM' (a + b) -- Illegal!
We can't add them to each other, because the first argument could have been constructed with, say, a Double and the other with a Rational. But do you see why we're allowed to add it to itself? How about this:
data Variant where
  Variant :: a -> Variant
This is a type that can be constructed with any value whatsoever. Looks pretty powerful... but it isn't. Why not? Luke

-----Original Message----- From: Luke Palmer [mailto:lrpalmer@gmail.com] Sent: 11 January 2008 17:11 To: Nicholls, Mark Cc: haskell-cafe@haskell.org Subject: Re: [Haskell-cafe] type questions again....
2008/1/11 Nicholls, Mark
: Can someone explain (in simple terms) what is meant by existential and universal types.
Preferably illustrating it in terms of logic rather than lambda calculus.
Well, I don't know about logic. While they are certainly related to existential and universal types in logic, I don't really see a way to explain them in terms of that.
Universal types are easy, you use them every day in Haskell. Take for example id:
id :: a -> a
Or better illustrated (using ghc extension):
id :: forall a. a -> a
That means that for any type a I pick, I can get a value of type a -> a from id.
Yep....it's universal because forall types a.
If you wrap an existential type up in a constructor, not much changes:
If you wrap a what?....should this read existential or universal?
newtype ID = ID (forall a. a -> a)
ID can hold any value of type forall a. a -> a; i.e. it can hold any value which exhibits the property that it can give me a value of type a -> a for any type a I choose. In this case the only things ID can hold are id and _|_, because id is the only function that has that type. Here's how I might use it:
It's the only function you've defined the type of....

id2 :: forall a. a -> a

Now it can hold id2?
applyID :: ID -> (Int,String) -> (Int,String) applyID (ID f) (i,s) = (f i, f s)
Note how I can use f, which is a universal type, on both an Int and a String in the same place.
Yep.
You can also put typeclass constraints on universals. Take the universal type forall a. Num a => a -> a. Among functions that have this type are:
add1 :: forall a. Num a => a -> a add1 x = x + 1
id' :: forall a. Num a => a -> a
id' = id -- it doesn't have to use the constraint if it doesn't want to

"it doesn't have to use the constraint if it doesn't want to"?

If id was of type

id :: forall a. Ord a => a -> a

then I assume it would complain?
Wrapping this up in a constructor:
newtype NUM = NUM (forall a. Num a => a -> a)
We can create values:
NUM add1 :: NUM NUM id :: NUM
And use them:
applyNUM :: NUM -> (Int, Double) -> (Int, Double) applyNUM (NUM f) (i,d) = (f i, f d)
Yep.
Existential types are dual,
Dual? (like a dual basis rather than double?)
but you need to use constructors to use them. I'll write them using GADTs, because I think they're a lot clearer that way:
data NUM' where NUM' :: Num a => a -> NUM'
Look at the type of the constructor NUM'. It has a universal type, meaning whatever type a you pick (as long as it is a Num), you can create a NUM' value with it.
yes and then it goes wrong...
So the type contained inside the NUM' constructor
?
is called existential (note that NUM' itself is just a regular ADT; NUM' is not existential).
Why existential....see below...I have a guess
So when you have:
negNUM' :: NUM' -> NUM' negNUM' (NUM' n) = NUM' (negate n)
Here the argument could have been constructed using any numeric type n, so we know very little about it. The only thing we know is that it is of some type a, and that type has a Num instance.
I think one of the harrowing things about Haskell is the practice of overloading data constructors with type names....it confuses the hell out of me.... OK so this declaration says that forall x constructed using "NUM' n"...there *exists* a type T s.t. T is a member of type class NUM"... which in turn implies that there exists the function negate... yes? It's existential...because the word "exists" occurred in the above translation.
So we can perform operations to it which work for any Num type, such as negate, but not things that only work for particular Num types, such as div.
Because the existence of the value implies the existence of a type in the typeclass?
doubleNUM' :: NUM' -> NUM' doubleNUM' (NUM' n) = NUM' (n + n)
We can add it to itself, but note:
addNUM' :: NUM' -> NUM' -> NUM' addNUM' (NUM' a) (NUM' b) = NUM' (a + b) -- Illegal!
We can't add them to each other, because the first argument could have been constructed with, say, a Double and the other with a Rational.
But do you see why we're allowed to add it to itself?
We can add it to itself because "+" is of type "a->a->a"... I think....
How about this:
data Variant where Variant :: a -> Variant
This is a type that can be constructed with any value whatsoever. Looks pretty powerful... but it isn't. Why not?
Eeek..... Because a could be of any type whatsoever?...so how do I actually do anything with it? Don't know, complete guess really.
Luke

On Jan 11, 2008 5:47 PM, Nicholls, Mark
If you wrap an existential type up in a constructor, not much changes:
If you wrap a what?....should this read existential or universal?
Whoops, right, universal.
newtype ID = ID (forall a. a -> a)
ID can hold any value of type forall a. a -> a; i.e. it can hold any value which exhibits the property that it can give me a value of type a -> a for any type a I choose. In this case the only things ID can hold are id and _|_, because id is the only function that has that type. Here's how I might use it:
It's the only function you've defined the type of....
id2 :: forall a. a -> a
Now it can hold id2?
Well, that's not what I meant, but yes it can hold id2. What I meant was that, in this case, id2 = _|_ or id2 = id, there are no other possibilities.
id' :: forall a. Num a => a -> a id' = id -- it doesn't have to use the constraint if it doesn't want to
"it doesn't have to use the constraint if it doesn't want to" ?
If id was of type..
id :: forall a. Ord a => a -> a
Then I assume it would complain?
Yes.
but you need to use constructors to use them. I'll write them using GADTs, because I think they're a lot clearer that way:
data NUM' where NUM' :: Num a => a -> NUM'
Look at the type of the constructor NUM'. It has a universal type, meaning whatever type a you pick (as long as it is a Num), you can create a NUM' value with it.
yes
and then it goes wrong...
So the type contained inside the NUM' constructor
?
is called existential (note that NUM' itself is just a regular ADT; NUM' is not existential).
Why existential....see below...I have a guess
Okay, I was being handwavy here. Explaining this will allow me to clear this up. If you take the non-GADT usage of an existential type:

data Foo = forall a. Num a => Foo a

That is isomorphic to a type:

data Foo = Foo (exists a. Num a => a)

Except GHC doesn't support a keyword 'exists', and if it did, it would only be able to be used inside constructors like this (in order for inference to be decidable), so why bother? That's what I meant by "the type inside the constructor": Foo is not existential, (exists a. Num a => a) is.
So when you have:
negNUM' :: NUM' -> NUM' negNUM' (NUM' n) = NUM' (negate n)
Here n has an existential type, specifically (exists a. Num a => a).
Here the argument could have been constructed using any numeric type n, so we know very little about it. The only thing we know is that it is of some type a, and that type has a Num instance.
I think one of the harrowing things about Haskell is the practice of overloading data constructors with type names....it confuses the hell out of me....
Yeah that took a little getting used to for me too. But how am I supposed to come up with enough names if I want to name them differently!? That would require too much creativity... :-)
OK so this declaration says that forall x constructed using "NUM' n"...there *exists* a type T s.t. T is a member of type class NUM"...
(you probably meant type class Num here)
which in turn implies that there exists the function negate...
yes?
Huh, I had never thought of it like that, but yes. I just realized that I think of programming in a way quite different than I think of logic. Maybe I should try to have my brain unify them.
doubleNUM' :: NUM' -> NUM' doubleNUM' (NUM' n) = NUM' (n + n)
We can add it to itself, but note:
addNUM' :: NUM' -> NUM' -> NUM' addNUM' (NUM' a) (NUM' b) = NUM' (a + b) -- Illegal!
We can't add them to each other, because the first argument could have been constructed with, say, a Double and the other with a Rational.
But do you see why we're allowed to add it to itself?
We can add it to itself because "+" is of type "a->a->a"...
Yep, so whatever type a n happens to have, it matches both arguments.
How about this:
data Variant where Variant :: a -> Variant
This is a type that can be constructed with any value whatsoever. Looks pretty powerful... but it isn't. Why not?
Eeek.....
Because a could be of any type whatsoever?...so how do I actually do anything with it?
Right. Luke
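One hedged way to put the Variant answer in code: a bare existential supports no operations at all on its payload, while packing a class constraint alongside it makes the hidden value usable (Showable is a made-up name for this sketch):

```haskell
{-# LANGUAGE GADTs #-}

-- A bare existential like Variant admits no operations on its payload.
-- Adding a Show constraint at pack time is what makes the hidden value usable.
data Showable where
  Showable :: Show a => a -> Showable

render :: Showable -> String
render (Showable x) = show x

main :: IO ()
main = mapM_ (putStrLn . render)
             [Showable (42 :: Int), Showable "hi", Showable True]
```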

Is my problem here, simply that the forall extension in GHC is misleading.....that the "forall" in
"MkSwizzle :: (forall a. Ord a => [a] -> [a]) -> Swizzle"
is not the same beast as the "forall" in..
data Accum a = forall s. MkAccum s (a -> s -> s) (s -> a)
really
data Accum a = exists s. MkAccum s (a -> s -> s) (s -> a)
would be much better syntax....
don't get me wrong....I still don't especially understand...but if it's a misleading label...I can mentally substitute "exists" whenever I see a "forall" without a "=>".

On Fri, 11 Jan 2008 09:11:52 +0200, Lennart Augustsson
Some people seem to think that == is an equality predicate. This is a big source of confusion for them; until they realize that == is just another function returning Bool they will make claims like [1..]==[1..] having an unnatural result. The == function is only vaguely related to the equality predicate in that it is meant to be a computable approximation of semantic equality (but since it's overloaded it can be anything, of course).
Imagine one could manage to define a function which halts when applied to any value of type T1, but not when applied to some values of type T2, including bottom. Because bottom is a member of every type and referential transparency must not be broken, such a function cannot be defined.

jerzy.karczmarczuk@info.unicaen.fr wrote:
Although it could be argued that laziness is the cause of some very obscure bugs... <g> Niko
Example, PLEASE.
Prelude> sum [1..1000000]
*** Exception: stack overflow
Prelude> Data.List.foldl' (+) 0 [1..1000000]
500000500000

Tillmann

rendel:
jerzy.karczmarczuk@info.unicaen.fr wrote:
Although it could be argued that laziness is the cause of some very obscure bugs... <g> Niko
Example, PLEASE.
Prelude> sum [1..1000000] *** Exception: stack overflow
Prelude> Data.List.foldl' (+) 0 [1..1000000] 500000500000
See http://hackage.haskell.org/trac/ghc/ticket/1997 . Strictness for atomic numeric types is inconsistently applied across the base library. Fixing the inconsistencies would fix a range of similar issues. Note the strictness analyser handles this in compiled code, but doesn't run in ghci. -- Don

On Thu, Jan 10, 2008 at 10:10:21AM -0800, Don Stewart wrote:
rendel:
jerzy.karczmarczuk@info.unicaen.fr wrote:
Although it could be argued that laziness is the cause of some very obscure bugs... <g> Niko
Example, PLEASE.
Prelude> sum [1..1000000] *** Exception: stack overflow
Prelude> Data.List.foldl' (+) 0 [1..1000000] 500000500000
See, http://hackage.haskell.org/trac/ghc/ticket/1997
Strictness for atomic numeric types is inconsistently applied across the base library. Fixing the inconsistencies would fix a range of similar issues.
Having followed this discussion, I agree with your analysis, but also think that rendel has chosen a good example of a pretty obscure bug caused by laziness in code written by folks who are actually decent Haskell programmers. It's unfortunate that there's no way to include strictness behavior in function types (at least that I'm aware of), so one has to rely on possibly-undocumented (and certainly never checked by a compiler) strictness behavior of many functions in order to write truly correct code. I wish there were a nice way around this issue (but can't really even imagine one). -- David Roundy Department of Physics Oregon State University

David Roundy writes:
It's unfortunate that there's no way to include strictness behavior in function types (at least that I'm aware of), so one has to rely on possibly-undocumented (and certainly never checked by a compiler) strictness behavior of many functions in order to write truly correct code.
I wish there were a nice way around this issue (but can't really even imagine one).
I wonder not how to get around it, but WHY we can't help the strictness analyser in the way Clean permits, say:

add :: !Integer -> !Integer -> Integer

I thought it could be done without any serious revolution within the compiler, but perhaps I am too naïve (which in general is true...) Jerzy Karczmarczuk
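GHC's nearest analogue, as a sketch: bang patterns and strict data fields give per-binding strictness, though unlike Clean the annotation does not live in the function's type.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Bang patterns force the arguments before the body runs; strict data
-- fields force their contents when the constructor is applied. Neither
-- puts the strictness in the *type*, as Clean's !Integer does.
add :: Integer -> Integer -> Integer
add !x !y = x + y

data Pair = Pair !Integer !Integer  -- both fields forced on construction

main :: IO ()
main = print (add 2 3)  -- 5
```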

On Thu, 10 Jan 2008 19:38:07 +0200, Tillmann Rendel
jerzy.karczmarczuk@info.unicaen.fr wrote:
Although it could be argued that laziness is the cause of some very obscure bugs... <g> Niko Example, PLEASE.
Prelude> sum [1..1000000] *** Exception: stack overflow
Not true in Hugs.
Prelude> Data.List.foldl' (+) 0 [1..1000000] 500000500000
Tillmann

jerzy.karczmarczuk@info.unicaen.fr wrote:
Niko Korhonen writes: ...
Although it could be argued that laziness is the cause of some very obscure bugs... <g> Niko
Example, PLEASE. Jerzy Karczmarczuk
Write any piece of code that processes data from hard disk in a non-trivial manner, uses buffering with IOUArray or does pretty much anything IO heavy. You're bound to run into a few laziness-related stack/heap overflows before you get it right. Sometimes finding the space leak can be really, really hard and causes the amount of strictness annotations in your code to explode. Hence the notion that the art of optimizing Haskell programs is akin to black magic and is fully understood only by a handful of some of the best minds in the universe. For some concrete examples see my posts on randomR and IOUArray on this newsgroup. Niko

Actually, here's a better example:

    Prelude> foldl (+) 0 [1..1000000]
    *** Exception: stack overflow
    Prelude Data.List> foldl' (+) 0 [1..1000000]
    500000500000

The explanation of what happens here is trivial to Haskell gurus but completely baffling to newbies. It is difficult to explain to someone why the first example doesn't work, although at a glance it should. The virtue of laziness is that it allows constructs like [1..1000000], but the downside is that it can bite you on the backside when you least expect it. Niko
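The contrast above can be made concrete with a hand-rolled strict left fold. This is a sketch of the idea behind Data.List.foldl', not its actual source:

```haskell
import Data.List (foldl')

-- foldl suspends every addition, so  foldl (+) 0 [1..1000000]  first builds
-- the nested thunk  ((...(0 + 1) + 2) ...) + 1000000  and only then reduces
-- it, blowing the stack.  Forcing the accumulator at each step instead keeps
-- the fold in constant space:
foldlStrict :: (b -> a -> b) -> b -> [a] -> b
foldlStrict _ acc []     = acc
foldlStrict f acc (x:xs) = let acc' = f acc x
                           in acc' `seq` foldlStrict f acc' xs

bigSum :: Integer
bigSum = foldlStrict (+) 0 [1..1000000]   -- succeeds where plain foldl overflows
```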

Neil wrote:
Laziness. It is very difficult to have a lazy language which is not pure.
Exactly.
Laziness and purity together help with equational reasoning, compiler transformations, less obscure bugs, better compositionality etc.
Related, but nevertheless a shameless plug: Jurriaan Hage and myself did some thinking about this lately and put some stuff on paper that I presented two days ago at PEPM [1]. Here's a quote, taken from the introduction: "Functional programming languages can be classified along several axes: we can distinguish between pure and impure languages as well as between languages with strict and nonstrict semantics. In practice, not all combinations make sense. Nonstrict languages better be pure, because reasoning about unrestricted side-effects becomes more complicated when the order of evaluation gets less predictable." "Purity has some clear advantages. For example, it enables equational reasoning and it opens the road to memoization, common subexpression elimination, and parallel evaluation strategies. The driving force that enables these opportunities is referential transparency: in a pure language, each of a program's terms can, at any time, be replaced by its value without changing the meaning of the program as a whole." Cheers, Stefan ----- [1] Jurriaan Hage and Stefan Holdermans. Heap recycling for lazy languages. In John Hatcliff, Robert Glück, and Oege de Moor, editors, _Proceedings of the 2008 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation_, PEPM'08, San Francisco, California, USA, January 7--8, 2008, pages 189--197. ACM Press, 2008. http://doi.acm.org/10.1145/1328408.1328436 .
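The common-subexpression elimination mentioned in the quote relies on exactly this property: a pure term can be shared without changing the program's meaning. A small illustration (expensive is a hypothetical slow function, not taken from the paper):

```haskell
-- In a pure language these two definitions denote the same value, so a
-- compiler may rewrite one into the other.  With side effects the rewrite
-- could change behaviour: the effect would run twice in one version and
-- once in the other.
expensive :: Integer -> Integer
expensive n = sum [1 .. n * 100000]

twice :: Integer
twice = expensive 30 + expensive 30

twiceShared :: Integer
twiceShared = let v = expensive 30 in v + v   -- common subexpression shared
```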

"Yu-Teh Shen"
I got question about why haskell insist to be a purely FL. I mean is there any feature which is only support by pure? I mean maybe the program is much easier to prove? Or maybe we can cache some value for laziness. Could anyone give me some more information about why haskell needs to be pure.
To give a short answer: because if pureness were abandoned for easy implementation of feature X, features Y and Z would become completely awkward. The pureness is an ideological design decision based on the assumption that pureness is ideal to ensure whole-system consistency. Most important is referential transparency, its lack of spooky side-effects and thus making global memoization of evaluation results feasible. In fact, the most obvious point really seems to be that getChar :: Char can't be cached, though its type implies it, while getChar :: IO Char ensures that only the _action_ of getting a char can be cached, not the char itself. (and asking why and how the IO type does that opens a big can of worms) It's along the lines of "a value is a unary, static function that returns a value" vs. "an input function is a unary, dynamic function that returns a value" and the proper handling of the terms "static" and "dynamic" in every possible case. -- (c) this sig last receiving data processing entity. Inspect headers for past copyright information. All rights reserved. Unauthorised copying, hiring, renting, public performance and/or broadcasting of this signature prohibited.
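The point that only the action, not the character, can be cached is easier to demonstrate with an effectful action that needs no keyboard; this sketch substitutes an IORef counter for getChar:

```haskell
import Data.IORef

-- An IO action is a first-class value: naming it and reusing the name
-- duplicates the *action*, not its result (just as with  let g = getChar).
nextInt :: IORef Int -> IO Int
nextInt ref = do
  n <- readIORef ref
  writeIORef ref (n + 1)
  return n

demo :: IO (Int, Int)
demo = do
  ref <- newIORef 0
  let g = nextInt ref   -- caching the action is perfectly pure
  a <- g                -- running it once...
  b <- g                -- ...and again gives a different result
  return (a, b)
```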

On 09/01/2008, Yu-Teh Shen
I got question about why haskell insist to be a purely FL. I mean is there any feature which is only support by pure?
Have a look at the ueber-retrospective on Haskell, fifty-five pages of all the history and motivation one could possibly want. http://research.microsoft.com/~simonpj/papers/history-of-haskell/ It's interesting reading, I promise! ;-) D. -- Dougal Stanton dougal@dougalstanton.net // http://www.dougalstanton.net

On Wed, 2008-01-09 at 15:06 +0000, Dougal Stanton wrote:
Have a look at the ueber-retrospective on Haskell, fifty-five pages of all the history and motivation one could possibly want.
http://research.microsoft.com/~simonpj/papers/history-of-haskell/
It's interesting reading, I promise! ;-)
I actually think it was a really interesting read when I read it a year ago or so! :)

On Wed, 2008-01-09 at 15:06 +0000, Dougal Stanton wrote:
On 09/01/2008, Yu-Teh Shen
wrote: I got question about why haskell insist to be a purely FL. I mean is there any feature which is only support by pure?
Have a look at the ueber-retrospective on Haskell, fifty-five pages of all the history and motivation one could possibly want.
http://research.microsoft.com/~simonpj/papers/history-of-haskell/
It's interesting reading, I promise! ;-)
A shorter, lighter, and also interesting and entertaining read is: http://research.microsoft.com/~simonpj/Papers/haskell-retrospective/index.ht... While the reason Haskell was pure was to support laziness, at this point it's safe to say Haskell "insists" on being pure no more than water "insists" on being wet.

Derek Elkins wrote:
A shorter, lighter, and also interesting and entertaining read is:
http://research.microsoft.com/~simonpj/Papers/haskell-retrospective/index.ht... Just read it, quoting: `Tony Hoare's comment: "I fear that Haskell is doomed to succeed"` LOL! Very good stuff. If Haskell weren't pure, I would not have spent so much time trying to bend my over-imperative mind to it. I've had it with impure languages; way too tricky. Those languages were nice when I was young & reckless :) But I'm amazed that impure (albeit strong) languages like LISP and Scheme are still used for projects... I mean, in Scheme you can "set!" the addition operator to become a multiplication operator at runtime, modifying global behavior... Now that's a side effect; C/C++ is nothing compared to that! So I guess that with such great power comes great responsibility? Peter

Peter Verswyvelen wrote:
But I'm amazed that impure (albeit strong) languages like LISP and Scheme are still used for projects... I mean, in Scheme you can "set!" the addition operator to become a multiplication operator at runtime, modifying global behavior... Now that's a side effect, C/C++ is nothing compared to that! So I guess that with such great power comes great responsibility?
Scheme and Lisp typically constrain this feature in various ways, ranging from compiler options and declarations to, more recently, features of a module system. For example, in R6RS Scheme, variables and syntax imported via the module system cannot be mutated, so the denotation of a name (including names like "+") can be statically determined. OTOH, the freedom to change things on the fly can be nice to have, and if used with "great responsibility" (mainly an understanding of what's safe to do and what isn't), the downside can be vanishingly small. Anton

anton:
OTOH, the freedom to change things on the fly can be nice to have, and if used with "great responsibility" (mainly an understanding of what's safe to do and what isn't), the downside can be vanishingly small.
It can be small, unless you need to have any kind of static assurance (say for high assurance software, or for new kinds of optimisations, or if you want to reorder code in the compiler for parallelism). Then the downside to arbitrary, untracked effects in the system is huge. -- Don

Don Stewart wrote:
anton:
OTOH, the freedom to change things on the fly can be nice to have, and if used with "great responsibility" (mainly an understanding of what's safe to do and what isn't), the downside can be vanishingly small.
It can be small, unless you need to have any kind of static assurance (say for high assurance software, or for new kinds of optimisations, or if you want to reorder code in the compiler for parallelism).
Then the downside to arbitrary, untracked effects in the system is huge.
Oh dear - I'm going to have to rethink the paper I was working on, provisionally titled "In defense of arbitrary untracked effects in high assurance software." ;) But by "can be vanishingly small", I definitely meant something like "in cases where it's technically and economically appropriate". Anton

anton:
Don Stewart wrote:
anton:
OTOH, the freedom to change things on the fly can be nice to have, and if used with "great responsibility" (mainly an understanding of what's safe to do and what isn't), the downside can be vanishingly small.
It can be small, unless you need to have any kind of static assurance (say for high assurance software, or for new kinds of optimisations, or if you want to reorder code in the compiler for parallelism).
Then the downside to arbitrary, untracked effects in the system is huge.
Oh dear - I'm going to have to rethink the paper I was working on, provisionally titled "In defense of arbitrary untracked effects in high assurance software." ;)
That would be an awesome paper :) -- Don

On Jan 9, 2008 6:20 PM, Don Stewart
anton:
Oh dear - I'm going to have to rethink the paper I was working on, provisionally titled "In defense of arbitrary untracked effects in high assurance software." ;)
That would be an awesome paper :)
Hear, hear! Anton, if you're looking for a co-author, and you're willing to tackle the high-assurance parts, I have years of experience with arbitrary untracked effects. ;-) Graham

On Jan 9, 2008 4:21 PM, Don Stewart
anton:
OTOH, the freedom to change things on the fly can be nice to have, and if used with "great responsibility" (mainly an understanding of what's safe to do and what isn't), the downside can be vanishingly small.
It can be small, unless you need to have any kind of static assurance (say for high assurance software, or for new kinds of optimisations, or if you want to reorder code in the compiler for parallelism).
Then the downside to arbitrary, untracked effects in the system is huge.
I just want to point out that unsafePerformIO is at the core of the (safe) bytestring library. As SPJ et al pointed out, this is crucial functionality, and is only unsafe if unsafely used. (Yes, it's unsafe, but you can write a safe module with a purely safe interface that uses unsafePerformIO in its core.) David
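The idiom described here, effects confined behind a pure interface, can be sketched in a few lines. This is an illustration of the pattern, not the actual bytestring code:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A pure-looking sum computed with local mutation.  The IORef is created,
-- used, and discarded entirely inside the call, so no caller can observe
-- an effect: the function is referentially transparent despite its
-- implementation.  (sumViaRef is a made-up example, not a real library API.)
sumViaRef :: [Int] -> Int
sumViaRef xs = unsafePerformIO $ do
  ref <- newIORef 0
  mapM_ (\x -> modifyIORef ref (+ x)) xs
  readIORef ref
```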

On Wed, 9 Jan 2008, David Roundy wrote:
On Jan 9, 2008 4:21 PM, Don Stewart
wrote: anton:
OTOH, the freedom to change things on the fly can be nice to have, and if used with "great responsibility" (mainly an understanding of what's safe to do and what isn't), the downside can be vanishingly small.
It can be small, unless you need to have any kind of static assurance (say for high assurance software, or for new kinds of optimisations, or if you want to reorder code in the compiler for parallelism).
Then the downside to arbitrary, untracked effects in the system is huge.
I just want to point out that unsafePerformIO is at the core of the (safe) bytestring library. As SPJ et al pointed out, this is crucial functionality, and is only unsafe if unsafely used.
Indeed, there are hacks and they are sometimes necessary. The good thing about Haskell is that hacks look like hacks. In Modula-3 modules using hacks must be explicitly marked as UNSAFE. See http://www.cs.purdue.edu/homes/hosking/m3/reference/unsafe.html Maybe this is also an option for Haskell? Wouldn't this also simplify http://www.haskell.org/haskellwiki/Safely_running_untrusted_Haskell_code ?

On Jan 9, 2008 5:42 PM, Henning Thielemann
I just want to point out that unsafePerformIO is at the core of the (safe) bytestring library. As SPJ et al pointed out, this is crucial functionality, and is only unsafe if unsafely used.
Indeed, there are hacks and they are some times necessary. The good thing about Haskell is, that hacks look like hacks.
In Modula-3 modules using hacks must be explicitly marked as UNSAFE. See http://www.cs.purdue.edu/homes/hosking/m3/reference/unsafe.html Maybe this is also an option for Haskell?
I don't think this is a good idea. It comes down to a question of whether you think it should be allowed for Haskell code to be used to write core Haskell libraries in a first-class manner. Perhaps you think libraries should always be in C in order to avoid the use of unsafePerformIO, but I prefer to allow them to be written in Haskell. But then, I don't see unsafePerformIO as inherently a hack. It's the only possible way that certain useful abstractions can be implemented -- at least, that's my understanding. I'd be curious as to how much of the Prelude would be marked unsafe if you had your wish... David

On Thu, 10 Jan 2008, David Roundy wrote:
On Jan 9, 2008 5:42 PM, Henning Thielemann
In Modula-3 modules using hacks must be explicitly marked as UNSAFE. See http://www.cs.purdue.edu/homes/hosking/m3/reference/unsafe.html Maybe this is also an option for Haskell?
I don't think this is a good idea. It comes down to a question of whether you think it should be allowed for Haskell code to be used to write core Haskell libraries in a first-class manner. Perhaps you think libraries should always be in C in order to avoid the use of unsafePerformIO, but I prefer to allow them to be written in Haskell.
The Modula-3 designers acknowledged that there are things that should not be done, but must be done, illustrated by the quotation: "There are some cases that no law can be framed to cover." That is, marking a module as UNSAFE does not discourage their usage, but it swaps the burden of proof: For a safe module the compiler guarantees that nasty things can't happen, for an unsafe module the programmer must provide this warranty. If your program seg-faults you only have to scan the UNSAFE modules. For running untrusted Haskell code, this means, that if someone sends Haskell code to you (maybe as solution of an exercise) that is labelled 'safe' and uses no IO or a restricted IO wrapper type, then some especially nasty things can't happen - however the according Wiki page lists even more problems.

"David Roundy"
I just want to point out that unsafePerformIO is at the core of the (safe) bytestring library. As SPJ et al pointed out, this is crucial functionality, and is only unsafe if unsafely used.
In Modula-3 modules using hacks must be explicitly marked as UNSAFE. See http://www.cs.purdue.edu/homes/hosking/m3/reference/unsafe.html Maybe this is also an option for Haskell?
I don't think this is a good idea.
I think the point is (should be) to mark functions unsafe when they may be unsafe to /use/, and not when they just make use of potentially unsafe functionality. It is perfectly reasonable to write safe (pure) code that uses unsafe ones. You just have to trust the author to get it right.
I'd be curious as to how much of the Prelude would be marked unsafe if you had your wish...
All of it? In the end it is all passed to GCC (or generates assembly), which is inherently "unsafe". :-) -k -- If I haven't seen further, it is by standing in the footprints of giants

On Jan 10, 2008 8:06 PM, Ketil Malde
"David Roundy"
writes: I just want to point out that unsafePerformIO is at the core of the (safe) bytestring library. As SPJ et al pointed out, this is crucial functionality, and is only unsafe if unsafely used.
In Modula-3 modules using hacks must be explicitly marked as UNSAFE. See http://www.cs.purdue.edu/homes/hosking/m3/reference/unsafe.html Maybe this is also an option for Haskell?
I don't think this is a good idea.
I think the point is (should be) to mark functions unsafe when they may be unsafe to /use/,
I think using the IO monad for this works well... -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862

On Thu, Jan 10, 2008 at 08:10:57PM +0000, Sebastian Sylvan wrote:
On Jan 10, 2008 8:06 PM, Ketil Malde
wrote: "David Roundy"
writes: I just want to point out that unsafePerformIO is at the core of the (safe) bytestring library. As SPJ et al pointed out, this is crucial functionality, and is only unsafe if unsafely used.
In Modula-3 modules using hacks must be explicitly marked as UNSAFE. See http://www.cs.purdue.edu/homes/hosking/m3/reference/unsafe.html Maybe this is also an option for Haskell?
I don't think this is a good idea.
I think the point is (should be) to mark functions unsafe when they may be unsafe to /use/,
I think using the IO monad for this works well...
Would you suggest moving head and tail into the IO monad? -- David Roundy Department of Physics Oregon State University

On Jan 10, 2008 8:12 PM, David Roundy
On Thu, Jan 10, 2008 at 08:10:57PM +0000, Sebastian Sylvan wrote:
On Jan 10, 2008 8:06 PM, Ketil Malde
wrote: "David Roundy"
writes: I just want to point out that unsafePerformIO is at the core of the (safe) bytestring library. As SPJ et al pointed out, this is crucial functionality, and is only unsafe if unsafely used.
In Modula-3 modules using hacks must be explicitly marked as UNSAFE. See http://www.cs.purdue.edu/homes/hosking/m3/reference/unsafe.html Maybe this is also an option for Haskell?
I don't think this is a good idea.
I think the point is (should be) to mark functions unsafe when they may be unsafe to /use/,
I think using the IO monad for this works well...
Would you suggest moving head and tail into the IO monad?
They're not really unsafe, though; they just have the potential to fail (from which you can recover). Unsafe to me means "may reformat your harddrive, kill your dog, and break up with your girlfriend". They really should return a Maybe type, but I recognize that this would be inconvenient (so would having e.g. integer division return Maybe).
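The total, Maybe-returning head suggested here is a two-liner; a minimal sketch (the name maybeHead follows the naming suggested later in the thread):

```haskell
-- A total version of head: failure becomes a value the caller must handle
-- instead of a runtime error.
maybeHead :: [a] -> Maybe a
maybeHead []      = Nothing
maybeHead (x : _) = Just x
```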
-- David Roundy Department of Physics Oregon State University _______________________________________________
-- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862

On Thu, 10 Jan 2008, David Roundy wrote:
On Thu, Jan 10, 2008 at 08:10:57PM +0000, Sebastian Sylvan wrote:
On Jan 10, 2008 8:06 PM, Ketil Malde
wrote: "David Roundy"
writes: I just want to point out that unsafePerformIO is at the core of the (safe) bytestring library. As SPJ et al pointed out, this is crucial functionality, and is only unsafe if unsafely used.
In Modula-3 modules using hacks must be explicitly marked as UNSAFE. See http://www.cs.purdue.edu/homes/hosking/m3/reference/unsafe.html Maybe this is also an option for Haskell?
I don't think this is a good idea.
I think the point is (should be) to mark functions unsafe when they may be unsafe to /use/,
I think using the IO monad for this works well...
Would you suggest moving head and tail into the IO monad?
I'm afraid we are talking about different notions of 'safe'. Modula-3's 'safe' means no "segmentation fault", but program abortion due to ASSERT is still allowed. Ported to Haskell this means: 'head' and 'tail' are safe, but not total. I've seen function definitions like 'safeHead', that I would have named 'maybeHead'. For running untrusted code this means: I think it is ok that the program aborts with an error or runs into an infinite loop and must be terminated after a time-out, but it is not ok, that it overwrites some memory area or deletes some files, because unsafePerformIO was invoked.

On Jan 9, 2008 5:43 PM, Derek Elkins
A shorter and lighter and and also interesting and entertaining read is: http://research.microsoft.com/~simonpj/Papers/haskell-retrospective/index.ht...
While the reason Haskell was pure was to support laziness, at this point though it's safe to say Haskell "insists" on being pure no more than water "insists" on being wet.
Wow, cool presentation.
participants (35)
- Achim Schneider
- ajb@spamcop.net
- Anton van Straaten
- Ben Franksen
- Cristian Baboi
- Daniel Yokomizo
- David Roundy
- David Roundy
- Derek Elkins
- Don Stewart
- Dougal Stanton
- Duncan Coutts
- Graham Fawcett
- Henning Thielemann
- Hugh Perkins
- jerzy.karczmarczuk@info.unicaen.fr
- Jonathan Cast
- Jules Bean
- Ketil Malde
- Lennart Augustsson
- Luke Palmer
- Mattias Bengtsson
- Neil Mitchell
- Nicholls, Mark
- Niko Korhonen
- Peter Verswyvelen
- Roman Leshchinskiy
- Sebastian Sylvan
- Stefan Holdermans
- Stefan Monnier
- Tillmann Rendel
- Vladimir Zlatanov
- Wilhelm B. Kloke
- Wolfgang Jeltsch
- Yu-Teh Shen