RE: "interact" behaves oddly if used interactively

[ taking this one to haskell-café... ]
I still do not quite agree with Simon that 'interact' exposes anything but non-strictness. Non-strictness means that
  map toUpper _|_            = _|_
  map toUpper ('a':_|_)      = ('A':_|_)
  map toUpper ('a':'b':_|_)  = ('A':'B':_|_)
and 'interact (map toUpper)' is a great way to experience this property.
However, you can also experience the property without 'interact', evaluating expressions like
take 2 (map toUpper ('a':'b':undefined))
Certainly you can observe non-strictness, that's not the point. The point is that you can also observe more than just non-strictness using interact, and I don't think that is desirable. For example:

interact (\xs -> let z = length xs in "Hello World\n")

Now, Haskell is a pure language, so it shouldn't matter whether the implementation evaluates z or not, as long as it is careful not to violate the non-strict semantics by turning a terminating program into a non-terminating one. A parallel Haskell implementation might happily spawn off another thread to evaluate z, for example. An optimistic implementation might evaluate z for a fixed amount of time before continuing with the main thread of evaluation.

BUT in the presence of lazy I/O this simply isn't true any more. Why? Because z is not pure; evaluating it has a side-effect. And yet it has type Int. Am I the only one who thinks this is wrong?

Cheers, Simon
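To make the concern concrete, here is a minimal sketch of how the choice becomes observable (this example is mine, not Simon's; the names lazyVersion and strictVersion are illustrative, and seq stands in for an implementation that decides to evaluate z eagerly):

  -- With lazyVersion the greeting appears immediately, before any input
  -- is read.  With strictVersion nothing is printed until all of stdin
  -- has been consumed, because forcing z demands the whole input list.
  lazyVersion, strictVersion :: IO ()
  lazyVersion   = interact (\xs -> let z = length xs in "Hello World\n")
  strictVersion = interact (\xs -> let z = length xs in z `seq` "Hello World\n")

  main :: IO ()
  main = lazyVersion   -- swap in strictVersion to observe the difference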

[snip]
Certainly you can observe non-strictness, that's not the point. The point is that you can also observe more than just non-strictness using interact, and I don't think that is desirable. For example:
interact (\xs -> let z = length xs in "Hello World\n")
Now, Haskell is a pure language, so it shouldn't matter whether the implementation evaluates z or not, as long as it is careful not to violate the non-strict semantics by turning a terminating program into a non-terminating one. A parallel Haskell implementation might happily spawn off another thread to evaluate z for example. An optimistic implementation might evaluate z for a fixed amount of time before continuing with the main thread of evaluation.
BUT in the presence of lazy I/O this simply isn't true any more. Why? Because z is not pure; evaluating it has a side-effect. And yet it has type Int. Am I the only one who thinks this is wrong?
I agree with you completely. It is wrong for all the same reasons that unsafePerformIO is wrong, except that it is worse than that because unsafePerformIO has "unsafe" in the title, and people are discouraged from using it without due care. By contrast, "interact" and "getContents" are presented as being nice friendly functions that everyone should use.

If I had my way, getContents would be called unsafeGetContents and interact would be called unsafeInteract. Even better would be for them to both be removed from Haskell completely :-))

But then, if I had my way, I would abandon lazy evaluation completely and fork a strict-by-default language derived from Haskell. I have spent sufficient time analysing lazy evaluation during my PhD to become convinced that it is the wrong evaluation strategy in 90% of cases, and that in the 10% of cases where it is needed, the program would be much clearer if this was stated explicitly in the program text. Haskell is a good language, pureness is good, type classes are good, monads are good - but laziness is holding it back.

[end rant]

-Rob
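For readers who have not been bitten yet, the standard getContents/hGetContents pitfall looks roughly like this (a sketch of mine, assuming GHC; the file name "input.txt" is hypothetical):

  import System.IO

  main :: IO ()
  main = do
    s <- withFile "input.txt" ReadMode hGetContents
    -- withFile closes the handle as soon as its action returns, i.e.
    -- before the lazily-read string s has been demanded, so s is
    -- silently truncated (usually to the empty string) instead of
    -- failing with a useful error.
    putStr s

Strict alternatives (for example hGetContents' in more recent versions of base, or strict ByteString I/O) avoid this by reading everything before the handle can be closed.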

Robert Ennals wrote:
It is wrong for all the same reasons that unsafePerformIO is wrong, except that it is worse than that because unsafePerformIO has "unsafe" in the title, and people are discouraged from using it without due care. By contrast, "interact" and "getContents" are presented as being nice friendly functions that everyone should use.
If I had my way, getContents would be called unsafeGetContents and interact would be called unsafeInteract. Even better would be for them to both be removed from Haskell completely :-))
I agree that getContents can easily cause some complex and unwanted IO behaviour and that users should be warned about it. interact probably only makes sense as the sole IO action of a program. However, neither of them is unsafe in the sense of changing the `standard' semantics of Haskell; unsafePerformIO, by contrast, even undermines the type system.

[begin rant] I'd be far more in favour of removing polymorphic seq and strictness annotations. Their presence changes the equational semantics of the language and makes most theorems for free invalid. [end rant]
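A concrete equation that polymorphic seq breaks is eta-equivalence: seq can distinguish _|_ from \x -> _|_ x, so f and \x -> f x are no longer interchangeable. A small check (my own example; the names etaLeft and etaRight are illustrative):

  etaLeft, etaRight :: String
  etaLeft  = seq (undefined :: Int -> Int)           "ok"  -- _|_ when demanded
  etaRight = seq (\x -> (undefined :: Int -> Int) x) "ok"  -- "ok": the lambda is already in WHNF

  main :: IO ()
  main = putStrLn etaRight >> putStrLn etaLeft  -- the second putStrLn raises Prelude.undefined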
But then, if I had my way, I would abandon lazy evaluation completely and fork a strict-by-default language derived from Haskell. I have spent sufficient time analysing lazy evaluation during my PhD to become convinced that it is the wrong evaluation strategy in 90% of cases, and that in the 10% of cases where it is needed, the program would be much clearer if this was stated explicitly in the program text. Haskell is a good language, pureness is good, type classes are good, monads are good - but laziness is holding it back.
[end rant]
-- OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767

On Wed, 1 Oct 2003, Robert Ennals wrote:
Haskell is a good language, pureness is good, type classes are good, monads are good - but laziness is holding it back.
Hear hear.

I have often wondered how much simpler the various Haskell implementations would be if they used strict evaluation. It seems like laziness complicates implementations tremendously; the STG-machine paper makes my head spin.

If laziness is really getting you down, you could try Mercury -- it is pure, strict and has type classes, although it has no built-in monad handling. I seem to remember seeing something about it having some support for explicitly-marked laziness, but I might be wrong about that.

N

Am Donnerstag, 2. Oktober 2003, 10:52 schrieb Nicholas Nethercote:
On Wed, 1 Oct 2003, Robert Ennals wrote:
Haskell is a good language, pureness is good, type classes are good, monads are good - but laziness is holding it back.
Hear hear.
I have often wondered how much simpler the various Haskell implementations would be if they used strict evaluation. It seems like laziness complicates implementations tremendously; the STG-machine paper makes my head spin.
Yes, but I think that the reason for laziness is not to make compiler writers' lives easier but language users'.
[...]
N
W (or J)

On Thu, 2 Oct 2003, Wolfgang Jeltsch wrote:
Yes, but I think that the reason for laziness is not to make compiler writers' lives easier but language users'.
I appreciate the prefer-users'-ease-over-compiler-writers' idea. For example, syntactic sugar can be a great thing. But I think there's a point where it becomes too much. Haskell has arguably passed that point. C++ arguably has too.

Also, I'm not convinced that laziness does make users' lives easier -- witness a lot of the traffic on this list. Witness the subject of this thread. In which case the extra difficulty heaped upon compiler writers is of questionable value.

Just my two cents.

N

Nicholas Nethercote wrote:
Also, I'm not convinced that laziness does make users' lives easier -- witness a lot of the traffic on this list. Witness the subject of this thread. In which case the extra difficulty heaped upon compiler writers is of questionable value.
I'm convinced that if laziness (or call by name) were the norm in languages in general, then there would be similar traffic in lists like this one about the problems of strict evaluation -- and there would be a lot more of it, since strictness constrains the programming style a lot more!
Just my two cents.
Likewise, my 50 öre (smallest denomination currently in use :-) -- Thomas

On Thu, 2 Oct 2003, Thomas Johnsson wrote:
I'm convinced that if laziness (or call by name) were the norm in languages in general, then there would be similar traffic in lists like this one about the problems of strict evaluation -- and there would be a lot more of it, since strictness constrains the programming style a lot more!
You can divide "problems" into two areas... I can imagine people might complain on lists like this about strictness preventing them from taking a certain approach that laziness would allow. But I can't imagine they would complain about the problem of being confused... "this strict code evaluated immediately, what's going on?!" :)

This is because strict evaluation is always easy to understand. Lazy/non-strict evaluation can be non-intuitive/confusing/surprising, even for people who are used to it.

N

On Thu, Oct 02, 2003 at 12:08:07PM +0100, Nicholas Nethercote wrote:
But I can't imagine they would complain about the problem of being confused... "this strict code evaluated immediately, what's going on?!" :) This is because strict evaluation is always easy to understand. Lazy/non-strict evaluation can be non-intuitive/confusing/surprising, even for people who are used to it.
What is easy or confusing, and what becomes intuitive, is learned and depends a lot on background. Take

  x = x * 2

Strict programmers think 'double the value of x', mathematicians think 'x must equal zero', lazy programmers think 'x is bottom'.

There is no reason to think any model is easier to learn than any other; it just happens that most people who write programs have a background in strict programming. This property of people should not be used to judge the systems. IMHO lazy semantics can become just as intuitive and easy as strict or mathematical semantics.

John

--
---------------------------------------------------------------------------
John Meacham - California Institute of Technology, Alum. - john@foo.net
---------------------------------------------------------------------------
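In Haskell the last reading is the operative one: a recursive binding denotes the least fixed point of its equation, and for x = x * 2 at type Int that is bottom. A minimal check of mine (GHC's runtime typically detects the self-dependency and aborts with "<<loop>>"):

  x :: Int
  x = x * 2

  main :: IO ()
  main = print x   -- diverges; GHC usually reports "<<loop>>"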

John Meacham wrote:
On Thu, Oct 02, 2003 at 12:08:07PM +0100, Nicholas Nethercote wrote:
This is because strict evaluation is always easy to understand. Lazy/non-strict evaluation can be non-intuitive/confusing/surprising, even for people who are used to it.
what is easy or confusing and what becomes intuitive is learned and depends a lot on background..
... IMHO lazy semantics can become just as intuitive and easy as strict or mathematical semantics. John
I agree. I once got confused by strict evaluation, even though I had never seen a lazy functional language. This happened when I was exposed to Standard ML as an undergraduate student. Before then, I had been learning programming using imperative languages like Pascal and Modula-2.

One of the things I liked in SML was the conciseness offered by if-then-else *expressions*, e.g.,

  fun fac n = if n=0 then 1 else n*fac(n-1)

as compared to the more verbose if-then-else *statements* I had used in other languages. But then I had the idea that I could do the same in Modula-2 simply by defining an if-then-else function!

  PROCEDURE IfThenElse(b:BOOLEAN; t,e:INTEGER):INTEGER;
  BEGIN
    IF b THEN RETURN t ELSE RETURN e END
  END IfThenElse;

  PROCEDURE fac(n:INTEGER):INTEGER;
  BEGIN
    RETURN IfThenElse(n=0,1,fac(n-1))
  END fac;

But I was surprised to notice that this implementation of the factorial function looped. I soon realized what the problem was: the IfThenElse function is too strict! I was very disappointed...

So, for over ten years now, I have done most of my programming in lazy functional languages. I am sure I would run into occasional surprises if I were to switch back to a strict language.

--
Thomas H

"Delay is preferable to error." - Thomas Jefferson (3rd President of the United States)
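For contrast, the same trick is unproblematic in a lazy language, because the branch that is not taken is never demanded. A small Haskell sketch of mine (the names ifThenElse and fac are illustrative):

  ifThenElse :: Bool -> a -> a -> a
  ifThenElse b t e = if b then t else e

  fac :: Integer -> Integer
  fac n = ifThenElse (n == 0) 1 (n * fac (n - 1))  -- the recursive call is only evaluated when needed

  main :: IO ()
  main = print (fac 10)   -- 3628800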

On Thu, 2 Oct 2003, Nicholas Nethercote wrote:
I appreciate the prefer-users'-ease-over-compiler-writers' idea. For example, syntactic sugar can be a great thing. But I think there's a point where it becomes too much. Haskell has arguably passed that point.
Sorry, this was ambiguous; when I say "where it becomes too much", the "it" refers *not* to syntactic sugar, but to the idea of preferring users to compiler writers. N

Simon Marlow wrote:
Certainly you can observe non-strictness, that's not the point. The point is that you can also observe more than just non-strictness using interact, and I don't think that is desirable. For example:
interact (\xs -> let z = length xs in "Hello World\n")
Now, Haskell is a pure language, so it shouldn't matter whether the implementation evaluates z or not, as long as it is careful not to violate the non-strict semantics by turning a terminating program into a non-terminating one. A parallel Haskell implementation might happily spawn off another thread to evaluate z for example. An optimistic implementation might evaluate z for a fixed amount of time before continuing with the main thread of evaluation.
BUT in the presence of lazy I/O this simply isn't true any more. Why? Because z is not pure; evaluating it has a side-effect. And yet it has type Int. Am I the only one who thinks this is wrong?
I don't know what everyone else thinks, but I do not see your problem. There is nothing impure about z. According to non-strict semantics we have

  (\xs -> let z = length xs in "Hello World\n") _|_  =  "Hello World\n"

So every implementation has to make sure that this equation holds.

You know it is not enough for optimistic evaluation to `avoid _|_' by evaluating eagerly only for a fixed amount of time. You also have to bail out if evaluation raises an exception. This applies in particular if eager evaluation causes a black hole: re-evaluation at some later time may yield a perfectly fine value. Likewise, the presence of lazy IO forces you to test whether the value is currently available or needs to be read from some handle; lazy IO adds another sort of black hole.

Yes, the presence of lazy IO makes optimistic evaluation more complicated, but I do not see that it compromises the purity of the language in any way (whatever purity is ;-).

Olaf

--
OLAF CHITIL, Dept. of Computer Science, The University of York, York YO10 5DD, UK. URL: http://www.cs.york.ac.uk/~olaf/ Tel: +44 1904 434756; Fax: +44 1904 432767
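The equation itself is easy to confirm in GHCi (my own session; no lazy I/O is involved, since the argument is simply never demanded):

  ghci> (\xs -> let z = length xs in "Hello World\n") undefined
  "Hello World\n"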
participants (8)
- John Meacham
- Nicholas Nethercote
- Olaf Chitil
- Robert Ennals
- Simon Marlow
- Thomas Hallgren
- Thomas Johnsson
- Wolfgang Jeltsch