
On 30 January 2006 12:32, John Hughes wrote:
I don't really agree that ~ adds complexity to the language design, given that we need SOME mechanism for lazy pattern matching. If you write a denotational semantics of pattern matching, then ~ fits in naturally: it has a simple, compositional semantics. Where I see complexity is that a pattern means different things in a let or in a case, so I have to remember, for each construct, whether it's a strict-matching or lazy-matching construct. Quick: when I write do (x,y) <- e ... is that pair matched strictly, like case, or lazily, like let? If strictly, then why? There's no choice to be made here, so why not wait until x or y is used before matching? I expect you know the answer to this question off-hand, but it's an obstacle to learning the language -- I'll bet there are many quite experienced Haskell programmers who are uncertain. If only pattern matching were *always* strict, unless a ~ appeared, then the language would be more regular and easier to learn.
For what it's worth, I agree with this point. I'd be quite happy for pattern matching to be strict by default in let and where. I can imagine it might still be confusing to some, though, because let x = fac 99 does not evaluate 'fac 99', while let (x,y) = quotRem a b does evaluate 'quotRem a b'. Still, I suppose it's no more confusing than the current situation. If pattern matching in where were strict, I imagine I'd use ~ a lot more. A common practice is to throw a bunch of bindings into a where clause, with no regard for whether they get evaluated or not -- variable bindings and pattern bindings alike. If the pattern bindings were strict, I would have to add ~ to each one to get the same effect. On the other hand, if pattern bindings were strict by default, I bet there would be a lot fewer accidental space leaks. Cheers, Simon
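The difference discussed above can be checked directly; a small sketch (the helper names are mine, not from the thread):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- In a let, the pair pattern is matched lazily: the binding succeeds
-- even though the right-hand side is bottom, because x and y are never used.
lazyLet :: Int
lazyLet = let (x, y) = (undefined :: (Int, Int)) in 42

-- In a case, the same pattern is matched strictly: scrutinising the
-- pair forces it, so this expression is bottom.
strictCase :: Int
strictCase = case (undefined :: (Int, Int)) of (x, y) -> 42

-- A ~ (irrefutable) pattern restores lazy matching inside a case.
lazyCase :: Int
lazyCase = case (undefined :: (Int, Int)) of ~(x, y) -> 42

main :: IO ()
main = do
  print lazyLet    -- prints 42
  print lazyCase   -- prints 42
  r <- try (evaluate strictCase) :: IO (Either SomeException Int)
  -- the strict case diverged, so we land in the Left branch
  print (either (const "bottom") show r)
```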

On Tue, Jan 31, 2006 at 11:05:37PM -0000, Simon Marlow wrote:
On the other hand, if pattern bindings were strict by default, I bet there would be a lot fewer accidental space leaks.
I don't think this is true. I think there would just be a whole lot of a different type of space leak. Lazy by default is more in the spirit of Haskell. Matching in case, function, and monadic bindings is strict only out of necessity, since those constructs actually need to scrutinize the values, and that makes perfect sense. If anything were to change, I'd make lambda patterns lazy (though I don't feel particularly strongly about that, since a case could be made for them being just a degenerate function binding with only one alternative). John -- John Meacham - ⑆repetae.net⑆john⑈
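The observation that lambda patterns currently match strictly, just like one-alternative function bindings, is easy to demonstrate; a small sketch (names are mine):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A lambda pattern is matched as soon as the function is applied,
-- exactly like a function binding with one alternative.
fstStrict :: (Int, Int) -> Int
fstStrict = \(x, _) -> x

-- With an irrefutable pattern, matching is deferred until x is demanded;
-- since the constant 7 never demands it, the bottom pair is never forced.
constLazy :: (Int, Int) -> Int
constLazy = \ ~(x, _) -> 7

main :: IO ()
main = do
  print (constLazy undefined)   -- prints 7: the pair is never matched
  r <- try (evaluate (fstStrict undefined)) :: IO (Either SomeException Int)
  -- the strict lambda pattern forced the bottom pair
  print (either (const "bottom") show r)
```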

Hello John, Wednesday, February 01, 2006, 6:48:48 AM, you wrote:
On the other hand, if pattern bindings were strict by default, I bet there would be a lot fewer accidental space leaks.
JM> I don't think this is true. I think there would just be a whole lot of a
JM> different type of space leak. Lazy by default is more in the spirit of
JM> haskell.

I had one idea that somewhat corresponds to this discussion: make a strict Haskell dialect. Implement it by translating every expression of the form "f x" into "f $! x" before handing it to the standard (lazy) Haskell translator, and do the same for data fields by adding "!" to every field definition during translation. Then add to this strict Haskell the ability to _explicitly_ request lazy fields and lazy evaluation, for example using the "~" sign.

What would it give? The ability to use Haskell as a powerful strict language, which is especially interesting for "real-world" programmers. I find myself permanently fighting against laziness once I start to optimize my programs. For newcomers, it would shorten the learning path - they wouldn't need to know anything about laziness.

Another interesting application of such a language is making strict and lazy versions of a data structure just by compiling the same module in the strict and lazy Haskell modes.

-- Best regards, Bulat mailto:bulatz@HotPOP.com
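The proposed translation can be written out by hand; a minimal sketch of what the strict dialect would desugar to (the type and function names here are illustrative, not part of the proposal):

```haskell
-- Lazy Haskell: both fields are thunks.
data PairL = PairL Int Int

-- What the strict dialect would emit: "!" on every field...
data PairS = PairS !Int !Int

-- ...and every application "f x" rewritten to "f $! x",
-- forcing the argument to WHNF before the call.
applyStrict :: (Int -> Int) -> Int -> Int
applyStrict f x = f $! x

main :: IO ()
main = do
  let PairL a _ = PairL 1 undefined   -- fine: the second field is never forced
  print a                             -- prints 1
  print (applyStrict (+ 1) 41)        -- prints 42
  -- PairS 1 undefined would be bottom: strict fields force on construction,
  -- which is what the proposed "~" annotation would let you opt out of.
```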

On Wed, 1 Feb 2006, Bulat Ziganshin wrote:
Hello John,
Wednesday, February 01, 2006, 6:48:48 AM, you wrote:
On the other hand, if pattern bindings were strict by default, I bet there would be a lot fewer accidental space leaks.
JM> I don't think this is true. I think there would just be a whole lot of a JM> different type of space leak. Lazy by default is more in the spirit of JM> haskell.
I had one idea that somewhat corresponds to this discussion:

make a strict Haskell dialect. Implement it by translating every expression of the form "f x" into "f $! x" before handing it to the standard (lazy) Haskell translator, and do the same for data fields by adding "!" to every field definition during translation. Then add to this strict Haskell the ability to _explicitly_ request lazy fields and lazy evaluation, for example using the "~" sign.

What would it give? The ability to use Haskell as a powerful strict language, which is especially interesting for "real-world" programmers. I find myself permanently fighting against laziness once I start to optimize my programs. For newcomers, it would shorten the learning path - they wouldn't need to know anything about laziness.

Another interesting application of such a language is making strict and lazy versions of a data structure just by compiling the same module in the strict and lazy Haskell modes.
I apologize in advance if I say something silly, but wouldn't such a language transformation be a use for Template Haskell? Superficially, it seems like you should be able to do that.

On Wed, 1 Feb 2006, Creighton Hogg wrote:
I apologize in advance if I say something silly, but wouldn't such a language transformation be a use for Template Haskell? Superficially, it seems like you should be able to do that.
It can, but so far it's really ugly to apply transformations to entire modules. A little syntactic sugar could be good there. -- flippa@flippac.org "My religion says so" explains your beliefs. But it doesn't explain why I should hold them as well, let alone be restricted by them.

On Wed, Feb 01, 2006 at 03:44:37PM +0000, Philippa Cowderoy wrote:
It can, but so far it's really ugly to apply transformations to entire modules. A little syntactic sugar could be good there.
module $hat.Foo(..) where ... could mean: pass the entire module through the 'hat' function of TH. This would be a really cool feature; a lot of rather complicated preprocessors (such as Hat) could be implemented this way. John -- John Meacham - ⑆repetae.net⑆john⑈
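Even without the proposed sugar, a declaration-group transformation of this shape can be written with today's TH; a minimal sketch, where the hat pass and the traced_ prefix are invented purely for illustration (a real preprocessor like Hat would also rewrite the right-hand sides):

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Language.Haskell.TH

-- Hypothetical "hat" pass: rename every top-level binding,
-- showing only the plumbing of a whole-group transformation.
hat :: Q [Dec] -> Q [Dec]
hat = fmap (map rename)
  where
    rename (FunD n cs)          = FunD (prefixName n) cs
    rename (ValD (VarP n) b ds) = ValD (VarP (prefixName n)) b ds
    rename d                    = d

-- Prefix a name with "traced_", discarding its unique.
prefixName :: Name -> Name
prefixName n = mkName ("traced_" ++ nameBase n)

main :: IO ()
main = do
  decs <- runQ (hat [d| foo x = x + (1 :: Int) |])
  putStrLn (pprint decs)   -- the binding comes out named traced_foo
```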

John Meacham wrote:
module $hat.Foo(..) where ...
Before we invent more ad-hoc notation for annotations (we already have deriving, {-# .. #-}, and DrIFT's {-! .. !-}), can we replace all (or most) of this with a unified annotation syntax? E.g. Java uses an "@" notation which is basically allowed at any declaration, and (important points) programmers can define their own annotations, and annotations can have values. Also, they have a retention policy saying whether they should be visible at compile time or at run time: compile time is for tools/preprocessors, and visibility at run time is helpful for reflection. See e.g. JLS 9.6ff: http://java.sun.com/docs/books/jls/third_edition/html/interfaces.html#9.6 Some example text is here: http://dfa.imn.htwk-leipzig.de/~waldmann/draft/meta-haskell/ Best regards, -- -- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 -- ---- http://www.imn.htwk-leipzig.de/~waldmann/ -------

I'll second that.

I'll just throw in that not all pragmas ({-# ... #-}) are really annotations, because they do not necessarily pertain to one particular entity each. Some could be attached to an entity -- e.g. DEPRECATED, INLINE / NOINLINE, SPECIALIZE. Others, however, couldn't -- say, rewrite rules -- and are bound to remain as pragmas, even if the rest are converted to annotations.

Cheers,
Dinko
On 2/2/06, Johannes Waldmann wrote:
John Meacham wrote:
module $hat.Foo(..) where ...
Before we invent more ad-hoc notation for annotations (we already have deriving, {-# .. #-}, and DrIFT's {-! .. !-}), can we replace all (or most) of this with a unified annotation syntax? E.g. Java uses an "@" notation which is basically allowed at any declaration, and (important points) programmers can define their own annotations, and annotations can have values. Also, they have a retention policy saying whether they should be visible at compile time or at run time: compile time is for tools/preprocessors, and visibility at run time is helpful for reflection. See e.g. JLS 9.6ff: http://java.sun.com/docs/books/jls/third_edition/html/interfaces.html#9.6 Some example text is here: http://dfa.imn.htwk-leipzig.de/~waldmann/draft/meta-haskell/
Best regards, -- -- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 -- ---- http://www.imn.htwk-leipzig.de/~waldmann/ -------

On Thu, 2006-02-02 at 09:29 +0100, Johannes Waldmann wrote:
John Meacham wrote:
module $hat.Foo(..) where ...
Before we invent more ad-hoc notation for annotations (we already have deriving, {-# .. #-}, and DrIFT's {-! .. !-}), can we replace all (or most) of this with a unified annotation syntax? E.g. Java uses an "@" notation which is basically allowed at any declaration, and (important points) programmers can define their own annotations, and annotations can have values.
The ticket for Johannes's proposal is here: http://hackage.haskell.org/trac/haskell-prime/ticket/88 This looks to me like it's related to "specifying language extensions" here: http://www.haskell.org//pipermail/haskell-prime/2006-February/000335.html Patryk, have you created a ticket for your proposal? Could it be implemented w/ annotations as described by Johannes? Could the two of you put together a specific proposal? I've put a meta-proposal here for this question: http://hackage.haskell.org/trac/haskell-prime/ticket/91 peace, isaac

Hello John, Thursday, February 02, 2006, 4:24:06 AM, you wrote:
It can, but so far it's really ugly to apply transformations to entire modules. A little syntactic sugar could be good there.
JM> module $hat.Foo(..) where
JM> ...
JM> could mean pass the entire module through the 'hat' function of TH. this
JM> would be a really cool feature a lot of rather complicated preprocessors
JM> (hat) could be implemented this way.

Well, I think even more: TH by itself could substitute for much of the better module system that we need. It can implement parametrization, conditional compilation, and hiding, but it will require some more advanced syntactic sugar. On the other hand, even the existing TH facilities can be used to implement all these features and, moreover, remain compatible with other Haskell compilers:

module Implement where
#ifdef GHC
decls = [d|
#endif
foo = ...
var = ...
#ifdef GHC
|]
#endif

module Use where
import Implement
$(hat decls)

-- Best regards, Bulat mailto:bulatz@HotPOP.com

Am Mittwoch, 1. Februar 2006 11:49 schrieb Bulat Ziganshin:
[...]
I had one idea that somewhat corresponds to this discussion:

make a strict Haskell dialect. Implement it by translating every expression of the form "f x" into "f $! x" before handing it to the standard (lazy) Haskell translator, and do the same for data fields by adding "!" to every field definition during translation. Then add to this strict Haskell the ability to _explicitly_ request lazy fields and lazy evaluation, for example using the "~" sign.

What would it give? The ability to use Haskell as a powerful strict language, which is especially interesting for "real-world" programmers. I find myself permanently fighting against laziness once I start to optimize my programs. For newcomers, it would shorten the learning path - they wouldn't need to know anything about laziness.
Since laziness often allows you to solve problems so elegantly, I'm really scared of the idea of a "Strict Haskell"! :-( Is laziness really so "unreal" that real-world programmers have to see it as an enemy which they have to fight against? In fact, I was kind of shocked when I read in Simon Peyton Jones' presentation "Wearing the hair shirt" [1] that in his opinion "Laziness doesn't really matter".
[...]
Best wishes, Wolfgang [1] http://research.microsoft.com/Users/simonpj/papers/haskell-retrospective/
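The elegance being defended here is easiest to see with infinite or partially demanded structures; a standard sketch:

```haskell
-- An infinite list of Fibonacci numbers, defined by lazy self-reference.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Laziness separates generation from selection: "take" alone decides
-- how much of the infinite definition is ever computed.
main :: IO ()
main = print (take 10 fibs)   -- [0,1,1,2,3,5,8,13,21,34]
```

In a strict language the definition of fibs would diverge at the binding site; the lazy version only does the work that take demands.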

Hello Wolfgang, Friday, February 03, 2006, 1:46:56 AM, you wrote:
I had one idea that somewhat corresponds to this discussion:

make a strict Haskell dialect. Implement it by translating every expression of the form "f x" into "f $! x" before handing it to the standard (lazy) Haskell translator, and do the same for data fields by adding "!" to every field definition during translation. Then add to this strict Haskell the ability to _explicitly_ request lazy fields and lazy evaluation, for example using the "~" sign.

What would it give? The ability to use Haskell as a powerful strict language, which is especially interesting for "real-world" programmers. I find myself permanently fighting against laziness once I start to optimize my programs. For newcomers, it would shorten the learning path - they wouldn't need to know anything about laziness.
WJ> Since laziness often allows you to solve problems so elegantly, I'm really
WJ> scared of the idea of a "Strict Haskell"! :-( Is laziness really so "unreal"
WJ> that real-world programmers have to see it as an enemy which they have to
WJ> fight against?
WJ> In fact, I was kind of shocked when I read in Simon Peyton Jones' presentation
WJ> "Wearing the hair shirt" [1] that in his opinion "Laziness doesn't really
WJ> matter".

I suggest you write some large program like darcs and try to make it as efficient as its C++ counterparts. I'm doing something of the sort, and I selected Haskell primarily because it gives an unprecedented combination of power and safety due to its strong but expressive type system, higher-order functions, and so on. I also use the benefits of laziness from time to time, and maybe I don't even recognize each occasion of using laziness. But when I come to optimize my program, when I ask myself "why is it slower than the C counterparts?", the answer is almost exclusively "because of laziness". For example, I'm now writing an I/O library. Do you think I need laziness much there? No; what I really need is the highest possible speed, so now I'm fighting against laziness even more than usual :)

Well, 80% of any program doesn't need optimization at all. But when I write the remaining 20%, or even 5%, I don't want to fight against something that could easily be fixed in a systematic way. All other widespread languages have _optional_, explicitly stated laziness in the form of callable blocks; even Omega goes this way. And I'm interested in playing with such a Haskell dialect in order to see how my programming would change if I had to explicitly specify laziness where I need it, but had strictness implicitly. I think that newcomers from other languages who want to implement real projects instead of experimenting will also prefer a strict Haskell.

You may have heard that in recent days Haskell became one of the fastest languages in the Shootout. Why? Only because all those programs were rewritten to be strict. It was a slow and hard process, and adding a preprocessor that makes all code strict automagically would allow writing efficient Haskell programs without reading fat manuals.

Each language feature has its time. 15 years ago I could substantially speed up a C program by rewriting it in asm; now the C compilers in most cases generate better code than I can. Moreover, strict FP languages are now ready to compete with gcc, but lazy languages are still not compiled so efficiently that they can be used for time-critical code. So, if we don't want to wait another 10 years, we should implement easier ways to create strict programs. If you think that lazy programming is great, you can show this in the Shootout, or by showing me the way to optimize the code of my real programs. I'm open to new knowledge :) -- Best regards, Bulat mailto:bulatz@HotPOP.com
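The fight described here often comes down to one pattern: a lazy left fold accumulating unevaluated thunks. A minimal sketch of the leak and the strict fix:

```haskell
import Data.List (foldl')

-- foldl (+) 0 [1..n] builds n nested (+) thunks before evaluating any
-- of them; on large inputs this is the classic accidental space leak.
leakySum :: Int -> Int
leakySum n = foldl (+) 0 [1 .. n]

-- foldl' forces the accumulator at every step and runs in constant space.
-- This is the explicit strictness a strict dialect would make the default.
strictSum :: Int -> Int
strictSum n = foldl' (+) 0 [1 .. n]

main :: IO ()
main = do
  print (strictSum 1000000)   -- 500000500000
  print (leakySum 1000)       -- 500500, but computed via a thunk chain
```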

On Thu, Feb 02, 2006 at 11:46:56PM +0100, Wolfgang Jeltsch wrote:
Am Mittwoch, 1. Februar 2006 11:49 schrieb Bulat Ziganshin:
[...]
i had one idea, what is somewhat corresponding to his discussion:
make a strict Haskell dialect. implement it by translating all expressions of form "f x" into "f $! x" and then going to the standard (lazy) haskell translator. the same for data fields - add to all field definitions "!" in translation process. then add to this strict Haskell language ability to _explicitly_ specify lazy fields and lazy evaluation, for example using this "~" sign
what it will give? ability to use Haskell as powerful strict language, what is especially interesting for "real-world" programmers. i have found myself permanently fighting against the lazyness once i starting to optimize my programs. for the newcomers, it just will reduce learning path - they don't need to know anything about lazyness
Since laziness often allows you to solve problems so elegantly, I'm really scared of the idea of a "Strict Haskell"! :-( Is laziness really so "unreal" that real-world programmers have to see it as an enemy which they have to fight against?
In fact, I was kind of shocked when I read in Simon Peyton Jones' presentation "Wearing the hair shirt" [1] that in his opinion "Laziness doesn't really matter".
I am with you. If Haskell switches to strictness, I am going to stay with the old compilers, I guess... I just love laziness (or non-strictness). Maybe speculative evaluation is the way to go? Best regards Tomasz -- I am searching for programmers who are good at least in (Haskell || ML) && (Linux || FreeBSD || math) for work in Warsaw, Poland

Hello Tomasz, Saturday, February 04, 2006, 12:39:38 PM, you wrote:
make a strict Haskell dialect.
TZ> I am with you. If Haskell switches to strictness,

As I said, a strict _dialect_ is interesting for optimization, for moving from other languages, and for making strict variants of data structures. -- Best regards, Bulat mailto:bulatz@HotPOP.com

Wolfgang Jeltsch wrote:
Since laziness often allows you to solve problems so elegantly, I'm really scared of the idea of a "Strict Haskell"! :-( Is laziness really so "unreal" that real-world programmers have to see it as an enemy which they have to fight against?
Non-strictness gives you some useful guarantees about program behavior, but strictness also gives you useful guarantees. Strict datatypes are inductively defined (at least in SML), which means that you can prove useful properties of functions defined inductively over them.

Consider length :: [a] -> Int. In SML you can prove that this terminates and returns an Int for any list. In Haskell, you can't prove anything about this function: it might return an Int after unboundedly many reductions, or it might diverge, or it might throw any exception whatsoever. In my experience exceptions are less useful in Haskell than in other languages for this reason. The dynamic execution stack that's used for exception handling isn't apparent from the static structure of the code. Exception specifications in the style of Java or C++ simply couldn't be made to work in Haskell.

All of the above applies only to non-strict data, though. I don't know of any theoretical disadvantages of non-strict let/where binding, just the usual practical problems (constant factors in the time complexity and tricky space complexity).

Personally I'd like to see Haskell become "the world's finest strict language", just as it's already "the world's finest imperative language". I think I know how to mix strictness and non-strictness at the implementation level [1], but I don't know how best to expose it in Haskell.

-- Ben

[1] http://research.microsoft.com/Users/simonpj/papers/not-not-ml/index.htm
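The length example can be seen operationally: Haskell's length forces the spine of its argument but none of the elements, so whether it returns an Int at all depends on the spine being finite and defined. A small sketch:

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- The elements are bottom, but length never looks at them.
  print (length [undefined, undefined :: Int])   -- prints 2
  -- The *spine* is bottom: length must walk it, so this diverges.
  r <- try (evaluate (length (1 : undefined :: [Int])))
       :: IO (Either SomeException Int)
  print (either (const "bottom") show r)
  -- With SML's inductive (strict) lists, neither partial value could even
  -- be constructed, which is what makes termination proofs possible there.
```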
participants (11)
- Ben Rudiak-Gould
- Bulat Ziganshin
- Creighton Hogg
- Dinko Tenev
- isaac jones
- Johannes Waldmann
- John Meacham
- Philippa Cowderoy
- Simon Marlow
- Tomasz Zielonka
- Wolfgang Jeltsch