
Hello Wolfgang, Friday, February 03, 2006, 1:46:56 AM, you wrote:
I had an idea that somewhat corresponds to this discussion:

Make a strict Haskell dialect. Implement it by translating all expressions of the form "f x" into "f $! x" and then handing the result to the standard (lazy) Haskell translator. Do the same for data fields: add "!" to every field definition in the translation process. Then add to this strict Haskell language the ability to _explicitly_ specify lazy fields and lazy evaluation, for example using the "~" sign.

What would this give us? The ability to use Haskell as a powerful strict language, which is especially interesting for "real-world" programmers. I have found myself permanently fighting against laziness once I start to optimize my programs. For newcomers, it would simply shorten the learning path - they wouldn't need to know anything about laziness.
WJ> Since laziness often allows you to solve problems so elegantly, I'm really
WJ> scared of the idea of a "Strict Haskell"! :-( Is laziness really so "unreal"
WJ> that real-world programmers have to see it as an enemy which they have to
WJ> fight against?
WJ> In fact, I was kind of shocked as I read in Simon Peyton Jones' presentation
WJ> "Wearing the hair shirt" [1] that in his opinion "Laziness doesn't really
WJ> matter".

I suggest you write some large program like darcs and try to make it as efficient as the C++ ones. I'm doing something of the sort, and I selected Haskell primarily because it gives an unprecedented combination of power and safety thanks to its strong but expressive type system, higher-order functions, and so on. I also use the benefits of laziness from time to time, and maybe don't even recognize each occasion where I use it. But when I go to optimize my program, when I ask myself "why is it slower than its C counterparts?", the answer is almost exclusively "because of laziness". For example, I'm now writing an I/O library. Do you think I need laziness much here? No: what I really need is the highest possible speed, so now I'm fighting against laziness even more than usual :)

Well, 80% of any program doesn't need optimization at all. But when I write the remaining 20% or even 5%, I don't want to fight against something that can easily be fixed in a systematic way. All other widespread languages have _optional_, explicitly stated laziness in the form of callable blocks; even Omega goes this way. And I'm interested in playing with such a Haskell dialect in order to see how my programming would change if I had to explicitly specify laziness where I need it, but got strictness implicitly. I think that newcomers from other languages who want to implement real projects instead of experimenting will also prefer a strict Haskell.

You may have heard that in recent days Haskell became one of the fastest languages in the Shootout. Why? Only because all those programs were rewritten to be strict. It was a slow and hard process, and adding a preprocessor that makes all code strict automagically would allow writing efficient Haskell programs without reading fat manuals.

Each language feature has its time. 15 years ago I could substantially speed up a C program by rewriting it in asm; now the C compilers in most cases generate better code than I can. Moreover, strict FP languages are now ready to compete with gcc, but lazy languages are still not compiled so efficiently that they can be used for time-critical code. So, if we don't want to wait another 10 years, we should implement easier ways to create strict programs.

If you think that lazy programming is great, you can show this in the Shootout, or by showing me the way to optimize the code of my real programs. I'm open to new knowledge :)

-- Best regards, Bulat mailto:bulatz@HotPOP.com

Bulat Ziganshin wrote:
Hello Wolfgang,
Friday, February 03, 2006, 1:46:56 AM, you wrote:
I had an idea that somewhat corresponds to this discussion:
Make a strict Haskell dialect. Implement it by translating all expressions of the form "f x" into "f $! x" and then handing the result to the standard (lazy) Haskell translator. Do the same for data fields: add "!" to every field definition in the translation process. Then add to this strict Haskell language the ability to _explicitly_ specify lazy fields and lazy evaluation, for example using the "~" sign.
[Apologies for replying to a reply of a reply but I don't seem to have received the original post]

I've been thinking along these lines too, because it has always seemed to me that laziness is just a real nuisance because it hides a lot of inefficiency under the carpet as well as making the time/space behaviour of programs difficult to understand...

One question is how to get some kind of "do" notation that would work well in a strict setting. The existing "do" notation makes use of laziness in so far as the second arg of >> is only evaluated when needed. Perhaps a new keyword such as "go" could be used to use >>= instead ie:

go {e1;e2;e3} === e1 >>= (\_-> (e2 >>= (\_->e3)))

Of course this doesn't solve the problem of how to translate programs that make heavy use of mapM etc.

I wonder: is monadic programming really dependent on laziness or is there a realistic (ie not impossibly complicated) way to use monads in a strict setting?

A related question is: could monadic programming ever be as efficient as side-effect programming?

Regards, Brian.

On Feb 3, 2006, at 2:33 PM, Brian Hulley wrote:
Bulat Ziganshin wrote:
Hello Wolfgang,
Friday, February 03, 2006, 1:46:56 AM, you wrote:
I had an idea that somewhat corresponds to this discussion:
Make a strict Haskell dialect. Implement it by translating all expressions of the form "f x" into "f $! x" and then handing the result to the standard (lazy) Haskell translator. Do the same for data fields: add "!" to every field definition in the translation process. Then add to this strict Haskell language the ability to _explicitly_ specify lazy fields and lazy evaluation, for example using the "~" sign.
[Apologies for replying to a reply of a reply but I don't seem to have received the original post]
I've been thinking along these lines too, because it has always seemed to me that laziness is just a real nuisance because it hides a lot of inefficiency under the carpet as well as making the time/space behaviour of programs difficult to understand...
I pointed out some problems with strict Haskell in a recent talk, but I think it'd be worth underscoring them here in this forum.

First off, I should mention that I was one of the main implementors of pH, which had Haskell's syntax but used eager evaluation. So what I'm about to say is based on my experience with Haskell code which was being eagerly evaluated.

There is one very difficult piece of syntax in a strict setting: the *where* clause. The problem is that it's natural to write a bunch of bindings in a where clause which only scope over a few conditional clauses. I'm talking about stuff like this:

f x | p x = ..... a ...a . a .... a ...
    | complex_condition = ......... b .. b ... b ......
    | otherwise = ..... a ....... b .....
  where
    a = horrible expression in x which is bottom when complex_condition is true.
    b = nasty expression in x which doesn't terminate when p x is true.
    complex_condition = big expression which goes on for lines and lines and
                        would drive the reader insane if it occurred in line.

Looks pretty reasonable, right? Not when you are using eager or strict evaluation. I think a strict variant of Haskell would either end up virtually where-free (with tons of lets instead - a pity as I often find where clauses more readable) or the semantics of where would need to change.

This came up surprisingly more often than I expected, though it was hardly a universal problem. The more "interesting" the code, the more likely there would be trouble in my experience.

A bunch of other stuff would have to be added, removed, or modified. The use of lists as generators would need to be re-thought (and probably discarded), idioms involving infinite lists would have to go, etc., etc. But this is a simple matter of libraries (well, and which type(s) get(s) to use square brackets as special builtin notation).

-Jan-Willem Maessen
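For concreteness, here is a toy function of exactly this shape (the bodies below are illustrative inventions, not from the post). Under lazy evaluation every branch is fine; if the where block were evaluated eagerly, b would loop whenever p x holds, and a would crash whenever complexCondition holds:

f :: Int -> Int
f x | p x              = a + a
    | complexCondition = b * 2
    | otherwise        = a + b
  where
    p y              = y > 0
    complexCondition = x == 0
    a = 100 `div` x                             -- bottom when x == 0
    b = length (takeWhile (> 0) [x, x + 1 ..])  -- loops when x > 0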

Jan-Willem Maessen wrote:
I pointed out some problems with strict Haskell in a recent talk, but I think it'd be worth underscoring them here in this forum.
Is the text of this talk or points raised in it available online anywhere?
<snip> There is one very difficult piece of syntax in a strict setting: The *where* clause. The problem is that it's natural to write a bunch of bindings in a where clause which only scope over a few conditional clauses. I'm talking about stuff like this:
f x | p x = ..... a ...a . a .... a ...
    | complex_condition = ......... b .. b ... b ......
    | otherwise = ..... a ....... b .....
  where
    a = horrible expression in x which is bottom when complex_condition is true.
    b = nasty expression in x which doesn't terminate when p x is true.
    complex_condition = big expression which goes on for lines and lines and
                        would drive the reader insane if it occurred in line.
Surely it would not be too difficult for the compiler to only evaluate the where bindings that are relevant depending on which guard evaluates to True, ie in your example the binding for a would be evaluated if p x is True, otherwise complex_condition would be evaluated, and if True, b would be evaluated, otherwise a and b would be evaluated:

f x | p x = let a = ..... in ....a a ...
    | otherwise = let complex_condition = ...
                      b = ...
                  in if complex_condition then .... b .... b
                     else let a = ..... a in .... a.....b

where all the messy (possibly duplicated) lets are generated by the compiler so the user can still use the nice where syntax.

Regards, Brian.

On Feb 3, 2006, at 8:16 PM, Brian Hulley wrote:
Jan-Willem Maessen wrote:
I pointed out some problems with strict Haskell in a recent talk, but I think it'd be worth underscoring them here in this forum.
Is the text of this talk or points raised in it available online anywhere?
<snip> There is one very difficult piece of syntax in a strict setting: The *where* clause. The problem is that it's natural to write a bunch of bindings in a where clause which only scope over a few conditional clauses. I'm talking about stuff like this:
f x | p x = ..... a ...a . a .... a ...
    | complex_condition = ......... b .. b ... b ......
    | otherwise = ..... a ....... b .....
  where
    a = horrible expression in x which is bottom when complex_condition is true.
    b = nasty expression in x which doesn't terminate when p x is true.
    complex_condition = big expression which goes on for lines and lines and
                        would drive the reader insane if it occurred in line.
Surely it would not be too difficult for the compiler to only evaluate the where bindings that are relevant depending on which guard evaluates to True, ie in your example the binding for a would be evaluated if p x is True, otherwise complex_condition would be evaluated, and if True, b would be evaluated, otherwise a and b would be evaluated: ...
In principle, yes, this is eminently doable. But the translation becomes surprisingly messy when the bindings in question are mutually recursive. Certainly it's not a simple syntax-directed translation, in contrast to essentially every other piece of syntactic sugar in the language. -Jan-Willem Maessen

Brian Hulley wrote:
...
[Apologies for replying to a reply of a reply but I don't seem to have received the original post]
I've been thinking along these lines too, because it has always seemed to me that laziness is just a real nuisance because it hides a lot of inefficiency under the carpet as well as making the time/space behaviour of programs difficult to understand...
One question is how to get some kind of "do" notation that would work well in a strict setting. The existing "do" notation makes use of laziness in so far as the second arg of >> is only evaluated when needed. Perhaps a new keyword such as "go" could be used to use >>= instead ie:
go {e1;e2;e3} === e1 >>= (\_-> (e2 >>= (\_->e3)))
Of course this doesn't solve the problem of how to translate programs that make heavy use of mapM etc.
I wonder: is monadic programming really dependent on laziness or is there a realistic (ie not impossibly complicated) way to use monads in a strict setting?
A related question is: could monadic programming ever be as efficient as side-effect programming?
Regards, Brian.
What about writing functions in a modified form of Control.Monad.Identity that forces the return values:
module Control.Monad.Strict
  ( Weak, mkWeak, unsafeMkWeak, runWeak
  , Deep, mkDeep, unsafeMkDeep, runDeep
  ) where
Weak uses seq to achieve WHNF for its argument
newtype Weak a = WeakCon {runWeak :: a}

mkWeak x = seq x (WeakCon x)
unsafeMkWeak x = WeakCon x
instance Functor Weak where
  fmap f w = mkWeak (f (runWeak w))
instance Monad Weak where
  return x = mkWeak x
  w >>= f = f (runWeak w)
I can't make the deepSeq version typecheck. Deep uses deepSeq to evaluate its argument:
newtype Deep a = DeepCon {runDeep :: a}

mkDeep x = deepSeq x (DeepCon x)
unsafeMkDeep x = DeepCon x
instance Functor Deep where
  fmap f d = mkDeep (f (runDeep d))
instance Monad Deep where
  return d = mkDeep d
  d >>= f = f (runDeep d)

Chris Kuklewicz wrote:
Weak uses seq to achieve WHNF for its argument
newtype Weak a = WeakCon {runWeak :: a}
mkWeak x = seq x (WeakCon x)
unsafeMkWeak x = WeakCon x
This doesn't actually do what you think it does. mkWeak and unsafeMkWeak are the same function.

mkWeak 123 = seq 123 (WeakCon 123) = WeakCon 123
unsafeMkWeak 123 = WeakCon 123
mkWeak _|_ = seq _|_ (WeakCon _|_) = _|_
unsafeMkWeak _|_ = WeakCon _|_ = _|_

To quote John Meacham:

| A quick note,
|     x `seq` x
| is always exactly equivalent to x. the reason being that your seq
| would never be called to force x unless x was needed anyway.
|
| I only mention it because for some reason this realization did not hit
| me for a long time and once it did a zen-like understanding of seq
| (relative to the random placement and guessing method I had used
| previously) suddenly was bestowed upon me.

I remember this anecdote because when I first read it, a zen-like understanding of seq suddenly was bestowed upon /me/. Maybe it should be in the docs. :-)

-- Ben

What I wanted to make was a Deep / DeepCon monad which called deepSeq or some strategy, but I could not make it typecheck.

Ben Rudiak-Gould wrote:
Chris Kuklewicz wrote:
Weak uses seq to achieve WHNF for its argument
newtype Weak a = WeakCon {runWeak :: a}
mkWeak x = seq x (WeakCon x)
unsafeMkWeak x = WeakCon x
This doesn't actually do what you think it does. mkWeak and unsafeMkWeak are the same function.
mkWeak 123 = seq 123 (WeakCon 123) = WeakCon 123
unsafeMkWeak 123 = WeakCon 123
mkWeak _|_ = seq _|_ (WeakCon _|_) = _|_
unsafeMkWeak _|_ = WeakCon _|_ = _|_
To quote John Meacham:
| A quick note,
|     x `seq` x
| is always exactly equivalent to x. the reason being that your seq
| would never be called to force x unless x was needed anyway.
|
| I only mention it because for some reason this realization did not hit
| me for a long time and once it did a zen-like understanding of seq
| (relative to the random placement and guessing method I had used
| previously) suddenly was bestowed upon me.
I remember this anecdote because when I first read it, a zen-like understanding of seq suddenly was bestowed upon /me/. Maybe it should be in the docs. :-)
-- Ben
Yeah, that was silly. Falling back to `seq` was useless.
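For what it's worth, here is one way to get the behaviour mkWeak was aiming for (a sketch, not from the thread): use a data constructor with a strict field instead of a newtype, so that forcing the box to WHNF genuinely forces the payload. The Applicative instance is included because modern GHC requires it; it was not a superclass in 2006.

data Weak a = WeakCon !a    -- data, not newtype: the box is a real box

runWeak :: Weak a -> a
runWeak (WeakCon x) = x

instance Functor Weak where
  fmap f w = WeakCon (f (runWeak w))

instance Applicative Weak where
  pure = WeakCon            -- strict field: forcing the box forces x
  WeakCon f <*> WeakCon x = WeakCon (f x)

instance Monad Weak where
  w >>= f = f (runWeak w)   -- runWeak forces w's payload before f sees it

As with any strict identity-style monad, the left identity law only holds up to strictness: return _|_ >>= f is _|_ even when f ignores its argument.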


On Fri, 3 Feb 2006 19:33:12 -0000, "Brian Hulley" wrote:
I've been thinking along these lines too, because it has always seemed to me that laziness is just a real nuisance because it hides a lot of inefficiency under the carpet as well as making the time/space behaviour of programs difficult to understand...
One question is how to get some kind of "do" notation that would work well in a strict setting. The existing "do" notation makes use of laziness in so far as the second arg of >> is only evaluated when needed. Perhaps a new keyword such as "go" could be used to use >>= instead ie:
go {e1;e2;e3} === e1 >>= (\_-> (e2 >>= (\_->e3)))
That's not necessary. >> has something in common with if', where

if' True x _ = x
if' False _ y = y

- in both cases, it makes sense to evaluate the arguments lazily. So simply make strictness the default and have laziness annotations (for arguments), instead of making laziness the default and having strictness annotations.
A related question is: could monadic programming ever be as efficient as side-effect programming?
Monads can be viewed as code generators. So, with partial evaluation, my guess is yes, at least in many important cases. -- Robin
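To illustrate Robin's point about if' (the example is mine): the ordinary function definition below only behaves like built-in if-then-else because Haskell passes the branch arguments lazily. In a strict-by-default language both branches would be evaluated before the call, so the division would be attempted even when n == 0, unless if' carried laziness annotations on its branch arguments.

if' :: Bool -> a -> a -> a
if' True  x _ = x
if' False _ y = y

-- Safe in Haskell: the division is a thunk that is never forced when n == 0.
example :: Int -> Int
example n = if' (n /= 0) (100 `div` n) 0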

Robin Green wrote:
On Fri, 3 Feb 2006 19:33:12 -0000, "Brian Hulley" wrote:

I've been thinking along these lines too, because it has always seemed to me that laziness is just a real nuisance because it hides a lot of inefficiency under the carpet as well as making the time/space behaviour of programs difficult to understand...
One question is how to get some kind of "do" notation that would work well in a strict setting. The existing "do" notation makes use of laziness in so far as the second arg of >> is only evaluated when needed. Perhaps a new keyword such as "go" could be used to use >>= instead ie:
go {e1;e2;e3} === e1 >>= (\_-> (e2 >>= (\_->e3)))
That's not necessary. >> has something in common with if', where
if' True x _ = x
if' False _ y = y
- in both cases, it makes sense to evaluate the arguments lazily.
So simply make strictness the default and have laziness annotations (for arguments), instead of making laziness the default and having strictness annotations.
Where would you put these laziness annotations? If you put them in the function declaration, eg as in:

if' :: ~a -> ~b -> Bool

presumably you'd want the compiler to pass the args as thunks instead of evaluated values. However this means that all args to every function would have to be passed as thunks, even though for strict functions these thunks would immediately be evaluated. The problem is that there is no way for the compiler to optimize out the thunk creation / evaluation step, because it occurs across the "black box" of a function call; thus we wouldn't get the same efficiency as in a language such as ML, where no thunks are created in the first place.

Ie there is a fundamental asymmetry between lazy annotations and strict annotations - it is trivial to go from lazy to strict before the function body is evaluated, but impossible to unevaluate from strict back to lazy...

Regards, Brian.

Brian Hulley wrote:
Robin Green wrote:
<snip> So simply make strictness the default and have laziness annotations (for arguments), instead of making laziness the default and having strictness annotations.
Where would you put these laziness annotations? If you put them in the function declaration eg as in:
if' :: Bool -> ~a -> ~a -> a [corrected]
presumably you'd want the compiler to pass the args as thunks instead of evaluated values. However this means that all args to every function would have to be passed as thunks, even though for strict functions these thunks would immediately be evaluated. The problem is that there is no way for the compiler to optimize out the thunk creation / evaluation step because it occurs across the "black box" of a function call, thus we wouldn't get the same efficiency as in a language such as ML where no thunks are created in the first place.
I'm just soooo slow!!! ;-) Of course the laziness info would now be part of the function's type, so the compiler would be able to generate the correct code to prepare thunks or evaluated values before calling the function. So your idea of laziness annotations for args would give the best of both worlds :-)

Regards, Brian.

Brian Hulley wrote:
Brian Hulley wrote:
Robin Green wrote:
<snip> So simply make strictness the default and have laziness annotations (for arguments), instead of making laziness the default and having strictness annotations.
Where would you put these laziness annotations? If you put them in the function declaration eg as in:
if' :: Bool -> ~a -> ~a -> a [corrected]
presumably you'd want the compiler to pass the args as thunks instead of evaluated values. However this means that all args to every function would have to be passed as thunks, even though for strict functions these thunks would immediately be evaluated. The problem is that there is no way for the compiler to optimize out the thunk creation / evaluation step because it occurs across the "black box" of a function call, thus we wouldn't get the same efficiency as in a language such as ML where no thunks are created in the first place.
I'm just soooo slow!!! ;-) Of course the laziness info would now be part of the function's type so the compiler would be able to generate the correct code to prepare thunks or evaluated values before calling the function. So your idea of laziness annotations for args would give the best of both worlds :-)
For an eager language, a state monad could perhaps be defined by

data ST m a = ST ~(m -> (m,a))

and the other operations would work as normal without any additional annotations. (?)

I must admit I'm a bit confused as to why the strictness annotations in Haskell (and Clean) are only allowed in data declarations and not function declarations, since it seems a bit random to have to guess which args can be evaluated strictly at the call site, although it of course gives flexibility (eg to use (+) strictly or lazily). The type system doesn't prevent someone from writing (>>) m0 $! m1 even though the author of (>>) may have been relying on m1 being lazily evaluated... (?)

For an eager language, it would seem that lazy annotations would have to be allowed as part of a function's type so that if' could be implemented. Does anyone know of a type system that incorporates lazy annotations, and/or how these would be propagated? What would the signature of a lazy map function be?

map :: (~a -> ~b) -> ~[a] -> ~[b]
map :: (a -> b) -> ~[~a~] -> ~[b~]

etc etc - quite a puzzle!!!

Thanks, Brian.
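As a point of comparison (a sketch, not from the thread): in today's lazy Haskell, the usual way to get a state monad that doesn't accumulate thunks in the threaded state is to force the intermediate pair in the bind, which is essentially the shape of Control.Monad.State.Strict:

import Control.Monad (ap, liftM)

newtype State s a = State { runState :: s -> (s, a) }

instance Functor (State s) where
  fmap = liftM

instance Applicative (State s) where
  pure a = State $ \s -> (s, a)
  (<*>)  = ap

instance Monad (State s) where
  State g >>= f = State $ \s ->
    case g s of                  -- the case forces the intermediate pair
      (s', a) -> runState (f a) s'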

On Sun, Feb 05, 2006 at 05:18:55PM -0000, Brian Hulley wrote:
I must admit I'm a bit confused as to why the strictness annotations in Haskell (and Clean) are only allowed in data declarations and not function declarations
Clean does allow strictness annotations in function types. Best regards Tomasz -- I am searching for programmers who are good at least in (Haskell || ML) && (Linux || FreeBSD || math) for work in Warsaw, Poland

Tomasz Zielonka wrote:
On Sun, Feb 05, 2006 at 05:18:55PM -0000, Brian Hulley wrote:
I must admit I'm a bit confused as to why the strictness annotations in Haskell (and Clean) are only allowed in data declarations and not function declarations
Clean does allow strictness annotations in function types.
Thanks for pointing this out - I must admit I had only taken a very quick look at Clean (I was overwhelmed by the complicated type system), but now I've found the place in the Clean book that describes strictness annotations for function types, so I must look into this a bit more.

If I wanted to write a 3d computer game in Haskell (or Clean), would lazy evaluation with strictness annotations lead to as fast a program as eager evaluation with lazy annotations, for the same amount of programming effort? And would the result be as fast as an equivalent program in C++ or OCaml or MLton? If so, there would obviously be no point wasting time trying to develop an eager dialect of Haskell (or Clean).

I wonder if current compilation technology for lazy Haskell (or Clean) has reached the theoretical limits on what is possible for the compiler to optimize away, or if it is just that optimization has not received so much attention as work on the type system etc?

Regards, Brian.

On Feb 5, 2006, at 2:02 PM, Brian Hulley wrote:
... I wonder if current compilation technology for lazy Haskell (or Clean) has reached the theoretical limits on what is possible for the compiler to optimize away, or if it is just that optimization has not received so much attention as work on the type system etc?
I would answer resoundingly that there is still a good deal to learn / perfect in the compilation technology, but there's been a lack of manpower/funding to make it happen. -Jan-Willem Maessen

On 05/02/06, Jan-Willem Maessen wrote:
On Feb 5, 2006, at 2:02 PM, Brian Hulley wrote:
... I wonder if current compilation technology for lazy Haskell (or Clean) has reached the theoretical limits on what is possible for the compiler to optimize away, or if it is just that optimization has not received so much attention as work on the type system etc?
I would answer resoundingly that there is still a good deal to learn / perfect in the compilation technology, but there's been a lack of manpower/funding to make it happen.
-Jan-Willem Maessen
Besides, haven't you heard of the full-employment theorem for compiler-writers? ;) To paraphrase it: for every optimising compiler, there is one which does better on at least one program.

In any event, if compilers which preserve non-strict semantics aren't producing programs from naively written code which are even as fast as the corresponding hand-tuned C+Assembly programs, then there's still plenty of room for improvement. It just takes a lot of time, resources, and effort, as mentioned, to make it (or more reasonable approximations to it) happen. Perhaps some small amount of overhead will always be needed to implement programs with non-strict semantics (short of solving the halting problem), but I think that with a lot of hard work, this is something which could be squeezed down a lot, perhaps to the point of being negligible. (It's already negligible for many, perhaps even most applications, on modern hardware.)

I think that as programming languages become higher level, one has more and more fun opportunities to optimise that would be much more difficult to locate and attempt in lower level languages. For example, knowing that a piece of code is a 'map' or 'foldr' operation, algebraic rules can be applied at the higher levels, performing fusion. This part has been done to some extent, but perhaps there are much deeper things which could be done at that level. At a lower level, special optimisers could be used in native code generation, which would take advantage of the fact that there will be no state or limited state to carry around and no real side effects (potentially limiting the loads and stores and operating system calls one would have to do). One might choose an appropriate scheduler which handled code differently based on the particular higher-order function in which it was wrapped, since different structures of computation put different kinds of strain on any given processor.

Anyway, don't ever fool yourself into thinking that any otherwise-reasonable language is somehow inherently slow. There's always potential for a better implementation.

- Cale
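The kind of algebraic rule Cale alludes to can be written down directly as a GHC rewrite rule. The rule below is the classic map/map fusion law (GHC's base library ships map fusion in a related but more elaborate form); it is a sketch of the idea, not a claim about what any particular compiler does internally:

module MapFusion where

{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}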

On Sun, Feb 05, 2006 at 05:18:55PM -0000, Brian Hulley wrote:
I must admit I'm a bit confused as to why the strictness annotations in Haskell (and Clean) are only allowed in data declarations and not function declarations, since it seems a bit random to have to guess which args can be evaluated strictly at the call site although it of course gives flexibility (eg to use (+) strictly or lazily). The type system doesn't prevent someone from writing (>>) m0 $! m1 even though the author of (>>) may have been relying on m1 being lazily evaluated... (?)
It is because a data declaration is defining the form of the data, which includes both its representation and the type of its constructors. The strictness annotations affect its representation (or at least its desugaring) but not its type: the strictness of the fields is not reflected in the type. A function declaration is just declaring the type of the function, where strictness is not reflected either, just like in data types.

Another way you can think of it is that for

data Foo = Bar !Int !Char

the bangs aren't being associated with the Int and Char types, but rather with the Bar data constructor. However, the syntax is a little confusing in that it makes the bangs look as though they were part of the types of the constructor arguments.

John
-- John Meacham - ⑆repetae.net⑆john⑈
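Spelled out as a sketch: if Foo were declared with lazy fields, the banged version above is equivalent to the lazy one plus a constructor wrapper that seqs its arguments, which is why the strictness belongs to the constructor rather than to the types.

data Foo = Bar Int Char    -- as if declared with lazy fields...

bar :: Int -> Char -> Foo  -- ...plus a wrapper that seqs on construction
bar i c = i `seq` c `seq` Bar i c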

Hello John, Monday, February 06, 2006, 10:58:06 AM, you wrote:
I must admit I'm a bit confused as to why the strictness annotations in Haskell (and Clean) are only allowed in data declarations and not function declarations
JM> data Foo = Bar !Int !Char
JM> the bangs aren't being associated with the Int and Char types, but rather
JM> the Bar data constructor.

With

foo :: !Int -> !Int

the bangs are also associated with the type of foo, are they not? :) And Clean already has this sort of annotation.

-- Best regards, Bulat mailto:bulatz@HotPOP.com

Bulat Ziganshin writes:
JM> data Foo = Bar !Int !Char
JM> the bangs aren't being associated with the Int and Char types, but rather
JM> the Bar data constructor.
foo :: !Int -> !Int
(Is the second ! actually meaningful?)

Personally, I think this is much nicer than sprinkling seqs around, and generally sufficient. However, there could perhaps be ambiguities? Last time this came up, I think examples resembling these were brought up:

foo :: [!a] -> ![a] -> a
foo' :: Map !Int String -> Int -> String

Anyway, if a reasonable semantics can be formulated, I think strictness type annotations would be a great, useful, and relatively non-intrusive (AFAICT, entirely backwards compatible) addition to Haskell'.

-k
-- If I haven't seen further, it is by standing in the footprints of giants

Hello Ketil, Monday, February 06, 2006, 4:06:35 PM, you wrote:
foo :: !Int -> !Int
KM> (Is the second ! actually meaningful?)

Yes! It means that the function is strict in its result - i.e. it can't return an undefined value when strict arguments are given. This sort of knowledge should help a compiler to "propagate" strictness and figure out the parts of the program that can be compiled as strict code. Really, I think GHC is able to figure out functions with strict results just as it is able to figure out strict function arguments.

KM> Personally, I think this is much nicer than sprinkling seqs around, and
KM> generally sufficient. However, there could perhaps be ambiguities?

btw, it's just implemented in the GHC HEAD

KM> Last time this came up, I think examples resembling these were brought up:
KM> foo :: [!a] -> ![a] -> a

Yes, I remember this question of SPJ's :) "[!a]" means that the list elements are strict; it's the same as defining a new list type with strict elements and using it here. "![a]" means a "strict list"; it is the same as defining a list with the "next" field strict:

data List1 a = Nil1 | List1 !a (List1 a)
data List2 a = Nil2 | List2 a !(List2 a)
data List3 a = Nil3 | List3 !a !(List3 a)

The type List3 is a simple strict list, like in any strict programming language. So

foo :: [!a] -> ![a] -> ![!a] -> a

translates to

foo :: List1 a -> List2 a -> List3 a -> a

KM> foo' :: Map !Int String -> Int -> String

That means that the keys in this map are saved as strict values. For example, the following definition

type Map a b = [(a,b)]

will be instantiated to

Map !Int String ==> [(!Int, String)]

KM> Anyway, if a reasonable semantics can be formulated, I think
KM> strictness type annotations would be a great, useful, and
KM> relatively non-intrusive (AFAICT, entirely backwards compatible)
KM> addition to Haskell'.

Such a proposal already exists, and is supported by an implementation in the GHC HEAD.

-- Best regards, Bulat mailto:bulatz@HotPOP.com
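A quick illustration of the difference between the variants (example mine): a head-strict list can still be infinite, but a fully strict one cannot even be forced to WHNF.

data List1 a = Nil1 | List1 !a (List1 a)    -- head strict
data List3 a = Nil3 | List3 !a !(List3 a)   -- head and spine strict

ones1 :: List1 Int
ones1 = List1 1 ones1   -- fine: the tail field is lazy

ones3 :: List3 Int
ones3 = List3 1 ones3   -- forcing this to WHNF forces the tail, which
                        -- forces the next tail, ... : it never terminates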

On Feb 6, 2006, at 9:19 AM, Bulat Ziganshin wrote:
Hello Ketil,
Monday, February 06, 2006, 4:06:35 PM, you wrote:
foo :: !Int -> !Int
KM> (Is the second ! actually meaningful?)
Yes! It means that the function is strict in its result - i.e. it can't return an undefined value when strict arguments are given. This sort of knowledge should help a compiler to "propagate" strictness and figure out the parts of the program that can be compiled as strict code. Really, I think GHC is able to figure out functions with strict results just as it is able to figure out strict function arguments.
KM> Personally, I think this is much nicer than sprinkling seqs around, and
KM> generally sufficient. However, there could perhaps be ambiguities?
btw, it's just implemented in the GHC HEAD
Actually, I think strict _patterns_ are implemented. You are talking about strict _type annotations_, which is rather different. As I understand it, strict patterns are just sugar for putting 'seq' in the right places.

There has been some work dealing with folding strictness and totality information into type systems; I find the resulting type systems pretty ugly, and I think they'd be pretty hard to bolt onto an HM base.

Robert Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
      -- TMBG

Hello Robert, Monday, February 06, 2006, 8:40:11 PM, you wrote:
foo :: !Int -> !Int

btw, it's just implemented in the GHC HEAD
RD> Actually, I think strict _patterns_ are implemented. You are talking
RD> about strict _type annotations_, which is rather different.

Yes, I was wrong.

RD> There has been some work dealing with folding strictness and totality
RD> information into type systems; I find the resulting type systems
RD> pretty ugly, and I think they'd be pretty hard to bolt onto an HM base.

I'm not a professor, just a programmer who needs to optimize some code :) So that may be really hard or impossible to implement. What I mean: given the expression "a*b+c", knowing that a/b/c are strict Int values, GHC can determine that the whole expression is strict. How is it done? I don't know. But I think that a strictness annotation on the result type should tell the compiler just about this.

-- Best regards, Bulat mailto:bulatz@HotPOP.com

Bulat Ziganshin wrote:
Hello Ketil,
KM> (Is the second ! actually meaningful?)
Yes! It means that the function is strict in its result - i.e. it can't return an undefined value when strict arguments are given.
Unfortunately this interpretation runs pretty quickly into theoretical difficulties. A ! on the right hand side of a function arrow isn't like a ! on the left hand side. If you used this notation for this purpose, it would have to be special-cased. Note that in GHC at present, a function of type Int# -> Int# can diverge.
KM> foo :: [!a] -> ![a] -> a
"![a]" means "strict list", it is the same as defining list with "next" field strict:
data List2 a = Nil2 | List2 a !(List2 a)
This isn't consistent with the general rule that ! means absence of _|_. The semantics that you want could be implemented as a special case for the [] constructor, but polymorphism breaks this, e.g.

data Foo a = MkFoo Int !a
data Bar a = MkBar Int a

Foo [Bool] /= Bar ![Bool]
for example, the following definition
type Map a b = [(a,b)]
will be instantiated to
Map !Int String ==> [(!Int, String)]
As long as you're only specializing datatypes this works fine, but when you try to do the same with polymorphic functions acting on those datatypes, you run into serious problems. E.g.

f :: forall a. a -> Maybe a
f _ = Just undefined

Now we have (f :: Int -> Maybe Int) 3 == Just _|_, but (f :: !Int -> Maybe !Int) 3 == _|_. This means that either f and all of its callers must be specialized at compile time (despite having no type class constraints), or f must inspect its implicit type argument at run time.
Such a proposal already exists, and is supported by an implementation in the GHC HEAD.
As Robert Dockins said, it's not implemented, and it isn't clear how to implement it. At this point it's looking fairly likely that my PhD thesis will be on this very topic, so stay tuned. -- Ben

Ben Rudiak-Gould wrote:
As Robert Dockins said, it's not implemented, and it isn't clear how to implement it. At this point it's looking fairly likely that my PhD thesis will be on this very topic, so stay tuned.
Isn't all this already implemented in Clean? Regards, Brian.

Bulat Ziganshin wrote:
Yes, I remember this question of SPJ's :) "[!a]" means that the list elements are strict; it's the same as defining a new list type with strict elements and using it here. "![a]" means a "strict list"; it is the same as defining a list with the "next" field strict:
data List1 a = Nil1 | List1 !a (List1 a)
data List2 a = Nil2 | List2 a !(List2 a)
data List3 a = Nil3 | List3 !a !(List3 a)
Clean allows (AFAIK) several distinctions to be made:

1) ![a] means that the list of a's is a strict argument, just like writing !b
2) [!a] means that the list is head strict (List1 a)
3) [a!] means that the list is tail strict (List2 a)
4) [!a!] means that the list is head and tail strict (List3 a)
5) ![!a!] means that the head-and-tail-strict-list argument is strict!!!

I think also (though I'm not entirely sure) that these distinctions are generalized for other data types by talking about element strictness and spine strictness.

One motivation seems to be that in the absence of whole program optimization, the strictness annotations on a function's type can allow the compiler to avoid creating thunks at the call site for cross-module calls, whereas using seq in the function body itself means that the thunk still has to be created at the call site, because the compiler can't possibly know that it's going to be immediately evaluated by seq.

Regards, Brian.

On Feb 6, 2006, at 19:33, Brian Hulley wrote:
Clean allows (AFAIK) several distinctions to be made:
1) ![a] means that the list of a's is a strict argument, just like writing !b
2) [!a] means that the list is head strict (List1 a)
3) [a!] means that the list is tail strict (List2 a)
4) [!a!] means that the list is head and tail strict (List3 a)
5) ![!a!] means that the head-and-tail-strict-list-argument is strict!!!
Right. I think it's worth stressing that the four possible list types are all different as far as type checking is concerned. There's special list syntax, however, to denote lists that are overloaded in the list type.

At run time the different list types share the same nil and cons constructors. This means that conversions between list types are often cheap. I have a small library that does this, with the following costs (monospaced table):

 \to   [ ]   [! ]   [ !]   [!!]
 frm\
 [ ]    0     c     e!     !e!
 [! ]   0     0     e!     e!
 [ !]   0     c     0      !e
 [!!]   0     0     0      0

0 = doesn't traverse list
e = traverses list (evaluating elements (!e), spine (e!), or both (!e!))
c = copies list (lazily, evaluating elements)
I think also (though I'm not entirely sure) that these distinctions are generalized for other data types by talking about element strictness and spine strictness.
No, there's no such generalisation. Cheers, Ronny Wichers Schreur
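An illustrative Haskell analogue (mine) of one entry in Ronny's table: converting a plain list to a spine-strict one is the "e!" case, since the whole spine must be traversed when the result is first forced (in Haskell it is also a copy, whereas Clean's shared constructors avoid that).

data SList a = SNil | SCons a !(SList a)   -- spine-strict list

-- Forcing the result to WHNF forces the strict tail field, which forces
-- the next one, and so on down the whole spine.
fromList :: [a] -> SList a
fromList = foldr SCons SNil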

Brian Hulley wrote:
One motivation seems to be that in the absence of whole program optimization, the strictness annotations on a function's type can allow the compiler to avoid creating thunks at the call site for cross-module calls whereas using seq in the function body itself means that the thunk still has to be created at the call site because the compiler can't possibly know that it's going to be immediately evaluated by seq.
GHC solves this with the worker-wrapper transformation: the code for the wrapper is exported as part of the module's interface and inlined at external call sites. It handles seq, unboxing, and so on and calls the worker via a private interface.

Not that I think strictness information in the type system is a bad idea.

-- Ben
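To make the worker/wrapper idea concrete, here is a sketch of the split for a function GHC has found to be strict in its Int argument (illustrative only: the real transformation happens on GHC's Core, and the names below are invented):

{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int#, Int(I#), (+#))

-- The original function, strict in x:
f :: Int -> Int
f x = x + 1

-- Conceptually, GHC splits f into an unboxed worker...
fWorker :: Int# -> Int#
fWorker x# = x# +# 1#

-- ...plus a small wrapper that unboxes the argument and reboxes the
-- result.  The wrapper is what the interface file exports and what gets
-- inlined at call sites in other modules, so cross-module callers can
-- often avoid building a thunk at all.
fWrapper :: Int -> Int
fWrapper (I# x#) = I# (fWorker x#)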

Ben Rudiak-Gould wrote:
Brian Hulley wrote:
One motivation seems to be that in the absence of whole program optimization, the strictness annotations on a function's type can allow the compiler to avoid creating thunks at the call site for cross-module calls whereas using seq in the function body itself means that the thunk still has to be created at the call site because the compiler can't possibly know that it's going to be immediately evaluated by seq.
GHC solves this with the worker-wrapper transformation: the code for the wrapper is exported as part of the module's interface and inlined at external call sites. It handles seq, unboxing, and so on and calls the worker via a private interface.
Not that I think strictness information in the type system is a bad idea.
Sounds cool. I wonder if strictness annotations are ever really needed (eg if the "perfect optimizer" were possible to construct)?

One problem I see with strictness annotations in functions is that there could easily be an exponential growth in variants of a function (and callers of that function...) to handle different strictness requirements (eg for map with all of Clean's list types etc), leading to a bit of a minefield for the humble programmer... :-)

Regards, Brian.

On Fri, Feb 03, 2006 at 07:33:12PM -0000, Brian Hulley wrote:
One question is how to get some kind of "do" notation that would work well in a strict setting. The existing "do" notation makes use of laziness in so far as the second arg of >> is only evaluated when needed. Perhaps a new keyword such as "go" could be used to use >>= instead ie:
You can override (>>) in your monad:

instance Monad ... where
    a >> b = a `seq` b `seq` (a >>= \_ -> b)
    ....

unless I am misunderstanding what you want.

John
-- John Meacham - ⑆repetae.net⑆john⑈

John Meacham wrote:
On Fri, Feb 03, 2006 at 07:33:12PM -0000, Brian Hulley wrote:
One question is how to get some kind of "do" notation that would work well in a strict setting. The existing "do" notation makes use of laziness in so far as the second arg of >> is only evaluated when needed. Perhaps a new keyword such as "go" could be used to use >>= instead ie:
you can override (>>) in your monad
instance Monad ... where
    a >> b = a `seq` b `seq` (a >>= \_ -> b)
    ....
unless I am misunderstanding what you want.
John
If strictness was the default (eg if the language were ML not Haskell), then in

putStr "hello" >> putStr (show 1)

both args to >> would be evaluated before >> was called. Thus putStr (show 1) would be evaluated before the combined monad is actually run, which would be wasteful if we were using a monad with a >> function that only runs the rhs conditionally on the result of the lhs.

If Haskell were a strict language, I think an equivalent for the do notation would have to lift everything (except the first expression) and use >>= instead of >>.

Regards, Brian.

Hello Brian, Saturday, February 04, 2006, 4:50:44 AM, you wrote:
One question is how to get some kind of "do" notation that would work well in a strict setting. The existing "do" notation makes use of laziness in so far as the second arg of >> is only evaluated when needed. Perhaps a new keyword such as "go" could be used to use >>= instead ie:
BH> If strictness was the default (eg if the language were ML not Haskell),
BH> then in
BH>
BH> putStr "hello" >> putStr (show 1)
BH>
BH> both args to >> would be evaluated before >> was called. Thus putStr (show
BH> 1) would be evaluated before the combined monad is actually run, which would
BH> be wasteful if we were using a monad with a >> function that only runs the
BH> rhs conditionally on the result of the lhs.
BH> If Haskell were a strict language, I think an equivalent for the do notation
BH> would have to lift everything (except the first expression) and use >>=
BH> instead of >>.

It seems that you misunderstand the monads (or maybe I misunderstand :)

Each and every monadic operation is a function! The type "IO a" is really "RealWorld -> (RealWorld,a)", and the same for any other monad. The concept of the monad by itself means carrying "hidden" state from one monadic operation to the next. That allows one to _order_ monadic operations; plus this state is used for zillions of other things, including state, logs, failures and so on.

-- Best regards, Bulat mailto:bulatz@HotPOP.com

Bulat Ziganshin wrote:
Hello Brian,
Saturday, February 04, 2006, 4:50:44 AM, you wrote:
One question is how to get some kind of "do" notation that would work well in a strict setting. The existing "do" notation makes use of laziness in so far as the second arg of >> is only evaluated when needed. Perhaps a new keyword such as "go" could be used to use >>= instead ie:
If strictness was the default (eg if the language were ML not Haskell), then in
putStr "hello" >> putStr (show 1)
both args to >> would be evaluated before >> was called. Thus putStr (show 1) would be evaluated before the combined monad is actually run, which would be wasteful if we were using a monad with a >> function that only runs the rhs conditionally on the result of the lhs. If Haskell were a strict language, I think an equivalent for the do notation would have to lift everything (except the first expression) and use >>= instead of >>.
It seems that you misunderstand the monads (or maybe I misunderstand :)
Each and every monadic operation is a function! The type "IO a" is really "RealWorld -> (RealWorld,a)", and the same for any other monad. The concept of the monad by itself means carrying "hidden" state from one monadic operation to the next. That allows one to _order_ monadic operations; plus this state is used for zillions of other things, including state, logs, failures and so on.
exp1 >> exp2

in a strict setting would force exp1 to be evaluated to a monad, exp2 to be evaluated to a monad, then these monads to be combined using >> into another monad, which at some later point would actually be run. But it is this eager evaluation of exp2 into the rhs monad that is the problem, because in the example above, (show 1) would be evaluated during the evaluation of (putStr "hello" >> putStr (show 1)), whereas in Haskell it would only be evaluated when the combined monad is actually run (because it is only at this point that Haskell actually creates the combined monad from the thunk).

Regards, Brian.

On Sunday, 5 February 2006 at 17:36, Bulat Ziganshin wrote:
[...]
each and every monadic operation is a function!
What do you mean by "monadic operation"? (>>=), (>>) and return are, of course, functions, but an I/O action like getChar is *not* a function. Also, a list is not a function but a value of the list monad.
type "IO a" is really "RealWorld -> (RealWorld,a)"
This representation is just there to help people understand what I/O is, but actually IO a is a type which is not implementable in ordinary Haskell, and therefore cannot be a function. In addition, RealWorld -> (RealWorld,a) as an explanation of what IO a is has its limitations: if we run an I/O action, we aren't just interested in the final state but also in intermediate states.
And the same for any other monad. The concept of the monad by itself means carrying "hidden" state from one monadic operation to the next.
That's too specific. A list, for example, doesn't have to do anything with state and [a] is not represented as a function.
[...]
Best wishes, Wolfgang

Hello Wolfgang, Saturday, February 11, 2006, 3:17:12 PM, you wrote:
each and every monadic operation is a function!
WJ> What do you mean by "monadic operation"? (>>=), (>>) and return are, of
WJ> course, functions, but an I/O action like getChar is *not* a function.
WJ> Also, a list is not a function but a value of the list monad.
type "IO a" is really "RealWorld -> (RealWorld,a)"
WJ> This representation is just there to help people understand what I/O is,
WJ> but actually IO a is a type which is not implementable in ordinary Haskell,
WJ> and therefore cannot be a function. In addition, RealWorld -> (RealWorld,a)
WJ> as an explanation of what IO a is has its limitations: if we run an I/O
WJ> action, we aren't just interested in the final state but also in
WJ> intermediate states.

{putStr "a"} is a function which receives the previous world state and returns an updated world state where "a" has been written to the terminal. It's an _essential_ part of the monadic way to do I/O.

In a list comprehension, filters also use the value of the current list element. If a filter doesn't use this element value, it can be computed prior to the comprehension to speed things up.
And the same for any other monad. The concept of the monad by itself means carrying "hidden" state from one monadic operation to the next.
WJ> That's too specific. A list, for example, doesn't have to do anything with
WJ> state and [a] is not represented as a function.

-- Best regards, Bulat mailto:bulatz@HotPOP.com

Bulat Ziganshin wrote:
Hello Wolfgang,
Saturday, February 11, 2006, 3:17:12 PM, you wrote:
each and every monadic operation is a function!
WJ> What do you mean by "monadic operation"? (>>=), (>>) and return are, of
WJ> course, functions, but an I/O action like getChar is *not* a function.
WJ> Also, a list is not a function but a value of the list monad.
type "IO a" is really "RealWorld -> (RealWorld,a)"
WJ> This representation is just there to help people understand what I/O is,
WJ> but actually IO a is a type which is not implementable in ordinary Haskell,
WJ> and therefore cannot be a function. In addition, RealWorld -> (RealWorld,a)
WJ> as an explanation of what IO a is has its limitations: if we run an I/O
WJ> action, we aren't just interested in the final state but also in
WJ> intermediate states.
{putStr "a"} is a function, which receives previous world state and returns updated world state where "a" is written to the terminal. it's an _essential_ part of monadic way to I/O
There is nothing in the Haskell specification that tells you how the IO type is implemented, so you can't say that putStr takes an old world state and returns a new one. I've personally done an implementation that was totally different, so making assumptions about how IO is implemented is just wrong. -- Lennart

On Sat, Feb 11, 2006 at 04:34:37PM +0300, Bulat Ziganshin wrote:
{putStr "a"} is a function, which receives previous world state and returns updated world state where "a" is written to the terminal. it's an _essential_ part of monadic way to I/O
Not true. There are actually several ways to implement IO. There is a paper about it somewhere that explores various methods, but I can't seem to find it; does anyone know which one I am thinking of? I know it at least explores the state and continuation versions, as well as some that don't use monads, I thought. It was either part of a general paper on monads or something specific to doing IO...
In a list comprehension, filters also use the value of the current list element. If a filter doesn't use this element value, it can be computed prior to the comprehension to speed things up.
And the same for any other monad. The concept of the monad by itself means carrying "hidden" state from one monadic operation to the next.
WJ> That's too specific. A list, for example, doesn't have to do anything with WJ> state and [a] is not represented as a function.
Or even the trivial Identity monad, which certainly has no state. The Maybe monad doesn't either: it carries no extra state with it, but rather allows one to discard later computations. The Reader monad has no state, but rather distributes a value commutatively to its subcomputations. The State monad is just one of many very useful monads. A monad is precisely anything that satisfies the monad laws.

John
-- John Meacham - ⑆repetae.net⑆john⑈
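To make John's first example concrete, here is the Identity monad written out (the standard definition, plus the Applicative instance modern GHC requires): no state is threaded anywhere, yet it satisfies the monad laws.

newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity x = Identity (f x)

instance Monad Identity where
  Identity x >>= f = f x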

John Meacham writes:
there are actually several ways to implement IO. There is a paper about it somewhere that explores various methods, but I can't seem to find it; does anyone know which one I am thinking of? I know it at least explores the state and continuation versions, as well as some that don't use monads, I thought. It was either part of a general paper on monads or something specific to doing IO...
This one? http://research.microsoft.com/~simonpj/Papers/imperative.ps.Z

Johan Bockgård wrote:
John Meacham writes:

there are actually several ways to implement IO. There is a paper about it somewhere that explores various methods, but I can't seem to find it; does anyone know which one I am thinking of? I know it at least explores the state and continuation versions, as well as some that don't use monads, I thought. It was either part of a general paper on monads or something specific to doing IO...
This one? http://research.microsoft.com/~simonpj/Papers/imperative.ps.Z
Or perhaps the one that is attached. -Paul

On Wed, Feb 15, 2006 at 04:58:34PM +0100, Johan Bockgård wrote:
John Meacham writes:

there are actually several ways to implement IO. There is a paper about it somewhere that explores various methods, but I can't seem to find it; does anyone know which one I am thinking of? I know it at least explores the state and continuation versions, as well as some that don't use monads, I thought. It was either part of a general paper on monads or something specific to doing IO...
This one?
http://research.microsoft.com/~simonpj/Papers/imperative.ps.Z
Ah, yes. Perhaps we could get a local copy of every paper somewhere on haskell.org, so we can do a scholar.google.com search with site:haskell.org to get full-text search of all relevant Haskell papers.

John
-- John Meacham - ⑆repetae.net⑆john⑈
participants (16)

- Ben Rudiak-Gould
- bojohan+news@dd.chalmers.se
- Brian Hulley
- Bulat Ziganshin
- Cale Gibbard
- Chris Kuklewicz
- Jan-Willem Maessen
- John Meacham
- Ketil Malde
- Lennart Augustsson
- Paul Hudak
- Robert Dockins
- Robin Green
- Ronny Wichers Schreur
- Tomasz Zielonka
- Wolfgang Jeltsch