
I think I've just about got monads figured out, but there's one detail that still escapes me. As I understand it, a monad is a kind of programming trick the uses data dependency to force evaluation order. x >>= f means apply f to x; since the value of f x depends on the value of x, the evaluator must evaluate x before f x. However, consider: getChar >>= \x -> getChar An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics. But that would change the behavior, so to get the desired behavior, there must be some principle that prevents this from happening, ensuring that x >>= f always evaluates f x. I can see that the monad laws ensure this But I haven't found anything that states this. Am I missing something? Thanks, gregg

Hello Gregg, Thursday, February 5, 2009, 6:20:06 PM, you wrote:
An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics.
this is prohibited by using a pseudo-value of type RealWorld which is passed through the entire action stream. actually, the order of execution is controlled by dependencies on these values:
http://haskell.org/haskellwiki/IO_inside
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
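Bulat's world-passing idea can be sketched in a few lines. The names below (Toy, World, getCharT, bindT) are invented purely for illustration; real GHC threads an unboxed State# RealWorld token rather than a list of characters:

```haskell
-- A toy model of IO-as-world-passing, for illustration only.
newtype World = World String deriving (Eq, Show)

newtype Toy a = Toy (World -> (a, World))

runToy :: Toy a -> World -> (a, World)
runToy (Toy f) = f

-- "Read a character": consumes one character of the pretend world
-- (partial: assumes some input remains).
getCharT :: Toy Char
getCharT = Toy (\(World (c:cs)) -> (c, World cs))

-- bind threads the new world w' into the next action, so the first
-- action can never be dropped: the second one needs its output world.
bindT :: Toy a -> (a -> Toy b) -> Toy b
bindT m f = Toy (\w -> let (a, w') = runToy m w
                       in  runToy (f a) w')
```

Running runToy (getCharT `bindT` (\_ -> getCharT)) (World "ab") returns ('b', World ""): even though the bound character is ignored, the world value is not, which is exactly the dependency Bulat describes.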


There's nothing magic about IO when it comes to monad semantics.
If you take ghc's implementation of IO, it's a state monad.
The new state generated by x is passed to f, so there's no way to skip x.
(Well, if the compiler can show that the state is not used anywhere
then it can start removing things, but in that case there really is no
change in semantics.)
-- Lennart
2009/2/5 Gregg Reynolds
On Thu, Feb 5, 2009 at 9:27 AM, Bulat Ziganshin wrote:
Hello Gregg,
Thursday, February 5, 2009, 6:20:06 PM, you wrote:
An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics.
this is prohibited by using pseudo-value of type RealWorld which is passed through entire action stream. actually, order of execution is controlled by dependencies on this values
Thanks. I actually read that a few weeks ago and forgot all about it. So the gist is that type IO has special magic semantics. Formal, but hidden. Which means monad semantics are built into the language, at least for that type. The Haskell Report doesn't seem to say anything about this, which seems odd.
But then for non-IO monads, the optimization would be allowed, no? Of course; only the IO monad has external world behavior.
Thanks,
gregg
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
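Gregg's question about non-IO monads can be made concrete. In an Identity-like monad (defined locally below; nothing is imported), discarding the bound value really is a semantics-preserving rewrite, which is exactly why the interesting question is what stops it for IO:

```haskell
-- A trivial monad where the "optimization" is genuinely valid.
newtype Identity a = Identity a deriving (Eq, Show)

runIdentity :: Identity a -> a
runIdentity (Identity a) = a

-- bind here is plain function application, with no structure to preserve:
bindI :: Identity a -> (a -> Identity b) -> Identity b
bindI (Identity a) f = f a

-- Identity 7 `bindI` (\x -> Identity 42) reduces, step by step, to
-- Identity 42, so a compiler could rewrite it without changing anything
-- observable.
```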

On Thu, Feb 5, 2009 at 10:00 AM, Lennart Augustsson
There's nothing magic about IO when it comes to monad semantics. If you take ghc's implementation of IO, it's a state monad.
Doesn't that mean the semantics are defined by the implementation? My problem is that I'm not seeing how correct eval sequencing can be forced unless a magic token is passed around, which means that /some/ such hidden semantics must be part of the formal semantics of IO. In other words, it's not enough for it to be a monad, since >>= by itself cannot guarantee data dependency. If it doesn't pass around the World token, we don't get sequencing. -g

* Gregg Reynolds
I think I've just about got monads figured out, but there's one detail that still escapes me. As I understand it, a monad is a kind of programming trick that uses data dependency to force evaluation order. x >>= f means apply f to x; since the value of f x depends on the value of x, the evaluator must evaluate x before f x. However, consider:
getChar >>= \x -> getChar
An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics. But that would change the behavior, so to get the desired behavior, there must be some principle that prevents this from happening, ensuring that x >>= f always evaluates f x.
x >>= f doesn't mean apply f to x. It means, roughly, "construct an (IO) action from actions x and f(..), so that they are executed sequentially and f depends on a result produced by x". Even if f does not depend on its argument, there's no reason for the compiler to think that the first action may be ignored. If you think in terms of dependency, the second action depends on the state of the "world" left after executing x. So all IO actions take an implicit "world" argument, and the (>>=) operator implicitly passes the modified "world" to the next action.
I can see that the monad laws ensure this, but I haven't found anything that states this. Am I missing something?
-- Roman I. Cheplyaka :: http://ro-che.info/ "Don't let school get in the way of your education." - Mark Twain

Gregg Reynolds wrote:
However, consider:
getChar >>= \x -> getChar
An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics.
Let's imagine that IO datatype is defined thus:

{-# LANGUAGE GADTs #-}
{-# LANGUAGE NoImplicitPrelude #-}

import Prelude (Monad, Char)

data IO a where
  GetChar :: IO Char
  Bind    :: IO a -> (a -> IO b) -> IO b

getChar = GetChar
(>>=)   = Bind
It is perfectly possible to construct IO actions as values of this data type and execute them by some function evalIO :: IO a -> Prelude.IO a with the obvious definition. Now the question arises: do you think getChar >>= \x -> getChar would be optimized to getChar by the compiler? If not, why would GHC want to do this optimization for the standard IO?
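Gleb's sketch can be made runnable. In this transcription I rename the type to Act so it coexists with Prelude's IO, and add a pure interpreter runPure over a string of pretend input; both of those names are mine, not Gleb's:

```haskell
{-# LANGUAGE GADTs #-}

-- IO-as-data, renamed Act to avoid clashing with Prelude.IO.
data Act a where
  GetChar :: Act Char
  Bind    :: Act a -> (a -> Act b) -> Act b

-- The "obvious" interpreter into real IO:
evalAct :: Act a -> IO a
evalAct GetChar    = getChar
evalAct (Bind m f) = evalAct m >>= \a -> evalAct (f a)

-- A pure interpreter over a supply of input characters, which makes the
-- two programs observably different without touching real input:
runPure :: Act a -> String -> (a, String)
runPure GetChar    []     = error "out of input"
runPure GetChar    (c:cs) = (c, cs)
runPure (Bind m f) cs     = let (a, cs') = runPure m cs
                            in  runPure (f a) cs'
```

Here runPure (Bind GetChar (\_ -> GetChar)) "ab" gives ('b', ""), while runPure GetChar "ab" gives ('a', "b"): the two values consume different amounts of input, so no interpreter-respecting compiler may identify them.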

On Thu, Feb 5, 2009 at 9:53 AM, Gleb Alexeyev
Let's imagine that IO datatype is defined thus:

{-# LANGUAGE GADTs #-}
{-# LANGUAGE NoImplicitPrelude #-}

import Prelude (Monad, Char)

data IO a where
  GetChar :: IO Char
  Bind    :: IO a -> (a -> IO b) -> IO b

getChar = GetChar
(>>=)   = Bind
It is perfectly possible to construct IO actions as values of this data type and execute them by some function evalIO :: IO a -> Prelude.IO a with the obvious definition. Now the question arises: do you think getChar >>= \x -> getChar would be optimized to getChar by the compiler?
I must be misunderstanding something. I don't know if it would be optimized out, but I see no reason why it couldn't be. There's no data dependency, right? -g

Gregg Reynolds wrote:
I must be misunderstanding something. I don't know if it would be optimized out, but I see no reason why it couldn't be. There's no data dependency, right?
Of course there is data dependency. In my example, where IO is defined as a (generalized) algebraic datatype, the value of getChar is GetChar. The value of 'getChar >>= \x -> getChar' is 'Bind GetChar (\x -> GetChar)'. 'x' is not used anywhere, but this doesn't change the fact that these are totally different values; no sane compiler would prove them equal. For some monads, the evaluation of 'a >>= \x -> b' yields b, but it's not true in general.

On Thu, Feb 5, 2009 at 10:49 AM, Gleb Alexeyev
Gregg Reynolds wrote:
I must be misunderstanding something. I don't know if it would be
optimized out, but I see no reason why it couldn't be. There's no data dependency, right?
Of course there is data dependency. In my example, where IO is defined as a (generalized) algebraic datatype, the value of getChar is GetChar. The value of 'getChar >>= \x -> getChar' is 'Bind GetChar (\x -> GetChar)'. 'x' is not used anywhere, but this doesn't change the fact that these are totally different values; no sane compiler would prove them equal.
Are you saying that using equations to add a level of indirection prevents optimization? I still don't see it - discarding x doesn't change the semantics, so a good compiler /should/ do this. How is this different from optimizing out application of a constant function? -g

Are you saying that using equations to add a level of indirection
prevents optimization? I still don't see it - discarding x doesn't change the semantics, so a good compiler /should/ do this. How is this different from optimizing out application of a constant function?
No, no compiler should change "getChar >>= \x -> getChar" to just getChar, because it's just wrong. The first will try to read in two values of type Char, the second will only try to read one. Side effects aren't THAT hated!

On Thu, Feb 5, 2009 at 11:12 AM, Andrew Wagner
Are you saying that using equations to add a level of indirection prevents optimization? I still don't see it - discarding x doesn't change the semantics, so a good compiler /should/ do this. How is this different from optimizing out application of a constant function?
No, no compiler should change "getChar >>= \x -> getChar" to just getChar, because it's just wrong. The first will try to read in two values of type Char, the second will only try to read one. Side effects aren't THAT hated!
Right, but that's only because the compiler either somehow knows about side effects or there is some other mechanism - e.g. an implicit World token that gets passed around - that prevents optimization. As far as the formal semantics of the language are concerned, there's no essential difference between "getChar >>= \x -> getChar" and "Foo 3 >>= \x -> Foo 7" for some data constructor Foo. I should think the latter would be optimized; if so, why not the former? The presence of some hidden (from me, anyway) semantics is inescapable.
-g

Gregg Reynolds wrote:
Right, but that's only because the compiler either somehow knows about side effects or there is some other mechanism - e.g. an implicit World token that gets passed around - that prevents optimization. As far as the formal semantics of the language are concerned, there's no essential difference between "getChar >>= \x -> getChar" and "Foo 3 >>= \x -> Foo 7" for some data constructor Foo. I should think the latter would be optimized; if so, why not the former? The presence of some hidden (from me, anyway) semantics is inescapable.
There's no reason to assume the latter would be "optimized" either. Consider:

> data NotIO a = NotIO String a
>
> instance Monad NotIO where
>     return x          = NotIO "" x
>     (NotIO s x) >>= f = (\(NotIO z y) -> NotIO (s++z) y) (f x)
>
> notGetChar = NotIO "foo" 'a'

Let's consider your example:

    notGetChar >>= \x -> notGetChar
 == (NotIO "foo" 'a') >>= (\x -> NotIO "foo" 'a')
 == (\(NotIO z y) -> NotIO ("foo"++z) y) ((\x -> NotIO "foo" 'a') 'a')
 == (\(NotIO z y) -> NotIO ("foo"++z) y) (NotIO "foo" 'a')
 == NotIO ("foo"++"foo") 'a'

It's clear to see that this result is not equal to notGetChar, and the compiler needn't know anything special about NotIO to realize that. There's nothing special about IO. The only difference between NotIO and IO is that NotIO has Chars for "side effects". IO is a perfectly fine mathematical object; the only difference is that it is appending sequences of an abstract type that we mere mortals like to think of as "actions" or "side effects" or "a program". The mathematics neither knows nor cares about any of that rubbish.

The only relevant thing[1] is that this abstract type ---whatever it is, however it's implemented--- forms a monoid. The return function associates the identity of the monoid with some value (which is also irrelevant as far as the monad is concerned), and the bind operation uses the associative operation of the monoid to combine them. The latter is more obvious when it's phrased in terms of join :: Monad m => m (m a) -> m a, and the monad law:

    join . join == join . fmap join

It so happens that a sequence of anything is the "free" monoid. Appending sequences of characters is no different than appending sequences of actions or program instructions. For lists or sets as monads, the monoid is a path through a decision tree, aka a sequence of choices. These "choices" aren't stored in bits anywhere, just like IO's RealWorld[2], but people don't seem to get as flustered trying to figure out what exactly a "choice" is and what a "choice" means as a mathematical object. The point is, it doesn't matter. The choice, the side effect, the character: these are all just arbitrary sets of things. We can construct a free monoid over any set, and the monoid neither knows nor cares anything about the "values" in that set.

The only way we can optimize away one of the calls to notGetChar (or any other monadic function) is if we can guarantee that no other computation will use either the monoidal values or the "contained" values associated with it. IO is defined with a bind operator that's strict in the monoidal value, which means we can never remove things (unless, perhaps, we can demonstrate that they only append the identity of the monoid, in which case evaluating the function has no side effects ---by definition). There's nothing particularly intriguing about a strict bind operator; State has one, as do other monads. And there's nothing particularly intriguing about "stateful" monads either; there is the lazy State monad too (and the ACIO monad is something like a lazy IO).

[1] Okay, not the *only* relevant thing :)

[2] This is just like the binary search algorithm. Effectively there's a tree that spans the set we're searching, and each iteration of search is walking down one ply of that tree. Only for binary search we usually don't bother constructing that tree; the tree is realized in the recursion pattern of the algorithm. Or dually, binary tree datastructures are a reification of the algorithm. You get this same duality with, frex, lazy lists vs for-loops. There's nothing special about being stored in bits vs being stored in invariants or actions.

-- Live well, ~wren
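wren's NotIO can be checked directly. A minimal transcription follows; on modern GHC a Monad instance also requires Functor and Applicative instances, so return and bind are written here as plain functions instead:

```haskell
-- wren's NotIO: a Writer-style monad whose "side effects" are Chars.
data NotIO a = NotIO String a deriving (Eq, Show)

returnN :: a -> NotIO a
returnN x = NotIO "" x          -- identity of the monoid: no output

bindN :: NotIO a -> (a -> NotIO b) -> NotIO b
bindN (NotIO s x) f = case f x of
                        NotIO z y -> NotIO (s ++ z) y  -- append outputs

notGetChar :: NotIO Char
notGetChar = NotIO "foo" 'a'
```

Evaluating notGetChar `bindN` (\x -> notGetChar) yields NotIO "foofoo" 'a', which is plainly a different value from notGetChar itself.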

On Thu, Feb 5, 2009 at 7:19 PM, wren ng thornton
Gregg Reynolds wrote:
as the formal semantics of the language are concerned, there's no essential difference between "getChar >>= \x -> getChar" and "Foo 3 >>= \x -> Foo 7" for some data constructor Foo. I should think the latter would be optimized; if so, why not the former? The presence of some hidden (from me, anyway) semantics is inescapable.
There's no reason to assume the latter would be "optimized" either. Consider:
data NotIO a = NotIO String a
instance Monad NotIO where return x = NotIO "" x (NotIO s x) >>= f = (\(NotIO z y) -> NotIO (s++z) y) (f x)
notGetChar = NotIO "foo" 'a'
Let's consider your example:
notGetChar >>= \x -> notGetChar == (NotIO "foo" 'a') >>= (\x -> NotIO "foo" 'a') == (\(NotIO z y) -> NotIO ("foo"++z) y) ((\x -> NotIO "foo" 'a') 'a') == (\(NotIO z y) -> NotIO ("foo"++z) y) (NotIO "foo" 'a') == NotIO ("foo"++"foo") 'a'
You've defined >>= in such a way that it carries additional semantic weight. My example doesn't do that. Bind can be thought of as an "augmented application" operator: x >>= f applies f to x, but it may fiddle with x first. If it doesn't then it amounts to ordinary function application, which can be optimized out. That's the point of my example: data constructors don't munge their arguments, and getChar is essentially a data constructor.
NotIO has a Chars for "side effects". IO is a perfectly fine mathematical object, the only difference is that it is appending sequences of an abstract
Gotcha! "appending sequences" is not something mathematical values are inclined to do. They neither spin nor toil, let alone append. The dominant idiom - "IO values are actions" just doesn't work if you want formal semantics, no matter how useful it may be pedagogically.
The only way we can optimize away one of the calls to notGetChar (or any other monadic function) is if we can guarantee that no other computation will use either the monoidal values or the "contained" values associated with it. IO is defined with a bind operator that's strict in the monoidal value, which means we can never remove things (unless, perhaps, we can
Ok; but where does it say that in the language definition? I understand how it all works, but that comes from the many informal narrative expositions; my point is that if you tried to write a formal semantics for Haskell, a la ML, you couldn't do it without coming up with a different story for IO. -gregg

Gregg Reynolds wrote:
On Thu, Feb 5, 2009 at 7:19 PM, wren ng thornton
wrote: Gregg Reynolds wrote:
as the formal semantics of the language are concerned, there's no essential difference between "getChar >>= \x -> getChar" and "Foo 3 >>= \x -> Foo 7" for some data constructor Foo. I should think the latter would be optimized; if so, why not the former? The presence of some hidden (from me, anyway) semantics is inescapable.
There's no reason to assume the latter would be "optimized" either. Consider:
data NotIO a = NotIO String a

instance Monad NotIO where
    return x          = NotIO "" x
    (NotIO s x) >>= f = (\(NotIO z y) -> NotIO (s++z) y) (f x)

notGetChar = NotIO "foo" 'a'

Let's consider your example:

    notGetChar >>= \x -> notGetChar
 == (NotIO "foo" 'a') >>= (\x -> NotIO "foo" 'a')
 == (\(NotIO z y) -> NotIO ("foo"++z) y) ((\x -> NotIO "foo" 'a') 'a')
 == (\(NotIO z y) -> NotIO ("foo"++z) y) (NotIO "foo" 'a')
 == NotIO ("foo"++"foo") 'a'
You've defined >>= in such a way that it carries additional semantic weight. My example doesn't do that.
Yes, it does. Every bind operator does so (trivially for the Identity monad). What else would bind be "augmented" with if it's "augmented application"? Augmentation means to add structure. Bind does not "apply f to x"; x is a monadic value which carries additional structure. What bind does is map f over x's structure and then call join to collapse the monadic layers of the result.
If it doesn't then it amounts to ordinary function application, which can be optimized out. That's the point of my example: data constructors don't munge their arguments, and getChar is essentially a data constructor.
Data constructors in general are not monads. To the extent that your Foo example is well typed, it is isomorphic to the Identity monad. In which case your Foo x >>= f is exactly return x >>= f, which the monad laws require to equal f x. Just because the Identity monad is trivial does not mean that bind is trivial. That's like saying that addition must do nothing because of its behavior on 0.
NotIO has a Chars for "side effects". IO is a perfectly fine mathematical object, the only difference is that it is appending sequences of an abstract
Gotcha! "appending sequences" is not something mathematical values are inclined to do. They neither spin nor toil, let alone append.
Balderdash.
The dominant idiom - "IO values are actions" just doesn't work if you want formal semantics, no matter how useful it may be pedagogically.
IO values are absolutely not actions, that's the whole point. Actions, such as they exist at all, are merely a domain over which we've constructed a monoid. The semantics from this are minimal, but if you'd like to define the domain more precisely there are many people who would be grateful for your contribution to research on effect systems and the ontology of computational effects on reality.
The only way we can optimize away one of the calls to notGetChar (or any other monadic function) is if we can guarantee that no other computation will use either the monoidal values or the "contained" values associated with it. IO is defined with a bind operator that's strict in the monoidal value, which means we can never remove things (unless, perhaps, we can
Ok; but where does it say that in the language definition? I understand how it all works, but that comes from the many informal narrative expositions; my point is that if you tried to write a formal semantics for Haskell, a la ML, you couldn't do it without coming up with a different story for IO.
It says that in the function definitions for bind and return on the IO type. In GHC the IO type is defined such that it's a specialized version of the State monad where RealWorld is the type of the state it carries. You can look in the source code that defines IO if you care. Other compilers may choose a different implementation for IO which has the same observational behavior in which case their "story" will be different. IO is an abstract type which is defined only by its observational behavior. One aspect of this behavior is that it is a monad and is not the Identity monad. Everything else follows. -- Live well, ~wren

Gregg Reynolds
You've defined >>= in such a way that it carries additional semantic weight.
Would it be appropriate to sum up this discussion thusly:

1. What does and does not get optimized away in a monad depends on the implementation of (>>=).

2. For IO, (>>=) is - must be - implemented in such a way that actions don't get optimized away, for instance by passing around RealWorld as state.

3. While the reason for this should be obvious, any formal specification of this requirement is missing.

-k
-- If I haven't seen further, it is by standing in the footprints of giants

Gregg Reynolds wrote:
Are you saying that using equations to add a level of indirection prevents optimization? I still don't see it - discarding x doesn't change the semantics, so a good compiler /should/ do this. How is this different from optimizing out application of a constant function?
Perhaps my example doesn't work, so I'll try another example. As you know, (>>=) is just an (overloaded) higher-order function. Let's consider another higher-order function, map. The expression

    map (\x -> 42) [1..3]

evaluates to [42, 42, 42]. As you can see, the function (\x -> 42) passed to map ignores its first argument. Would you expect the compiler to discard the call to map?
Can you see the analogy? The shapes of the two expressions, map (\x -> 42) [1..3] and (>>=) getChar (\x -> getChar), are similar. (>>=) is a combinator just like map; you cannot optimize it away in general.

Gleb Alexeyev wrote:
| Perhaps my example doesn't work, so I'll try another example.
| As you know, (>>=) is just an (overloaded) higher-order function.
| Let's consider another higher-order function, map. The expression
|     map (\x -> 42) [1..3]
| evaluates to [42, 42, 42].
| As you can see, the function (\x -> 42) passed to map ignores its first
| argument. Would you expect the compiler to discard the call to map?

In fact, this can even be rewritten with bind:

    [1..3] >>= \x -> return 42

So if you wouldn't expect the map to be discarded, certainly the bind should not be either.

- Jake
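Both forms are easy to check. In the list monad, (>>=) is concatMap with its arguments flipped, so the bound function runs once per element whether or not it inspects its argument (viaMap and viaBind are names introduced here just to label the two forms):

```haskell
viaMap :: [Int]
viaMap = map (\_ -> 42) [1..3]          -- one result per element

viaBind :: [Int]
viaBind = [1..3] >>= \_ -> return 42    -- likewise: the list structure
                                        -- survives even though the bound
                                        -- value is ignored
```

Both evaluate to [42, 42, 42], not to a single [42].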

On Thu, Feb 5, 2009 at 11:22 AM, Gleb Alexeyev
Gregg Reynolds wrote:
Are you saying that using equations to add a level of indirection prevents optimization? I still don't see it - discarding x doesn't change the semantics, so a good compiler /should/ do this. How is this different from optimizing out application of a constant function?
Perhaps my example doesn't work, so I'll try another example. As you know, (>>=) is just an (overloaded) higher-order function. Let's consider another higher-order function, map. The expression
map (\x -> 42) [1..3] evaluates to [42, 42, 42]. As you can see, the function (\x -> 42) passed to map ignores its first argument. Would you expect the compiler to discard the call to map?
No, but I might expect it to discard the function application and just replace each element of the list with 42. Which it may do at compile time. Plus, it will only be evaluated once, no matter how many times the expression occurs, since lazy evaluation memoizes results.
Can you see the analogy? The shapes of the two expressions, map (\x -> 42) [1..3] and (>>=) getChar (\x -> getChar) are similar.
(>>=) is a combinator just like map, you cannot optimize it away in general.
Ok, I see what you're getting at, but I have to think about it some more. Here's a restatement that might make my puzzlement more clear. Suppose we have

    f = \x -> 42

Then I would expect the compiler to replace each expression of the form f x with 42, for all x.

Now suppose we write 7 >>= \x -> 42. It seems to me that a compiler could examine that expression, conclude that it is constant, and replace it with the expression 42. So far as I know, there's nothing in the semantics of Haskell to forbid this, so why not do it? There are no side effects, and an optimizer can always substitute equals for equals, no? (Let's disregard the fact that such optimizations may be hard or expensive.) This is different from map, because unlike map it doesn't need to perform a computation (i.e. execute the function) to arrive at the final value; it only has to analyze the function (which is computation, but not computation of the function itself).

Now getChar is a value of type IO Char, by definition. That means mathematical value - "an action that does some IO" won't cut it in a formal semantics, since there is no way to express "does some IO" formally. Values is values; they don't "do" anything. So we should be able to replace 7 and 42 above by getChar, and do the same optimization.

Do you see why this is problematic for me? In any case, your responses (and the others) have been quite useful and have helped me work through some problems with a rather radical idea I have to address this that I hope to write up this weekend.

Thanks,
gregg

Perhaps you meant the type correct expression
return 7 >>= \ x -> return 42
which the compiler will probably optimize to return 42, because by
seeing the definition of return the compiler can see that it has no
effects in the monad. This expression is very different from
getChar >>= \ x -> return 42
Assuming getChar is some kind of primitive operation, the compiler
does not know that it has no effects, so it has to remain.
There's nothing special about IO in this respect. If IO is
implemented as a state monad the compiler can just unfold the
definitions of return and >>= and do its usual job. Some
subexpressions (like return 7) do not touch the RealWorld state and can
be optimized away, some operations (like getChar) take and return a
RealWorld, so they cannot be removed.
-- Lennart
On Thu, Feb 5, 2009 at 8:55 PM, Gregg Reynolds
On Thu, Feb 5, 2009 at 11:22 AM, Gleb Alexeyev
wrote: Gregg Reynolds wrote:
Are you saying that using equations to add a level of indirection prevents optimization? I still don't see it - discarding x doesn't change the semantics, so a good compiler /should/ do this. How is this different from optimizing out application of a constant function?
Perhaps my example doesn't work, so I'll try another example. As you know, (>>=) is just an (overloaded) higher-order function. Let's consider another higher-order function, map. The expression
map (\x -> 42) [1..3] evaluates to [42, 42, 42]. As you can see, the function (\x -> 42) passed to map ignores its first argument. Would you expect the compiler to discard the call to map?
No, but I might expect it to discard the function application and just replace each element of the list with 42. Which it may do at compile time. Plus, it will only be evaluated once, no matter how many times the expression occurs, since lazy evaluation memoizes results.
Can you see the analogy? The shapes of the two expressions, map (\x -> 42) [1..3] and (>>=) getChar (\x -> getChar) are similar.
(>>=) is a combinator just like map, you cannot optimize it away in general.
Ok, I see what you're getting at, but I have to think about it some more.
Here's a restatement that might make my puzzlement more clear. Suppose we have
f = \x -> 42
Then I would expect the compiler to replace each expression of the form f x with 42, for all x.
Now suppose we write 7 >>= \x -> 42. It seems to me that a compiler could examine that expression, conclude that it is constant, and replace it with the expression 42. So far as I know, there's nothing in the semantics of Haskell to forbid this, so why not do it? There are no side effects, and an optimizer can always substitute equals for equals, no? (Let's disregard the fact that such optimizations may be hard or expensive.)
This is different from map, because unlike map it doesn't need to perform a computation (i.e. execute the function) to arrive at the final value; it only has to analyze the function (which is computation but not computation of the function itself.)
Now getChar is a value of type IO Char, by definition. That means mathematical value - "an action that does some IO" won't cut it in a formal semantics, since there is no way to express "does some IO" formally. Values is values; they don't "do" anything. So we should be able to replace 7 and 42 above by getChar, and do the same optimization.
Do you see why this is problematic for me? In any case, your responses (and the others) have been quite useful and have helped me work through some problems with a rather radical idea I have to address this that I hope to write up this weekend.
Thanks,
gregg
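Lennart's unfolding argument can be replayed on a hand-rolled state monad. The names St, returnS, bindS, and readC below are invented for this sketch and are not GHC's internals:

```haskell
-- A bare state monad: an action is a function from state to (result, state).
newtype St s a = St { runSt :: s -> (a, s) }

returnS :: a -> St s a
returnS a = St (\s -> (a, s))        -- does not touch the state

bindS :: St s a -> (a -> St s b) -> St s b
bindS m f = St (\s -> let (a, s') = runSt m s
                      in  runSt (f a) s')

-- An operation that consumes state, standing in for getChar on RealWorld
-- (partial: assumes some input remains):
readC :: St String Char
readC = St (\(c:cs) -> (c, cs))
```

Unfolding returnS 7 `bindS` (\x -> returnS 42) gives St (\s -> (42, s)), i.e. returnS 42, so that subexpression can be optimized away; unfolding readC `bindS` (\x -> readC) leaves both state transitions in place, so neither read can be removed.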

At Thu, 5 Feb 2009 11:06:22 -0600, Gregg Reynolds wrote:
Are you saying that using equations to add a level of indirection prevents optimization? I still don't see it - discarding x doesn't change the semantics, so a good compiler /should/ do this. How is this different from optimizing out application of a constant function?
Let's rewrite this:

    getChar >>= \x -> getChar

in a slightly different looking form, where bind = (>>=), so that we have:

    bind getChar (\x -> getChar)

If I understand your argument, you think that the compiler should be able to look at that expression, and instead of calling 'bind' with the two arguments 'getChar' and '(\x -> getChar)' it should just optimize it to 'getChar'?

By comparison, consider this function:

    op :: Maybe a -> (a -> Maybe b) -> Maybe b
    op x f = Nothing

I believe that by your argument we should be able to rewrite this:

    (Just "foo") `op` \x -> (Just "bar")

as:

    (Just "bar")

because the 'x' is ignored? But 'op' *always* returns Nothing, and clearly Nothing /= (Just "bar"). Maybe I'm misunderstanding what you mean when you say that 'discarding x does not change the semantics'.

Thanks!
- jeremy
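Jeremy's op compiles as written, and running it confirms the point:

```haskell
-- A bind-shaped combinator that ignores both of its arguments:
op :: Maybe a -> (a -> Maybe b) -> Maybe b
op _ _ = Nothing

-- Rewriting  Just "foo" `op` (\x -> Just "bar")  to  Just "bar"  on the
-- grounds that x is unused would change the result: op always gives Nothing.
```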

Let's put it this way: suppose you have two data types, say, Int and String; a value s of type String, and a function

    f :: String -> (Int -> String) -> String

This could be anything - maybe a function which looks for the first character '#' in its first argument and replaces it with the second argument applied to the position where it's found, so that

    f "abc#" (\n -> replicate n 'q') = "abcqqq"

It could be anything else, of course. Now, would you expect an optimizer to transform f s (\x -> s) to s? I don't think so. f s (\x -> s) and s are clearly distinct, and there is no reason to transform one to the other.

Now, let's change notation a bit. First of all, let's denote our string s by getChar. Well, it's our string and we can name it with what name we want - especially if we forget for a moment that getChar is already defined. So, for a moment we assume that getChar is defined like this:

    getChar = "abc#"

Therefore, f getChar (\x -> getChar) is NOT equivalent to getChar. Right?

Let's change notation even more. Let's denote our function by (>>=):

    (>>=) getChar (\x -> getChar) is NOT equal to getChar

By Haskell rules we can use >>= as an infix operator:

    getChar >>= (\x -> getChar) is NOT equal to getChar

Now, in your example, instead of Int and String we have Char and IO Char. Does that matter? In all the above we didn't use the fact that our types are Int and String; the very same applies to Char and (IO Char) as well.

On 5 Feb 2009, at 19:18, Gregg Reynolds wrote:
On Thu, Feb 5, 2009 at 9:53 AM, Gleb Alexeyev
wrote:

Let's imagine that IO datatype is defined thus:

{-# LANGUAGE GADTs #-}
{-# LANGUAGE NoImplicitPrelude #-}

import Prelude (Monad, Char)

data IO a where
  GetChar :: IO Char
  Bind :: IO a -> (a -> IO b) -> IO b

getChar = GetChar
(>>=) = Bind

It is perfectly possible to construct IO actions as values of this data type and execute them by some function evalIO :: IO a -> Prelude.IO a with the obvious definition. Now the question arises: do you think getChar >>= \x -> getChar would be optimized to getChar by the compiler?
I must be misunderstanding something. I don't know if it would be optimized out, but I see no reason why it couldn't be. There's no data dependency, right?
-g _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
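Gleb's reified IO can be made concrete with a pure test interpreter (the interpreter and its name 'runOn' are my addition, not from the post; the type is renamed IO' to avoid clashing with Prelude.IO). Feeding both programs the same input shows they are observably different values, so no semantics-preserving compiler may identify them:

```haskell
{-# LANGUAGE GADTs #-}

-- Actions are plain data; nothing "runs" until an interpreter is applied.
data IO' a where
  GetChar :: IO' Char
  Bind    :: IO' a -> (a -> IO' b) -> IO' b

-- Interpret an action against a string of fake input, returning the
-- result and the unconsumed input.
runOn :: IO' a -> String -> (a, String)
runOn GetChar    (c:cs) = (c, cs)
runOn GetChar    []     = error "runOn: end of input"
runOn (Bind m k) input  = let (a, rest) = runOn m input
                          in runOn (k a) rest
```

Here `runOn (GetChar `Bind` \_ -> GetChar) "ab"` yields `('b', "")`, while `runOn GetChar "ab"` yields `('a', "b")`: the first program really does consume two characters, so replacing it with the second is not semantics-preserving.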

Gregg Reynolds wrote:
| As I understand it, a monad is a kind of programming trick the uses data dependency to force evaluation order.

Actually, a monad does nothing more than supply some combinators. These combinators (and some trivial laws) are all you really need to know to understand monads. Specific monads are really just a case-by-case issue. Using some function names that are not used in the standard Haskell libraries, here are the important functions you have access to when you have a monad, all of which return monadic values:

~ point :: a -> f a
~ map   :: (a -> b) -> f a -> f b
~ apply :: f (a -> b) -> f a -> f b
~ bind  :: (a -> f b) -> f a -> f b

I aligned the type signatures the way I did so you could see their similarities and differences more easily. point is the same as the return and pure functions; it lifts a pure value. map is the same as the (<$>), fmap, liftA, and liftM functions; it applies a pure function to a lifted value. apply is the same as the (<*>) and ap functions; it applies a lifted function to a lifted value. The type signature for bind is reversed from that of (>>=); it's the same as (=<<). I think it is more clear and better parallels the other functions I've already shown. It applies a "lifting" function to a lifted value.

| since the value of f x depends on the value of x, the evaluator must evaluate x before f x

This is actually not true. Remember, Haskell is a lazy language. x need not be evaluated before applying f to it.

| However, consider:
|
| getChar >>= \x -> getChar
|
| An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics.

Consider how this expression would actually look in a Haskell program:

~ main :: IO ()
~ main = getChar >>= \x -> getChar

Remember, a type coincides with a value. That is, the entire main function is a _value_ of type IO ().
The value, as it so happens, is an imperative program which the Haskell runtime performs. That is, the reduction semantics of IO expressions _produce_, rather than perform, actions. The actions are only performed by the runtime. Every monad has a "run" function. The IO monad's run function is the Haskell runtime.

If you think of the IO monad as some sort of Writer monad, it is easy to see that Haskell's reduction semantics actually guarantee correct order of operations, and a semantics-preserving optimizer will not change it. In GHC, the IO monad is actually more like a State monad (over RealWorld). It is also easy to see that this preserves correct ordering of side-effects as well.

I just wanted to present a different point of view.

- Jake
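Jake's Writer-monad reading can be made concrete with a toy output-only monad (entirely my sketch, not from the post): reduction builds a list of commands, and a separate 'run' plays the role of the runtime. Because the command log is part of the value, discarding a bound result cannot discard the command that produced it.

```haskell
-- A minimal output-only monad: a value together with the list of
-- commands that produced it.
newtype Out a = Out ([String], a)

instance Functor Out where
  fmap f (Out (w, a)) = Out (w, f a)

instance Applicative Out where
  pure a = Out ([], a)
  Out (w1, f) <*> Out (w2, a) = Out (w1 ++ w2, f a)

instance Monad Out where
  Out (w1, a) >>= k = let Out (w2, b) = k a in Out (w1 ++ w2, b)

-- Record a command; the () result is what a bind may later ignore.
emit :: String -> Out ()
emit s = Out ([s], ())

-- The "runtime": extract the program to perform.
run :: Out a -> [String]
run (Out (w, _)) = w
```

`run (emit "a" >>= \_ -> emit "b")` is `["a","b"]`: the () bound to `_` is discarded, but the "a" command is not, because it lives in the log, not in the bound result.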

Hello,

The type IO (in many Haskell implementations) is essentially:
type IO a = RealWorld -> (a, RealWorld)
And >> would be implemented like:
(>>) :: IO a -> IO b -> IO b
action1 >> action2 = \world0 ->
  let (a, world1) = action1 world0
      (b, world2) = action2 world1
  in (b, world2)
So, even though action2 does not depend on the 'a' value returned by action1, it does depend on the world value. Hence, the compiler will not optimize away the call to action1, because then it would not have a 'world1' value to pass to action2. It is the passing around of these 'world*' values that causes the IO operations to happen in the right order. Of course, this only works if the programmer is careful to ensure that each world variable is used exactly once, and that no functions are accidentally skipped, etc. For example, let's say that (>>) was accidentally defined like:
(>>) :: IO a -> IO b -> IO b
action1 >> action2 = \world0 ->
  let (a, world1) = action1 world0
      (b, world2) = action2 world0 -- oops, should be world1
  in (b, world2)
now action1 would not be run. Since it is so easy to accidentally screw up the passing around of world variables, we don't do it 'by hand'. We get it right once in the Monad instance, and then we don't have to worry about it anymore.

- jeremy

At Thu, 5 Feb 2009 09:20:06 -0600, Gregg Reynolds wrote:
I think I've just about got monads figured out, but there's one detail that still escapes me. As I understand it, a monad is a kind of programming trick that uses data dependency to force evaluation order. x >>= f means apply f to x; since the value of f x depends on the value of x, the evaluator must evaluate x before f x. However, consider: getChar >>= \x -> getChar
An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics. But that would change the behavior, so to get the desired behavior, there must be some principle that prevents this from happening, ensuring that x >>= f always evaluates f x.
I can see that the monad laws ensure this, but I haven't found anything that states this. Am I missing something?
Thanks,
gregg
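Jeremy's world-passing explanation above can be modeled purely (my sketch; a log of strings stands in for RealWorld so the model is testable). The correct sequencing threads world1 into the second action, while the buggy version reuses world0 and the first action's effect simply never gets demanded:

```haskell
-- A pure stand-in for RealWorld: a log of performed actions.
type World = [String]
type Action a = World -> (a, World)

putLine :: String -> Action ()
putLine s = \w -> ((), w ++ [s])

-- Correct: thread the updated world through.
thenGood :: Action a -> Action b -> Action b
thenGood a1 a2 = \world0 ->
  let (_, world1) = a1 world0
  in a2 world1

-- Buggy: reuse world0, as in the "oops" definition above.
thenBad :: Action a -> Action b -> Action b
thenBad a1 a2 = \world0 ->
  let (_, _world1) = a1 world0
  in a2 world0
```

`snd ((putLine "x" `thenGood` putLine "y") [])` gives `["x","y"]`, while the buggy version gives only `["y"]`: laziness means the skipped world variable erases the first action entirely.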

Gregg Reynolds
I think I've just about got monads figured out, but there's one detail that still escapes me. As I understand it, a monad is a kind of programming trick that uses data dependency to force evaluation order. x >>= f means apply f to x; since the value of f x depends on the value of x, the evaluator must evaluate x before f x. However, consider:
getChar >>= \x -> getChar
x >>= f does not mean "apply f to x", it means "do x, and then do f with the result of x". Bind is a sequencing operator rather than an application operator.

On Thu, Feb 5, 2009 at 10:26 AM,
x >>= f does not mean "apply f to x", it means "do x, and then do f with the result of x". Bind is a sequencing operator rather than an application operator.
Sequencing is a side effect of data dependency. What I should have said is x >>= f means "evaluate x and f (in any order), /then/ apply f to x". In a non-IO monad, x can be discarded if f does not depend on it. -g

mail@justinbogner.com wrote:
| Bind is a sequencing operator rather than an application operator.

In my opinion, this is a common misconception. I think that bind would be nicer if its arguments were reversed.

Oops, sent this off list the first time, here it is again.
Jake McArthur
mail@justinbogner.com wrote: | Bind is a sequencing operator rather than an application operator.
In my opinion, this is a common misconception. I think that bind would be nicer if its arguments were reversed.
If this is a misconception, why does thinking of it this way work so well? This idea is reinforced by the do notation syntactic sugar: bind can be represented by going into imperative land and "do"ing one thing before another. The fact that `x' may not actually have to happen before `f' is merely the typical sort of optimization we do in compilers for imperative languages: instructions that do not modify non-local state can be re-ordered, but IO cannot because it jumps elsewhere, no?



Jake McArthur writes:

mail@justinbogner.com wrote:
| Oops, sent this off list the first time, here it is again.
|
| Jake McArthur writes:
|> mail@justinbogner.com wrote:
|> | Bind is a sequencing operator rather than an application operator.
|>
|> In my opinion, this is a common misconception. I think that bind would
|> be nicer if its arguments were reversed.
|
| If this is a misconception, why does thinking of it this way work so
| well? This idea is reinforced by the do notation syntactic sugar: bind
| can be represented by going into imperative land and "do"ing one thing
| before another.

An imperative-looking notation does not make something imperative.
Thinking of bind as sequencing really *doesn't* work very well. What does bind have to do with sequencing at all in the list monad, for example? What about the reader monad?
- Jake
What doesn't bind have to do with sequencing in the list monad? Consider: [1..2] >>= return . (^2) This says "generate the list [1..2] and then use it to generate a list of squares". It's more than just application, it's a description of a sequence of actions. The whole point of list comprehensions (which is the only reason to have a list monad, as far as I know) is to think of it this way rather than as an application of concatMap. As for Reader, I don't know enough about it to say anything.
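Both readings of the list example can be checked directly: for lists, `xs >>= f` is `concatMap f xs`, so the "and then" reading and the concatMap reading denote exactly the same list.

```haskell
-- justinbogner's example, next to its concatMap reading.
squares :: [Int]
squares = [1..2] >>= return . (^2)

squares' :: [Int]
squares' = concatMap (\x -> [x ^ 2]) [1..2]
-- both are [1,4]
```

Whichever way one prefers to *think* about it, the two expressions are the same value, which is why the disagreement here is about intuition rather than semantics.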

On Thu, 2009-02-05 at 11:47 -0700, mail@justinbogner.com wrote:
Jake McArthur writes:

mail@justinbogner.com wrote:
| Oops, sent this off list the first time, here it is again.
|
| Jake McArthur writes:
|> mail@justinbogner.com wrote:
|> | Bind is a sequencing operator rather than an application operator.
|>
|> In my opinion, this is a common misconception. I think that bind would
|> be nicer if its arguments were reversed.
|
| If this is a misconception, why does thinking of it this way work so
| well? This idea is reinforced by the do notation syntactic sugar: bind
| can be represented by going into imperative land and "do"ing one thing
| before another.

An imperative-looking notation does not make something imperative.
Thinking of bind as sequencing really *doesn't* work very well. What does bind have to do with sequencing at all in the list monad, for example? What about the reader monad?
- Jake
What doesn't bind have to do with sequencing in the list monad? Consider:
[1..2] >>= return . (^2)
This says "generate the list [1..2] and then use it to generate a list of squares". It's more than just application, it's a description of a sequence of actions.
But not a temporal sequence. (>>=) in IO is about temporal sequencing (modulo unsafeInterleaveIO, forkIO, etc.)
The whole point of list comprehensions (which is the only reason to have a list monad, as far as I know)
Huh? I thought

newtype Parser s alpha
  = Parser { unParser :: StateT s [] alpha }
  deriving (Functor, Applicative, Alternative, Monad, MonadPlus)

was an entirely sufficient reason to have a list monad.

jcc

mail@justinbogner.com wrote:
| [1..2] >>= return . (^2)
|
| This says "generate the list [1..2] and then use it to generate a list
| of squares". It's more than just application, it's a description of a
| sequence of actions. The whole point of list comprehensions (which is
| the only reason to have a list monad, as far as I know) is to think
| of it this way rather than as an application of concatMap.

The problem with your description is that you said "and then." The result will be generated lazily. There is no sequencing here. Consider:

~ do x <- [0..]
~    y <- [0..9]
~    return (x, y)

Which list is generated first?

| As for Reader, I don't know enough about it to say anything.

Reader provides an immutable value that can be retrieved at any point in the monad. There are no monadic side effects, so it doesn't really mean much to say that anything happens in any particular order.

- Jake
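Jake's example can be checked on finite lists (truncating his [0..] to [0 .. 1] so the whole value can be inspected, my adaptation): the do block and a point-free rewrite denote the same list, and nothing in that value records which list was "generated first".

```haskell
-- The do-notation version.
pairsDo :: [(Int, Int)]
pairsDo = do x <- [0 .. 1]
             y <- [0 .. 2]
             return (x, y)

-- The same value, written with (=<<) and (<$>) instead of do notation.
pairsPF :: [(Int, Int)]
pairsPF = (<$> [0 .. 2]) . (,) =<< [0 .. 1]
-- both are [(0,0),(0,1),(0,2),(1,0),(1,1),(1,2)]
```

The two spellings are interchangeable; only the do-notation one *looks* like a sequence of steps.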

Jake McArthur
The problem with your description is that you said "and then." The result will be generated lazily. There is no sequencing here. Consider:
~ do x <- [0..]
~    y <- [0..9]
~    return (x, y)
Which list is generated first?
This is an implementation detail: since y has no dependency on x, the compiler's free to reorder instructions, as it would in an imperative language. Semantically this means do x and then y, since if y is changed to depend on x, this is still valid, but if x is changed to depend on y, this sequence is not valid. Even though the compiler is allowed (and even encouraged) to change the sequence where that won't change the results, considering this a sequence is still valid and meaningful.
| As for Reader, I don't know enough about it to say anything.
Reader provides an immutable value that can be retrieved at any point in the monad. There are no monadic side effects, so it doesn't really mean much to say that anything happens in any particular order.
It still needs to be retrieved (logically, not necessarily temporally) before it's used, doesn't it?
- Jake


bind is no more a sequencing operator than function composition is. Just as
the order in which you pass two functions (ignoring the type issue for the
moment) matters, the order in which you pass things to bind matters. The
sense in which the order _doesn't_ matter is that of the order in which what
you pass gets evaluated.
On Thu, Feb 5, 2009 at 3:32 PM, Jake McArthur
mail@justinbogner.com wrote:
| Jake McArthur writes:
|> The problem with your description is that you said "and then." The
|> result will be generated lazily. There is no sequencing here. Consider:
|>
|> ~ do x <- [0..]
|> ~    y <- [0..9]
|> ~    return (x, y)
|>
|> Which list is generated first?
|
| This is an implementation detail, since y has no dependency on x the
| compiler's free to reorder instructions, as it would be an imperative
| language. Semantically this means do x and then y

That the order does not matter is definitely not an implementation detail. Order not mattering is exactly what declarative programming is about. The semantics of the above expression are the same as this:
~ (<$> [0..9]) . (,) =<< [0..]
There is no "do X and then do Y," only "this is what I want." Declarative semantics. As I said, just because do notation makes it look imperative doesn't mean it actually is.
| Just because the compiler is allowed (and even encouraged) to change the | sequence where that won't change the results, considering this a | sequence is still valid and meaningful.
It can be helpful sometimes, but I don't think it should be the standard way to think of bind. There are too many cases when it makes little sense as a sequencing operator.
- Jake

Jake McArthur
| Just because the compiler is allowed (and even encouraged) to change the
| sequence where that won't change the results, considering this a
| sequence is still valid and meaningful.
It can be helpful sometimes, but I don't think it should be the standard way to think of bind. There are too many cases when it makes little sense as a sequencing operator.
Fair enough, thanks for your thoughts!

On Thu, 2009-02-05 at 11:47 -0700, mail@justinbogner.com wrote:
Jake McArthur writes:

mail@justinbogner.com wrote:
| Oops, sent this off list the first time, here it is again.
|
| Jake McArthur writes:
|> mail@justinbogner.com wrote:
|> | Bind is a sequencing operator rather than an application operator.
|>
|> In my opinion, this is a common misconception. I think that bind would
|> be nicer if its arguments were reversed.
|
| If this is a misconception, why does thinking of it this way work so
| well? This idea is reinforced by the do notation syntactic sugar: bind
| can be represented by going into imperative land and "do"ing one thing
| before another.

An imperative-looking notation does not make something imperative.
Thinking of bind as sequencing really *doesn't* work very well. What does bind have to do with sequencing at all in the list monad, for example? What about the reader monad?
- Jake
What doesn't bind have to do with sequencing in the list monad? Consider:
[1..2] >>= return . (^2)
This says "generate the list [1..2] and then use it to generate a list of squares". It's more than just application, it's a description of a sequence of actions. The whole point of list comprehensions (which is the only reason to have a list monad, as far as I know) is to think of it this way rather than as an application of concatMap.
As for Reader, I don't know enough about it to say anything.
Jake is more or less right. The Monad interface does nothing to enforce any particular evaluation order. The interface is, however, too narrow for you to express "do these two things in an indeterminate order", so you are forced to choose a particular linearization. Commutative monads, such as Reader, could relax that constraint, not that it would really mean much in many of those cases.

Now, if we wanted to give a semantics for a call-by-value programming language, which was exactly the sort of thing Moggi was thinking about when he was talking about monads, then application would indeed translate into exactly (flipped) (>>=). So the expression (f x) in, say, SML would be translated to (f =<< x), using some appropriate monad to model the side-effects that ML supports. Actually, a 'let'-like notation is often preferred, as it matches lists of statements more prettily than the equivalent of treating ; as const. This is the source of the name 'bind': e.g. the ML

let a = M in let b = N in a+b

translates to

M >>= \a -> N >>= \b -> return (a+b)

and, of course, do-notation is just a syntactic variant of this 'let' notation.
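Derek's translation can be instantiated at any monad; taking Maybe as the effect model (my choice here, purely for testability), the ML-style let-chain becomes:

```haskell
-- let a = M in let b = N in a+b, translated per Derek's scheme:
--   M >>= \a -> N >>= \b -> return (a+b)
addM :: Maybe Int -> Maybe Int -> Maybe Int
addM m n = m >>= \a -> n >>= \b -> return (a + b)
-- addM (Just 1) (Just 2) = Just 3
-- addM Nothing  (Just 2) = Nothing
```

The bind chain mirrors the nesting of the lets: each `\a ->` plays the role of one `let a = ... in`.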

Gregg Reynolds
getChar >>= \x -> getChar
An optimizer can see that the result of the first getChar is discarded
It isn't discarded. The first getChar results in a value of type IO Char, always and ever. Whether or not the Char inside the IO Char gets evaluated is another question... >>= could not care less about what you do to the values that are _inside_ the monad. If you look at the type

(>>=) :: (Monad m) => m a -> (a -> m b) -> m b

you will find it hard to do much of significance to "a" and/or "b" [1]. Whether a monad behaves strictly or lazily wrt. "m" depends on the definition of the functions that come with it: IO happens to be strict, [] to be lazy. This is nothing that messes with semantics, though, it celebrates them.

[1] Well, you can pass them to id, seq or par, and replace them with undefined...

-- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.

Gregg Reynolds wrote:
getChar >>= \x -> getChar
An optimizer can see that the result of the first getChar is discarded
True, so 'x' is not used, and it can be garbage collected, and may not even be created. But that data dependency is simply not the data dependency that makes IO sequential. Here is code from IOBase.lhs for GHC:
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
The # marks unboxed types, which are strict, but here we can erase them for clarity:

newtype IO a = IO (State RealWorld -> (State RealWorld, a))

getChar is of type IO Char, so that is the constructor IO applied to a function from "State RealWorld" to a strict pair of "State RealWorld" and Char. Since this is strict there is no laziness, and the code must evaluate the input and output "State RealWorld" to ensure they are not bottom or error. Here is the rest of the plumbing in GHC:
unIO :: IO a -> (State# RealWorld -> (# State# RealWorld, a #))
unIO (IO a) = a
instance Functor IO where
  fmap f x = x >>= (return . f)
instance Monad IO where
  {-# INLINE return #-}
  {-# INLINE (>>) #-}
  {-# INLINE (>>=) #-}
  m >> k   = m >>= \ _ -> k
  return x = returnIO x
  m >>= k  = bindIO m k
  fail s   = failIO s
failIO :: String -> IO a
failIO s = ioError (userError s)
liftIO :: IO a -> State# RealWorld -> STret RealWorld a
liftIO (IO m) = \s -> case m s of (# s', r #) -> STret s' r
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO (IO m) k = IO ( \ s -> case m s of (# new_s, a #) -> unIO (k a) new_s )
thenIO :: IO a -> IO b -> IO b
thenIO (IO m) k = IO ( \ s -> case m s of (# new_s, _ #) -> unIO k new_s )
returnIO :: a -> IO a
returnIO x = IO (\ s -> (# s, x #))
The "bind" operation's case statement forces the unboxed "new_s :: State# RealWorld" to be strictly evaluated, and this depends on the input strict "s :: State# RealWorld". This data dependency of new_s on s is what forces IO statements to evaluate sequentially. Cheers, Chris

On Thu, Feb 5, 2009 at 8:20 PM, ChrisK
Since this is strict there is no laziness and the code must evaluate the input and output "State RealWorld" to ensure they are not bottom or error.
Interesting. I also thought it was the passing of the RealWorld that caused the sequencing, I never realized that the strictness played an important role here. So what would happen if it would be lazy instead of strict? What kind of craziness would occur?

On Fri, 2009-02-06 at 00:51 +0100, Peter Verswyvelen wrote:
On Thu, Feb 5, 2009 at 8:20 PM, ChrisK
wrote: Since this is strict there is no laziness and the code must evaluate the input and output "State RealWorld" to ensure they are not bottom or error.
Interesting. I also thought it was the passing of the RealWorld that caused the sequencing, I never realized that the strictness played an important role here.
So what would happen if it would be lazy instead of strict? What kind of craziness would occur?
The order of side effects would be demand-driven, rather than order-of-statement driven. So if I said:

do x <- getChar
   y <- getChar
   z <- getChar

then in the sequel, the first character I evaluated would be the first character read. Of course, Haskell's order of evaluation is undefined, so whether the string read from STDIN was [x, y, z] or [y, x, z] or [z, x, y] or some other permutation would be undefined. And, of course, if I never evaluated y at all, y would never be read --- there would be only two characters of input. Essentially, the program would act as if every statement was wrapped up in an unsafeInterleaveIO.

jcc

Jonathan Cast wrote:
On Fri, 2009-02-06 at 00:51 +0100, Peter Verswyvelen wrote:
On Thu, Feb 5, 2009 at 8:20 PM, ChrisK
wrote:

Since this is strict there is no laziness and the code must evaluate the input and output "State RealWorld" to ensure they are not bottom or error.

Interesting. I also thought it was the passing of the RealWorld that caused the sequencing, I never realized that the strictness played an important role here.
So what would happen if it would be lazy instead of strict? What kind of craziness would occur?
The order of side effects would be demand-driven, rather than order-of-statement driven. So if I said: [snip] Essentially, the program would act as if every statement was wrapped up in an unsafeInterleaveIO.
jcc
I do not think so. Consider http://darcs.haskell.org/packages/base/GHC/IOBase.lhs to see unsafeInterleaveIO:
unsafeInterleaveIO m = unsafeDupableInterleaveIO (noDuplicate >> m)

unsafeDupableInterleaveIO (IO m)
  = IO ( \ s -> let r = case m s of (# _, res #) -> res
                in (# s, r #))
And Control.Monad.State.Lazy, which is the non-strict State monad:
newtype State s a = State { runState :: s -> (a, s) }
instance Monad (State s) where
  return a = State $ \s -> (a, s)
  m >>= k  = State $ \s -> let (a, s') = runState m s
                           in runState (k a) s'
And you can see that the data dependence is broken in unsafeInterleaveIO by the _, and not broken in State.Lazy. But neither s nor s' is forced to WHNF. What you can do in State.Lazy is put a bottom into the state. Or you can put an unevaluated thunk into the state, and if it gets replaced later then you never paid to evaluate it. But the sequencing is retained. One could make an unsafeInterleaveIO for state:
interleaveState :: State s a -> State s a
interleaveState (State m) = State ( \ s -> let (a,_) = runState m s
                                           in (a,s) )
Now the dependency chain is broken in the above, due to the _ being ignored. Cheers, Chris
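Chris's two cases can be run side by side. This is a self-contained rendering of the lazy State code he quotes, with a tick counter as the state (my choice of example, for testability): ordinary lazy bind still threads the state, while interleaveState drops the updated state and breaks the chain.

```haskell
newtype State s a = State { runState :: s -> (a, s) }

-- Lazy bind: nothing is forced, yet the state is still threaded through.
bindS :: State s a -> (a -> State s b) -> State s b
bindS m k = State $ \s -> let (a, s') = runState m s
                          in runState (k a) s'

-- Return the current counter and increment it.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

-- Ignore the updated state, as in Chris's interleaveState.
interleaveState :: State s a -> State s a
interleaveState (State m) = State $ \s -> let (a, _) = runState m s
                                          in (a, s)
```

`runState (tick `bindS` \_ -> tick) 0` gives `(1, 2)`: the second tick sees the first one's update even though its value was discarded. But `runState (interleaveState tick `bindS` \_ -> tick) 0` gives `(0, 1)`: the first tick's update is lost, exactly the broken dependency chain described above.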

On 2009-02-05 15:20, Gregg Reynolds wrote:
I think I've just about got monads figured out, but [...]
I don't think anyone has mentioned Simon's "Tackling the awkward squad" paper in this thread. This tutorial, which contains a semantics for a subset of IO, should answer some of the questions raised.

http://research.microsoft.com/en-us/um/people/simonpj/Papers/marktoberdorf/

-- /NAD

On 6 Feb 2009, at 4:20 am, Gregg Reynolds wrote:
However, consider:
getChar >>= \x -> getChar
An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics.
But the result of the first getChar is *NOT* discarded. **As an analogy**, think of the type IO t as (World -> (t, World)) for some hidden type World, and

getChar w = (c, w') -- get a character c out of world w somehow,
                    -- changing w to w' as you go
(f >>= g) w = let (v, w') = f w in (g v) w'

In this analogy, you see that the result of getChar is a value of type IO Char (not of type Char), and that while the character part of the result of performing the result of getChar may be discarded, the "changed world" part is NOT.

On Sun, Feb 8, 2009 at 6:25 PM, Richard O'Keefe
On 6 Feb 2009, at 4:20 am, Gregg Reynolds wrote:
However, consider:
getChar >>= \x -> getChar
An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics.
But the result of the first getChar is *NOT* discarded. **As an analogy**, think of the type IO t as (World -> (t, World)) for some hidden type World, and

getChar w = (c, w') -- get a character c out of world w somehow,
                    -- changing w to w' as you go
(f >>= g) w = let (v, w') = f w in (g v) w'
In this analogy, you see that the result of getChar is a value of type IO Char (not of type Char), and that while the character part of the result of performing the result of getChar may be discarded, the "changed world" part is NOT.
That's an implementation detail. It doesn't account for other possible IO implementations.

My original question was motivated by the observation that a human reader of an expression of the form "e >>= f", on seeing that f is constant, may pull the constant value out of f, disregard e and dispense with the application f e. So can a compiler, unless IO expressions are involved, in which case such optimizations are forbidden. I wondered if that was due to the semantics of >>= or the semantics of IO.

To summarize what I've concluded thanks to the helpful people on haskell-cafe:

The compiler can optimize e >>= f except for any IO expressions in e and f. IO expressions must be evaluated, due to the semantics of IO. They may not be disregarded, memoized, or substituted. IO semantics may be implemented in different ways by different compilers; these implementation techniques are not part of the formal semantics of the language, which may be expressed as above: IO expressions must be evaluated wherever and whenever they occur.

The bind operator >>= enforces sequencing on arguments containing IO expressions, but does not force evaluation. Even bind expressions involving IO may be optimized. For example:

getChar >>= \x -> ...<monster computation>... putChar 'c'

The compiler may discard <monster computation> (assuming it contains no IO expressions), but it must evaluate getChar and putChar (due to IO semantics) in the correct order (due to bind semantics).

Thanks all,
gregg

Just to clarify a little.
If you implement the IO monad in a sane way (as some kind of state
monad or continuation monad) then the compiler can optimize e>>=f even
for the IO monad. The implementation of >>= will ensure the
sequencing of effects in e before effects in f.
The IO monad is less magic than you seem to think it is. :)
-- Lennart

On Mon, Feb 9, 2009 at 7:17 AM, Lennart Augustsson
Just to clarify a little. If you implement the IO monad in a sane way (as some kind of state monad or continuation monad) then the compiler can optimize e>>=f even for the IO monad. The implementation of >>= will ensure the sequencing of effects in e before effects in f.
I think this answers one of my questions about the relation of category theory to Haskell. Bind is an implementation of the Kleisli star, but the latter, being abstract, may encode data dependency but not sequence. The IO implementation of >>= must ensure sequence, regardless of data dependency (e.g. even for putChar 'a' >>= \x -> putChar 'c'). So if we wanted to write a Haskell specification with more formality and detail than the Report, we could say that the IO monad must implement the Kleisli star operator, but that would not be enough, we would also have to require that the implementation ensure sequencing. IOW, Kleisli star implementation plus a constraint on the implementation. Does that sound right?
The IO monad is less magic than you seem to think it is. :)
Any sufficiently advanced technology is isomorphic to magic. ;) (http://www.quotationspage.com/quote/776.html) -gregg

Gregg Reynolds wrote::
My original question was motivated by the observation that a human reader of an expression of the form "e >>= f", on seeing that f is constant, may pull the constant value out of f, disregard e, and dispense with the application f e.
A human reader may well do that, but it would be correct or wrong depending on the definition of >>=. The same is of course true for compilers. By the way, there is no "application f e".

An example where it would be wrong to ignore e:

  sum ([1, 2] >>= const [21])

This expression should evaluate to sum [21, 21] = 42, not sum [21] = 21. There is nothing special with IO or >>=, so there is no need to introduce special cases for IO or >>= in a formal or informal semantics of Haskell.

Tillmann
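The example can be checked directly; a minimal self-contained sketch (the names example1/example2 are just for illustration):

```haskell
-- A constant function under list-bind still runs once per element of
-- the left operand, so e cannot be discarded.
example1 :: Int
example1 = sum (([1, 2] :: [Int]) >>= const [21])   -- sum [21,21]

example2 :: Int
example2 = sum (([1] :: [Int]) >>= const [21])      -- sum [21]

main :: IO ()
main = print (example1, example2)   -- (42,21)
```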

On Mon, Feb 9, 2009 at 11:06 AM, Tillmann Rendel
Gregg Reynolds wrote::
My original question was motivated by the observation that a human reader of an expression of the form "e >>= f", on seeing that f is constant, may pull the constant value out of f, disregard e, and dispense with the application f e.
A human reader may well do that, but it would be correct or wrong depending on the definition of >>=. The same is of course true for compilers. By the way, there is no "application f e".
I guess it would help if I got the notation right. My intended meaning was f* e, where * is the Kleisli star. Sorry about that.
An example where it would be wrong to ignore e:
sum ([1, 2] >>= const [21])
This expression should evaluate to sum [21, 21] = 42, not sum [21] = 21.
Sigh. I hate it when this happens. Just when I thought I had it figured out, it turns out I'm clueless. This is very enlightening and should definitely be included in any monad tutorial. Actually you don't even need "sum" and "const" to demo the point, "[1,2] >>= \x -> [21]" evals to "[21, 21]" in ghci. And I have absolutely no idea why. Very mysterious, the Kleisli star. :( Back to the drawing board! -gregg

2009/2/9 Gregg Reynolds
On Mon, Feb 9, 2009 at 11:06 AM, Tillmann Rendel
wrote: Gregg Reynolds wrote::
My original question was motivated by the observation that a human reader of an expression of the form "e >>= f", on seeing that f is constant, may pull the constant value out of f, disregard e, and dispense with the application f e.
A human reader may well do that, but it would be correct or wrong depending on the definition of >>=. The same is of course true for compilers. By the way, there is no "application f e".
I guess it would help if I got the notation right. My intended meaning was f* e, where * is the Kleisli star. Sorry about that.
You can't assume f* is a constant function just because f is. In fact, in most monads (excluding Identity and Reader) f* is never constant.
An example where it would be wrong to ignore e:
sum ([1, 2] >>= const [21])
This expression should evaluate to sum [21, 21] = 42, not sum [21] = 21.
Sigh. I hate it when this happens. Just when I thought I had it figured out, it turns out I'm clueless. This is very enlightening and should definitely be included in any monad tutorial. Actually you don't even need "sum" and "const" to demo the point, "[1,2] >>= \x -> [21]" evals to "[21, 21]" in ghci. And I have absolutely no idea why. Very mysterious, the Kleisli star. :(
Here are two ways to think about it.
First, you can decompose the Kleisli star into two operations. That
is, for a monad T with multiplication mu,
f* = mu . T f
Or in Haskell notation,
(f =<<) = join . fmap f
For the list monad, join is concat and fmap is map. So we have,
[1,2] >>= \x -> [21]
= concat (map (\x -> [21])) [1,2]
= concat [[21],[21]]
= [21,21]
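This decomposition can be checked with a quick self-contained sketch (extra type annotations added so the element type of [1,2] is unambiguous):

```haskell
import Control.Monad (join)

-- Bind decomposed as join . fmap, specialized to lists, where
-- join = concat and fmap = map.
viaBind, viaJoinFmap, viaConcatMap :: [Int]
viaBind      = ([1, 2] :: [Int]) >>= \_ -> [21]
viaJoinFmap  = (join . fmap (\_ -> [21])) ([1, 2] :: [Int])
viaConcatMap = concat (map (\_ -> [21]) ([1, 2] :: [Int]))

main :: IO ()
main = print (viaBind, viaJoinFmap, viaConcatMap)   -- all three are [21,21]
```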
Second, in the list monad, we have a distributive law relating mplus and >>=.
mplus x y >>= f = mplus (x >>= f) (y >>= f)
We can rewrite [1,2] >>= \x -> [21] as
mplus (return 1) (return 2) >>= \x -> return 21
then we can distribute >>=,
mplus (return 1 >>= \x -> return 21) (return 2 >>= \x -> return 21)
then by the monad laws,
mplus (return 21) (return 21)
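The distribution steps above can be sketched and checked for the list monad, where mplus is (++) and return x is [x]:

```haskell
import Control.Monad (mplus)

-- The constant function from the running example.
f :: Int -> [Int]
f _ = [21]

-- Left side: bind over the combined choices.
undistributed :: [Int]
undistributed = (return 1 `mplus` return 2) >>= f

-- Right side: distribute >>= over each choice, then combine.
distributed :: [Int]
distributed = (return 1 >>= f) `mplus` (return 2 >>= f)

main :: IO ()
main = print (undistributed, distributed)   -- both [21,21]
```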
--
Dave Menendez

Gregg Reynolds wrote:
Tillmann Rendel wrote:
An example where it would be wrong to ignore e:
sum ([1, 2] >>= const [21])
This expression should evaluate to sum [21, 21] = 42, not sum [21] = 21.
Sigh. I hate it when this happens. Just when I thought I had it figured out, it turns out I'm clueless. This is very enlightening and should definitely be included in any monad tutorial. Actually you don't even need "sum" and "const" to demo the point, "[1,2] >>= \x -> [21]" evals to "[21, 21]" in ghci. And I have absolutely no idea why. Very mysterious, the Kleisli star. :(
Just walk through the evaluation step by step, it's not so mysterious.

  [1,2] >>= \x -> [21]
  == {CT definition of bind}
     join . fmap (\x -> [21]) $ [1,2]
  == join [(\x -> [21]) 1, (\x -> [21]) 2]
  == join [[21],[21]]
  == [21,21]

Or if you prefer to use the Haskell function definitions instead of the category theory definitions (it's all the same):

  [1,2] >>= \x -> [21]
  == concatMap (\x -> [21]) [1,2]
  == concat . map (\x -> [21]) $ [1,2]
  == concat [(\x -> [21]) 1, (\x -> [21]) 2]
  == concat [[21],[21]]
  == [21,21]

This is exactly the point I was raising earlier. The "statefulness" of monads has nothing to do with IO, but every monad has some of it. In general it is wrong to throw it away just because the function passed to bind happens to ignore the value associated with it. There are some cases where that can be correct, but in general it is not.

For lists, we can envision the statefulness as a path through a decision tree. To get the intuition right, it's easiest to pretend that we never call join (or that join does nothing). If we don't call join, eventually after a number of binds or maps we'll end up with some value of type [[...[a]...]]. We can draw that value out as a B-tree where each level has a branch of whatever arity it needs. In this B-tree, every leaf is associated with a unique path through the tree, and therefore the leaves can be distinguished.

The reason the above example works the way it does is that the initial list has two leaves, each associated with its unique path through this tree of choices. The function passed into bind is a continuation of sorts. So, given that we can non-deterministically choose either of the paths in the original tree, for each choice we must then continue with (\x -> [21]) applied to the seed value for that choice (1 or 2, as appropriate).
Because we had two choices initially, and from there we have only one choice, we will have 2*1 choices in the end:

     /\
    /  \
  (1)  (2)
   |    |
   |    |
   21   21

If we imagine a finite state automaton, we might think that the two 21s could be collapsed together since they represent the same "state". But that is not so: the list monad is for *paths* through an FSA, not for states in an FSA. (For states in an FSA we'd want the set monad instead.)

Of course for the list monad we don't actually keep around the decision tree. In fact lists generalize over all possible decision trees which could yield the given distribution of path-endpoints, so it's not quite the way envisioned above.

-- Live well, ~wren
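The paths-versus-states distinction can be sketched like this (Data.Set is not itself a Monad in base, because of its Ord constraint, so the collapse to states is done after the fact):

```haskell
import qualified Data.Set as Set

-- One result per *path* through the decision tree: two paths,
-- both ending at 21, so 21 appears twice.
paths :: [Int]
paths = ([1, 2] :: [Int]) >>= \_ -> [21]

-- One result per distinct *state*: the two paths collapse.
states :: Set.Set Int
states = Set.fromList paths

main :: IO ()
main = print (paths, Set.toList states)   -- ([21,21],[21])
```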

2009/2/9 Gregg Reynolds
On Sun, Feb 8, 2009 at 6:25 PM, Richard O'Keefe
wrote: On 6 Feb 2009, at 4:20 am, Gregg Reynolds wrote:
However, consider:
getChar >>= \x -> getChar
An optimizer can see that the result of the first getChar is discarded and replace the entire expression with one getChar without changing the formal semantics.
But the result of the first getChar is *NOT* discarded. **As an analogy**, think of the type IO t as (World -> (t,World)) for some hidden type World, and

  getChar w = (c, w')   -- get a character c out of world w somehow,
                        -- changing w to w' as you go
  (f >>= g) w = let (v,w') = f w in (g v) w'
In this analogy, you see that the result of getChar is a value of type IO Char (not of type Char), and that while the character part of the result of performing the result of getChar may be discarded, the "changed world" part is NOT.
That's an implementation detail. It doesn't account for other possible IO implementations.
My original question was motivated by the observation that a human reader of an expression of the form "e >>= f", on seeing that f is constant, may pull the constant value out of f, disregard e, and dispense with the application f e. So can a compiler, unless IO expressions are involved, in which case such optimizations are forbidden. I wondered if that was due to the semantics of >>= or the semantics of IO.
Neither. It's because the expression "e >>= f" is not "f e". As far as
Haskell is concerned, >>= is just a higher-order function. You can't
arbitrarily replace "foo bar (const baz)" with "baz", unless it turns
out that foo = \x y -> y x.
Perhaps you're thinking of the monad law,
forall x f. return x >>= f = f x
The presence of "return" is important. Among other things, there is no
x such that getChar = return x. That's because getChar has (or,
rather, causes when interpreted by the RTS) side-effects, whereas
"return x" is pure.
Here's some code you can try on your own:
data IO a = Return a | Get (Char -> IO a) | Put Char (IO a)
instance Monad IO where
  return = Return
  Return a >>= f = f a
  Get k    >>= f = Get (\c -> k c >>= f)
  Put c k  >>= f = Put c (k >>= f)
getChar :: IO Char
getChar = Get (\c -> Return c)
putChar :: Char -> IO ()
putChar c = Put c (Return ())
Now, if the compiler sees "getChar >>= \_ -> getChar", it *can*
optimize out the >>=. But the result would be "Get (\_ -> Get (\c ->
Return c))", which is not equivalent to getChar. Neither IO semantics
nor monad semantics are involved.
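One nice property of this representation is that it can be interpreted purely, which makes the difference observable. A hedged sketch (the type is renamed Action, and `run` is a hypothetical helper, so nothing clashes with the Prelude's IO):

```haskell
-- A free-monad-style action type, as in the message above.
data Action a = Return a | Get (Char -> Action a) | Put Char (Action a)

-- Bind, written as a plain function to avoid the Monad class boilerplate.
bind :: Action a -> (a -> Action b) -> Action b
bind (Return a) f = f a
bind (Get k)    f = Get (\c -> k c `bind` f)
bind (Put c k)  f = Put c (k `bind` f)

getC :: Action Char
getC = Get Return

-- Interpret an action against an input string; return the final result
-- and how many characters were consumed along the way.
run :: Action a -> String -> (a, Int)
run (Return a) _      = (a, 0)
run (Get k)    (c:cs) = let (a, n) = run (k c) cs in (a, n + 1)
run (Get _)    []     = error "ran out of input"
run (Put _ k)  cs     = run k cs

main :: IO ()
main = print (run (getC `bind` \_ -> getC) "ab")
-- The discarded first getC still consumes a character: ('b',2)
```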
--
Dave Menendez

On 10 Feb 2009, at 1:35 am, Gregg Reynolds wrote:
My original question was motivated by the observation that a human reader of an expression of the form "e >>= f", on seeing that f is constant, may pull the constant value out of f, disregard e, and dispense with the application f e.
There isn't any "application f e". Any human reader who does that is simply WRONG to do so.
So can a compiler, unless IO expressions are involved, in which case such optimizations are forbidden.
Remember that (e >>= f) means (>>=) e f. When/whether it is sound to replace (>>=) e f by (f undefined) depends on the semantics of >>=. But >>= is an operation of a type class (the Monad type class). If we come across

  g :: Monad m => m a -> b -> m b
  g e r = e >>= \x -> return r

then the compiler doesn't know what m is, so it doesn't know which, if any, of its arguments >>= depends on.

  data Trivial x = Trivial x

  instance Monad Trivial where
    return = Trivial
    (>>=) (Trivial a) f = f a

If we now try

  h :: Trivial a -> b -> Trivial b
  h e r = g e r

then the compiler can inline the call to g

  h e r = e >>= \x -> return r

and inline the calls to >>= and return

  h (Trivial a) r = (\x -> Trivial r) a

then simplify to

  h (Trivial _) r = Trivial r

You might have thought that the result doesn't depend on the first argument to h, but as written, it does. This version of h is strict in its first argument, so even though there are NO side effects whatever, the first argument will be evaluated to weak head normal form.

Of course, since there is only one constructor for Trivial, it could be a newtype. If I change that declaration to

  newtype Trivial x = Trivial x

then the pattern match is implicitly irrefutable,

  h ~(Trivial _) r = Trivial r

and if the compiler doesn't simplify ~(NTC _) to _ when NTC is a newtype constructor, I'll be surprised, so it's like

  h _ r = Trivial r

and the function will *NOT* be strict in its first argument.

Here you see that the soundness or otherwise of eliminating the first argument of >>= when the second argument doesn't depend on the eventual result has nothing to do with IO as such and certainly nothing to do with side effects. It's really all about whether the version of >>= used is strict in its first argument or not.
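The data/newtype strictness difference can be observed directly; a small sketch (the names hD/hN are illustrative, chosen so both versions can coexist in one module):

```haskell
data    TrivialD x = TrivialD x   -- real constructor: matching it forces
newtype TrivialN x = TrivialN x   -- coercion only: matching forces nothing

hD :: TrivialD a -> b -> TrivialD b
hD (TrivialD _) r = TrivialD r    -- strict in its first argument

hN :: TrivialN a -> b -> TrivialN b
hN (TrivialN _) r = TrivialN r    -- lazy: the newtype match is irrefutable

main :: IO ()
main = do
  -- hD undefined 42 would raise an exception here; the newtype
  -- version is perfectly happy with undefined.
  let TrivialN r = hN undefined (42 :: Int)
  print r
```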
I wondered if that was due to the semantics of >>= or the semantics of IO.
It's about the semantics of IO's implementation of >>=.
To summarize what I've concluded thanks to the helpful people on haskell-cafe:
The compiler can optimize e >>= f except for any IO expressions in e and f.
False. There is nothing unusual about IO here.
IO expressions must be evaluated, due to the semantics of IO.
False.
They may not be disregarded, memoized, or substituted.
False. In Haskell there is nothing whatever unusual about expressions of type IO something. They *MAY* be disregarded: let n = length [getChar, putChar 'f'] can be optimised to let n = 2. They MAY be memoised. They MAY be substituted.
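The length example, adapted slightly so the list is well-typed (getChar :: IO Char and putChar 'f' :: IO () have different result types, so a homogeneous list of IO () actions is used instead):

```haskell
-- IO values are ordinary values: they can be put in a list, counted,
-- and discarded, and none of them is performed unless chained into main.
actions :: [IO ()]
actions = [putStr "f", putStrLn "never performed"]

n :: Int
n = length actions   -- evaluates to 2 without running either action

main :: IO ()
main = print n   -- prints only 2; neither action's output appears
```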
IO semantics may be implemented in different ways by different compilers;
True. But then, so may Int semantics.
these implementation techniques are not part of the formal semantics of the language, which may be expressed as above: IO expressions must be evaluated wherever and whenever they occur.
False. Utterly false.
The bind operator >>= enforces sequencing on arguments containing IO expressions, but does not force evaluation.
False. For the special case of IO (and for other similar monads),
>>= enforces sequencing BY forcing evaluation (more precisely, by being strict in its first argument).
Even bind expressions involving IO may be optimized. For example:
getChar >>= \x -> ...<monster computation>... putChar 'c'
The compiler may discard <monster computation> (assuming it contains no IO expressions), but it must evaluate getChar and putChar (due to IO semantics) in the correct order (due to bind semantics).
You are conflating *evaluating* (by the Haskell evaluator) with *performance* (by the Haskell environment). IO *values* are determined by the Haskell evaluator exactly like any other values; IO *actions* are performed by the Haskell environment as part of the process of forcing the lazy evaluator to do some work. In your fragmentary example, <monster computation> may be discarded EVEN IF it contains IO expressions, it's only if they are linked into the IO chain using >> and/or >>= that the environment will perform their values.
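The evaluating/performing distinction can be made concrete with seq (a hedged sketch; `discarded` is an illustrative name):

```haskell
discarded :: IO ()
discarded = putStrLn "this line is never printed"

main :: IO ()
main =
  -- seq forces `discarded` to be *evaluated* (to WHNF), but since the
  -- action is never chained into main's IO sequence with >> or >>=,
  -- it is never *performed*.
  discarded `seq` putStrLn "evaluated but not performed"
```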
Thanks all,
gregg

On Mon, Feb 9, 2009 at 8:37 PM, Richard O'Keefe
There isn't any "application f e". Any human reader who does that is simply WRONG to do so.
Sorry, should have written f*
In your fragmentary example, <monster computation> may be discarded EVEN IF it contains IO expressions, it's only if they are linked into the IO chain using >> and/or >>= that the environment will perform their values.
Thanks. I see the error of my ways. So, IO expressions must be evaluated if they are in the chain leading to main. -gregg

On 10 Feb 2009, at 5:07 pm, Gregg Reynolds wrote:
Thanks. I see the error of my ways. So, IO expressions must be evaluated if they are in the chain leading to main.
We need some standard terminology to distinguish between *evaluating* an expression and *performing* the result of an expression of type IO something. An IO expression that is passed to a function at a strict position must be evaluated whether the result is performed or not. An IO expression whose result will be performed must be evaluated before that performance can take place. (Dash it, this really is getting us into White Knight land.)

On Mon, Feb 9, 2009 at 10:17 PM, Richard O'Keefe
On 10 Feb 2009, at 5:07 pm, Gregg Reynolds wrote:
Thanks. I see the error of my ways. So, IO expressions must be evaluated if they are in the chain leading to main.
We need some standard terminology to distinguish between *evaluating* an expression and *performing* the result of an expression of type IO something.
Indeed. Wrote a blog article about this proposing a semiotic approach: http://syntax.wikidot.com/blog:4
An IO expression that is passed to a function at a strict position must be evaluated whether the result is performed or not.
An IO expression whose result will be performed must be evaluated before that performance can take place.
Is the result of evaluation a thunk/suspension/closure?
(Dash it, this really is getting us into White Knight land.)
What's White Knight land?

The result of an evaluation is always in WHNF (weak head normal form).
So if it's a function, it's been evaluated to \ x -> ..., but no
evaluation under lambda.
Similarly, if it's a data type it has been evaluated so the outermost
form is a constructor, but no evaluation inside the constructor.
The terms thunk/suspension/closure usually refer to implementation
rather than semantics.
But in terms of an implementation, the answer is no. After evaluation
you will have none of those as the outermost thing.
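The WHNF point can be sketched: seq evaluates only to the outermost constructor, so a bottom *inside* the constructor is left untouched:

```haskell
-- A pair whose second component would blow up if evaluated.
pair :: (Int, Int)
pair = (1 + 1, error "inside the constructor, untouched by seq")

shallow :: Int
shallow = pair `seq` fst pair   -- forcing to WHNF stops at the (,) constructor

main :: IO ()
main = print shallow   -- 2; the error is never reached
```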
-- Lennart
2009/2/10 Gregg Reynolds
Is the result of evaluation a thunk/suspension/closure?

On 11 Feb 2009, at 2:22 am, Gregg Reynolds wrote:
Is the result of evaluation a thunk/suspension/closure?
As T. S. Eliot wrote, "Teach us to care and not to care Teach us to sit still." There are some things it is better not to care about and this is one of them.
(Dash it, this really is getting us into White Knight land.)
What's White Knight land?
From Alice in Wonderland. ``You are sad,'' the Knight said in an anxious tone: ``let me sing you a song to comfort you.'' ``Is it very long?'' Alice asked, for she had heard a good deal of poetry that day. ``It's long,'' said the Knight, ``but it's very, {\it very} beautiful. Everybody that hears me sing it---either it brings tears to their eyes, or else---'' ``Or else what?'' said Alice, for the Knight had made a sudden pause. ``Or else it doesn't, you know. The name of the song is called {\it `Haddocks' Eyes.'\/}{}'' ``Oh, that's the name of the song, is it?'' Alice said, trying to feel interested. ``No, you don't understand,'' the Knight said, looking a little vexed. ``That's what the name is {\it called}. The name really {\it is `The Aged Aged Man.'\/}{}'' ``Then I ought to have said, `That's what the {\it song} is called?'' Alice corrected herself. ``No, you oughtn't: that's quite another thing! The {\it song} is called {\it `Ways and Means'\/}: but that's only what it's {\it called}, you know.'' ``Well, what {\it is} the song, then?'' said Alice, who was by this time completely bewildered. ``I was coming to that,'' the Knight said. ``The song really {\it is `A-sitting On a Gate'\/}: and the tune's my own invention.''
participants (21)
-
Achim Schneider
-
Andrew Wagner
-
Bulat Ziganshin
-
ChrisK
-
David Menendez
-
Derek Elkins
-
Gleb Alexeyev
-
Gregg Reynolds
-
Jake McArthur
-
Jeremy Shaw
-
Jonathan Cast
-
Ketil Malde
-
Lennart Augustsson
-
mail@justinbogner.com
-
Miguel Mitrofanov
-
Nils Anders Danielsson
-
Peter Verswyvelen
-
Richard O'Keefe
-
Roman Cheplyaka
-
Tillmann Rendel
-
wren ng thornton