>> and sequencing [newbie]

so... this is likely a question based on serious misunderstandings, but can anyone help me understand the exact mechanism by which monads enforce sequencing? Specifically, I'm confused by the >> operator. If I understand things properly, f >> g expands to something like:

    f >>= \_ -> g

What I'm missing is how the evaluation of f is ever forced under lazy evaluation. Since the result is never used, doesn't it just stay as a completely unevaluated thunk?

Come to think of it... how is it that some IO actions sequence completely, while others manage to work in a lazy manner? My suspicion is that somehow the order of evaluation in Haskell gives the outermost expression a "first crack" at evaluation in all circumstances (e.g. http://users.aber.ac.uk/afc/stricthaskell.html#cps), but that it somehow stops short of a forced deep sequencing... Which is all to say, I have no idea how the magic happens.

-David

On Sun, Apr 15, 2007 at 08:04:41PM -0400, David Powers wrote:
> so... this is likely a question based on serious misunderstandings, but can anyone help me understand the exact mechanism by which monads enforce sequencing? Specifically, I'm confused by the >> operator. If I understand things properly, f >> g expands to something like:
>
>     f >>= \_ -> g
>
> What I'm missing is how the evaluation of f is ever forced under lazy evaluation. Since the result is never used, doesn't it just stay as a completely unevaluated thunk? Come to think of it... how is it that some IO actions sequence completely, while others manage to work in a lazy manner? My suspicion is that somehow the order of evaluation in Haskell gives the outermost expression a "first crack" at evaluation in all circumstances (e.g. http://users.aber.ac.uk/afc/stricthaskell.html#cps), but that it somehow stops short of a forced deep sequencing... Which is all to say, I have no idea how the magic happens.
The values that are passed around monadically aren't the whole story. E.g., in the GHC implementation, we have (slightly simplified):

    newtype IO a = IO (Thread -> (Thread, a))

So even though you throw away the a, the Thread *is* demanded.

Stefan
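Stefan's point can be made concrete with a toy model. Everything below is invented for illustration (the `Toy` type, the `step` action, and the use of an `Int` in place of the thread/world token); real GHC IO uses `State# RealWorld` and genuine side effects, so this is only a sketch of the threading idea. Because (>>=) pattern-matches on the pair produced by the first action, that action must run before the second one can receive the token, even when its result value is discarded:

```haskell
import Debug.Trace (trace)

-- 'Toy' is an invented stand-in for GHC's IO; the Int plays the role of
-- the Thread/world token that (>>=) threads from action to action.
newtype Toy a = Toy (Int -> (Int, a))

runToy :: Toy a -> (Int, a)
runToy (Toy f) = f 0

instance Functor Toy where
  fmap f (Toy g) = Toy (\w -> let (w', a) = g w in (w', f a))

instance Applicative Toy where
  pure a = Toy (\w -> (w, a))
  Toy f <*> Toy g = Toy (\w -> let (w1, h) = f w
                                   (w2, a) = g w1
                               in (w2, h a))

instance Monad Toy where
  -- Pattern-matching on the pair demands the first action's output,
  -- so 'm' must run before 'k' ever sees the token.
  Toy m >>= k = Toy (\w -> case m w of
                             (w', a) -> case k a of
                                          Toy n -> n w')

-- A "side-effecting" step: trace announces when it is actually forced.
step :: String -> Toy Int
step msg = Toy (\w -> trace msg (w + 1, w))

-- runToy (step "first" >> step "second")  ==  (2, 1),
-- and trace prints "first" before "second": the token is demanded
-- even though the first action's Int result is thrown away.
```

The design point is that the value of type `a` may stay lazy; it is the token, not the value, whose demand imposes the order.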

Hello,
On 4/15/07, David Powers wrote:

> so... this is likely a question based on serious misunderstandings, but can anyone help me understand the exact mechanism by which monads enforce sequencing?
Monads do not enforce sequencing. In general, data dependencies enforce sequencing (i.e. if expression x depends upon expression y then expression y will have to be evaluated first). Consider:

    let x = case y of
              Just y' -> f y'
              Nothing -> g
        y = some code
    in more stuff

Here y must be evaluated before x because x needs to look at y in order to compute.
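This data-dependency point can be watched directly with Debug.Trace (the names y and x and the trace messages here are just for illustration): demanding x forces the case scrutinee y first, so the traces fire in dependency order.

```haskell
import Debug.Trace (trace)

y :: Maybe Int
y = trace "y evaluated" (Just 10)

x :: Int
x = case y of          -- x must inspect y, so y is forced first
      Just y' -> trace "x evaluated" (y' + 1)
      Nothing -> 0

-- Demanding x prints "y evaluated" before "x evaluated" on stderr,
-- then yields 11.
```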
> Specifically, I'm confused by the >> operator. If I understand things properly, f >> g expands to something like:
>
>     f >>= \_ -> g
>
> What I'm missing is how the evaluation of f is ever forced under lazy evaluation. Since the result is never used, doesn't it just stay as a completely unevaluated thunk?
(>>=) is an overloaded function. Some instances of it will cause f to be evaluated, others won't. Consider the State monad:

    instance Monad (State s) where
        return a = State $ \s -> (a, s)
        m >>= k  = State $ \s -> case runState m s of
                                   (a, s') -> runState (k a) s'

Note that (>>=) causes m to be evaluated (up to a pair) before evaluating k, because (>>=) needs to look at the result of m.

An example of a monad in which (>>=) doesn't force evaluation of the first argument before the second is the Identity monad:

    instance Monad Identity where
        return a = Identity a
        m >>= k  = k (runIdentity m)

Note that (>>=) here applies k without ever forcing m itself.

-Jeff
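The State instance above can be run as-is if we define the type locally. The sketch below is a minimal hand-rolled State (not the one from the `mtl`/`transformers` packages, whose strictness details differ), with the Functor and Applicative instances that modern GHC requires, plus a hypothetical `tick` action to make the sequencing visible:

```haskell
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f m = State $ \s -> let (a, s') = runState m s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  mf <*> ma = State $ \s -> let (f, s1) = runState mf s
                                (a, s2) = runState ma s1
                            in (f a, s2)

instance Monad (State s) where
  -- The case forces the first computation's (result, state) pair,
  -- which is exactly what sequences one action before the next.
  m >>= k = State $ \s -> case runState m s of
                            (a, s') -> runState (k a) s'

-- Return the current counter and increment it.
tick :: State Int Int
tick = State $ \s -> (s, s + 1)

-- runState (tick >> tick >> tick) 0  ==  (2, 3): even though the first
-- two results are discarded, their state updates still happen in order,
-- because each (>>=) must look at the previous pair to get the new state.
```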

Ah... so the secret is in the hidden variables. On some level I am
beginning to fear that Monads resurrect some of the scariest aspects of
method overriding from my OO programming days. Do you (all) ever find that
the ever changing nature of >>= makes code hard to read?
On 4/15/07, jeff p wrote:

> Hello,
>
> On 4/15/07, David Powers wrote:
>
> > so... this is likely a question based on serious misunderstandings, but can anyone help me understand the exact mechanism by which monads enforce sequencing?
> Monads do not enforce sequencing. In general, data dependencies enforce sequencing (i.e. if expression x depends upon expression y then expression y will have to be evaluated first). Consider:
>
>     let x = case y of
>               Just y' -> f y'
>               Nothing -> g
>         y = some code
>     in more stuff
>
> Here y must be evaluated before x because x needs to look at y in order to compute.
> > Specifically, I'm confused by the >> operator. If I understand things properly, f >> g expands to something like:
> >
> >     f >>= \_ -> g
> >
> > What I'm missing is how the evaluation of f is ever forced under lazy evaluation. Since the result is never used, doesn't it just stay as a completely unevaluated thunk?
> (>>=) is an overloaded function. Some instances of it will cause f to be evaluated, others won't. Consider the State monad:
>
>     instance Monad (State s) where
>         return a = State $ \s -> (a, s)
>         m >>= k  = State $ \s -> case runState m s of
>                                    (a, s') -> runState (k a) s'
>
> Note that (>>=) causes m to be evaluated (up to a pair) before evaluating k, because (>>=) needs to look at the result of m.
>
> An example of a monad in which (>>=) doesn't force evaluation of the first argument before the second is the Identity monad:
>
>     instance Monad Identity where
>         return a = Identity a
>         m >>= k  = k (runIdentity m)
>
> Note that (>>=) here applies k without ever forcing m itself.
>
> -Jeff

david:
> Ah... so the secret is in the hidden variables. On some level I am beginning to fear that Monads resurrect some of the scariest aspects of method overriding from my OO programming days. Do you (all) ever find that the ever changing nature of >>= makes code hard to read?

You always know which monad you're in, though, since it's in the type. And the scary monads aren't terribly common anyway.

-- Don

Donald Bruce Stewart wrote:
> david:
> > Ah... so the secret is in the hidden variables. On some level I am beginning to fear that Monads resurrect some of the scariest aspects of method overriding from my OO programming days. Do you (all) ever find that the ever changing nature of >>= makes code hard to read?
>
> You always know which monad you're in, though, since it's in the type. And the scary monads aren't terribly common anyway.
Also, the monad laws impose a level of sanity that most OO frameworks do not, right?

clifford.beshers:
> Donald Bruce Stewart wrote:
> > david:
> > > Ah... so the secret is in the hidden variables. On some level I am beginning to fear that Monads resurrect some of the scariest aspects of method overriding from my OO programming days. Do you (all) ever find that the ever changing nature of >>= makes code hard to read?
> >
> > You always know which monad you're in, though, since it's in the type. And the scary monads aren't terribly common anyway.
>
> Also, the monad laws impose a level of sanity that most OO frameworks do not, right?
Ah yes, and we have the 3 laws of monads. If you break these, the monad police will come and lock you up.

-- Don
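For the record, the three laws Don is joking about are left identity, right identity, and associativity. They can be spot-checked concretely; here is one such check in the Maybe monad, with arbitrarily chosen m, k, and h (a few examples are evidence, not a proof, which would have to cover all values):

```haskell
-- Arbitrary test data for spot-checking the laws in Maybe.
k, h :: Int -> Maybe Int
k x = Just (x + 1)
h x = Just (x * 2)

m :: Maybe Int
m = Just 3

leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = (return 3 >>= k) == k 3                       -- return a >>= k  ==  k a
rightIdentity = (m >>= return) == m                           -- m >>= return    ==  m
associativity = ((m >>= k) >>= h) == (m >>= \x -> k x >>= h)  -- regrouping (>>=)
```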

Like point-free notation, I worry about what somebody somewhere is doing to
it.... :)

The existence of a well understood community standard (add a type signature
to your functions and only define monad operators that obey the laws) helps
a lot - but both pieces are optional. I suppose the shorter and more
declarative nature of Haskell functions makes it a less urgent point though.
On 4/16/07, Donald Bruce Stewart wrote:

> clifford.beshers:
> > Donald Bruce Stewart wrote:
> > > david:
> > > > Ah... so the secret is in the hidden variables. On some level I am beginning to fear that Monads resurrect some of the scariest aspects of method overriding from my OO programming days. Do you (all) ever find that the ever changing nature of >>= makes code hard to read?
> > >
> > > You always know which monad you're in, though, since it's in the type. And the scary monads aren't terribly common anyway.
> >
> > Also, the monad laws impose a level of sanity that most OO frameworks do not, right?
>
> Ah yes, and we have the 3 laws of monads. If you break these, the monad police will come and lock you up.
>
> -- Don
participants (5)

- Clifford Beshers
- David Powers
- dons@cse.unsw.edu.au
- jeff p
- Stefan O'Rear