A question about State Monad and Monad in general

Hi, I looked at State Monad yesterday and this question popped into my mind. From what I gather, State Monad essentially allows the use of Haskell's do notation to "invisibly" pass around a state. So, does the use of Monadic style fetch us more than syntactic convenience? Again, if I understand correctly, in Mutable Arrays also, is anything really getting modified in place? If not, what is the real reason for better efficiency? -- Regards, Kashyap

On Thursday 15 July 2010 18:02:47, C K Kashyap wrote:
Hi, I looked at State Monad yesterday and this question popped into my mind.
From what I gather State Monad essentially allows the use of Haskell's do
notation to "invisibly" pass around a state. So, does the use of Monadic style fetch us more than syntactic convenience?
Better refactorability. If you're using monadic style, changing from, say, State Thing to StateT Thing OtherMonad or from StateT Thing FirstMonad to StateT Thing SecondMonad typically requires only a few changes. Explicit state-passing usually requires more changes.
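(A small sketch of that refactorability point, written against mtl's MonadState class; tick is a made-up example:)

```haskell
import Control.Monad.State

-- Written once against the MonadState interface...
tick :: MonadState Int m => m Int
tick = do
  n <- get
  put (n + 1)
  return n

-- ...and run in two different stacks without touching tick itself.
runPure :: (Int, Int)
runPure = runState tick 0        -- => (0, 1)

runOverIO :: IO (Int, Int)
runOverIO = runStateT tick 0     -- same code, different backing monad
```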
Again, if I understand correctly, in Mutable Arrays also, is anything getting modified in place really?
Yes. If you write to a mutable array, you really write to the memory location without extra copying.
If not, what is the real reason for better efficiency?
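(A minimal sketch of such an in-place write, using Data.Array.IO; the function and its name are invented for illustration:)

```haskell
import Data.Array.IO

-- Overwrite one slot of a mutable unboxed array; no copy of the array is made.
bumpSlot :: IO Int
bumpSlot = do
  arr <- newArray (0, 9) 0 :: IO (IOUArray Int Int)
  writeArray arr 3 42   -- writes directly to the memory location
  readArray arr 3
```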

Thanks Daniel,
Better refactorability.
If you're using monadic style, changing from, say, State Thing to StateT Thing OtherMonad
or from StateT Thing FirstMonad to StateT Thing SecondMonad
typically requires only a few changes. Explicit state-passing usually requires more changes.
So, performance gain (runtime/memory) is not a side effect of Monadic style right?
Yes. If you write to a mutable array, you really write to the memory location without extra copying.
How's this done? Can it be done in Haskell without FFI? -- Regards, Kashyap

C K Kashyap wrote:
Thanks Daniel,
Better refactorability.
If you're using monadic style, changing from, say, State Thing to StateT Thing OtherMonad
or from StateT Thing FirstMonad to StateT Thing SecondMonad
typically requires only a few changes. Explicit state-passing usually requires more changes.
So, performance gain (runtime/memory) is not a side effect of Monadic style right?
Generally speaking, right: monadic style won't improve performance. However, using monadic notation may allow you to change the backing monad to a different representation of "the same" computation, thereby giving asymptotic improvements[1]. However, the improvements themselves are coming from the different representation; the use of monadic notation just allows you to switch the representation without altering the code. [1] http://www.iai.uni-bonn.de/~jv/mpc08.pdf -- Live well, ~wren
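(A miniature of the idea in that paper, without any monads: representing a list by the function (xs ++) turns appends into function composition, repairing the asymptotics of a naive reverse while the recursion itself is unchanged. Names are made up for illustration:)

```haskell
-- "Difference lists": a list is represented by the function that prepends it.
type DList a = [a] -> [a]

fromList :: [a] -> DList a
fromList = (++)

toList :: DList a -> [a]
toList d = d []

-- Naive reverse: left-nested (++) makes this O(n^2).
revSlow :: [a] -> [a]
revSlow []       = []
revSlow (x : xs) = revSlow xs ++ [x]

-- The same recursion against the other representation: O(n),
-- because "append" is now cheap function composition.
revFast :: [a] -> [a]
revFast = toList . go
  where
    go []       = id
    go (x : xs) = go xs . fromList [x]
```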

Thanks Wren,
Thanks Dave ... a quick question though: could you point me to an example where I could build up my own in-place modifiable data structure in Haskell without using any standard library stuff? For example, if I wanted an image representation such as this: [[(Int,Int,Int)]] - basically a list of lists of 3-tuples (rgb) - and wanted to do in-place replacement to set the pixel values, how could I go about it?
On Fri, Jul 16, 2010 at 9:47 AM, wren ng thornton
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
-- Regards, Kashyap

Hi,
On 16/07/10 07:35, C K Kashyap wrote:
Haskell without using any standard library stuff?
For example, if I wanted an image representation such as this [[(Int,Int,Int)]] - basically a list of lists of 3-tuples (rgb) and wanted to do in place replacement to set the pixel values, how could I go about it.
Break the problem down into parts:
1. replace a single pixel
2. modify an element in a list at a given index using a given modification function
3. modify an element in a list of lists at a pair of given indices using a given replacement function
I had a stab at it. Without any standard library stuff I couldn't figure out how to print any output, though - so who knows if the code I wrote does what I intended.
The point is, it's libraries all the way down - so use them, study them where necessary for understanding, and write them and share them when you find something missing.
Claude -- http://claudiusmaximus.goto10.org

Hi Claude,
Thanks a lot for the example. Btw, is this where you are trying in-place replacement?

modifyAtIndex :: (a -> a) -> Nat -> List a -> List a
modifyAtIndex f i as =
  let ias = zip nats as
      g (Tuple2 j a) = case i `eq` j of
        False -> a
        True  -> f a
  in map g ias

modifyAtIndex2 :: (a -> a) -> Nat -> Nat -> List (List a) -> List (List a)
modifyAtIndex2 f i j = modifyAtIndex (modifyAtIndex f i) j

Doesn't modifyAtIndex return a new list?
On Fri, Jul 16, 2010 at 2:28 PM, Claude Heiland-Allen <claudiusmaximus@goto10.org> wrote:
-- Regards, Kashyap
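(The suspicion is right: the "update" builds a new list and the original is untouched. A version of the same functions over standard lists makes that easy to play with; setPixel is a hypothetical wrapper added for illustration:)

```haskell
-- Pure update: returns a NEW list (sharing the unchanged elements);
-- the input list is never modified.
modifyAtIndex :: (a -> a) -> Int -> [a] -> [a]
modifyAtIndex f i xs = [ if j == i then f x else x | (j, x) <- zip [0 ..] xs ]

modifyAtIndex2 :: (a -> a) -> Int -> Int -> [[a]] -> [[a]]
modifyAtIndex2 f i j = modifyAtIndex (modifyAtIndex f i) j

-- Set the pixel at column x of row y in a [[rgb]] image.
setPixel :: Int -> Int -> (Int, Int, Int) -> [[(Int, Int, Int)]] -> [[(Int, Int, Int)]]
setPixel x y rgb = modifyAtIndex2 (const rgb) x y
```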

On Thu, Jul 15, 2010 at 9:02 AM, C K Kashyap
Hi, I looked at State Monad yesterday and this question popped into my mind. From what I gather State Monad essentially allows the use of Haskell's do notation to "invisibly" pass around a state. So, does the use of Monadic style fetch us more than syntactic convenience? Again, if I understand correctly, in Mutable Arrays also, is anything getting modified in place really? If not, what is the real reason for better efficiency?
Syntactic convenience is important, and allows for the separation of logic into different modular pieces. The do notation is totally independent of the Monad in question's behavior. You can even roll your own Monad if you wish, and the do notation will work. Consider if you had a big data structure that you had to pass around all the different versions of in a series of functions. Then consider what happens when you decide that some of your data has to change later as your program evolves over time. Having to change the state that's being threaded around is quite a pain when it can be done transparently at the Monad definition level.

Monads let you define the stuff that's going on between statements of do syntax. Some say it's the equivalent of overriding a fictional ";" operator for sequencing imperative-looking code statements. There's much power to be had here, and because of this analogy, it's easy to see why Monads are a good place to implement an embedded domain-specific sublanguage in Haskell. I also think that because you're writing the glue between statements when you implement a Monad, that could be why some people (myself included) sometimes have a difficult time thinking of how to implement a particular Monad.

Monads also allow you to package up pure data values with some computational activities that might have side effects and logically separate them. This allows one to unwrap a value from a monadic environment and pass it to pure functions for computation, then re-inject it back into the monadic environment. Monads are a good place to store side-effectful code, because they allow you to get away with causing a side effect and using some of those unwrapped monadic values in your pure code. They are an interface between two worlds in this respect.

Monads are a good place, therefore, to implement code that does in-place updates of values, because they help the functional programmer deal with the issues of sequencing as well as interfacing side-effect-having and pure code, and how to express dependencies between these two worlds.
Dave
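(A sketch of "roll your own Monad and the do notation will work": a hand-rolled Writer-like monad. The names are invented, and modern GHC also requires the Functor and Applicative instances shown here:)

```haskell
-- A hand-rolled Writer-like monad: do notation works for ANY Monad instance.
newtype Logger a = Logger { runLogger :: (a, [String]) }

instance Functor Logger where
  fmap f (Logger (a, w)) = Logger (f a, w)

instance Applicative Logger where
  pure a = Logger (a, [])
  Logger (f, w1) <*> Logger (a, w2) = Logger (f a, w1 ++ w2)

instance Monad Logger where
  -- "the glue between statements": thread the value, accumulate the log
  Logger (a, w) >>= f = let Logger (b, w') = f a in Logger (b, w ++ w')

say :: String -> Logger ()
say s = Logger ((), [s])

demo :: Logger Int
demo = do            -- plain do notation over our own monad
  say "start"
  say "done"
  return 7
```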

Thanks David for the detailed explanation.
A couple of quick clarifications -
1. Even the "invisible" state that gets modified during the monadic evaluation is referred to as a side effect, right?
2. I am a little unclear about "in-place" - does pure Haskell let one do such a thing, or does it need to be done using FFI only?
-- Regards, Kashyap

On Thu, Jul 15, 2010 at 10:34 AM, C K Kashyap
Thanks David for the detailed explanation.
A couple of quick clarifications -
1. Even the "invisible" state that gets modified during the monadic evaluation is referred to as side effect right?
If the state is free of types that allow for side effects, then the state threaded through a state monad is also free from side effects. If the state contains some IO type, which allows for side effects, for example, that state is affected by outside influences and is no longer pure. Non-monadic example:

foo :: Int
foo = let state1 = 1
          state2 = state1 + 3
          state3 = state2 + state1
      in state3 + 9

Each of those states is really just a label on a stage of the computation that's been done so far, up to a final expression which is the result of the function foo being called. In the end, this function is just a fancy way of saying 14 is an Int, and foo could have been replaced with let foo = 14 in some other expression. This is a pure function with what looks like states updating a value to produce a new state. That's more akin to what happens in the state monad. At no time is a previous state truly "overwritten", as that could be considered a side effect. Here's a state-monad-like example of the same (using Control.Monad.State):

foo :: Int
foo = flip execState 1 $ do
  state1 <- get
  modify (+3)
  state2 <- get
  put (state2 + state1)
  modify (+9)
2. I am a little unclear about "in-place" - does pure Haskell let one do such a thing- or does it need to be done using FFI only?
Pure Haskell does not allow for in-place update of values, because that would violate the definition of purity. That said, there are ways to update values in place with Haskell, ideally with monads to control and sequence those side effects. Example here: http://www.haskell.org/haskellwiki/Monad/ST
The ST monad allows one to describe a thread of computation which can update some mutable state and then exchange it with the "pure world" of normal Haskell computation.
Dave
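(A minimal sketch of that, using Control.Monad.ST and Data.STRef; sumTo is a made-up example:)

```haskell
import Control.Monad.ST
import Data.STRef

-- Imperative-looking accumulation inside ST; runST seals the mutation in,
-- so from the outside sumTo is an ordinary pure function.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc
```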

C K Kashyap
I looked at State Monad yesterday and this question popped into my mind. From what I gather State Monad essentially allows the use of Haskell's do notation to "invisibly" pass around a state. So, does the use of Monadic style fetch us more than syntactic convenience?
At its heart, a monad is "just" syntactic convenience, but like many other syntactic conveniences, it allows you to structure your code better. Thus it's more about programmer efficiency than program efficiency. (The "do notation" is syntactic sugar for >>= and >>.)
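(For example, both definitions below are the same program once the do block is desugared:)

```haskell
-- Identical programs: do notation desugars to >>= and lambdas.
sugared :: Maybe Int
sugared = do
  x <- Just 2
  y <- Just 3
  return (x + y)

desugared :: Maybe Int
desugared = Just 2 >>= \x -> Just 3 >>= \y -> return (x + y)
```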
Again, if I understand correctly, in Mutable Arrays also, is anything getting modified in place really? If not, what is the real reason for better efficiency?
STArrays and IOArrays are "magic", and use monads to ensure a sequence of execution to allow (and implement) in-place modification. So this gives you better performance in many cases. Don't expect this from generic monads. -k -- If I haven't seen further, it is by standing in the footprints of giants
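(To make the STArray point concrete - a small sketch with Data.Array.ST; squares is a made-up example. runSTUArray performs the writes in place and then freezes the array into a pure value:)

```haskell
import Data.Array.ST
import Data.Array.Unboxed

-- Fill an unboxed array by in-place writes, then freeze it into a pure value.
squares :: Int -> [Int]
squares n = elems $ runSTUArray $ do
  arr <- newArray (0, n) 0
  mapM_ (\i -> writeArray arr i (i * i)) [0 .. n]
  return arr
```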

Okay...I think I am beginning to understand.
Is it right to assume that "magic" is backed by FFI and cannot be done in "pure" Haskell?
-- Regards, Kashyap

Also, Claude ... If I am correct, in your example, there is no in-place replacement happening.
-- Regards, Kashyap

Do you want a solution like this?
import Data.IORef
replace :: Int -> [IORef (Int,Int,Int)] -> (Int,Int,Int) -> IO ()
replace index pixels new_val = do
  old_val <- return $ pixels !! index
  writeIORef old_val new_val

print_pixels = mapM (\p -> readIORef p >>= print)

test_data :: [(Int,Int,Int)]
test_data = [(1,2,3),(4,5,6),(7,8,9)]

test_replace :: IO ()
test_replace = do
  pixels <- mapM newIORef test_data
  replace 1 pixels (10,11,12)
  print_pixels pixels
GHCI Output:
*Main> test_replace
(1,2,3)
(10,11,12)
(7,8,9)
[(),(),()]
This code takes a list of pixels and replaces the second pixel with the given value. In this case every pixel is of type IORef, which is mutated in-place.
-deech

Sorry, the previous code does not compile. It should be:
replace :: Int -> [IORef (Int,Int,Int)] -> (Int,Int,Int) -> IO ()
replace index pixels new_val = do
  old_val <- return $ pixels !! index
  writeIORef old_val new_val

print_pixels = mapM_ (\p -> readIORef p >>= print)

test_data :: [(Int,Int,Int)]
test_data = [(1,2,3),(4,5,6),(7,8,9)]

test_replace :: IO ()
test_replace = do
  pixels <- mapM newIORef test_data
  replace 1 pixels (10,11,12)
  print_pixels pixels
in "print_pixels" "mapM" has been changed to "mapM_"
-deech

Thanks Aditya...I think that is what I am looking for.
-- Regards, Kashyap

I'm giving some lectures this week about how to _read_ programs, and I've had some things to say about JavaDoc and wondered whether to show some examples of Haddock.

I took a certain library that has been mentioned recently in this mailing list. (I shall not identify it. The author deserves praise for his/her contribution, not a public mauling, and the reason for the message is that that package is not unusual.) The reason I chose it was that I thought "actually, >>I<< should learn how to use that, never mind the students."

Upon looking at the Haddock web page,
- reaction one: "this is pretty flashy".
- reaction two: "but WHERE IS THE DOCUMENTATION?"

What's missing? I've been using Haskell off and on for years now. The package is dauntingly abstract. ("If it wasn't arcane, it wouldn't be UNIX. If it wasn't abstract, it wouldn't be Haskell.") I can see vast drifts of type classes and functional dependencies and so on. What I don't see is "HOW DO I USE THIS STUFF?" Being a bear of very little brain, I need to be told what the big picture is, what the merits of this approach are, links to any papers I should read, and above all, I need some examples of how to use it.

It doesn't really matter what "it" is. The package I was looking at is not alone. You people who contribute packages are heroes, and I thank you. But if you could bring your documentation _at least_ up to the standard of, say, Data.Vector.Generic, that would be wonderful.

One of the really nice ideas in the R statistics system is that documentation pages can contain executable examples, and when you wrap up a package for distribution, the system checks that the examples run as advertised. Haskell has a *perfect* fit for this idea in QuickCheck.
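(A tool in this spirit does exist for Haskell: doctest runs the >>> examples embedded in Haddock comments and checks the results. A toy example, invented for illustration:)

```haskell
-- | Reverse a list.
--
-- >>> rev [1,2,3]
-- [3,2,1]
rev :: [a] -> [a]
rev = foldl (flip (:)) []
```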

On 21 July 2010 15:28, Richard O'Keefe
I'm giving some lectures this week about how to _read_ programs, and I've had some things to say about JavaDoc and wondered whether to show some examples of Haddock.
I took a certain library that has been mentioned recently in this mailing list. (I shall not identify it. The author deserves praise for his/her contribution, not a public mauling, and the reason for the message is that that package is not unusual.) The reason I chose it was that I thought "actually, >>I<< should learn how to use that, never mind the students."
Upon looking at the Haddock web page, - reaction one "this is pretty flashy". - reaction two, "but WHERE IS THE DOCUMENTATION?"
These are good points, and I agree with the rest of your email as well (which I've removed for the sake of brevity). However, I would like to offer two qualifiers that I myself have found when writing documentation for my own libraries:

* When writing the code, it's obvious what it does; as such, you may think any documentation you offer is trivial (down the track, however...).
* The author is familiar with the library; as such, it may not be obvious what extra documentation could be needed.

So, if you're a user of a library and you think its documentation could be improved, try telling the maintainer that (this kind of prompting is why I bothered to make a website for my graphviz library).

As a kind of wishlist, having more markup support would possibly improve the quality of documentation. Rather than being limited to normal text, monospace text and italic text, I've often wanted to make something bold in my Haddock documentation to emphasise it, but alas, Haddock doesn't support it.

--
Ivan Lazar Miljenovic
Ivan.Miljenovic@gmail.com
IvanMiljenovic.wordpress.com

Ivan Miljenovic wrote:
* When writing the code, it's obvious what it does; as such you may think any documentation you may offer is trivial (down the track, however...).
* The author is familiar with a library; as such it may not be obvious what extra documentation could be needed.
This is the inherent problem with any kind of documentation. (And it's by no means limited to Haskell...) The person most qualified to explain stuff is the one least qualified to know what needs explaining! ;-)

I would also like to draw attention to something else: Haddock offers solid support for writing the "this function does X, that function does Y" type of documentation. It has really very weak support for writing general overviews, tutorials, examples, etc. Yes, you can put example code into the documentation for a specific module. But look at, say, the Parsec documentation at http://legacy.cs.uu.nl/daan/download/parsec/parsec.html. This tells a new user *far* more than any API listing. And yet, Haddock doesn't really support writing this kind of thing properly...

On Wed, 21 Jul 2010 18:15:08 +0100, Andrew Coppin wrote:

Andrew> It has really very weak support for writing general
Andrew> overviews, tutorials, examples, etc.

+1

Sincerely,
Gour

--
Gour | Hlapicina, Croatia | GPG key: F96FF5F6

On Jul 20, 2010, at 10:28 PM, Richard O'Keefe wrote:
What I don't see is "HOW DO I USE THIS STUFF?"
I think tutorials are the best way to do that (i.e., example normal forms for the computations the library intends to expose). Perl's package archive (the CPAN) traditionally uses a "Synopsis" section that exposes "representative" functions at each layer/step: http://search.cpan.org/~lds/GD-2.45/GD.pm for example.

After all, the source is always structured in more-or-less the same way. Fragments of text with opaque -- unless/until you understand them -- combinators "join" two distinct concepts/types into functions. A chain of functions (potentially at various levels of abstraction) is a computation. You "use" these things by finding a chain of types (Start -> A), (A -> B), (B -> C), ... (N -> Goal) and composing, filling in additional details as necessary.

Building that chain means doing depth-first searches on a tree/graph of possibilities, and usually isn't so much fun. The library developer is in the best position to do exactly that, having already done it while constructing the library.
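Alexander's "chain of types" idea can be sketched concretely. All the names below (Start, A, Goal, parse, summarise, pipeline) are hypothetical, invented purely to illustrate composing the links (Start -> A) and (A -> Goal):

```haskell
-- Hypothetical types standing in for a library's vocabulary.
newtype Start = Start String
newtype A     = A [String]
newtype Goal  = Goal Int deriving (Eq, Show)

parse :: Start -> A            -- the (Start -> A) link
parse (Start s) = A (words s)

summarise :: A -> Goal         -- the (A -> Goal) link
summarise (A ws) = Goal (length ws)

-- Composing the links yields the whole computation (Start -> Goal).
pipeline :: Start -> Goal
pipeline = summarise . parse

main :: IO ()
main = print (pipeline (Start "hello haskell cafe"))
```

The point of the sketch is only that "using the library" reduces to finding and composing such links; a good tutorial hands you the chain ready-made.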

Alexander Solla wrote:
After all, the source is always structured in more-or-less the same way. Fragments of text with opaque -- unless/until you understand them -- combinators "join" two distinct concepts/types into functions. A chain of functions (potentially at various levels of abstraction) is a computation. You "use" these things by finding a chain of types (Start -> A), (A -> B), (B -> C), ... (N -> Goal) and composing, filling in additional details as necessary. Building that chain means doing depth first searches on a tree/graph of possibilities, and usually isn't so much fun. The library developer is in the best position to do exactly that, having already done it while constructing the library.
In Haskell, even learning to use a library has an algebraic structure. ;^)

Actually, I was thinking just this afternoon... If you're writing in an OO language, you can use UML to produce diagrams that give you a kind of at-a-glance overview of the salient parts of something. (Depending on how much detail you choose to include in the diagram.) I wonder if anybody has a standardised notation that might be applicable to FP in general or Haskell specifically...

2010/7/21 Richard O'Keefe
One of the really nice ideas in the R statistics system is that documentation pages can contain executable examples, and when you wrap up a package for distribution, the system checks that the examples run as advertised.
The next version of Haddock will support something like this, implemented by Simon Hengel. Here's an example of how to use the new markup:

-- | An example:
--
-- ghci> fib 10
-- 55
Haskell has a *perfect* fit for this idea in QuickCheck.
We currently only support concrete examples (i.e. unit tests), but the plan is to add support for QuickCheck properties. David
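To make the markup concrete, here is a self-contained sketch of a documented definition it could attach to. The naive fib below is my own filler, not from David's email:

```haskell
module Fib where

-- | Compute the nth Fibonacci number (0-indexed).
--
-- An example:
--
-- ghci> fib 10
-- 55
fib :: Int -> Integer
fib n = fibs !! n
  where
    -- Lazy self-referential list of all Fibonacci numbers.
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

A checker in the style described would load the module, run `fib 10` in GHCi, and compare the result against the `55` recorded in the comment.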

David Waern wrote:
2010/7/21 Richard O'Keefe
: One of the really nice ideas in the R statistics system is that documentation pages can contain executable examples, and when you wrap up a package for distribution, the system checks that the examples run as advertised.
The next version of Haddock will support something like this, implemented by SImon Hengel.
Here's an example of how to use the new markup:
-- | An example:
--
-- ghci> fib 10
-- 55
Awesome. This is similar to Python's doctest.
Haskell has a *perfect* fit for this idea in QuickCheck.
We currently only support concrete examples (i.e. unit tests), but the plan is to add support for QuickCheck properties.
Also support SmallCheck/LazySmallCheck, pretty please? :) -- Live well, ~wren

On 22 July 2010 18:33, David Waern
We currently only support concrete examples (i.e. unit tests), but the plan is to add support for QuickCheck properties.
Would you have some kind of inbuilt time limit (similar to what mueval has) for very long/complex QC tests? I have some that take quite a while to run...
David
-- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

2010/7/23 Ivan Miljenovic
On 22 July 2010 18:33, David Waern
wrote: [snip]
We currently only support concrete examples (i.e. unit tests), but the plan is to add support for QuickCheck properties.
Would you have some kind of inbuilt time limit (similar to what mueval has) for very long/complex QC tests? I have some that take quite a while to run...
The testing is carried out by a separate program: DocTest. There is no support for QC properties yet. Some kind of time limit would probably be useful to have, yes. Thanks for the suggestion. David
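One way such a limit could be imposed is with System.Timeout from base; this is only a sketch of the idea, not how DocTest actually works, and `slowCheck` is a made-up stand-in for an expensive property:

```haskell
import System.Timeout (timeout)

main :: IO ()
main = do
  -- Stand-in for a long-running example or QuickCheck property.
  let slowCheck = sum [1 .. 1000000] :: Int
  -- Allow at most five seconds of wall-clock time; Nothing means
  -- the limit was hit before the computation finished.
  result <- timeout (5 * 1000000) (slowCheck `seq` return slowCheck)
  case result of
    Nothing -> putStrLn "example timed out"
    Just n  -> putStrLn ("example finished: " ++ show n)
```

The `seq` forces the computation inside the timed action, so that laziness doesn't defer the work until after the timer has been dismantled.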

On Mon, Jul 19, 2010 at 10:17 AM, Ketil Malde
At its heart, a monad is "just" syntactic convenience, but like many other syntactic conveniences, it allows you to structure your code better. Thus it's more about programmer efficiency than program efficiency. (The "do" notation is syntactic sugar for >>= and >>.)
Well, in a sense yes, but there's more to it than "do" notation -- that's just syntactic sugar. The real power is the same as for any typeclass: algorithms (in this case sequence, forever, liftM, etc.) that work on all monads without having to write code for each instance. --Max
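Max's point can be seen with standard combinators from Control.Monad: the same mapM works in any monad with no per-instance code. The `halve` function below is a made-up partial function for illustration:

```haskell
import Control.Monad (liftM)

-- A partial function in the Maybe monad: fails on odd input.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (mapM halve [2, 4, 6])   -- Just [1,2,3]: every step succeeded
  print (mapM halve [2, 3])      -- Nothing: one failure fails the chain
  print (liftM (+ 1) (Just 41))  -- Just 42
```

Neither mapM nor liftM was written with Maybe in mind; they fall out of the Monad instance for free, which is the payoff beyond the sugar.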
participants (14)
- aditya siram
- Alexander Solla
- Andrew Coppin
- C K Kashyap
- Claude Heiland-Allen
- Daniel Fischer
- David Leimbach
- David Waern
- Gour
- Ivan Miljenovic
- Ketil Malde
- Max Rabkin
- Richard O'Keefe
- wren ng thornton