
Hello fellow Haskellers,

When I read questions from Haskell beginners, it somehow seems like they try to avoid monads and view them as a last resort, if there is no easy non-monadic way. I'm really sure that the cause for this is that most tutorials deal with monads very sparingly and mostly in the context of input/output. Also, monads are usually associated with do-notation, which makes them appear even more special, although there is really nothing special about them.

I appeal to all experienced Haskell programmers, especially to tutorial writers, to try to focus more on how monads are nothing special when talking to beginners. Let me tell you that usually 90% of my code is monadic and there is really nothing wrong with that. I use State monads and StateT transformers especially often, because they are convenient and are just a clean combinator frontend to what you would do manually without them: passing state.

Greets,
Ertugrul.

--
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
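To make the "combinator frontend to passing state" point concrete, here is a minimal sketch of the same computation written once with manual state passing and once with Control.Monad.State from mtl (labelManual and labelState are made-up names for illustration):

    import Control.Monad.State

    -- Manual state passing: every call threads the counter by hand.
    labelManual :: [a] -> Int -> ([(Int, a)], Int)
    labelManual []     n = ([], n)
    labelManual (x:xs) n =
      let (rest, n') = labelManual xs (n + 1)
      in  ((n, x) : rest, n')

    -- The same logic in State: get/put thread the counter for us.
    labelState :: [a] -> State Int [(Int, a)]
    labelState = mapM $ \x -> do
      n <- get
      put (n + 1)
      return (n, x)

    main :: IO ()
    main = do
      print (fst (labelManual "abc" 0))       -- [(0,'a'),(1,'b'),(2,'c')]
      print (evalState (labelState "abc") 0)  -- same result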

Ertugrul Soeylemez wrote:
When I read questions from Haskell beginners, it somehow seems like they try to avoid monads and view them as a last resort [...] although there is really nothing special about them.
Yeah, I was the same way when I started learning Haskell. I understood how monads worked, and what the motivation was for them, but not why I would want to restructure code to use them in specific instances. "Why should I care about monads when I can use Arrows or (.)?" was also a factor. It's kinda like getting advice from an adult as a child: you have no particular reason to distrust the advice, but the value of it doesn't sink in until something happens to you first-hand to verify it. For me the turning point was writing some code that needed to handle running code locally/remotely in a transparent manner. Maybe having a list of real-world usage scenarios or exercises on the wiki would help.

Related to this issue, I have a question here.
I might be wrong, but it seems to me that some Haskellers don't like
writing monads (with do notation) or arrows (with proc sugar) because
they have to abandon the typical applicative syntax, which is so close
to the beautiful lambda calculus core. Or is it maybe because some
people choose monads where the less linear applicative style could be
used instead, so the choice of monads is not always appropriate?
Haskell is full of little hardcoded syntax extensions: list notation
syntax, list comprehensions, and even operator precedence that reduces
the need for parentheses, etc...
Of course IMHO the syntactic sugar is needed, e.g. a Yampa game written
without the arrows syntax would be incomprehensible for the average
programmer. But one could claim that this syntactic sugar hides what is
really going on under the hood, so for newcomers these extensions might
make things harder. It could also feel like a hack, a failure to keep
things as simple as possible yet elegant.
Some people I talked with like that about the Scheme and Lisp languages:
the syntax remains ultimately close to the core, with very few hardcoded
syntactic extensions. And when one wants to add syntactic extensions,
one uses syntactic macros.
I'm not sure what others feel about the hardcoded syntax extensions in
Haskell...
Personally I'm not that much of a purist; I'm an old-school hacker that
mainly needs to get the job done. I like the syntax extensions in Haskell
(and even C#/F# ;) because they let me write code shorter and clearer...

My issue is that there seem to be many cases where the syntax
extension does *almost* what I want, but not quite. And there isn't
any method to extend it, so you are left with two choices:
(1) Go back to unsugared syntax
(2) Hack your technique into the constraints of the existing syntax extension.
For example, "fail" is in the Monad class only because binding in "do"
needs to handle pattern-match failure. But a better syntax extension
wouldn't tie "do" to Monad only; it would allow pattern match failure
to be handled by a separate class if needed, and allow additional
extensions to deal with other cases where the existing syntax isn't
quite good enough. This is where lisp-style syntactic macros show
their real power.
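To make the "fail" point concrete, here is a sketch of the desugaring (the firstElems name is made up; this is the classic desugaring from the era when fail lived in the Monad class): a failable pattern on the left of <- routes match failures through fail, which for lists returns [].

    firstElems :: [Int]
    firstElems = do
      (x:_) <- [[1,2], [], [3]]   -- the pattern (x:_) can fail
      return x

    -- desugars roughly to:
    firstElems' :: [Int]
    firstElems' =
      [[1,2], [], [3]] >>= \xs ->
        case xs of
          (x:_) -> return x
          _     -> fail "Pattern match failure in do expression"

    -- fail _ = [] for lists, so both evaluate to [1,3]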
I'd go to TH but there's no way to make TH transparent. I almost wish
that do ... and similar notations would just call a TH function to
decode the expression! Then, if I wanted a different decoding I could
just put a different "sugarDoNotation" function in scope.
On a related note, I hate the explosion of <*>, <$>, liftM, etc. when
working in monadic/applicative code. Often the code only typechecks
with these lifting operators present, so why am I explaining to the
compiler that I want it to use the applicative apply operator? When
is strong typing going to help me write *less* code instead of *more*?
We've solved the "type annotation" problem to my satisfaction via
inference, but there is still a lot of what I would call "source
annotation".
For example, which of these is easier to read?
    f, g :: Int -> [Int]

    h1 :: Int -> [Int]
    h1 x = do
      fx <- f x
      gx <- g x
      return (fx + gx)

    h2 :: Int -> [Int]
    h2 x = (+) <$> f x <*> g x

    h3 :: Int -> [Int]
    h3 x = f x + g x  -- not legal, of course, but wouldn't it be nice if it was?
Of course this raises problems of order of evaluation, etc, but as
long as such things were well-defined, that seems fine. If you want
finer control, you can always go back to more verbose syntax. These
cases are dominated by the cases where you simply don't care!
This said, I don't want to sound overly negative; all of this pain is
*worth* the correctness guarantees that I get when writing in Haskell.
I just wish I could get the same guarantees with less pain!
-- ryan

For example, which of these is easier to read? [...]

    h3 x = f x + g x -- not legal, of course, but wouldn't it be nice if it was?
Yes, all that lifting is something that takes away a lot of the beauty and simplicity of Haskell, but as far as my limited knowledge goes, I don't think this problem is easily solved :)

Anyway, for your particular example, for newbies I guess the clearest would be:

    h0 x = [ fx + gx | fx <- f x, gx <- g x ]

since one must recognize that a list monad exists and what it does... Now, for binary operators, Thomas Davie made a nice pair of combinators on Hackage (InfixApplicative) that would allow this to become:

    h3 x = f x <^(+)^> g x

But in general, I guess you have a good point...
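For a sense of how such a pair of combinators can work, here is a hypothetical reconstruction (not necessarily the actual InfixApplicative definitions; f and g are example functions invented for the demonstration):

    import Control.Applicative

    infixl 5 <^
    infixl 4 ^>

    (<^) :: Functor f => f a -> (a -> b -> c) -> f (b -> c)
    x <^ op = fmap op x

    (^>) :: Applicative f => f (b -> c) -> f b -> f c
    (^>) = (<*>)

    f, g :: Int -> [Int]
    f x = [x, x + 1]
    g x = [x * 2]

    h3 :: Int -> [Int]
    h3 x = f x <^(+)^> g x   -- same as (+) <$> f x <*> g x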

On 2009 Jan 10, at 15:19, Peter Verswyvelen wrote:
h3 x = f x <^(+)^> g x
Is that an operator or Batman? (yes, I know, 3 operators)

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university    KF8NH

Holy concatenated operators, Batman!

It is really too bad we cannot define the operators

    >_> ^_^ <_<

These are significant from an internationalization standpoint;
and they'd make the language so much more competitive
vis-a-vis LOLCode.
--
Jason Dusek

I always thought that instead of having two classes of characters, one
for variable (and function) names and the other for operators, only the
first character of the identifier could determine whether it's one or
the other, so *vec or +point would be valid operator names, and thus
>_> too. And C++ could be a data type name...
Cheers,
Thu

minh thu wrote:
I always thought that [...] only the first character of the identifier could determine whether it's one or the other, so *vec or +point would be valid operator names and thus >_> too.
No, operators have to consist entirely of punctuation, and functions entirely of alphanumeric characters. If what you said were the case, then x+y would be an identifier instead of the sum of x and y.

Kind regards,
Martijn.

2009/1/11 Martijn van Steenbergen:

No, operators have to consist entirely of punctuation, and functions entirely of alphanumeric characters. If what you said were the case, then x+y would be an identifier instead of the sum of x and y.
Indeed, but what's wrong with writing x + y (with additional spaces)? Having the possibility to write, for instance, +blah instead of inventing things such as $>* is neat in my opinion (in code or for typesetting)...

Cheers,
Thu

Indeed, but what's wrong with writing x + y (with additional spaces)? [...]
I believe it was Bulat Ziganshin who once proposed parsing

    x + y*z + t     as    x + (y * z) + t

and

    x + y * z + t   as    (x + y) * (z + t)

Hello Miguel, Sunday, January 11, 2009, 7:06:54 PM, you wrote:
x + y * z + t as (x + y) * (z + t)
x+y * z+t should be here.

--
Best regards,
Bulat                            mailto:Bulat.Ziganshin@gmail.com

Agda has made the choice that you can have (almost) any sequence of
characters in identifiers. It works fine, but forces you to use white
space (which I do anyway).
-- Lennart


As a physicist, I think that programming, like any design in general, is all about making as little use of brain resources as possible when solving problems and transmitting the solution to others. This is the reason why the concepts of modularity, isolation, and clean interfacing (of which referential transparency is a part) are pervasive in all kinds of engineering. To decompose a problem into parts, each one with a simple solution and with as little uncontrolled interaction with the other parts as possible, is a rule of good design simply because we cannot think about more than around seven concepts at the same time. (By the way, an evolutionary psychologist would say that the beauty of simplicity is related to this release of brain resources.) As a matter of fact, these rules of "good" design do not apply to the design of living beings, simply because the process of natural selection does not have these limitations; that is indeed the reason why natural designs are so hard to reverse-engineer.

Because of such human brain limitations and our lack of knowledge, design involves a lot of trial and error. The danger is to get lost in the explosion of unfruitful alternatives due to low-level issues outside of our high-level problem, because of the limitations of the tools we are using. In this sense, things like strong type inference are superb for cutting the explosion of erroneous paths that the process of software development can generate. If designing solutions is sculpting order from chaos and against chaos, intelligent tools are the thing needed to keep us concentrated on fruitful courses of action. A physicist would say that design is to lower the entropic balance by progressively lowering the number of microstates until the only ones permitted correspond with the desired outcomes, called "solutions", and a few bugs, of course.

For me, syntactic sugar is one more of the features that make Haskell so great. Once we discover that a solution general enough has a correspondence with something already known, as monads have for imperative languages, then why not make this similarity explicit, with the do notation, in order to communicate it better to other people, making use of this common knowledge?

I have to say also that, without Haskell, I would never have dreamed of having the confidence to play simultaneously with concurrency, transactions, internet communications, and parsing, while keeping the code clean enough to understand it after a month of inactivity. This is for me the big picture that matters for real programming.

Ertugrul Soeylemez wrote:
Let me tell you that usually 90% of my code is monadic and there is really nothing wrong with that. I use especially State monads and StateT transformers very often, because they are convenient and are just a clean combinator frontend to what you would do manually without them: passing state.
The insistence on avoiding monads by experienced Haskellers, in particular on avoiding the IO monad, is motivated by the quest for elegance.

The IO and other monads make it easy to fall back to imperative programming patterns to "get the job done". But do you really need to pass state around? Or is there a more elegant solution, an abstraction that makes everything fall into place automatically? Passing state is a valid implementation tool, but it's not a design principle.

A good example is probably the HGL (Haskell Graphics Library), a small vector graphics library which once shipped with Hugs. The core is the type Graphic, which represents a drawing and whose semantics are roughly

    Graphic = Pixel -> Color

There are primitive graphics like

    empty   :: Graphic
    polygon :: [Point] -> Graphic

and you can combine graphics by laying them on top of each other

    over :: Graphic -> Graphic -> Graphic

This is an elegant and pure interface for describing graphics. After having constructed a graphic, you'll also want to draw it on screen, which can be done with the function

    drawInWindow :: Graphic -> Window -> IO ()

This function is in the IO monad because it has the side effect of changing the current window contents. But just because drawing on a screen involves IO does not mean that using it for describing graphics is a good idea. However, using IO for *implementing* the graphics type is fine

    type Graphic = Window -> IO ()

    empty          = \w -> return ()
    polygon (p:ps) = \w -> moveTo p w >> mapM_ (\p -> lineTo p w) ps
    over g1 g2     = \w -> g1 w >> g2 w
    drawInWindow   = id

Consciously excluding monads and restricting the design space to pure functions is the basic tool of thought for finding such elegant abstractions. As Paul Hudak said in his message "A regressive view of support for imperative programming in Haskell":

    In my opinion one of the key principles in the design of Haskell has
    been the insistence on purity. It is arguably what led the Haskell
    designers to "discover" the monadic solution to IO, and is more
    generally what inspired many researchers to "discover" purely
    functional solutions to many seemingly imperative problems.

    http://article.gmane.org/gmane.comp.lang.haskell.cafe/27214

The philosophy of Haskell is that searching for a purely functional solution is well worth it.

Regards,
H. Apfelmus

"Apfelmus, Heinrich"
Ertugrul Soeylemez wrote:
Let me tell you that usually 90% of my code is monadic and there is really nothing wrong with that. I use especially State monads and StateT transformers very often, because they are convenient and are just a clean combinator frontend to what you would do manually without them: passing state.
The insistence on avoiding monads by experienced Haskellers, in particular on avoiding the IO monad, is motivated by the quest for elegance.
The IO and other monads make it easy to fall back to imperative programming patterns to "get the job done". [...]
Often, the monadic solution _is_ the elegant solution. Please don't confuse monads with impure operations. I use the monadic properties of lists, often together with monad transformers, to find elegant solutions. As long as you're not abusing monads to program imperatively, I think they are an excellent and elegant structure.

I said that 90% of my code is monadic, not that 90% of it is in IO. I do use state monads where there is no more elegant solution than passing state around. It's simply that: you have a structure, which you modify continuously in a complex fashion, such as a neural network or an automaton. Monads are the way to go here, unless you want to do research and find a better way to express this.

Personally I prefer this:

    somethingWithRandomsM :: (Monad m, Random a) => m a -> Something a

over these:

    somethingWithRandoms1 :: [a] -> Something a
    somethingWithRandoms2 :: RandomGen g => g -> Something a

Also I use monads a lot for displaying progress:

    lengthyComputation :: Monad m => (Progress -> m ()) -> m Result
Consciously excluding monads and restricting the design space to pure functions is the basic tool of thought for finding such elegant abstractions. [...]
You don't need to exclude monads to restrict the design space to pure functions. Everything except IO and ST (and some related monads) is pure. As said, often monads _are_ the elegant solutions. Just look at parser monads.

Greets,
Ertugrul.

--
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
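To make the parser-monad point concrete for newcomers, here is a minimal sketch of such a monad (a toy reconstruction, not any particular library; the Functor and Applicative instances are included so it compiles with modern GHC):

    newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

    instance Functor Parser where
      fmap f (Parser p) = Parser $ \s ->
        fmap (\(a, rest) -> (f a, rest)) (p s)

    instance Applicative Parser where
      pure a = Parser $ \s -> Just (a, s)
      Parser pf <*> Parser pa = Parser $ \s -> do
        (f, s')  <- pf s          -- this do block runs in the Maybe monad
        (a, s'') <- pa s'
        return (f a, s'')

    instance Monad Parser where
      Parser p >>= f = Parser $ \s -> do
        (a, s') <- p s
        runParser (f a) s'

    -- one primitive parser and one derived from it
    char :: Char -> Parser Char
    char c = Parser $ \s -> case s of
      (x:xs) | x == c -> Just (c, xs)
      _               -> Nothing

    ab :: Parser (Char, Char)
    ab = do
      a <- char 'a'
      b <- char 'b'
      return (a, b)

    -- runParser ab "abc" == Just (('a','b'), "c")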

Ertugrul Soeylemez wrote:
Personally I prefer this:
somethingWithRandomsM :: (Monad m, Random a) => m a -> Something a
Of course, there is something missing here:

    somethingWithRandomsM :: (Monad m, Random a) => m a -> m (Something a)

Sorry.

Greets,
Ertugrul.

--
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/

Ertugrul Soeylemez schrieb:
"Apfelmus, Heinrich"
wrote: The insistence on avoiding monads by experienced Haskellers, in particular on avoiding the IO monad, is motivated by the quest for elegance.
The IO and other monads make it easy to fall back to imperative programming patterns to "get the job done". [...]
Often, the monadic solution _is_ the elegant solution. Please don't confuse monads with impure operations. I use the monadic properties of lists, often together with monad transformers, to find elegant solutions. As long as you're not abusing monads to program imperatively, I think, they are an excellent and elegant structure.
I have seen several libraries where all functions of a monad have the monadic result (), e.g. Binary.Put and other writing functions. This is a clear indicator that the Monad instance is artificial and was only chosen because of the 'do' notation.

Henning Thielemann wrote:
I have seen several libraries where all functions of a monad have the monadic result (), e.g. Binary.Put and other writing functions. This is a clear indicator, that the Monad instance is artificial and was only chosen because of the 'do' notation.
I completely disagree with that example. The Put monad is, mainly, a specialized State monad, the internal state being the current fixed-size bytestring memory buffer that has been allocated and is being filled. The monad makes the execution sequential, so that there is only one memory buffer being filled at a time. In Put, when one memory buffer has been filled, it allocates the next one to create a lazy ByteString.

This is not to say that all M () are really monads, but just that Put () is.

-- Chris

On Tue, Jan 13, 2009 at 10:16:32AM +0000, ChrisK wrote:
I completely disagree with that example. The Put monad is, mainly, a specialized State monad. The internal state being the current fixed-size bytestring memory buffer that has been allocated and is being filled. The monad make the execution sequential so that there is only one memory buffer being filled at a time.
No, Put is a specialized Writer monad. The sequencing is imposed by the mappend operation of the Builder monoid. The monadic interface is indeed there just to access do notation. And Henning's general point also holds: a monad that is always applied to () is just a monoid.
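Ross's closing remark can be made concrete with a small sketch (using Control.Monad.Writer from mtl; the greet name is made up): a computation whose result is always () carries all of its information in the monoid, which is exactly what Writer packages up.

    import Control.Monad.Writer

    greet :: Writer [String] ()   -- [String] is the monoid, () the "result"
    greet = do
      tell ["hello"]
      tell ["world"]

    -- The do block is just mappend in disguise:
    -- execWriter greet == ["hello"] `mappend` ["world"] == ["hello","world"]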

I have seen several libraries where all functions of a monad have the monadic result (), e.g. Binary.Put and other writing functions. This is a clear indicator, that the Monad instance is artificial and was only chosen because of the 'do' notation.
Maybe that was the initial reason, but I've actually found Binary.Put.PutM (where Put = PutM ()) to be useful. Sometimes your putter does need to propagate a result...

Tim Newsham
http://www.thenewsh.com/~newsham/

On Tue, Jan 13, 2009 at 11:21 AM, Tim Newsham wrote:
Maybe that was the initial reason, but I've actually found the Binary.Put.PutM (where Put = PutM ()) to be useful. Sometimes your putter does need to propogate a result...
But that's the whole point of Writer! Take a monoid, make it into a monad. Put as a monad is silly.

Luke

On Tuesday 13 January 2009 5:51:09 pm Luke Palmer wrote:
But that's the whole point of Writer! Take a monoid, make it into a monad. Put as a monad is silly.
You mean it should be Writer instead? When GHC starts optimizing (Writer Builder) as well as it optimizes PutM, then that will be a cogent argument. Until then, one might argue that it misses "the whole point of Put".

-- Dan

On Tue, Jan 13, 2009 at 5:19 PM, Dan Doel wrote:
You mean it should be Writer instead?
Or rather, PutM should not exist (or be exposed), and Put should just be a monoid.
When GHC starts optimizing (Writer Builder) as well as it optimizes PutM, then that will be a cogent argument. Until then, one might argue that it misses "the whole point of Put".
Well it can still serve as an optimization over bytestrings using whatever trickery it uses (I am assuming here -- I am not familiar with its trickery), the same way DList is an optimization over List. It's just that its monadiness is superfluous. Surely PutM and Writer Put have almost the same performance?! (I am worried if not -- if not, can you give an indication why?)

Luke
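For readers unfamiliar with the DList comparison, here is a minimal sketch of the idea (written with a Semigroup instance so it compiles on modern GHC; DList is a well-known package, but this is a reconstruction, not its actual source):

    -- Represent a list as a function that prepends it; append becomes
    -- O(1) function composition instead of O(n) list traversal.
    newtype DList a = DList ([a] -> [a])

    fromList :: [a] -> DList a
    fromList xs = DList (xs ++)

    toList :: DList a -> [a]
    toList (DList f) = f []

    instance Semigroup (DList a) where
      DList f <> DList g = DList (f . g)

    instance Monoid (DList a) where
      mempty = DList id

    -- toList (fromList [1,2] <> fromList [3]) == [1,2,3]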

On Tuesday 13 January 2009 7:27:10 pm Luke Palmer wrote:
Surely PutM and Writer Put have almost the same performance?! (I am worried if not -- if not, can you give an indication why?)
The underlying monoid is Builder. The point of PutM is to be a version of Writer that's specialized to the Builder monoid for maximum performance. It looks like:

    data PairS a = PairS a {-# UNPACK #-} !Builder

    newtype PutM a = Put { unPut :: PairS a }

I'm not sure why it's split up like that. Anyhow, the strict, unpacked Builder gets optimized better than Writer Builder. Even if you change Writer to:

    data Writer w a = Writer a !w

it still won't match up, because polymorphic components don't get unpacked and such. That's, for instance, why Data.Sequence uses a specialized version of the finger tree type, instead of using the general version in Data.FingerTree.

Only exposing Put as a monoid is kind of redundant. You might as well work straight with Builder.

-- Dan

On Tue, Jan 13, 2009 at 07:44:17PM -0500, Dan Doel wrote:
The underlying monoid is Builder. The point of PutM is to be a version of Writer that's specialized to the Builder monoid for maximum performance. It looks like:
data PairS a = PairS a {-# UNPACK #-} !Builder
newtype PutM a = Put { unPut :: PairS a }
I'm not sure why it's split up like that. Anyhow, the strict, unpacked Builder gets optimized better than Writer Builder.
But the only reason you want this monad optimized is so that you can use it in do-notation. Otherwise you'd just use Builder directly.

On Tue, 2009-01-13 at 19:44 -0500, Dan Doel wrote:
The underlying monoid is Builder. The point of PutM is to be a version of Writer that's specialized to the Builder monoid for maximum performance. It looks like:
data PairS a = PairS a {-# UNPACK #-} !Builder
newtype PutM a = Put { unPut :: PairS a }
I'm not sure why it's split up like that. Anyhow, the strict, unpacked Builder gets optimized better than Writer Builder.
Oops, I walked into this conversation without reading enough context. Sorry, I see what you mean now. Yes, it's specialised to get decent performance. As you say, the lifted (,) in the Writer would get in the way otherwise. There's an interesting project in optimising parametrised monads and stacks of monad transformers. Duncan

On Tue, 2009-01-13 at 19:19 -0500, Dan Doel wrote:
You mean it should be Writer instead?
When GHC starts optimizing (Writer Builder) as well as it optimizes PutM, then that will be a cogent argument.
In that case it's a cogent argument now. :-) You may be interested to note that PutM really is implemented as a writer monad over the Builder monoid:

    -- | The PutM type. A Writer monad over the efficient Builder monoid.
    newtype PutM a = Put { unPut :: PairS a }

    data PairS a = PairS a {-# UNPACK #-}!Builder

    -- | Put merely lifts Builder into a Writer monad, applied to ().
    type Put = PutM ()
Until then, one might argue that it misses "the whole point of Put".
Back when we were first writing the binary library, Ross converted our original Put to be a monoid called Builder, with Put left as a Writer. GHC optimises it perfectly; we checked.

The reason we provide Put as well as Builder is purely for symmetry with code written using Get. Also, `mappend` is not so pretty. Another argument for redefining (++) = mappend :-)

Get doesn't need to be a Monad either; it only needs to be an applicative functor. Indeed, the rules to eliminate adjacent bounds checks only fire if it is used in this way (using >> also works).

Duncan
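Here is a sketch of the applicative style Duncan describes, assuming a Get type with an Applicative instance (later versions of the binary package provide one); the Header type and its field layout are made up:

    import Control.Applicative ((<$>), (<*>))
    import Data.Binary.Get (Get, getWord16be, getWord32be)
    import Data.Word (Word16, Word32)

    data Header = Header Word32 Word16 Word16

    -- No parsed value depends on a previous one, so <$>/<*> suffice,
    -- and adjacent bounds checks can be coalesced.
    getHeader :: Get Header
    getHeader = Header <$> getWord32be <*> getWord16be <*> getWord16be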

Ertugrul Soeylemez wrote:
[...]
Thank you for your reply, I think I can refine my thoughts. And make them much longer... ;)

The elegance I have in mind comes from abstraction, that is, when a type takes a meaning of its own, independent of its implementation. Let's take the example of vector graphics again:

    data Graphic

    empty   :: Graphic
    polygon :: [Point] -> Graphic
    over    :: Graphic -> Graphic -> Graphic

All primitives can be explained in terms of our intuition about pictures alone; it is completely unnecessary to know that Graphic is implemented as

    type Graphic = Window -> IO ()

    empty          = \w -> return ()
    polygon (p:ps) = \w -> moveTo p w >> mapM_ (\p -> lineTo p w) ps
    over g1 g2     = \w -> g1 w >> g2 w

Furthermore, this independence is often exemplified by the existence of many different implementations. For instance, Graphic can as well be written as

    type Graphic = Pixel -> Color

    empty          = const Transparent
    polygon (p:ps) = foldr over empty $ zipWith line (p:ps) ps
    over g1 g2     = \p -> if g1 p == Transparent then g2 p else g1 p

Incidentally, this representation also makes a nice formalization of the intuitive notion of pictures, making it possible to verify the correctness of other implementations. Of course, taking it as the definition of Graphic would still fall short of the original goal of creating meaning independent of any implementation. But this can be achieved by stating the laws that relate the available operations. For instance, we have

    g = empty `over` g = g `over` empty              (identity element)
    g `over` (h `over` j) = (g `over` h) `over` j    (associativity)
    g `over` g = g                                   (idempotence)

The first two equations say that Graphic is a monoid.

Abstraction and equational laws are the cornerstones of functional programming. For more, see also the following classics:

    John Hughes. The Design of a Pretty-printing Library.
    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.8777

    Philip Wadler. A prettier printer.
    http://homepages.inf.ed.ac.uk/wadler/topics/language-design.html#prettier

    Richard Bird. A program to solve Sudoku.
    Slides: http://icfp06.cs.uchicago.edu/bird-talk.pdf

(From this point of view, the feature of non-pure languages to allow side effects in every function is useless and distracting. Why on earth would I want over to potentially have side effects? That would just invalidate the laws while offering nothing in return.)
Often, the monadic solution _is_ the elegant solution. Please don't confuse monads with impure operations. I use the monadic properties of lists, often together with monad transformers, to find elegant solutions. As long as you're not abusing monads to program imperatively, I think, they are an excellent and elegant structure.
I do use state monads where there is no more elegant solution than passing state around. It's simply that: you have a structure, which you modify continuously in a complex fashion, such as a neural network or an automaton. Monads are the way to go here, unless you want to do research and find a better way to express this.
In the light of the discussion above, the state monad for a particular state is an implementation, not an abstraction. There is no independent meaning in "stateful computation with an automaton as state"; it is defined by its sole implementation. Sure, it does reduce boilerplate and simplifies the implementation, but it doesn't offer any insights. In other words, "passing state" is not an abstraction, and it's a good idea to consciously exclude it from the design space when searching for a good abstraction. Similar for the other monads, maybe except the nondeterminism monad to some extent.

Of course, a good abstraction depends on the problem domain. For automata, in particular finite state automata, I can imagine that the operations on corresponding regular expressions like concatenation, alternation and Kleene star are viable candidates. I have no clue about neural networks.

On a side note, not every function that involves "state" needs the state monad. For instance, an imperative language might accumulate a value with a while-loop updating a state variable, but in Haskell we simply pass a parameter to the recursive call

    foldl f z []     = z
    foldl f z (x:xs) = foldl f (f z x) xs

Another example is "modifying" a value, where a simple function of type s -> s like

    insert 1 'a' :: Map k v -> Map k v

will do the trick.
Personally I prefer this:
somethingWithRandomsM :: (Monad m, Random a) => m a -> Something a
over these:
somethingWithRandoms1 :: [a] -> Something a somethingWithRandoms2 :: RandomGen g => g -> Something a
Consciously excluding monads and restricting the design space to pure functions is the basic tool of thought for finding such elegant abstractions. [...]
You don't need to exclude monads to restrict the design space to pure functions. Everything except IO and ST (and some related monads) is pure. As said, often monads _are_ the elegant solutions. Just look at parser monads.
Thanks for the reminder; there is indeed a portion of designs that I overlooked in my previous post, namely when the abstraction involves a parameter. For instance,

    data Parser a

    parse :: Parser a -> String -> Maybe a

is a thing that can parse values of some type a. Here, the abstraction solves a whole class of problems, one for every type. Similarly,

    data Random a

denotes a value that "wiggles randomly". For instance, we can sample it with a random seed

    sample :: RandomGen g => Random a -> g -> [a]

or inspect its distribution

    distribution :: Eq a => Random a -> [(a, Probability)]

The former can be implemented with a state monad; for the latter, see also

    Martin Erwig, Steve Kollmansberger.
    Probabilistic functional programming in Haskell.
    http://web.engr.oregonstate.edu/~erwig/papers/PFP_JFP06.pdf

In these cases, it is a good idea to check whether the abstraction can be made a monad, just like it is good to realize that Graphic is a monoid. The same goes for applicative functors. These "abstractions about abstractions" are useful design guides, but this is very different from using a particular monad like the state monad and hoping that using it somehow gives an insight into the problem domain.

Regards,
H. Apfelmus

"Apfelmus, Heinrich"
[...] but this is very different from using a particular monad like the state monad and hoping that using it somehow gives an insight into the problem domain.
You're right, mostly. However, there are a lot of problems where you cannot provide any useful abstraction, or where the abstraction would destroy the convenience and clarity of expressing the problem as something as simple as a stateful computation. The 'insight' into a problem often comes from expressing its solution, not the problem itself.

Please consider that I'm talking about real-world applications, so my problems are things like internal database servers. Of course, there may be better ways to model such a thing, but a 'StateT (Map a b) IO' computation is the way to go for someone who wants to get the job done rather than do research, and in fact I think this is a very beautiful and elegant approach, exploiting Haskell's (or at least GHC's) great RTS features.

Greets,
Ertugrul.

--
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/

Apfelmus, Heinrich schrieb:
The insistence on avoiding monads by experienced Haskellers, in particular on avoiding the IO monad, is motivated by the quest for elegance.
The IO and other monads make it easy to fall back to imperative programming patterns to "get the job done". But do you really need to pass state around? Or is there a more elegant solution, an abstraction that makes everything fall into place automatically? Passing state is a valid implementation tool, but it's not a design principle.
I collected some hints on how to avoid at least the IO monad: http://www.haskell.org/haskellwiki/Avoiding_IO

On Sun, 2009-01-11 at 10:44 +0100, Apfelmus, Heinrich wrote:
The insistence on avoiding monads by experienced Haskellers, in particular on avoiding the IO monad, is motivated by the quest for elegance.
By some experienced Haskellers. Others pile them on where they feel it's appropriate, though avoiding IO where possible is still a good principle. I often find that less is essentially stateful than it looks. However, I also find that as I decompose tasks - especially if I'm willing to 'de-fuse' things - then state-like dataflows crop up again in places where they had been eliminated. Especially if I want to e.g. instrument or quietly abstract some code. Spotting particular sub-cases like Reader and Writer is a gain, of course!
A good example is probably the HGL (Haskell Graphics Library), a small vector graphics library which once shipped with Hugs. The core is the type
Graphic
which represents a drawing and whose semantics are roughly
Graphic = Pixel -> Color
<snip>
After having constructed a graphic, you'll also want to draw it on screen, which can be done with the function
drawInWindow :: Graphic -> Window -> IO ()
Note that there are two different things going on here. The principle of
building up 'programs' in pure code to execute via IO is good - though
ironically enough, plenty of combinator libraries for such tasks form
monads themselves. Finding the right domain for DSL programs is also
important, but this is not necessarily as neatly functional. If you
start with a deep embedding rather than a shallow one then this isn't
much of a problem even if you find your first attempt was fatally flawed
- the DSL code's just another piece of data.
--
Philippa Cowderoy

The question of imperative versus pure declarative coding has brought to my mind some maybe off-topic speculations (so please don't read this if you have no time to waste): I'm interested in the mysterious relation between mathematics, algorithms and reality (see this for example: http://arxiv.org/pdf/0704.0646).

Functional programming is declarative: you can model the entire world functionally with no concern for the order of calculations. The world is mathematical. The laws of physics have no concern for sequencing. But CPUs and communications are basically sequential. Life is sequential, and programs run along the time coordinate. Whenever you have to run a program, you or your compiler must sequence it. The sequencing must be done by you or your compiler or both. Functional declarative code can be sequenced on the run in the CPU by the runtime, but IO is different.

You have to create, explicitly or implicitly, the sequence of IO actions, because the external events in the external world are not controlled by you or the compiler. So you, the programmer, are responsible for sequencing effects in coordination with the external world, and so every language must give you ways to express the sequencing of actions. That is why, to interact with the external world, you must think in terms of algorithms, that is, imperatively, no matter whether you make the imperative sequence (relatively) explicit with monads or whether you do it through pairs (state, value) or unsafePerformIO or whatever. You have to think imperatively either way, because you HAVE TO construct a sequence. I think that the elegant option is to recognize this different algorithmic nature of IO by using the IO monad.

In other terms, the appearance of input/output in a problem means that you model just a part of the whole system. The interaction with the part of the system that the program does not control appears as input/output. If the program included a model of the environment that gives the input and demands the output (including perhaps a model of yourself), then all the code could be side-effect free and unobservable. Thus, input/output is a measure of the lack of a priori information. Because this information is given and demanded at precise points in the time dimension, with a quantitative (real-time) or ordered (sequential) measure, these impure considerations must be taken into account in the program. However, the above is nonsensical, because if you know everything a priori, then you don't have a problem, so you don't need a program. Because problem solving is coping with unknown data that appears AFTER the program has been created, in order to produce output at the correct time, the program must have timing in it; it has an algorithmic nature, not a purely mathematical one. This applies indeed to all living beings, which respond to the environment, and this creates the notion of time.

Concerning monadic code with no side effects: in mathematical terms, sequenced (monadic) code is mathematically different from declarative code. A function describes what in mathematics is called a "manifold", with a number of dimensions equal to the number of parameters. On the other side, a sequence describes a particular trajectory in a manifold, an ordered set of points in a wider manifold surface. For this reason the latter must be described algorithmically. The former can be said to include all possible trajectories, and can be expressed declaratively. The latter is a sequence. You can use the former to construct the latter, but you must express the sequence, because you are defining the concrete trajectory in the general manifold that solves your concrete problem, not the other infinite sets of related problems. This essentially applies also to IO.

Well, this does not imply that you must use monads for it. For example, a way to express a sequence is a list where each element is a function of the previous one. The compiler is forced to sequence it the way you want, but this happens also with monad evaluation.

This can be exemplified with the laws of Newton: they are declarative, like any physical formula, with no concern for sequencing. But when you need to simulate the behaviour of a ballistic weapon, you must use a sequence of instructions (that include the Newton laws). (Well, in this case the trajectory is continuous and integrable and can be expressed as a single function; the manifold includes a unique trajectory, but this is not the case in ordinary discrete problems.) So any language needs declarative as well as imperative elements to program mathematical models as well as algorithms.

Cheers,
Alberto.
participants (22):
- Alberto G. Corona
- Andrew Wagner
- Apfelmus, Heinrich
- Brandon S. Allbery KF8NH
- Bulat Ziganshin
- ChrisK
- Dan Doel
- Duncan Coutts
- Ertugrul Soeylemez
- Henning Thielemann
- Jason Dusek
- Lennart Augustsson
- Luke Palmer
- Martijn van Steenbergen
- Miguel Mitrofanov
- minh thu
- Neal Alexander
- Peter Verswyvelen
- Philippa Cowderoy
- Ross Paterson
- Ryan Ingram
- Tim Newsham