
Hi folks,

Don Stewart noticed this blog post on Haskell by Brian Hurt, an OCaml hacker:

http://enfranchisedmind.com/blog/2009/01/15/random-thoughts-on-haskell/

It's a great post, and I encourage people to read it. I'd like to highlight one particular paragraph:

One thing that does annoy me about Haskell- naming. Say you've noticed a common pattern, a lot of data structures are similar to the difference list I described above, in that they have an empty state and the ability to append things onto the end. Now, for various reasons, you want to give this pattern a name using one of Haskell's tools for expressing common idioms as general patterns (type classes, in this case). What name do you give it? I'd be inclined to call it something like "Appendable". But no, Haskell calls this pattern a "Monoid". Yep, that's all a monoid is- something with an empty state and the ability to append things to the end. Well, it's a little more general than that, but not much. Simon Peyton Jones once commented that the biggest mistake Haskell made was to call them "monads" instead of "warm, fluffy things". Well, Haskell is exacerbating that mistake. Haskell developers, stop letting the category theorists name things. Please. I beg of you.

I'd like to echo that sentiment!

He went on to add:

If you're not a category theorist, and you're learning (or thinking of learning) Haskell, don't get scared off by names like "monoid" or "functor". And ignore anyone who starts their explanation with references to category theory- you don't need to know category theory, and I don't think it helps.

I'd echo that one too.

-- John
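For readers following along, the class being argued about is small enough to quote in full; this is essentially what Data.Monoid exports (comments added here):

    class Monoid a where
        mempty  :: a            -- the identity element
        mappend :: a -> a -> a  -- an associative binary operation
        mconcat :: [a] -> a     -- fold a list with mappend; has a default definition
        mconcat = foldr mappend mempty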

I have replied on his blog, but I'll repeat the gist of it here.
Why is there a fear of using existing terminology that is exact?
Why do people want to invent new words when there are already existing
ones with the exact meaning that you want?
If I see Monoid I know what it is, if I didn't know I could just look
on Wikipedia.
If I see Appendable I can guess what it might be, but exactly what does it mean?
-- Lennart

Lennart Augustsson wrote:
I have replied on his blog, but I'll repeat the gist of it here. Why is there a fear of using existing terminology that is exact? Why do people want to invent new words when there are already existing ones with the exact meaning that you want? If I see Monoid I know what it is, if I didn't know I could just look on Wikipedia. If I see Appendable I can guess what it might be, but exactly what does it mean?
I would suggest that having to look things up slows people down and might distract them from learning other, perhaps more useful, things about the language.

Ganesh

On Thu, Jan 15, 2009 at 4:12 PM, Sittampalam, Ganesh
Lennart Augustsson wrote:
I have replied on his blog, but I'll repeat the gist of it here. Why is there a fear of using existing terminology that is exact? Why do people want to invent new words when there are already existing ones with the exact meaning that you want? If I see Monoid I know what it is, if I didn't know I could just look on Wikipedia. If I see Appendable I can guess what it might be, but exactly what does it mean?
I would suggest that having to look things up slows people down and might distract them from learning other, perhaps more useful, things about the language.
Exactly. For example, the entry for monoid on Wikipedia starts: "In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single, associative binary operation and an identity element." I've had some set theory, but most programmers I know have not.

Most people don't understand pure functional programming either. Does
that mean we should introduce unrestricted side effects in Haskell?
-- Lennart

For what it's worth, many (most/all?) programmers I know in person don't have the slightest clue about Category Theory and they may have known about abstract algebra once upon a time but certainly don't remember any of it now. They usually understand the concepts perfectly well enough but by "lay terms" or by no particular name at all.

Personally, I don't mind it too much if the generic typeclasses are named using extremely accurate terms like Monoid, but saying that someone should then look up the abstract math concept and try to map this to something very concrete and simple such as a string seems like wasted effort.

Usually when encountering something like "Monoid" (if I didn't already know it), I'd look it up in the library docs. The problem I've had with this tactic is twofold:

First, the docs for the typeclass usually don't give any practical examples, so sometimes it's hard to be sure that the "append" in "mappend" means what you think it means.

Second is that there appears to be no way to document an _instance_. It would be really handy if there were even a single line under "Instances > Monoid ([] a)" that explained how the type class was implemented for the list type. As it is, if you know what a Monoid is already, it's easy to figure out how it would be implemented. If you don't, you're either stuck reading a bunch of pages on the generic math term monoid and then finally realizing that it means "appendable" (and other similar things), or grovelling through the library source code seeing how the instance is implemented.

My 2 cents,
-Ross

On Jan 15, 2009, at 11:36 AM, Lennart Augustsson wrote:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
-- Lennart

By no means do I suggest that Wikipedia should replace Haskell library
documentation.
I think the libraries should be documented in a mostly stand-alone way
(i.e., no references to old papers etc.). In the case of Monoid, a
few lines of text are enough to convey the meaning of it and give an example.
-- Lennart

Of course not, the wikipedians would probably have your head for notability guidelines or something ;-)

But seriously, I would have saved many hours of my life and probably many future ones if type class instances were documented and showed up in the haddock docs.

-Ross

On Jan 15, 2009, at 11:53 AM, Lennart Augustsson wrote:
By no means do I suggest that Wikipedia should replace Haskell library documentation. I think the libraries should be documented in a mostly stand-alone way (i.e., no references to old papers etc.). In the case of Monoid, a few lines of text is enough to convey the meaning of it and gives an example.
-- Lennart
On Thu, Jan 15, 2009 at 4:46 PM, Ross Mellgren
wrote: For what it's worth, many (most/all?) programmers I know in person don't have the slightest clue about Category Theory and they may have known about abstract algebra once upon a time but certainly don't remember any of it now. They usually understand the concepts perfectly well enough but by "lay terms" or by no particular name at all.
Personally, I don't mind it too much if the generic typeclasses are named using extremely accurate terms like Monoid, but saying that someone should then look up the abstract math concept and try to map this to something very concrete and simple such as a string seems like wasted effort.
Usually when encountering something like "Monoid" (if I didn't already know it), I'd look it up in the library docs. The problem I've had with this tactic is twofold:
First, the docs for the typeclass usually don't give any practical examples, so sometimes it's hard to be sure that the "append" in "mappend" means what you think it means.
Second is that there appears to be no way to document an _instance_. It would be really handy if there were even a single line under "Instances > Monoid ([] a)" that explained how the type class was implemented for the list type. As it is, if you know what a Monoid is already, it's easy to figure out how it would be implemented. If you don't, you're either stuck reading a bunch of pages on the generic math term monoid and then finally realizing that it means "appendable" (and other similar things), or grovelling through the library source code seeing how the instance is implemented.
My 2 cents,
-Ross
On Jan 15, 2009, at 11:36 AM, Lennart Augustsson wrote:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
-- Lennart
On Thu, Jan 15, 2009 at 4:22 PM, Thomas DuBuisson
wrote: On Thu, Jan 15, 2009 at 4:12 PM, Sittampalam, Ganesh
wrote: Lennart Augustsson wrote:
I have replied on his blog, but I'll repeat the gist of it here. Why is there a fear of using existing terminology that is exact? Why do people want to invent new words when there are already existing ones with the exact meaning that you want? If I see Monoid I know what it is, if I didn't know I could just look on Wikipedia. If I see Appendable I can guess what it might be, but exactly what does it mean?
I would suggest that having to look things up slows people down and might distract them from learning other, perhaps more useful, things about the language.
Exactly. For example, the entry for monoid on Wikipedia starts: "In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single, associative binary operation and an identity element."
I've had some set theory, but most programmers I know have not.
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

I'm totally with you on the instance documentation. I wish haddock allowed it.
On Thu, Jan 15, 2009 at 4:56 PM, Ross Mellgren
Of course not, the wikipedians would probably have your head for notability guidelines or something ;-)
But seriously, I would have saved many hours of my life and probably many future ones if type class instances were documented and showed up in the haddock docs.
-Ross

"Lennart Augustsson"
By no means do I suggest that Wikipedia should replace Haskell library documentation. I think the libraries should be documented in a mostly stand-alone way (i.e., no references to old papers etc.). In the case of Monoid, a few lines of text is enough to convey the meaning of it and gives an example.
I don't think references to old papers are a bad thing (they might be good papers), but such references should certainly not be a replacement for a brief explanation and helpful example!

On Thu, Jan 15, 2009 at 10:46 AM, Ross Mellgren
Usually when encountering something like "Monoid" (if I didn't already know it), I'd look it up in the library docs. The problem I've had with this tactic is twofold:
First, the docs for the typeclass usually don't give any practical examples, so sometimes it's hard to be sure that the "append" in "mappend" means what you think it means.
Second is that there appears to be no way to document an _instance_. It would be really handy if there were even a single line under "Instances > Monoid ([] a)" that explained how the type class was implemented for the list type. As it is, if you know what a Monoid is already, it's easy to figure out how it would be implemented. If you don't, you're either stuck reading a bunch of pages on the generic math term monoid and then finally realizing that it means "appendable" (and other similar things), or grovelling through the library source code seeing how the instance is implemented.
I think you have a good point regarding documentation. Usually what I end up doing is just going into ghci & testing out the instances with some trivial cases to make sure I have good intuition for how it's going to work. I don't think this is a problem with the term 'monoid' though, but just a very generic problem with documentation. I have to do the same thing to understand an instance of Foldable despite how literal the name is.

I don't know if it's very practical, but I like the idea of haddock generating either links to the source of the instance or some kind of expandable block that will show you the literal code.

Cheers,
Creighton
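For concreteness, the sort of ghci session described above might look something like this (a sketch, using the instances from Data.Monoid):

    Prelude> :m + Data.Monoid
    Prelude Data.Monoid> mappend [1,2] [3,4]                -- for lists, mappend is (++)
    [1,2,3,4]
    Prelude Data.Monoid> mempty :: [Int]                    -- and mempty is []
    []
    Prelude Data.Monoid> mappend (Just "foo") (Just "bar")  -- Maybe lifts the inner monoid
    Just "foobar"
    Prelude Data.Monoid> getSum (mappend (Sum 3) (Sum 4))   -- numbers go via the Sum newtype
    7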

Ross Mellgren
Usually when encountering something like "Monoid" (if I didn't already know it), I'd look it up in the library docs. The problem I've had with this tactic is twofold:
First, the docs for the typeclass usually don't give any practical examples, so sometimes it's hard to be sure that the "append" in "mappend" means what you think it means.
I second this, many of the docs are sorely lacking examples (and there are of course docs that simply reference a paper, which is usually too long to be helpful in the short term...)

On Thu, Jan 15, 2009 at 11:46 AM, Ross Mellgren
Usually when encountering something like "Monoid" (if I didn't already know it), I'd look it up in the library docs. The problem I've had with this tactic is twofold:
First, the docs for the typeclass usually don't give any practical examples, so sometimes it's hard to be sure that the "append" in "mappend" means what you think it means.
The documentation for Monoid is embarrassingly brief. "The monoid class. A minimal complete definition must supply mempty and mappend, and these should satisfy the monoid laws." It doesn't even list the monoid laws!
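For the record, the laws in question are only these three; written as QuickCheck-style Haskell properties (the prop_ names are made up here) they fit in a handful of lines:

    import Data.Monoid

    prop_leftIdentity :: (Eq a, Monoid a) => a -> Bool
    prop_leftIdentity x = mempty `mappend` x == x

    prop_rightIdentity :: (Eq a, Monoid a) => a -> Bool
    prop_rightIdentity x = x `mappend` mempty == x

    prop_associativity :: (Eq a, Monoid a) => a -> a -> a -> Bool
    prop_associativity x y z = (x `mappend` y) `mappend` z == x `mappend` (y `mappend` z)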
Second is that there appears to be no way to document an _instance_. It would be really handy if there were even a single line under "Instances > Monoid ([] a)" that explained how the type class was implemented for the list type. As it is, if you know what a Monoid is already, it's easy to figure out how it would be implemented.
Not necessarily. Any instance of MonadPlus (or Alternative) has at
least two reasonable Monoid instances: (mplus, mzero) and (liftM2
mappend, return mempty). [] uses the first and Maybe uses the second.
I recommend not creating direct instances of Monoid for this reason.
If you want to use Monoid with Int, you have to use Sum Int or Product
Int.
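To make that ambiguity concrete, here is a sketch of the two candidates as actual instances (the wrappers AsMonadPlus and Lifted are names made up for this example; they are not in the libraries):

    import Control.Monad (MonadPlus(..), liftM2)
    import Data.Monoid (Monoid(..))

    -- Candidate 1: use the MonadPlus structure directly.
    -- For lists this gives mappend == (++).
    newtype AsMonadPlus m a = AsMonadPlus (m a)

    instance MonadPlus m => Monoid (AsMonadPlus m a) where
        mempty = AsMonadPlus mzero
        AsMonadPlus x `mappend` AsMonadPlus y = AsMonadPlus (x `mplus` y)

    -- Candidate 2: lift the element type's monoid through the monad.
    -- This is the reading the library's Maybe instance uses.
    newtype Lifted m a = Lifted (m a)

    instance (Monad m, Monoid a) => Monoid (Lifted m a) where
        mempty = Lifted (return mempty)
        Lifted x `mappend` Lifted y = Lifted (liftM2 mappend x y)

    -- The base library sidesteps the same ambiguity for numbers by making
    -- you say which monoid you mean:
    --   getSum     (Sum 2     `mappend` Sum 3)     == 5
    --   getProduct (Product 2 `mappend` Product 3) == 6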
--
Dave Menendez

On Jan 15, 2009, at 1:21 PM, David Menendez wrote:
On Thu, Jan 15, 2009 at 11:46 AM, Ross Mellgren
wrote: Second is that there appears to be no way to document an _instance_. It would be really handy if there were even a single line under "Instances > Monoid ([] a)" that explained how the type class was implemented for the list type. As it is, if you know what a Monoid is already, it's easy to figure out how it would be implemented.
Not necessarily. Any instance of MonadPlus (or Alternative) has at least two reasonable Monoid instances: (mplus, mzero) and (liftM2 mappend, return mempty). [] uses the first and Maybe uses the second.
Sorry, my brain apparently misfired writing the original email. What I meant to say is that for the Monoid instance on [a] it's fairly easy (knowing what a Monoid is) to figure out how it's implemented, but that's not true for other classes or instances. That is to say, I agree with you, and intended to, up front ;-)

-Ross

For what it's worth, many (most/all?) programmers I know in person don't have the slightest clue about Category Theory and they may have known about abstract algebra once upon a time but certainly don't remember any of it now. They usually understand the concepts perfectly well enough but by "lay terms" or by no particular name at all.
One of my friends once said "... and by 'programmer' I mean 'category theory specialist'".

Lennart Augustsson wrote:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
The key is to introduce concepts to them in terms they can understand. You introduce it one way to experienced abstract mathematicians, and a completely different way to experienced Perl hackers. I wouldn't expect a mathematician to grok Perl, and I wouldn't expect $PERL_HACKER to grok abstract math. People have different backgrounds to draw upon, and we are under-serving one community. -- John

On Thu, 2009-01-15 at 10:56 -0600, John Goerzen wrote:
Lennart Augustsson wrote:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
The key is to introduce concepts to them in terms they can understand.
You introduce it one way to experienced abstract mathematicians, and a completely different way to experienced Perl hackers. I wouldn't expect a mathematician to grok Perl, and I wouldn't expect $PERL_HACKER to grok abstract math. People have different backgrounds to draw upon, and we are under-serving one community.
False. We are failing to meet the unrealistic expectations of advanced Perl/Python/Ruby/C/C++/Java/any other imperative language programmers as to the ease with which they should be able to learn Haskell.

jcc

On Thu, Jan 15, 2009 at 01:50:11PM -0800, Jonathan Cast wrote:
On Thu, 2009-01-15 at 10:56 -0600, John Goerzen wrote:
Lennart Augustsson wrote:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
The key is to introduce concepts to them in terms they can understand.
You introduce it one way to experienced abstract mathematicians, and a completely different way to experienced Perl hackers. I wouldn't expect a mathematician to grok Perl, and I wouldn't expect $PERL_HACKER to grok abstract math. People have different backgrounds to draw upon, and we are under-serving one community.
False. We are failing to meet the un-realistic expectations of advanced Perl/Python/Ruby/C/C++/Java/any other imperative language programmers as to the ease with which they should be able to learn Haskell.
What part of that are you saying is false? That people have different backgrounds and learn differently?
jcc

On Thu, 2009-01-15 at 16:16 -0600, John Goerzen wrote:
On Thu, Jan 15, 2009 at 01:50:11PM -0800, Jonathan Cast wrote:
On Thu, 2009-01-15 at 10:56 -0600, John Goerzen wrote:
Lennart Augustsson wrote:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
The key is to introduce concepts to them in terms they can understand.
You introduce it one way to experienced abstract mathematicians, and a completely different way to experienced Perl hackers. I wouldn't expect a mathematician to grok Perl, and I wouldn't expect $PERL_HACKER to grok abstract math. People have different backgrounds to draw upon, and we are under-serving one community.
False. We are failing to meet the un-realistic expectations of advanced Perl/Python/Ruby/C/C++/Java/any other imperative language programmers as to the ease with which they should be able to learn Haskell.
What part of that are you saying is false? That people have different backgrounds and learn differently?
Not just differently. Some people learn faster than others. These relative speeds also vary across different subjects.

I think the implicit assumption in most complaints about learning Haskell is that the ease with which any given developer learns Haskell (or learns a new Haskell library or concept) should be comparable to the ease with which said developer learns conventional languages, e.g. Perl. This assumption is false. In fact, if someone finds Perl particularly easy to learn (relative to other subjects), I would expect that person to find Haskell particularly hard to learn (relative to other subjects). Of course mathematicians find Haskell easier to learn than Perl programmers do; this is a consequence of the nature of Haskell, the nature of Perl, and the nature of mathematics.

We are under no obligation to obtain equivalent outcomes from non-interchangeable people. That people who lack natural aptitude, or relevant prior knowledge, for learning Haskell have more difficulty than those with relevant natural aptitude or prior knowledge is in no way a failure of the Haskell community.

jcc

Lennart Augustsson wrote:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
No, just that we should seek to minimise the new stuff they have to get to grips with.

Ganesh

On Thu, 2009-01-15 at 17:13 +0000, Sittampalam, Ganesh wrote:
Lennart Augustsson wrote:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
No, just that we should seek to minimise the new stuff they have to get to grips with.
How does forcing them to learn proposed terminology such as `Appendable' help here? Learners of Haskell do still need to learn what the new word means. (Or are you suggesting we should aim for programmers to be able to use Haskell (or just certain libraries?) without learning it first?) jcc

How does forcing them to learn proposed terminology such as `Appendable' help here? Learners of Haskell do still need to learn what the new word means.
The contention is that 'Appendable' is an intuitive naming that people will already have a rudimentary grasp of. This as opposed to Monoid, which absolutely requires looking up for the average coder.

On Thu, 2009-01-15 at 21:59 +0000, Thomas DuBuisson wrote:
How does forcing them to learn proposed terminology such as `Appendable' help here? Learners of Haskell do still need to learn what the new word means.
The contention is that 'Appendable' is an intuitive naming that people will already have a rudimentary grasp of.
But this contention is false. jcc

On Thu, 2009-01-15 at 21:59 +0000, Thomas DuBuisson wrote:
How does forcing them to learn proposed terminology such as `Appendable' help here? Learners of Haskell do still need to learn what the new word means.
The contention is that 'Appendable' is an intuitive naming that people will already have a rudimentary grasp of. This as opposed to Monoid, which absolutely requires looking up for the average coder.
It reminds me a bit of my school French classes. Our teacher often brought up the subject of "false friends", that is words in the foreign language that sound superficially familiar to one in the native language but are in fact different in subtle but important ways.

In this case Appendable has the wrong connotations for what the Monoid class does. Appendable does not sound symmetric to me and it places too much emphasis on monoids that resemble lists.

Perhaps there is a more common word that reflects the meaning without being misleading. But if we cannot find one, then picking a name that is unfamiliar to most people may well be better than picking a name that is misleading or is too narrow.

Duncan
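To make that concrete, here is a small sketch using instances from Data.Monoid where nothing is being "appended" at all, yet the monoid structure (an associative operation plus an identity) fits exactly (the name notVeryListLike is made up for illustration):

    import Data.Monoid

    notVeryListLike :: [Bool]
    notVeryListLike =
      [ (compare 1 1 `mappend` compare 2 3) == LT        -- Ordering: chain comparisons, EQ is the identity
      , appEndo (Endo (+1) `mappend` Endo (*2)) 5 == 11  -- Endo: function composition, id is the identity
      , getAny (Any False `mappend` Any True)            -- Any: (||) with False as identity
      , getAll (All True `mappend` All True)             -- All: (&&) with True as identity
      ]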

Thomas DuBuisson wrote:
How does forcing them to learn proposed terminology such as `Appendable' help here? Learners of Haskell do still need to learn what the new word means.
The contention is that 'Appendable' is an intuitive naming that people will already have a rudimentary grasp of. This as opposed to Monoid, which absolutely requires looking up for the average coder.
Intuition tells me:

* 'Appendable' adds an element to the back of a (finite) linear collection.
* There is a 'Prependable' somewhere that adds the element to the front.
* There is an inverse 'pop' or 'deque' operation nearby.

Absolutely none of those things are true. Let's try for 'Mergeable':

* mconcat joins two collections, not a collection and an element.
* There should be a split operation.

The above is true for the list instance, but false in general. Look at the instances already given that violate the "collection" idea:

Monoid Any
Monoid All
Monoid (Last a)
Monoid (First a)
Num a => Monoid (Product a)
Num a => Monoid (Sum a)

And I don't even see an (Ord a) => Monoid (Max a) or a Min instance.

So the original article, which coined 'Appendable', did so without much thought in the middle of a long post. But it does show the thinking was about collections, and there is only ONE instance of Monoid at
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html#...
that is about a collection (Monoid ([] a)) that has a split operation. ONE.
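Such an instance is easy enough to write if you want it; a minimal sketch (the Max wrapper is invented here for illustration and is not part of Data.Monoid):

    import Data.Monoid

    newtype Max a = Max { getMax :: a }
        deriving (Eq, Ord, Show)

    -- max is associative and minBound is its identity, so this is a lawful
    -- Monoid for any bounded, ordered element type.
    instance (Ord a, Bounded a) => Monoid (Max a) where
        mempty                = Max minBound
        Max x `mappend` Max y = Max (x `max` y)

    -- e.g. getMax (mconcat (map Max [3, 1, 4, 1, 5 :: Int])) == 5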

On Thu, Jan 15, 2009 at 11:54 PM, ChrisK
So the original article, which coined 'Appendable', did so without much thought in the middle of a long post. But it does show the thinking was about collections and there is one ONE instance of Monoid at
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html#...
that is about a collection (Monoid ([] a)) that has a split operation.
The blind man at the back takes firm hold of the tail and says, "But why do we need to call it an *elephant*? No-one knows what that is. Everyone knows what a rope is, so we should just call it a rope."

And that is how the elephant came to be labelled a rope in all the guide books.

:-)

Cheers,
D

Well-put. Thanks! - Conal

On Thu, 2009-01-15 at 21:59 +0000, Thomas DuBuisson wrote:
How does forcing them to learn proposed terminology such as `Appendable' help here? Learners of Haskell do still need to learn what the new word means.
The contention is that 'Appendable' is an intuitive naming that people will already have a rudimentary grasp of. This as opposed to Monoid, which absolutely requires looking up for the average coder.
In programming, -every- name requires looking up or some other way of checking meaning. Other than perhaps arithmetic operators (and I have had that bite me), I have -never- in any language written code using a name without having some assurance that it actually meant what I thought it meant. Usually you have to "look something up" to even know a name exists no matter how "intuitive" it turns out to be.

I have to say, I agree with Lennart here.

Terms like monoid have had a precise definition for a very long time. Replacing a well-defined term by a vaguely defined one only serves to avoid facing one's ignorance - IMHO an unwise move for a technical expert.

Learning Haskell has often been described as a perspective changing, deeply enlightening process. I believe this is because the language and the community favour drilling down to the core of a problem and exposing its essence in the bright light of mathematical precision. It would be a mistake to give up on that.

We could call lambda abstraction "name binder", and we could call the lambda calculus a "rule system to manipulate name bindings". That would avoid some scary Greek. Would it make functional programming any easier?

In contrast, even the planned new C++0x standard uses our terminology:

http://en.wikipedia.org/wiki/C%2B%2B0x#Lambda_functions_and_expressions

Ok, ok, they do mutilate the whole idea quite brutally, but the point is, we got in their heads. That counts.

I am all for helping beginners to learn, but I am strongly against diluting what is being learnt. If some of our terminology is a problem, we need to explain it better.

Manuel

Lennart Augustsson:
Most people don't understand pure functional programming either. Does that mean we should introduce unrestricted side effects in Haskell?
-- Lennart

Lennart Augustsson wrote:
I have replied on his blog, but I'll repeat the gist of it here. Why is there a fear of using existing terminology that is exact? Why do people want to invent new words when there are already existing ones with the exact meaning that you want? If I see Monoid I know what it is, if I didn't know I could just look on Wikipedia. If I see Appendable I can guess what it might be, but exactly what does it mean?
Picture someone that doesn't yet know Haskell.

If I see Appendable I can guess what it might be. If I see "monoid", I have no clue whatsoever, because I've never heard of a monoid before. Using existing terminology isn't helpful if the people using the language have never heard of it.

Wikipedia's first sentence about monoids is:

In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single, associative binary operation and an identity element.

Which is *not* intuitive to someone that comes from a background in.... any other programming language.

A lot of communities have the "not invented here" disease -- they don't like to touch things that other people have developed. We seem to have the "not named here" disease -- we don't want to give things a sensible name for a programming language that is actually useful.

Here's another, less egregious, example: isInfixOf. I would have called that function "contains" or something. Plenty of other languages have functions that do the same thing, and I can't think of one that names it anything like "isInfixOf".

If you're learning Haskell, which communicates the idea more clearly:

* Appendable

or

* Monoid

I can immediately figure out what the first one means. With the second, I could refer to the GHC documentation, which does not describe what a Monoid does. Or read a wikipedia article about a branch of mathematics and try to figure out how it applies to Haskell. The GHC docs for something called Appendable could very easily state that it's a monoid. (And the docs for Monoid ought to spell out what it is in simple terms, not by linking to a 14-year-old paper.)

I guess the bottom line question is: who is Haskell for? Category theorists, programmers, or both? I'd love it to be for both, but I've got to admit that Brian has a point that it is trending to the first in some areas.

-- John
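For reference, isInfixOf lives in Data.List and works over lists of any element type, not just String; a quick sketch of its behaviour (the name containsDemo is made up):

    import Data.List (isInfixOf)

    -- isInfixOf :: Eq a => [a] -> [a] -> Bool
    -- True when the first list occurs contiguously anywhere inside the second.
    containsDemo :: [Bool]
    containsDemo =
      [ "ell" `isInfixOf` "Hello"           -- True
      , [3, 4] `isInfixOf` [1, 2, 3, 4, 5]  -- True
      , "xyz" `isInfixOf` "Hello"           -- False
      ]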

Why do people think that you should be able to understand everything
without ever looking things up?
I'll get back to my example from the comment on the blog post. If I
see 'ghee' in a cook book I'll check what it is (if I don't know). It
has a precise meaning and next time I'll know. Inventing a new word
for it serves no purpose, but to confuse people.
Parts of Computer Science seem to love to invent new words for
existing concepts. Again, I think this just confuses people in the
long run. Or use existing words in a different way (like 'functor' in
C++.)
When it comes to 'isInfixOf', I think that is a particularly stupid
name. As you point out, there are existing names for this function.
I would probably have picked something like isSubstringOf instead of
inventing something that is totally non-standard.
I'm not saying Haskell always gets naming right, all I want is to
reuse words that exist instead of inventing new ones. (And 'monoid'
is not category theory, it's very basic (abstract) algebra.) I don't
know any category theory, but if someone tells me that blah is an
endomorphism I'm happy to call it that, knowing that I have a name
that anyone can figure out with just a little effort.
-- Lennart

Lennart Augustsson wrote:
a name that anyone can figure out with just a little effort.
I think the problem is that all these pieces of "little effort" soon mount up. It's not just the cost of looking it up, but also of remembering it the next time and so on. It's fine when you only encounter the occasional unfamiliar term, but a barrage of them all at once can be quite disorienting.

Ganesh

I won't deny that Haskell has a large number of unfamiliar terms if
you've only seen Java before.
But I don't think that giving them all happy fuzzy names will help
people in the long run.
-- Lennart

Lennart Augustsson wrote:
Why do people think that you should be able to understand everything without ever looking things up?
I don't. But looking things up has to be helpful. In all too many cases, looking things up means clicking the link to someone's old academic paper or some article about abstract math in Wikipedia. It does not answer the questions:

* Why is this in Haskell?
* Why would I want to use it?
* How does it benefit me?
* How do I use it in Haskell?

If the docs for things like Monoids were more newbie-friendly, I wouldn't gripe about it as much.

Though if all we're talking about is naming, I would still maintain that newbie-friendly naming is a win. We can always say "HEY MATHEMATICIANS: APPENDABLE MEANS MONOID" in the haddock docs ;-)

Much as I dislike Java's penchant for 200-character names for things, I'm not sure Monoid is more descriptive than SomeSortOfGenericThingThatYouCanAppendStuffToClassTemplateAbstractInterfaceThingy :-)

-- John

I think the documentation should be reasonably newbie-friendly too.
But that doesn't mean we should call Monoid Appendable.
Appendable is just misleading, since Monoid is more general than appending.
-- Lennart

Lennart Augustsson wrote:
I think the documentation should be reasonably newbie-friendly too. But that doesn't mean we should call Monoid Appendable. Appendable is just misleading, since Monoid is more general than appending.
Then why does it have a member named 'mappend'? :-)

Ganesh

Beats me. As I said, I don't think Haskell gets all the names right. :)

Sittampalam, Ganesh wrote:
Lennart Augustsson wrote:
I think the documentation should be reasonably newbie-friendly too. But that doesn't mean we should call Monoid Appendable. Appendable is just misleading, since Monoid is more general than appending.
Then why does it have a member named 'mappend'? :-)
That's a mistake - and in fact, it's a good demonstration of why Monoid should not be named something like Appendable - because it misleads people into thinking that the structure is less general than it really is. Anton

2009/1/15 Sittampalam, Ganesh
Lennart Augustsson wrote:
I think the documentation should be reasonably newbie-friendly too. But that doesn't mean we should call Monoid Appendable. Appendable is just misleading, since Monoid is more general than appending.
Then why does it have a member named 'mappend'? :-)
Ganesh
Good question. The names of the methods of the Monoid class are inappropriate. My personal preference would be:

    class Monoid m where
        zero :: m
        (++) :: m -> m -> m

(in the Prelude of course)

- Cale

+1 to that

Regards,
John

Perhaps as a math/CS major I'm a bit biased here, but as a Haskell neophyte, I think my opinion is at least somewhat relevant...

The immediate problem is certainly documentation. No one would groan more than once after hearing a term from abstract math if there was sufficient Haskell-oriented language. I can say that a monad is an endo-functor (in this case on Hask) which is the composition of two natural transformations; the clever parrot I am, I can even understand most of this... but that doesn't mean anything to me in a Haskell context; until a new Haskell programmer can find a Hasktionary with relevant terms in a Haskell context, everything is meaningless -- even though I can understand (i.e. apply meaning to) the definition of a monad on Hask, I can't apply the sort of meaning required to program with a monad without either figuring it out through experience, or seeing it shown to me in a programmer-relevant way. (Or rather, both need to happen.)

On the other hand, I really, really like getting behind things and understanding the theory. In C, well, C is the theory -- if I want to go deeper, I need to learn as much as I can about the way a computer actually, physically works (why, why, then, even have higher level languages?) or I need to take a history lesson so I can figure out the math that helped out... With Haskell, the theory is right there, staring at me. If the documentation were better, I wouldn't *need* to learn it, but if my curiosity is piqued, it's right there. Naming monoid "appendable" kills that -- by trying to make things "warm and fuzzy", you've weakened one of my strongest motivators for programming (especially in Haskell), namely how much of a direct application of cool math it is. I know I'm not the only one.

As far as I know, one of the draws of Haskell is the inherent mathematical nature of it -- in how many other languages do people write proofs of correctness because they don't want to write test-cases? The kind of people who are going to be drawn to a language which allows [(x,y) | x <- somelist, y <- someotherlist] are, overall, a mathy set of people, and trying to make our terms fuzzy isn't going to change that. So why not embrace it?

This leads to another point: monoids are probably called monoids for the same reason monads are monads: they came directly out of higher math. Someone didn't sit down trying to name this cool new Haskell idea he had and then say, "Oh, of course, it's just [insert "obscure" math word that's used in Haskell]! I'll keep that name." He sat down and said, "Oh! Wait, if I use [insert "obscure" math word that's used in Haskell] this problem is simpler." It's named for what it is: a monoid, a monad, a functor, existential quantification.

But there's a deeper problem here, one that can't be resolved inside the Haskell community. The problem is that the "Math?! Scary! Gross!" attitude that's so pervasive in our society is hardly less pervasive in the computer subculture. I shouldn't be more able to discuss abstract math with a conservatory dropout theater student than with someone who plans, for a living, to put well-formed formulae onto a finite state machine. I am. I don't expect the average programmer to be able to give me a well-ordering of the reals (with choice, of course), or to prove that category C satisfying property P must also satisfy property Q; but for God's sake, they'd better have a good intuition for basic set theory, basic graph theory, and most importantly mathematical abstraction.

What is "good coding style" if not making the exact same types of abstractions that mathematicians make? Again, I don't expect a CS major to write a good proof, to explain rings to a 13 year old, or actually "do" real math; but I expect one to be able to read "In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single, associative binary operation and an identity element." and be able to get a basic intuitive idea for what a monoid is, with some thought of course. I would go so far as to say that a programmer should be able to intuitively understand "a system of objects and associative arrows", even if they can't "do math" with that... Programmers shouldn't be allowed to get away with the absolute ignorance of math that at least 75% of the CS majors at my school pass with.

This doesn't mean there isn't a serious problem with Haskell documentation; it also doesn't mean that everyone should have a minor in math to be a programmer, but isn't an introductory course in discrete math required for most CS programs? Then why are programmers so afraid of basic mathematical definitions? And why do so many wear their ignorance and fear of math as a badge of honor?

Sorry that this came out as a bit of a rant, but I spend enough time trying to convince people that math isn't horrid and disgusting...

Cory Knapp

Cory Knapp wrote:
As far as I know, one of the draws of Haskell is the inherent mathematical nature of it.
It's also simultaneously one of the biggest things that puts people off. Perhaps we can curb this with sufficient documentation, as others have suggested.
But there's a deeper problem here, one that can't be resolved inside the Haskell community. The problem is that the "Math?! Scary! Gross!" attitude that's so pervasive in our society is hardly less pervasive in the computer subculture.
No arguments here! However, that at least *is* completely beyond our power to alter. Unfortunately.

Andrew Coppin wrote:
Cory Knapp wrote:
As far as I know, one of the draws of Haskell is the inherent mathematical nature of it.
It's also simultaneously one of the biggest things that puts people off.
Perhaps as we can curb this with sufficient documentation, as others have suggested.
Actually, that was part of my point: When I mention Haskell to people, and when I start describing it, they're generally frightened enough by the focus on pure code and lazy evaluation -- add to this the inherently abstract nature, and we can name typeclasses "cuddlyKitten", and the language is still going to scare J. R. Programmer. By "inherently mathematical nature", I didn't mean names like "monoid" and "functor", I meant *concepts* like monoid and functor. Not that either of them are actually terribly difficult; the problem is that they are terribly abstract. That draws a lot of people (especially mathematicians), but most people who aren't drawn by that are hugely put off -- whatever the name is. So, I guess my point is that the name is irrelevant: the language is going to intimidate a lot of people who are intimidated by the vocabulary.

At the same time, I think everyone is arguing *for* better documentation. And you're probably right: better documentation will bring the abstract nonsense down to earth somewhat.
But there's a deeper problem here, one that can't be resolved inside the Haskell community. The problem is that the "Math?! Scary! Gross!" attitude that's so pervasive in our society is hardly less pervasive in the computer subculture.
No arguments here!
However, that at least *is* completely beyond our power to alter. Unfortunately.
Indeed. Cheers, Cory

Cory Knapp wrote:
Actually, that was part of my point: When I mention Haskell to people, and when I start describing it, they're generally frightened enough by the focus on pure code and lazy evaluation-- add to this the inherently abstract nature, and we can name typeclasses "cuddlyKitten", and the language is still going to scare J. R. Programmer. By "inherently mathematical nature", I didn't mean names like "monoid" and "functor", I meant *concepts* like monoid and functor. Not that either of them are actually terribly difficult; the problem is that they are terribly abstract. That draws a lot of people (especially mathematicians), but most people who aren't drawn by that are hugely put off-- whatever the name is. So, I guess my point is that the name is irrelevant: the language is going to intimidate a lot of people who are intimidated by the vocabulary.
Oh, I don't know. I have no idea what the mathematical definition of "functor" is, but as far as I can tell, the Haskell typeclass merely allows you to apply a function simultaneously to all elements of a collection. That's pretty concrete - and trivial. If it weren't for the seemingly cryptic name, nobody would think twice about it. (Not sure exactly what you'd call it though...)

A monoid is a rather more vague concept. (And I'm still not really sure why it's useful on its own. Maybe I just haven't had need of it yet?)

I think, as somebody suggested about "monad", the name does tend to inspire a feeling of "hey, this must be really complicated" so that even after you've understood it, you end up wondering whether there's still something more to it than that.

But yes, some people are definitely put off by the whole "abstraction of abstractions of abstraction" thing. I think we probably just need some more concrete examples to weigh it down and make it seem like something applicable to the real world.

(Thus far, I have convinced exactly *one* person to start learning Haskell. This person being something of a maths nerd, their main complaint was not about naming or abstraction, but about the "implicitness" of the language, and the extreme difficulty of visually parsing it. Perhaps not surprising coming from a professional C++ programmer...)
At the same time, I think everyone is arguing *for* better documentation. And you're probably right: better documentation will bring the abstract nonsense down to earth somewhat.
Amen!

2009/1/17 Andrew Coppin
Cory Knapp wrote:
Actually, that was part of my point: When I mention Haskell to people, and when I start describing it, they're generally frightened enough by the focus on pure code and lazy evaluation-- add to this the inherently abstract nature, and we can name typeclasses "cuddlyKitten", and the language is still going to scare J. R. Programmer. By "inherently mathematical nature", I didn't mean names like "monoid" and "functor", I meant *concepts* like monoid and functor. Not that either of them are actually terribly difficult; the problem is that they are terribly abstract. That draws a lot of people (especially mathematicians), but most people who aren't drawn by that are hugely put off-- whatever the name is. So, I guess my point is that the name is irrelevant: the language is going to intimidate a lot of people who are intimidated by the vocabulary.
Oh, I don't know. I have no idea what the mathematical definition of "functor" is, but as far as I can tell, the Haskell typeclass merely allows you to apply a function simultaneously to all elements of a collection. That's pretty concrete - and trivial. If it weren't for the seemingly cryptic name, nobody would think twice about it. (Not sure exactly what you'd call it though...)
No, a functor is a wider notion than that; it has nothing to do with collections. An explanation closer to the truth would be "A structure is a functor if it provides a way to convert a structure over X to a structure over Y, given a function X -> Y, while preserving the underlying 'structure'", where preserving structure means being compatible with composition and identity. Collections are one particular case. Another case is just functions with fixed domain A: given a "structure" of type [A->]X and a function of type X -> Y, you may build an [A->]Y. Yet another case is monads (actually, the example above is the Reader monad): given a monadic computation of type 'm a' and a function a -> b, you may get a computation of type m b:

    instance (Monad m) => Functor m where
        fmap f ma = do a <- ma; return (f a)

There are many other examples of functors; they are as ubiquitous as monoids and monads :) -- Eugene Kirpichov
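[As a concrete illustration of the cases Eugene lists, here is a minimal sketch; the example values are mine, and it assumes a base where the ((->) r) Functor instance is in scope, as in any recent GHC.]

    -- fmap at three instances: lists, Maybe, and functions with a fixed
    -- domain (Eugene's "[A ->] X" case, i.e. the Reader functor).
    example1 :: [Int]
    example1 = fmap (+1) [1, 2, 3]       -- [2,3,4]

    example2 :: Maybe Int
    example2 = fmap (+1) (Just 41)       -- Just 42

    example3 :: String -> Int
    example3 = fmap (+1) length          -- \s -> length s + 1

    main :: IO ()
    main = do
      print example1
      print example2
      print (example3 "hello")           -- 6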

Eugene Kirpichov wrote:
No, a functor is a wider notion than that; it has nothing to do with collections. An explanation closer to the truth would be "A structure is a functor if it provides a way to convert a structure over X to a structure over Y, given a function X -> Y, while preserving the underlying 'structure'", where preserving structure means being compatible with composition and identity.
As far as I'm aware, constraints like "while preserving the underlying structure" are not expressible in Haskell.
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?

2009/1/17 Andrew Coppin
Eugene Kirpichov wrote:
No, a functor is a wider notion than that; it has nothing to do with collections. An explanation closer to the truth would be "A structure is a functor if it provides a way to convert a structure over X to a structure over Y, given a function X -> Y, while preserving the underlying 'structure'", where preserving structure means being compatible with composition and identity.
As far as I'm aware, constraints like "while preserving the underlying structure" are not expressible in Haskell.
Yes, but they are expressible in your mind so that you can recognize a functor and design your program so that it does satisfy this constraint, thus removing a large faulty piece of the design space. Also, you can write a QuickCheck test for fmap (f . g) = fmap f . fmap g and fmap id = id.
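[For concreteness, the kind of QuickCheck test Eugene mentions might look like the following sketch; the laws have to be checked at specific types, so the choice of [Int] and of the functions (+1) and (*2) is mine.]

    import Test.QuickCheck

    -- Functor laws, specialised to lists of Ints so QuickCheck can
    -- generate and compare test cases.
    prop_fmapId :: [Int] -> Bool
    prop_fmapId xs = fmap id xs == id xs

    prop_fmapCompose :: [Int] -> Bool
    prop_fmapCompose xs = fmap ((+1) . (*2)) xs == (fmap (+1) . fmap (*2)) xs

    main :: IO ()
    main = do
      quickCheck prop_fmapId
      quickCheck prop_fmapCompose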
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
In the same sense as monoids are; see my previous message. If you mean the usefulness of a Functor typeclass in Haskell, it's in the fact that everywhere you'd like to convert a structure over X to a structure over Y (for example, the result of a monadic computation), you simply write 'fmap f structure' and it works the right way, if the structure has an instance for Functor (many structures do). I know I'm being a bit abstract, but that's the way I perceive it.

    do filename <- toLowerCase `fmap` readLine
       ....

On Sat, Jan 17, 2009 at 5:04 AM, Andrew Coppin
Eugene Kirpichov wrote:
No, a functor is a wider notion than that; it has nothing to do with collections. An explanation closer to the truth would be "A structure is a functor if it provides a way to convert a structure over X to a structure over Y, given a function X -> Y, while preserving the underlying 'structure'", where preserving structure means being compatible with composition and identity.
As far as I'm aware, constraints like "while preserving the underlying structure" are not expressible in Haskell.
Well, they're expressible *about* Haskell. I.e., for functors we require:

    fmap id = id
    fmap (f . g) = fmap f . fmap g

The first property is how we write "preserving underlying structure", but this has a precise, well-defined meaning that we can say a given functor obeys or it does not (and if it does not, we say that it's a bad instance). But you are correct that Haskell does not allow us to require proofs of such properties. And indeed, some people break those properties in various ways, which some consider okay if the breakage is not observable from outside a given abstraction barrier. I'm on the fence about that... Luke

Hello Luke, Saturday, January 17, 2009, 3:16:06 PM, you wrote:
fmap id = id
fmap (f . g) = fmap f . fmap g
The first property is how we write "preserving underlying structure", but this has a precise, well-defined meaning that we can say a given functor obeys or it does not (and if it does not, we say that it's a bad instance). But you are correct that Haskell does not allow us to require proofs of such properties.
not haskell itself, but QuickCheck allows. we may even consider lifting these properties to the language level -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Saturday 17 January 2009 8:28:05 am Bulat Ziganshin wrote:
Hello Luke,
Saturday, January 17, 2009, 3:16:06 PM, you wrote:
fmap id = id
fmap (f . g) = fmap f . fmap g
The first property is how we write "preserving underlying structure", but this has a precise, well-defined meaning that we can say a given functor obeys or it does not (and if it does not, we say that it's a bad instance). But you are correct that Haskell does not allow us to require proofs of such properties.
not haskell itself, but QuickCheck allows. we may even consider lifting these properties to the language level
QuickCheck doesn't allow you to prove that the properties hold, though. It can only prove that they don't hold, and consequently give you confidence that they do hold when the tests fail to prove that they don't. To prove that they hold, you need something more like ESC/Haskell, catch or a fancier type system than the one Haskell (or even GHC) has. -- Dan

On Sat, 2009-01-17 at 12:04 +0000, Andrew Coppin wrote:
Eugene Kirpichov wrote:
No, a functor is a wider notion than that; it has nothing to do with collections. An explanation closer to the truth would be "A structure is a functor if it provides a way to convert a structure over X to a structure over Y, given a function X -> Y, while preserving the underlying 'structure'", where preserving structure means being compatible with composition and identity.
As far as I'm aware, constraints like "while preserving the underlying structure" are not expressible in Haskell.
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
Good Lord. fmap (as above) is *at least* useful enough to be in the standard library! (Control.Monad.liftM). Contrary to what some people seem to think, every single function in Haskell's standard library was included because enough people found it actually *useful* enough to add. Here's the first example I found, searching the source code for my macro language interpreter:[1] Macro-call interpolation has syntax §(expression), parsed by

      do char '§'
         lexParens $ Interpolation <$> parseExpr
    = do char '§'
         lexParens $ fmap Interpolation $ parseExpr
    = do char '§'                                     -- [2]
         fmap Interpolation $ lexParens $ parseExpr

Useful enough? jcc

[1] The only simplifications applied were (a) ignoring an interpolation syntax substantially more complicated, and (b) re-naming the relevant constructor.
[2] Valid because lexParens is a natural transformation.

Jonathan Cast wrote:
On Sat, 2009-01-17 at 12:04 +0000, Andrew Coppin wrote:
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
Good Lord. fmap (as above) is *at least* useful enough to be in the standard library! (Control.Monad.liftM).
Given that liftM exists, why is having an identical implementation for fmap useful? The example that leaps out at me is that (>>=) is identical to concatMap within the list monad. But using lists as a monad is a generally useful thing to do, and being able to substitute arbitrary monads has obvious utility. I'm not seeing how being able to treat something that isn't a container as if it was a container is useful.

On Sun, Jan 18, 2009 at 3:23 AM, Andrew Coppin
Given that liftM exists, why is having an identical implementation for fmap useful?
For many structures, it's easier to define (>>=) in terms of fmap and join. For these objects, often the "generic" implementation of liftM is far less efficient than fmap. That is to say, given a monad T and these functions:

    returnT :: a -> T a
    fmapT   :: (a -> b) -> T a -> T b
    joinT   :: T (T a) -> T a

We can create Haskell instances as follows:

    instance Functor T where
        fmap = fmapT

    instance Monad T where
        return = returnT
        m >>= f = joinT (fmap f m)

Then,

    liftM f m = m >>= \x -> return (f x)
              = joinT (fmapT (\x -> return (f x)) m)

Now, we know (by the monad & functor laws) that this is equivalent to (fmap f m), but it's a lot harder for the compiler to spot that. The list monad is a great example; I'd expect that using fmap (== map) in the list monad is significantly more efficient than liftM, which constructs a singleton list for each element of the input and concatenates them all together. -- ryan
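[A small, hedged illustration of that point for the list monad specifically; nothing here depends on Ryan's hypothetical T, it only uses the standard list instances.]

    import Control.Monad (liftM)

    -- For lists, fmap is map, while the generic liftM detours through (>>=):
    --   liftM f xs = xs >>= \x -> return (f x) = concatMap (\x -> [f x]) xs
    -- Same result, but the liftM route builds and concatenates singleton lists.
    main :: IO ()
    main = do
      print (fmap  (+1) [1, 2, 3 :: Int])   -- [2,3,4]
      print (liftM (+1) [1, 2, 3 :: Int])   -- [2,3,4]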

On Sun, Jan 18, 2009 at 3:23 AM, Andrew Coppin
Jonathan Cast wrote:
On Sat, 2009-01-17 at 12:04 +0000, Andrew Coppin wrote:
instance (Monad m) => Functor m where
fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
Good Lord. fmap (as above) is *at least* useful enough to be in the standard library! (Control.Monad.liftM).
Given that liftM exists, why is having an identical implementation for fmap useful?
Because liftM works on Monads and fmap works on Functors? I believe you can make data that are Functors but are not Monads.
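[A standard example of such a type, sketched here with the Const functor from today's base; the module name Data.Functor.Const is modern and was not available under that name at the time of this thread.]

    import Data.Functor.Const (Const(..))

    -- fmap has no 'a' inside a Const to apply the function to, so it leaves
    -- the label alone; but return :: a -> Const c a would have to invent a
    -- 'c' out of thin air, so there is no Monad instance.
    demo :: Const String Int
    demo = fmap (+1) (Const "just a label")

    main :: IO ()
    main = print (getConst demo)   -- "just a label"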
The example that leaps out at me is that (>>=) is identical to concatMap within the list monad. But using lists as a monad is a generally useful thing to do, and being able to substitute arbitrary monads has obvious utility. I'm not seeing how being able to treat something that isn't a container as if it was a container is useful.

On Sun, 2009-01-18 at 11:23 +0000, Andrew Coppin wrote:
Jonathan Cast wrote:
On Sat, 2009-01-17 at 12:04 +0000, Andrew Coppin wrote:
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
Good Lord. fmap (as above) is *at least* useful enough to be in the standard library! (Control.Monad.liftM).
Given that liftM exists, why is having an identical implementation for fmap useful?
What? jcc

Andrew Coppin wrote:
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
Surely, you agree that liftM is "useful"? Because that's the same thing. Regards, apfelmus -- http://apfelmus.nfshost.com

Heinrich Apfelmus wrote:
Andrew Coppin wrote:
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
Surely, you agree that liftM is "useful"? Because that's the same thing.
Then why not just use liftM? (That way, you know what it does...)

Andrew Coppin wrote:
Heinrich Apfelmus wrote:
Andrew Coppin wrote:
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
Surely, you agree that liftM is "useful"? Because that's the same thing.
Then why not just use liftM? (That way, you know what it does...)
liftM (*2) [1..10] = [2,4,6,8,10,12,14,16,18,20] Regards, apfelmus -- http://apfelmus.nfshost.com

On Sun, 2009-01-18 at 11:11 +0000, Andrew Coppin wrote:
Heinrich Apfelmus wrote:
Andrew Coppin wrote:
instance (Monad m) => Functor m where fmap f ma = do a <- ma; return (f a)
While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
Surely, you agree that liftM is "useful"? Because that's the same thing.
Then why not just use liftM? (That way, you know what it does...)
I'd be willing to say *you* don't `know what it does', if you haven't figured out that it's an acceptable implementation of fmap first. jcc

Thinking that Functor allows you to apply a function to all elements
in a collection is a good intuitive understanding. But fmap also
allows applying a function on "elements" of things that can't really
be called collections, e.g., the continuation monad.
-- Lennart
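[A minimal sketch of what fmap means for the continuation monad Lennart mentions, assuming the Cont type from the mtl/transformers packages.]

    import Control.Monad.Cont (Cont, cont, runCont)

    -- A computation that will hand 21 to whatever continuation consumes it.
    twentyOne :: Cont r Int
    twentyOne = cont (\k -> k 21)

    -- No container in sight: fmap just post-composes with the eventual result.
    doubled :: Cont r Int
    doubled = fmap (* 2) twentyOne

    main :: IO ()
    main = print (runCont doubled id)   -- 42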
On Sat, Jan 17, 2009 at 11:17 AM, Andrew Coppin
Cory Knapp wrote:
Actually, that was part of my point: When I mention Haskell to people, and when I start describing it, they're generally frightened enough by the focus on pure code and lazy evaluation-- add to this the inherently abstract nature, and we can name typeclasses "cuddlyKitten", and the language is still going to scare J. R. Programmer. By "inherently mathematical nature", I didn't mean names like "monoid" and "functor", I meant *concepts* like monoid and functor. Not that either of them are actually terribly difficult; the problem is that they are terribly abstract. That draws a lot of people (especially mathematicians), but most people who aren't drawn by that are hugely put off-- whatever the name is. So, I guess my point is that the name is irrelevant: the language is going to intimidate a lot of people who are intimidated by the vocabulary.
Oh, I don't know. I have no idea what the mathematical definition of "functor" is, but as far as I can tell, the Haskell typeclass merely allows you to apply a function simultaneously to all elements of a collection. That's pretty concrete - and trivial. If it weren't for the seemingly cryptic name, nobody would think twice about it. (Not sure exactly what you'd call it though...)
A monoid is a rather more vague concept. (And I'm still not really sure why it's useful on its own. Maybe I just haven't had need of it yet?)
I think, as somebody suggested about "monad", the name does tend to inspire a feeling of "hey, this must be really complicated" so that even after you've understood it, you end up wondering whether there's still something more to it than that.
But yes, some people are definitely put off by the whole "abstraction of abstractions of abstraction" thing. I think we probably just need some more concrete examples to weigh it down and make it seem like something applicable to the real world.
(Thus far, I have convinced exactly *one* person to start learning Haskell. This person being something of a maths nerd, their main complaint was not about naming or abstraction, but about the "implicitness" of the language, and the extreme difficulty of visually parsing it. Perhaps not surprising coming from a professional C++ programmer...)
At the same time, I think everyone is arguing *for* better documentation. And you're probably right: better documentation will bring the abstract nonsense down to earth somewhat.
Amen!

On Sat, Jan 17, 2009 at 7:33 AM, Lennart Augustsson
Thinking that Functor allows you to apply a function to all elements in a collection is a good intuitive understanding. But fmap also allows applying a function on "elements" of things that can't really be called collections, e.g., the continuation monad.
I hadn't even thought about fmap for continuations... interesting! It falls out of the logic though doesn't it? I'm not one to throw all the cool mathematical and logical thinking out for "simpler terms" or not covering the full usefulness of certain abstractions. I know Haskell allows for lazy evaluation (as an implementation of non-strictness) but Haskell programmers are NOT allowed to be lazy :-) Try learning the terms that are there... and ask for help if you need help... most of us are pretty helpful! Improving documentation can pretty much *always* be done on any project, and it looks like that's coming out of this long thread that won't die, so kudos to the ones being the gadflies in this instance. It really looked at first like a long troll, but I think something very useful is going to come out of this! Dave
-- Lennart
Cory Knapp wrote:
Actually, that was part of my point: When I mention Haskell to people, and when I start describing it, they're generally frightened enough by the focus on pure code and lazy evaluation-- add to this the inherently abstract nature, and we can name typeclasses "cuddlyKitten", and the language is still going to scare J. R. Programmer. By "inherently mathematical nature", I didn't mean names like "monoid" and "functor", I meant *concepts* like monoid and functor. Not that either of them are actually terribly difficult; the problem is that they are terribly abstract. That draws a lot of people (especially mathematicians), but most people who aren't drawn by that are hugely put off-- whatever the name is. So, I guess my point is that the name is irrelevant: the language is going to intimidate a lot of people who are intimidated by the vocabulary.
On Sat, Jan 17, 2009 at 11:17 AM, Andrew Coppin wrote:
Oh, I don't know. I have no idea what the mathematical definition of "functor" is, but as far as I can tell, the Haskell typeclass merely allows you to apply a function simultaneously to all elements of a collection. That's pretty concrete - and trivial. If it weren't for the seemingly cryptic name, nobody would think twice about it. (Not sure exactly what you'd call it though...)
A monoid is a rather more vague concept. (And I'm still not really sure why it's useful on its own. Maybe I just haven't had need of it yet?)
I think, as somebody suggested about "monad", the name does tend to inspire a feeling of "hey, this must be really complicated" so that even after you've understood it, you end up wondering whether there's still something more to it than that.
But yes, some people are definitely put off by the whole "abstraction of abstractions of abstraction" thing. I think we probably just need some more concrete examples to weigh it down and make it seem like something applicable to the real world.
(Thus far, I have convinced exactly *one* person to start learning Haskell. This person being something of a maths nerd, their main complaint was not about naming or abstraction, but about the "implicitness" of the language, and the extreme difficulty of visually parsing it. Perhaps not surprising coming from a professional C++ programmer...)
At the same time, I think everyone is arguing *for* better documentation. And you're probably right: better documentation will bring the abstract nonsense down to earth somewhat.
Amen!

On Thu, 2009-01-15 at 18:10 -0500, Cale Gibbard wrote:
My personal preference would be:
class Monoid m where
  zero :: m
  (++) :: m -> m -> m
(in the Prelude of course)
- Cale
I've tried doing this (and making more widespread use of typeclassed operations) by writing my own AltPrelude. Unfortunately there is still a lot of 'unrebindable' syntax (list comprehensions, 'error' forced to exist in Monad, if-then-else not using nearest Bool, etc) which makes this hard to achieve.

John Goerzen wrote:
Though if all we're talking about is naming, I would still maintain that newbie-friendly naming is a win. We can always say "HEY MATHEMATICIANS: APPENDABLE MEANS MONOID" in the haddock docs ;-)
This is backwards. The real problem here is that most people coming from other languages aren't used to working with structures as abstract as monoids, and a natural first instinct is to try to un-abstract them, in this case via the suggested renaming. The thought process tends to be something like "I didn't have this problem in language X, Haskell must be doing something wrong." This instinct is not appropriate in the Haskell context. (Although as others have noted, the documentation doesn't do much to help guide people through this.)

One of the most mind-bogglingly important features of Haskell is that it is actually possible to make effective use of structures such as monoids in real code. In most languages, you wouldn't even try this. But if you're going to create a zoo of abstract structures like monoids, with the aim of being able to use them as very general building blocks, the last thing you should be doing is naming them according to particular applications they have. This goes against the goal of abstracting in the first place, and will ultimately be confusing and misleading. (As I pointed out in another comment, the misleadingly-named 'mappend' is an example of this.)

If there's an existing name for the exact structure in question, it makes sense to use that name. If you're unfamiliar with the structure, then you're going to need to learn a name for it anyway - why not learn a name by which it is already known in other contexts? The main counter to the latter question is "I want to give it a new name in order to connote an intended use," but unless the connotation in question is as general as the structure being named, this is a mistake.

This issue is not unique to structures from abstract algebra & category theory. It can arise any time you have a very polymorphic function, for example. It can often make sense to provide a more specifically named and typed alias for a very general function, to make its use more natural and/or constrained in a particular context, e.g.:

    specificName :: SpecificType1 -> SpecificType2
    specificName = moreGeneralFunction

Similarly, in the case of monoid, we need to be able to do this, at least conceptually:

    Appendable = Monoid

...possibly with some additional constraints. In other words, "HEY PROGRAMMERS: YOU CAN USE MONOID AS AN APPENDABLE THINGY (AMONG OTHER THINGS)". This is perhaps an argument for a class alias mechanism, such as the one described at: http://repetae.net/recent/out/classalias.html But in the absence of such a mechanism, we shouldn't succumb to the temptation to confuse abstractions with their applications.
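[A tiny sketch of the "more specifically named and typed alias" pattern Anton describes; the name concatLogLines and its use are invented purely for illustration.]

    import Data.Monoid (mconcat)

    -- mconcat specialised and renamed for one particular use.
    concatLogLines :: [String] -> String
    concatLogLines = mconcat

    main :: IO ()
    main = putStrLn (concatLogLines ["merged ", "configuration ", "options"])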
Much as I dislike Java's penchant for 200-character names for things, I'm not sure Monoid is more descriptive than SomeSortOfGenericThingThatYouCanAppendStuffToClassTemplateAbstractInterfaceThingy :-)
Usable descriptive names for very abstract structures are just not possible in general, except by agreeing on names, which can ultimately come to seem descriptive. For example, there's nothing fundamentally descriptive about the word "append". Anton

John Goerzen schrieb:
Though if all we're talking about is naming, I would still maintain that newbie-friendly naming is a win. We can always say "HEY MATHEMATICIANS: APPENDABLE MEANS MONOID" in the haddock docs ;-)
We already have a problem with this: Haskell 98 uses intuitive names for the numeric type classes. It introduces new names, which do not match the names of common algebraic structures. Why is a type Num (numeric?), whenever it supports number literals, (+) and (*)? Why not just number literals? Why not also division? The numeric type hierarchy of Haskell must be learned anyway, but the user learns terms he cannot use outside the Haskell world. Ring and Field are harder to learn first, but known and precise terms. (And if you don't like to learn the names, just write functions without signatures and let GHCi find out the signatures with the appropriate class constraints.)
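[Purely for illustration, a hypothetical hierarchy with the algebraic names alluded to here; none of these classes exist in the standard libraries, and the method names are made up to avoid clashing with the Prelude.]

    -- Hypothetical sketch, not a standard library.
    class Ring a where
      zero, one :: a
      add, mul  :: a -> a -> a
      neg       :: a -> a

    class Ring a => Field a where
      inv :: a -> a   -- partial: undefined at zero

    instance Ring Integer where
      zero = 0; one = 1
      add = (+); mul = (*)
      neg = negate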

2009/1/15 Lennart Augustsson
Why do people think that you should be able to understand everything without ever looking things up?
Understand, no, but "have an intuition about", very definitely yes. In mathematics (and I speak as someone with a mathematical degree, so if I caricature anyone, please excuse it as failing memory rather than intent!!!) there's a tendency to invent terminology, rather than use natural names, because new names don't have unwanted connotations - it's the need for precision driving things. In programming, the need is for *communication* and as such, using words with natural - if imprecise, and occasionally even (slightly) wrong - connotations is extremely helpful.
I'll get back to my example from the comment on the blog post. If I see 'ghee' in a cook book I'll check what it is (if I don't know).
If a significant proportion of words require me to look them up, my flow of understanding is lost and I'll either give up, end up with a muddled impression, or take far longer to understand than the recipe merits (and so, I'll probably not use that cook book again).
I'm not saying Haskell always gets naming right, all I want is to reuse words that exist instead of inventing new ones.
But you seem to be insisting that mathematical terminology is the place to reuse from - whereas, in fact, computing might be a better basis (although computing doesn't insist on the precision that maths needs, so in any "that's not precisely what I mean" argument, non-mathematical terminology starts off an a disadvantage, even though absolute precision may not be the key requirement).
(And 'monoid' is not category theory, it's very basic (abstract) algebra.)
Well, I did a MSc course in mathematics, mostly pure maths including algebra, set theory and similar areas, and I never came across the term. Of course, my degree was 25 years ago, so maybe "monoid" is a term that wasn't invented then ;-))
I don't know any category theory, but if someone tells me that blah is an endomorphism I'm happy to call it that, knowing that I have a name that anyone can figure out with just a little effort.
But unless you invest time researching, you can't draw any conclusions from that. If someone tells you it's a mapping, you can infer that it probably "maps" some things to one another, which gives you a (minimal, imprecise, and possibly wrong in some corner cases, but nevertheless useful) indication of what's going on. Mathematical precision isn't appropriate in all disciplines. Paul.

It is rather funny. When we are young kids, we learn weird symbols like
A B C a b c 1 2 3
which we accept after a while.
Then we get to learn more complex symbols like
! ? + - /
and that takes some time to get used to, but eventually, that works too.
But Functor, Monoid or Monad, that we cannot accept anymore. Why, because
these are not intuitive? Are the symbols above "intuitive"?
When I started learning Haskell I also found it insane that
strange terminology was used everywhere... But every time I try to find a
better name, the name is too specific for the situation at hand. Just like
Appendable is a good name for specific instances of Monoid.
In F# they renamed Monads to Workflows for the same reason. I find this just
as confusing since a Monad has nothing to do with "work" and maybe a little
bit with a single threaded "flow"... I would have hoped that we could all
stick to the same terminology that was invented a long time ago...
Since Haskell is mainly for computer scientists, changing all of this to
make it more accessible to newcomers might lead to the mistake: "if you try
to please the whole world, you please nobody".
I mainly think the problem is not the name, but the lack of many tiny
examples demonstrating typical use cases for each concept.
On Thu, Jan 15, 2009 at 7:27 PM, Lennart Augustsson
That's very true. But programming is one where mathematical precision is needed, even if you want to call it something else.
On Thu, Jan 15, 2009 at 6:04 PM, Paul Moore
wrote: Mathematical precision isn't appropriate in all disciplines.

Hello, On Thursday 15 January 2009 19:59, Peter Verswyvelen wrote:
It is rather funny. When we are young kids, we learn weird symbols like A B C a b c 1 2 3
which we accept after a while.
Then we get to learn more complex symbols like
! ? + - /
and that takes some time to get used to, but eventually, that works too.
But Functor, Monoid or Monad, that we cannot accept anymore. Why, because these are not intuitive? Are the symbols above "intuitive"?
I think there is a simple explanation of this: Consider the amount of time you spent, as a young kid, to learn to get used to these funny 1, 2, a, b, x, y, +, - and so on. I haven't got the exact schedules from school, but my impression is that we are talking about hours and hours of drill and practice, over weeks, months, years. I mean, do you show your small children (say, 5 years old) how to write numbers to represent, say, the number of oranges in a bowl and then they comprehend after, say, a couple of minutes? Or half an hour? No. Learning to get used to such things, let alone use them effectively to solve common problems, takes time. And also, of course, intense and qualified guidance, in some form. So, to learn to become familiar and effective in using new and complex concepts, we should just accept that it sometimes may take a while. And that's it. It is all a matter of practice, exposure, and guidance.
...
Best regards Thorkil

Thorkil Naur wrote:
Peter Verswyvelen wrote:
It is rather funny. When we are young kids, we learn weird symbols like
A B C a b c 1 2 3
which we accept after a while.
But Functor, Monoid or Monad, that we cannot accept anymore. Why, because these are not intuitive? Are the symbols above "intuitive"?
I think there is a simple explanation of this: Consider the amount of time you spent, as a young kid, to learn to get used to these funny 1, 2, a, b, x, y, +, - and so on. I haven't got the exact schedules from school, but my impression is that we are talking about hours and hours of drill and practice, over weeks, months, years.
So, to learn to become familiar and effective in using new and complex concepts, we should just accept that it sometimes may take a while. And that's it. It is all a matter of practice, exposure, and guidance.
That's a highly relevant piece of wisdom! Learning something new needs practice / time and good tutors / books / guidance. It doesn't matter whether the new thing is "alphabet", "summation", "boolean", "programming" or "monoid". Obviously, those who know what a monoid is have already invested years of time practicing mathematics, while those that even attack the name "monoid" clearly lack this practice. It's like piano virtuosos compared to beginning keyboard pressers. Concerning the question whether it is necessary to invest at least some time on mathematical practice to learn Haskell, the answer is yes. There is no shortcut to learning purely functional programming and reasoning. Renaming "monoid" to "appendable" and "monad" to "warm fuzzy thing" are but useless cosmetic changes that don't make anything easier. How to learn? The options are, in order of decreasing effectiveness:

  university course
  teacher in person
  book
  irc
  mailing list
  online tutorial
  haskell wiki
  haddock documentation

Usually, the best thing is to have a teacher, i.e. to go to a good CS course on Haskell. Books and #haskell or the mailing list are a good substitute, but require self-discipline. Both teachers and books cost money, but you get what you pay for; the online tutorial, wiki and haddock worlds are too messy to be effective until very late in the learning process. In particular, monoids are defined and used in Richard Bird, Introduction to Functional Programming using Haskell (2nd edition), http://www.amazon.com/Introduction-Functional-Programming-using-Haskell/dp/0134843460 . I think that this book is a good benchmark for measuring the amount of practice to be invested in learning Haskell. Regards, H. Apfelmus

2009/1/16 Apfelmus, Heinrich
How to learn? The options are, in order of decreasing effectiveness
university course
teacher in person
book
irc
mailing list
online tutorial
haskell wiki
haddock documentation
Reason by analogy from known/similar areas. I think the point here is that for Haskell, this is more possible for mathematicians than for programmers. And that's an imbalance that may need to be addressed (depending on who you want to encourage to learn). But I agree that reasoning by analogy is not a very good way of learning. And I think it's been established that the real issue here is the documentation - complete explanations and better discoverability[1] are needed. Note that for people who don't want to (or can't) invest money, and who don't want to take up too much of others' time, documentation is the most important option. Paul. [1] When I say "discoverability", I mean that no matter how good the documentation of (say) Monoid is, it's useless unless there's something that prompts me, based on the real-world programming problem I have (for example, merging a set of configuration options to use an example mentioned in this thread), to *look* at that documentation. That's where names make a difference.

Paul Moore wrote:
Apfelmus, Heinrich wrote:
How to learn? The options are, in order of decreasing effectiveness
university course
teacher in person
book
irc
mailing list
online tutorial
haskell wiki
haddock documentation
Reason by analogy from known/similar areas. I think the point here is that for Haskell, this is more possible for mathematicians than for programmers. And that's an imbalance that may need to be addressed (depending on who you want to encourage to learn).
But I agree that reasoning by analogy is not a very good way of learning. And I think it's been established that the real issue here is the documentation - complete explanations and better discoverability[1] are needed.
Yes, agreed. However, I would say that the word "documentation" does not apply anymore; it's more "subject of study". What I want to say is that to some extent, Haskell is not only similar to mathematics, it /is/ mathematics, so programmers have to learn mathematics. Traditionally, this is done in university courses or with books; I'm not sure whether learning mathematics via internet tutorials on the computer screen works. Regards, apfelmus -- http://apfelmus.nfshost.com

Apfelmus, Heinrich schrieb:
Obviously, those who know what a monoid is have already invested years of time practicing mathematics while those that even attack the name "monoid" clearly lack this practice. It's like piano virtuosos compared to beginning keyboard pressers.
Aren't all Haskellers some kind of Peano virtuosos? :-)

Actually programming requires -far more- precision than mathematics ever has. The standards of "formal" and "precise" that mathematicians use are a joke to computer scientists and programmers. Communication is also more important or at least more center stage in mathematics than programming. Mathematical proofs are solely about communicating understanding and are not required to execute on a machine. On Thu, 2009-01-15 at 18:27 +0000, Lennart Augustsson wrote:
That's very true. But programming is one where mathematical precision is needed, even if you want to call it something else.
On Thu, Jan 15, 2009 at 6:04 PM, Paul Moore
wrote: Mathematical precision isn't appropriate in all disciplines.

While you're absolutely correct and I agree with you, to be fair,
essentially all mathematicians have a sense of "rigourisability"
(whether they recognise it or not), which is a peculiar standard that
they apply to everything they hear or read. The level of rigour at
which mathematicians communicate is designed not to bore the listener
with details that they could easily supply for themselves, being an
intelligent mathematician, and not a mechanical abstraction.
- Cale
2009/1/15 Derek Elkins
Actually programming requires -far more- precision than mathematics ever has. The standards of "formal" and "precise" that mathematicians use are a joke to computer scientists and programmers. Communication is also more important or at least more center stage in mathematics than programming. Mathematical proofs are solely about communicating understanding and are not required to execute on a machine.
On Thu, 2009-01-15 at 18:27 +0000, Lennart Augustsson wrote:
That's very true. But programming is one where mathematical precision is needed, even if you want to call it something else.
On Thu, Jan 15, 2009 at 6:04 PM, Paul Moore
wrote: Mathematical precision isn't appropriate in all disciplines.

On Thu, 2009-01-15 at 18:21 -0500, Cale Gibbard wrote:
While you're absolutely correct and I agree with you, to be fair, essentially all mathematicians have a sense of "rigourisability" (whether they recognise it or not), which is a peculiar standard that they apply to everything they hear or read. The level of rigour at which mathematicians communicate is designed not to bore the listener with details that they could easily supply for themselves, being an intelligent mathematician, and not a mechanical abstraction.
Indeed. One way to describe "rigorizable" is that it is (ideally) just enough precision to be unambiguous. Programmers don't have that luxury and thus clarity and hence communication suffer, which was exactly my point.

2009/1/15 Derek Elkins
On Thu, 2009-01-15 at 18:27 +0000, Lennart Augustsson wrote:
On Thu, Jan 15, 2009 at 6:04 PM, Paul Moore
wrote: Mathematical precision isn't appropriate in all disciplines.
That's very true. But programming is one where mathematical precision is needed, even if you want to call it something else.
Actually programming requires -far more- precision than mathematics ever has. The standards of "formal" and "precise" that mathematicians use are a joke to computer scientists and programmers. Communication is also more important or at least more center stage in mathematics than programming. Mathematical proofs are solely about communicating understanding and are not required to execute on a machine.
Hmm. I could argue that coding *terminology* and words used for human-to-human *discussion* of programs can afford to be far *less* precise, simply because the ultimate precision is always available in terms of actual executable code (which offers no scope for misunderstanding - it's a concrete, executable object, with precise semantics defined by the implementation). Mathematical terminology has to be much stricter, because there's no fallback of "use the source". That's not to say that I disagree entirely, but it's not as black-and-white as this discussion makes it seem. Paul.

Lennart Augustsson wrote:
Why do people think that you should be able to understand everything without ever looking things up? I'll get back to my example from the comment on the blog post. If I see 'ghee' in a cook book I'll check what it is (if I don't know). It has a precise meaning and next time I'll know. Inventing a new word for it serves no purpose, but to confuse people. Parts of Computer Science seem to love to invent new words for existing concepts. Again, I think this just confuses people in the long run. Or use existing words in a different way (like 'functor' in C++.)
ghee is ghee. There are variations of ghee, but when a cookbook calls for ghee, just about any variation will work fine. Furthermore, whenever a cookbook calls for ghee, you are making food. Conversely, a monoid is an algebraic structure. A Monoid (the type class) is an abstraction used to ensure that a particular variation of a monoid is actually representative of a monoid. There are many variations (:i Monoid shows 17 instances). Most are used for different purposes and few are interchangeable for a given purpose. Inventing new words definitely serves a purpose; it is a form of abstraction. We need new words to manage conceptual complexity just as we build abstractions to manage code complexity. In fact, Monoid was once a new word that was invented to serve that very purpose. I don't think the solution to this problem is to rename it to Appendable. I think the "solution" is to allow programmers to alias Monoid as Appendable similar to the way you can alias a module when you import it. The details of that solution would be very difficult to introduce though. Drew P. Vogel
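[As an aside, the alias Drew asks for can be written today with the ConstraintKinds extension, which did not exist at the time of this thread; a minimal sketch:]

    {-# LANGUAGE ConstraintKinds #-}

    -- A constraint synonym: Appendable is just another name for Monoid.
    type Appendable = Monoid

    combineAll :: Appendable a => [a] -> a
    combineAll = mconcat

    main :: IO ()
    main = putStrLn (combineAll ["mon", "oid"])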

At the risk of painting my own bikeshed...
If you're learning Haskell, which communicates the idea more clearly:
* Appendable
or
* Monoid
Would you call function composition (on endofunctions) "appending"? The join of a monad? A semi-colon (as in sequencing two imperative statements)? How do you append two numbers? Addition, multiplication, or something else entirely? All these operations are monoidal, i.e., are associative and have both left and right identities. If that's exactly what they have in common, why invent a new name? "Appendable" may carry some intuition, but it is not precise and sometimes quite misleading.
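[Several of the operations listed here are in fact monoids in Data.Monoid, under newtype wrappers that name the operation rather than any notion of appending; a short sketch:]

    import Data.Monoid (Endo(..), Sum(..), Product(..))

    -- Function composition via the Endo wrapper.
    composed :: Int -> Int
    composed = appEndo (Endo (+1) `mappend` Endo (*2))

    main :: IO ()
    main = do
      print (composed 10)                                          -- ((+1) . (*2)) 10 == 21
      print (getSum     (Sum 2     `mappend` Sum 3))               -- addition: 5
      print (getProduct (Product 2 `mappend` Product (3 :: Int)))  -- multiplication: 6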
I guess the bottom line question is: who is Haskell for? Category theorists, programmers, or both? I'd love it to be for both, but I've got to admit that Brian has a point that it is trending to the first in some areas.
One of my grievances about Haskell is the occasional disregard for existing terminology. "Stream Fusion" is about lazy lists/co-lists, not streams; "type families" mean something entirely different to type theorists. This kind of misnomer is even more confusing than a name that doesn't mean anything (at least, until you learn more category theory). Wouter

On Thu, Jan 15, 2009 at 1:44 PM, Wouter Swierstra
Would you call function composition (on endofunctions) "appending"? The join of a monad? A semi-colon (as in sequencing two imperative statements)? How do you append two numbers? Addition, multiplication, or something else entirely?
All these operations are monoidal, i.e., are associative and have both left and right identities. If that's exactly what they have in common, why invent a new name? "Appendable" may carry some intuition, but it is not precise and sometimes quite misleading.
I think this highlights an interesting point: Haskell is more abstract than most other languages. While in other languages "Appendable" might just mean what Brian suggested in his post, "something with an empty state and the ability to append things to the end", in Haskell it applies to numbers and everything else that has an associative operator, that is, everything that is a monoid. So "Appendable" for numbers would be quite wrong; either it would never be used in situations where one wanted to "append things to the end" (which is limiting), or it would be used in these situations, which would be quite confusing. I think it is much more important to have good documentation about the typeclasses (and everything else in the library). This issue was mentioned recently in a discussion about monads, and the documentation for the Haskell library is quite uninformative. It would be nice if 1) people would not be scared of names like "monoid" and "functor" and 2) the documentation clearly stated what these things are, in a programming context, preferably with some examples. I think 2 would mitigate some of the fear mentioned in 1, if newcomers started to experience things like "hey, that's a funky name this monoid thing has, but I see here in the documentation that it is quite simple". -- []s, Andrei Formiga

John Goerzen
Wikipedia's first sentence about monoids is:
In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single, associative binary operation and an identity element.
Which is *not* intuitive to someone that comes from a background in.... any other programming language.
Instead of Wikipedia, why not try a dictionary? Looking up monoid using dictionary.com: An operator * and a value x form a monoid if * is associative and x is its left and right identity. On the other hand, appendable doesn't seem to be a word, and while you can infer that it means "something that can be appended to", that's only half of the story...

On Thu, Jan 15, 2009 at 9:04 AM,
John Goerzen
writes: Wikipedia's first sentence about monoids is:
In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single, associative binary operation and an identity element.
Which is *not* intuitive to someone that comes from a background in.... any other programming language.
Instead of Wikipedia, why not try a dictionary? Looking up monoid using dictionary.com:
An operator * and a value x form a monoid if * is associative and x is its left and right identity.
On the other hand, appendable doesn't seem to be a word, and while you can infer that it means "something that can be appended to", that's only half of the story...
Monoid isn't something I came across and didn't understand; it's something I should have been using for a long time before I discovered it. But it never jumped out at me when I was browsing the library documentation tree.

Yes! The library documentation tree has a way of making everything
seem equally important, when that is not the case. This is why we need
well-crafted tutorials and books.
2009/1/15 David Fox
Monoid isn't something I came across and didn't understand; it's something I should have been using for a long time before I discovered it. But it never jumped out at me when I was browsing the library documentation tree.

"John" == John Goerzen
writes:
John> I guess the bottom line question is: who is Haskell for? Category
John> theorists, programmers, or both? I'd love it to be for both, but
John> I've got to admit that Brian has a point that it is trending to
John> the first in some areas.
Thank you for putting it together so nicely... Sincerely, Gour -- Gour | Zagreb, Croatia | GPG key: C6E7162D ----------------------------------------------------------------

If you're learning Haskell, which communicates the idea more clearly:
* Appendable
or
* Monoid
I can immediately figure out what the first one means. With the second, I could refer to the GHC documentation, which does not describe what a Monoid does. Or read a wikipedia article about a branch of mathematics and try to figure out how it applies to Haskell.
However, "Appendable" carries baggage with it which is highly misleading. Consider, for instance, the monoid of rational numbers under multiplication (which, by the way, is quite useful with the writer transformed list monad for dealing with probabilities) -- you can claim that multiplication here is a sort of "appending", perhaps, but it's not really appropriate. Modular addition, or multiplication in some group is even farther from it.

On Thu, Jan 15, 2009 at 3:16 PM, Cale Gibbard
However, "Appendable" carries baggage with it which is highly misleading. Consider, for instance, the monoid of rational numbers under multiplication (which, by the way, is quite useful with the writer transformed list monad for dealing with probabilities) -- you can claim that multiplication here is a sort of "appending", perhaps, but it's not really appropriate.
It's rather funny that there's a mathematical sense in which all monoid operations *are* appending. The free monoid on a set has appending as its operation, and the free monoid is initial in the category of monoids on that set (by definition), so all monoid operations are appending, modulo some equivalence relation. :) --Max

On Thu, 2009-01-15 at 15:25 -0800, Max Rabkin wrote:
On Thu, Jan 15, 2009 at 3:16 PM, Cale Gibbard
wrote: However, "Appendable" carries baggage with it which is highly misleading. Consider, for instance, the monoid of rational numbers under multiplication (which, by the way, is quite useful with the writer transformed list monad for dealing with probabilities) -- you can claim that multiplication here is a sort of "appending", perhaps, but it's not really appropriate.
It's rather funny that there's a mathematical sense in which all monoid operations *are* appending. The free monoid on a set has appending as its operation, and the free monoid is initial in the category of monoids on that set (by definition), so all monoid operations are appending, modulo some equivalence relation.
Right. So we start explaining that to new Haskellers. We already have participants in this discussion who can never quite remember where the term Monad comes from; and now we need them to remember some complicated argument about quotients of free monoids justifying the term `Append'? jcc

On Thu, Jan 15, 2009 at 9:15 AM, John Goerzen
If you're learning Haskell, which communicates the idea more clearly:
* Appendable
or
* Monoid
But Appendable is wrong.

    merge :: Ord a => [a] -> [a] -> [a]
    merge [] ys = ys
    merge xs [] = xs
    merge (x:xs) (y:ys)
        | x <= y    = x : merge xs (y:ys)
        | otherwise = y : merge (x:xs) ys

    newtype MergeList a = MergeList [a]

    instance (Ord a) => Monoid (MergeList a) where
        mempty = MergeList []
        mappend (MergeList xs) (MergeList ys) = MergeList (merge xs ys)

This is a perfectly good monoid -- one I have used -- but in no sense would I call MergeList "appendable". Also, in what sense is mappend on Endo appending? I just realized that the name "mappend" sucks :-). (++) would be great! In any case, to me being a Monoid is about having an associative operator, not about "appending". The only fitting terms I can think of are equally scary ones such as "semigroup". Which is worse: naming things with scary category names, as in "monad" and "monoid", or naming things with approachable popular names that are wrong (wrt. their popular usage), as in "class" and "instance"? In the former case, the opinion becomes "Haskell is hard -- what the hell is a monad?"; in the latter it becomes "Haskell sucks -- its class system is totally stupid", because we are *violating* people's intuitions, rather than providing them with none whatsoever. In a lot of cases there is a nice middle ground. Cases where (1) we can find an intuitive word that does not have a popular CS meaning, or (2) where the popular CS meaning is actually correct ("integer"?). But e.g. programming with monads *changes the way you think*; we cannot rewire people's brains just-in-time with a word. I like the word Monad. It makes people know they have to work hard to understand them. ** Luke

Here is a great "Monoid found in the wild" story: I just implemented a library for binary message serialization that follows Google's protocol buffer format. The documentation of this was very scattered in some respects, but I kept reading snippets which I have pasted below. The effect of these snippets is to document that the messages on the wire should mimic an API where they can be combined in various merge operations (right-biased, concatenation, and recursive merging), and that well-formed messages have default values for all fields (which can be set in the spec). So the quoted documentation below is a well-thought-out collection of properties that has reinvented the wheel known as "Monoid", and so the Haskell API creates Monoid instances. http://code.google.com/apis/protocolbuffers/docs/encoding.html
Normally, an encoded message would never have more than one instance of an optional or required field. However, parsers are expected to handle the case in which they do. For numeric types and strings, if the same value appears multiple times, the parser accepts the last value it sees. For embedded message fields, the parser merges multiple instances of the same field, as if with the Message::MergeFrom method – that is, all singular scalar fields in the latter instance replace those in the former, singular embedded messages are merged, and repeated fields are concatenated. The effect of these rules is that parsing the concatenation of two encoded messages produces exactly the same result as if you had parsed the two messages separately and merged the resulting objects. That is, this:
MyMessage message;
message.ParseFromString(str1 + str2);
is equivalent to this:
MyMessage message, message2;
message.ParseFromString(str1);
message2.ParseFromString(str2);
message.MergeFrom(message2);
This property is occasionally useful, as it allows you to merge two messages even if you do not know their types.
And this at http://code.google.com/apis/protocolbuffers/docs/proto.html
As mentioned above, elements in a message description can be labeled optional. A well-formed message may or may not contain an optional element. When a message is parsed, if it does not contain an optional element, the corresponding field in the parsed object is set to the default value for that field. The default value can be specified as part of the message description. For example, let's say you want to provide a default value of 10 for a SearchRequest's result_per_page value.
optional int32 result_per_page = 3 [default = 10];
If the default value is not specified for an optional element, a type-specific default value is used instead: for strings, the default value is the empty string. For bools, the default value is false. For numeric types, the default value is zero. For enums, the default value is the first value listed in the enum's type definition.
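To see what the resulting Haskell side looks like, here is a minimal sketch -- a toy type of my own, not the actual protocol-buffers library API -- whose Monoid instance follows the quoted rules: mempty is the all-defaults message, optional scalar fields are right-biased, and repeated fields concatenate. It is written against the 2009-era Monoid class, where mappend is the class method.

    import Data.Monoid (Monoid(..))

    -- Invented field names, for illustration only.
    data SearchRequest = SearchRequest
        { resultPerPage :: Maybe Int   -- optional scalar: the last value wins
        , queryTerms    :: [String]    -- repeated field: occurrences concatenate
        }

    instance Monoid SearchRequest where
        mempty      = SearchRequest Nothing []       -- every field at its default
        mappend a b = SearchRequest
            { resultPerPage = maybe (resultPerPage a) Just (resultPerPage b)
            , queryTerms    = queryTerms a ++ queryTerms b
            }

The quoted guarantee that parsing the concatenation str1 + str2 equals parsing each string and merging the results is then exactly the statement that the parser respects this monoid structure.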

G'day all.
Quoting John Goerzen
If I see Appendable I can guess what it might be. If I see "monoid", I have no clue whatsoever, because I've never heard of a monoid before.
Any sufficiently unfamiliar programming language looks like line noise. That's why every new language needs to use curly braces.
If you're learning Haskell, which communicates the idea more clearly:
* Appendable
or
* Monoid
I can immediately figure out what the first one means.
No you can't. It is in no way clear, for example, that Integers with addition are "Appendable". I'm not saying that "Monoid" is the most pragmatically desirable term, merely that "Appendable" is misleading. And FWIW, I agree with everyone who has commented that the documentation is inadequate. It'd be nice if there was some way to contribute better documentation without needing checkin access to the libraries. Cheers, Andrew Bromage

On Sat, Jan 17, 2009 at 09:12:32PM -0500, ajb@spamcop.net wrote:
And FWIW, I agree with everyone who has commented that the documentation is inadequate. It'd be nice if there was some way to contribute better documentation without needing checkin access to the libraries.
There is. The current state of the docs may be viewed at http://www.haskell.org/ghc/dist/current/docs/libraries/ Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch. Sure, it could be smoother, but there's hardly a flood of contributions.

On Sun, 18 Jan 2009, Ross Paterson wrote:
Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch.
Sure, it could be smoother, but there's hardly a flood of contributions.
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.

-- |The Bool datatype is named after George Boole (1815-1864).
-- The Bool type is the coproduct of the terminal object with itself.
-- As a coproduct, it comes with two maps i : 1 -> 1+1 and j : 1 -> 1+1
-- such that for any Y and maps u: 1 -> Y and v: 1 -> Y, there is a unique
-- map (u+v): 1+1 -> Y such that (u+v) . i = u, and (u+v) . j = v
-- as shown in the diagram below.
--
--   1 -- u --> Y
--   ^          ^^
--   |         / |
--   i   u + v   v
--   |    /      |
--  1+1 - j --> 1
--
-- In Haskell we call we define 'False' to be i(*) and 'True' to be j(*)
-- where *:1.
-- Furthermore, if Y is any type, and we are given a:Y and b:Y, then we
-- can define u(*) = a and v(*) = b.
-- From the above there is a unique map (u + v) : 1+1 -> Y,
-- or in other words, (u+v) : Bool -> Y.
-- Haskell has a built in syntax for this map:
-- @if z then a else b@ equals (u+v)(z).
--
-- From the commuting triangle in the diagram we see that
-- (u+v)(i(*)) = u(*).
-- Translated into Haskell notation, this law reads
-- @if True then a else b = a@.
-- Similarly from the other commuting triangle we see that
-- (u+v)(j(*)) = v(*), which means
-- @if False then a else b = b@

-- Russell O'Connor  http://r6.ca/
``All talk about `theft,''' the general counsel of the American Graphophone Company wrote, ``is the merest claptrap, for there exists no property in ideas musical, literary or artistic, except as defined by statute.''
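For anyone who wants to see the joke cash out as code, here is a small sketch (mine, with invented names): Bool is interchangeable with the coproduct Either () (), and if-then-else is exactly the copairing map (u+v) described above.

    -- Left () plays the role of False (i(*)), Right () plays True (j(*)).
    type Two = Either () ()

    false', true' :: Two
    false' = Left ()
    true'  = Right ()

    -- if-then-else, as the unique map out of 1+1 determined by the two branches.
    ifThenElse :: Two -> y -> y -> y
    ifThenElse z a b = either (\() -> b) (\() -> a) z

    -- The two commuting triangles become:
    --   ifThenElse true'  a b == a
    --   ifThenElse false' a b == b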

roconnor@theorem.ca wrote:
On Sun, 18 Jan 2009, Ross Paterson wrote:
Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch.
Sure, it could be smoother, but there's hardly a flood of contributions.
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
-- |The Bool datatype is named after George Boole (1815-1864).
-- The Bool type is the coproduct of the terminal object with itself.
-- As a coproduct, it comes with two maps i : 1 -> 1+1 and j : 1 -> 1+1
-- such that for any Y and maps u: 1 -> Y and v: 1 -> Y, there is a unique
-- map (u+v): 1+1 -> Y such that (u+v) . i = u, and (u+v) . j = v
-- as shown in the diagram below.
--
--   1 -- u --> Y
--   ^          ^^
--   |         / |
--   i   u + v   v
--   |    /      |
--  1+1 - j --> 1
--
-- In Haskell we call we define 'False' to be i(*) and 'True' to be j(*)
-- where *:1.
-- Furthermore, if Y is any type, and we are given a:Y and b:Y, then we
-- can define u(*) = a and v(*) = b.
-- From the above there is a unique map (u + v) : 1+1 -> Y,
-- or in other words, (u+v) : Bool -> Y.
-- Haskell has a built in syntax for this map:
-- @if z then a else b@ equals (u+v)(z).
--
-- From the commuting triangle in the diagram we see that
-- (u+v)(i(*)) = u(*).
-- Translated into Haskell notation, this law reads
-- @if True then a else b = a@.
-- Similarly from the other commuting triangle we see that
-- (u+v)(j(*)) = v(*), which means
-- @if False then a else b = b@
I'm going to go ahead and assume this was a joke and crack up... The sad part is I didn't actually find this difficult to read... Cory "lost touch with the real world" Knapp

Am Sonntag, 18. Januar 2009 17:48 schrieb roconnor@theorem.ca:
On Sun, 18 Jan 2009, Ross Paterson wrote:
Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch.
Sure, it could be smoother, but there's hardly a flood of contributions.
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
Thanks. Really helpful. A few minor typos, though.
-- |The Bool datatype is named after George Boole (1815-1864).
-- The Bool type is the coproduct of the terminal object with itself.
-- As a coproduct, it comes with two maps i : 1 -> 1+1 and j : 1 -> 1+1
-- such that for any Y and maps u: 1 -> Y and v: 1 -> Y, there is a unique
-- map (u+v): 1+1 -> Y such that (u+v) . i = u, and (u+v) . j = v
-- as shown in the diagram below.
--
--   1 -- u --> Y
--   ^          ^^
--   |         / |
--   i   u + v   v
--   |    /      |
--  1+1 - j --> 1
You have the arrows i and j pointing in the wrong direction.
--
-- In Haskell we call we define 'False' to be i(*) and 'True' to be j(*)
Delete "we call".
-- where *:1.
-- Furthermore, if Y is any type, and we are given a:Y and b:Y, then we
-- can define u(*) = a and v(*) = b.
-- From the above there is a unique map (u + v) : 1+1 -> Y,
-- or in other words, (u+v) : Bool -> Y.
-- Haskell has a built in syntax for this map:
-- @if z then a else b@ equals (u+v)(z).
--
-- From the commuting triangle in the diagram we see that
-- (u+v)(i(*)) = u(*).
-- Translated into Haskell notation, this law reads
-- @if True then a else b = a@.
-- Similarly from the other commuting triangle we see that
-- (u+v)(j(*)) = v(*), which means
-- @if False then a else b = b@

2009/1/18 Daniel Fischer
Am Sonntag, 18. Januar 2009 17:48 schrieb roconnor@theorem.ca:
On Sun, 18 Jan 2009, Ross Paterson wrote:
Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch.
Sure, it could be smoother, but there's hardly a flood of contributions.
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
Thanks. Really helpful. A few minor typos, though.
-- |The Bool datatype is named after George Boole (1815-1864).
-- The Bool type is the coproduct of the terminal object with itself.
-- As a coproduct, it comes with two maps i : 1 -> 1+1 and j : 1 -> 1+1
-- such that for any Y and maps u: 1 -> Y and v: 1 -> Y, there is a unique
-- map (u+v): 1+1 -> Y such that (u+v) . i = u, and (u+v) . j = v
-- as shown in the diagram below.
--
--   1 -- u --> Y
--   ^          ^^
--   |         / |
--   i   u + v   v
--   |    /      |
--  1+1 - j --> 1
You have the arrows i and j pointing in the wrong direction.
--
-- In Haskell we call we define 'False' to be i(*) and 'True' to be j(*)
Delete "we call".
-- where *:1.
-- Furthermore, if Y is any type, and we are given a:Y and b:Y, then we
-- can define u(*) = a and v(*) = b.
-- From the above there is a unique map (u + v) : 1+1 -> Y,
-- or in other words, (u+v) : Bool -> Y.
-- Haskell has a built in syntax for this map:
-- @if z then a else b@ equals (u+v)(z).

Also, "equals (a+b)(z)" should be here.
--
-- From the commuting triangle in the diagram we see that
-- (u+v)(i(*)) = u(*).
-- Translated into Haskell notation, this law reads
-- @if True then a else b = a@.
-- Similarly from the other commuting triangle we see that
-- (u+v)(j(*)) = v(*), which means
-- @if False then a else b = b@

On Sun, Jan 18, 2009 at 5:48 PM,
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
-- |The Bool datatype is named after George Boole (1815-1864). -- The Bool type is the coproduct of the terminal object with itself.
Russell, this does seem like it might be very helpful, but it might be useful to include a note about what category you are working in. People may sometimes naively assume that one is working in the category of Haskell/Hugs/GHC data types and Haskell functions, in which there are no terminal -- or initial -- objects ('undefined' and 'const undefined' are distinct maps between any two objects X and Y), or else in the similar category without lifted bottoms, in which the empty type is terminal and the unit type isn't ('undefined' and 'const ()' are both maps from any object X to the unit type). These niceties will not confuse the advanced reader, but it may help the beginner if you are more explicit. - Benja P.S. :-)

This is a great effort, but the root of the problem isn't just poor documentation, but an insistence on some obscure name. How about renaming Bool to YesOrNoDataVariable? I think this would help novice programmers a great deal. It would also make the documentation flow much more naturally: The Bool type is the coproduct of the terminal object with itself. --huh? The YesOrNoDataVariable is the coproduct of the terminal object with itself. --Oh! Of course! --S On Jan 18, 2009, at 12:17 PM, Benja Fallenstein wrote:
On Sun, Jan 18, 2009 at 5:48 PM,
wrote: I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
-- |The Bool datatype is named after George Boole (1815-1864). -- The Bool type is the coproduct of the terminal object with itself.
Russell, this does seem like it might be very helpful, but it might be useful to include a note about what category you are working in. People may sometimes naively assume that one is working in the category of Haskell/Hugs/GHC data types and Haskell functions, in which there are no terminal -- or initial -- objects ('undefined' and 'const undefined' are distinct maps between any two objects X and Y), or else in the similar category without lifted bottoms, in which the empty type is terminal and the unit type isn't ('undefined' and 'const ()' are both maps from any object X to the unit type). These niceties will not confuse the advanced reader, but it may help the beginner if you are more explicit.
- Benja
P.S. :-)

Sterling Clover wrote:
This is a great effort, but the root of the problem isn't just poor documentation, but an insistence on some obscure name. How about renaming Bool to YesOrNoDataVariable? I think this would help novice programmers a great deal.
It would also make the documentation flow much more naturally:
The Bool type is the coproduct of the terminal object with itself.
--huh?
The YesOrNoDataVariable is the coproduct of the terminal object with itself.
--Oh! Of course!
I'm sorry, but I get neither Bool nor YesOrNoDataVariable; it's too confusing for newcomers. Can we please rename it to TerminalObjectCoSquared? That's much more intuitive.

Also, the wikipedia page http://en.wikipedia.org/wiki/YesOrNoDataVariable is extremely unhelpful. Not that the wikipedia page for Bool, which links to http://en.wikipedia.org/wiki/Boolean_datatype, is any better. The introduction goes to great lengths to note that

For instance the ISO SQL:1999 standard defined a Boolean data type for SQL which could hold three possible values: true, false, unknown (SQL null is treated as equivalent to the unknown truth value, but only for the Boolean data type)

What is SQL? Do they mean the SesQuiLinear forms that I'm familiar with? But what does it have to do with TerminalObjectCoSquared? I'm confused.

Regards, apfelmus -- http://apfelmus.nfshost.com

That's a great start, but "coproduct" is still pretty scary. Why not refer
to it as OneOrTheOtherButNotBothDataConstructor?
-Nathan Bloomfield
On Sun, Jan 18, 2009 at 11:32 AM, Sterling Clover
This is a great effort, but the root of the problem isn't just poor documentation, but an insistence on some obscure name. How about renaming Bool to YesOrNoDataVariable? I think this would help novice programmers a great deal.
It would also make the documentation flow much more naturally:
The Bool type is the coproduct of the terminal object with itself.
--huh?
The YesOrNoDataVariable is the coproduct of the terminal object with itself.
--Oh! Of course!
--S
On Jan 18, 2009, at 12:17 PM, Benja Fallenstein wrote:
On Sun, Jan 18, 2009 at 5:48 PM,
wrote: I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
-- |The Bool datatype is named after George Boole (1815-1864). -- The Bool type is the coproduct of the terminal object with itself.
Russell, this does seem like it might be very helpful, but it might be useful to include a note about what category you are working in. People may sometimes naively assume that one is working in the category of Haskell/Hugs/GHC data types and Haskell functions, in which there are no terminal -- or initial -- objects ('undefined' and 'const undefined' are distinct maps between any two objects X and Y), or else in the similar category without lifted bottoms, in which the empty type is terminal and the unit type isn't ('undefined' and 'const ()' are both maps from any object X to the unit type). These niceties will not confuse the advanced reader, but it may help the beginner if you are more explicit.
- Benja
P.S. :-)

On Sun, 2009-01-18 at 18:17 +0100, Benja Fallenstein wrote:
On Sun, Jan 18, 2009 at 5:48 PM,
wrote: I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
-- |The Bool datatype is named after George Boole (1815-1864). -- The Bool type is the coproduct of the terminal object with itself.
Russell, this does seem like it might be very helpful, but it might be useful to include a note about what category you are working in. People may sometimes naively assume that one is working in the category of Haskell/Hugs/GHC data types and Haskell functions, in which there are no terminal -- or initial -- objects
The naive way of making a "Haskell" category doesn't even work. Taking objects to be Haskell types, all Haskell functions as arrows, arrow equality being observational equality, and (.) and id to be the composition and identity, you fail to even have a category. Proof of this is left as an (easy) exercise for the reader.
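A sketch of one standard answer to the exercise (mine, not Derek's): with seq in the language, observational equality can tell bottom apart from a lambda, so even the identity law id . f = f fails.

    -- f and (id . f) should be the same arrow, but seq distinguishes them:
    -- (id . f) reduces to \x -> id (f x), which is a lambda, not bottom.
    f :: Int -> Int
    f = undefined

    observeF, observeIdF :: ()
    observeF   = f `seq` ()         -- diverges
    observeIdF = (id . f) `seq` ()  -- evaluates to ()

Since a category requires id . f and f to be equal arrows, something has to give -- either the notion of arrow equality or the collection of arrows.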

roconnor@theorem.ca wrote:
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
My only problem with it is that it's called Bool, while every other programming language on Earth calls it Boolean. (Or at least, the languages that *have* a name for it...) But I'm far more perturbed by names like Eq, Ord, Num, Ix (??), and so on. The worst thing about C is the unecessary abbriviations; let's not copy them, eh?

On Mon, 2009-01-19 at 19:33 +0000, Andrew Coppin wrote:
roconnor@theorem.ca wrote:
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
My only problem with it is that it's called Bool, while every other programming language on Earth calls it Boolean. (Or at least, the languages that *have* a name for it...)
Except C++? But then again:
But I'm far more perturbed by names like Eq, Ord, Num, Ix (??), and so on. The worst thing about C is the unecessary abbriviations; [sic] let's not copy them, eh?
I agree. I've always felt that

    class EqualsClass randomTypeSelectedByTheUser =>
          TotalOrderClass randomTypeSelectedByTheUser where
        compareXToY       :: randomTypeSelectedByTheUser -> randomTypeSelectedByTheUser -> OrderingValue
        lessThanOrEqualTo :: randomTypeSelectedByTheUser -> randomTypeSelectedByTheUser -> Boolean
        lessThan          :: randomTypeSelectedByTheUser -> randomTypeSelectedByTheUser -> Boolean

was both more understandable to the reader, and easier to remember and reproduce for the writer. Or, in other words, leave well enough alone; we should always err in the direction of being like C, to avoid erring in the direction of being like Java.

jcc

G'day all. On Mon, 2009-01-19 at 19:33 +0000, Andrew Coppin wrote:
My only problem with it is that it's called Bool, while every other programming language on Earth calls it Boolean. (Or at least, the languages that *have* a name for it...)
Jonathan Cast commented:
Except C++?
And perhaps more to the point, "Boolean" is an adjective, not a noun. Therefore, it would be better reserved for a typeclass.

    class (PartialOrder a) => JoinSemilattice a where
        (||) :: a -> a -> a

    class (JoinSemilattice a) => BoundedJoinSemilattice a where
        bottom :: a

    class (PartialOrder a) => MeetSemilattice a where
        (&&) :: a -> a -> a

    class (MeetSemilattice a) => BoundedMeetSemilattice a where
        top :: a

    class (BoundedJoinSemilattice a, BoundedMeetSemilattice a) => Heyting a where
        implies :: a -> a -> a
        not :: a -> a
        not x = x `implies` bottom

    class (Heyting a) => Boolean a where
        {- the additional axiom that x || not x == top -}

Cheers, Andrew Bromage

On Mon, Jan 19, 2009 at 7:22 PM,
And perhaps more to the point, "Boolean" is an adjective, not a noun. Therefore, it would be better reserved for a typeclass.
There's also John Meacham's Boolean package. http://repetae.net/recent/out/Boolean.html
class (Heyting a) => Boolean a where {- the additional axiom that x || not x == top -}
Are there any instances of Boolean that aren't isomorphic to Bool?
(I'm assuming that (||) and (&&) are intended to be idempotent,
commutative, and associative.)
--
Dave Menendez

On Mon, Jan 19, 2009 at 6:25 PM, David Menendez
Are there any instances of Boolean that aren't isomorphic to Bool?
a->Bool for any a. I think. Though I think it should be called GeorgeBoolean otherwise we might confuse it for something his father might have invented. -- Dan

G'day all.
Quoting David Menendez
Are there any instances of Boolean that aren't isomorphic to Bool?
Sure. Two obvious examples:

- The lattice of subsets of a "universe" set, where "or" is union, "and" is intersection, and "not" is complement with respect to the universe.
- Many-valued logic systems.
- Intuitionistic logic systems.
- The "truth values" of an arbitrary topos (i.e. the points of the subobject classifier).

Look up "Heyting algebra" for examples.

Cheers, Andrew Bromage
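A sketch of the first example for the code-minded (not from Andrew's message): represent a subset of a by its characteristic function, and the Boolean operations become pointwise. Whenever a has more than one inhabitant, this algebra is not isomorphic to Bool -- which is also Dan Piponi's "a -> Bool" answer above.

    -- A subset of a, represented by the predicate that tests membership.
    type Subset a = a -> Bool

    emptySet, universe :: Subset a        -- the bottom and top elements
    emptySet = const False
    universe = const True

    union, intersection :: Subset a -> Subset a -> Subset a
    union        s t x = s x || t x       -- pointwise "or"
    intersection s t x = s x && t x       -- pointwise "and"

    complement :: Subset a -> Subset a
    complement s x = not (s x)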

G'day all. I wrote:
- Intuitionistic logic systems.
- The "truth values" of an arbitrary topos (i.e. the points of the subobject classifier).
Sorry, I misread the question. These are _not_ instances of Boolean (or at least the latter isn't an instance in general). Cheers, Andrew Bromage

On Mon, Jan 19, 2009 at 11:33 AM, Andrew Coppin
My only problem with it is that it's called Bool, while every other programming language on Earth calls it Boolean. (Or at least, the languages that *have* a name for it...)
Python: bool
OCaml: bool
C++: bool
C99: bool
C#: bool
But I'm far more perturbed by names like Eq, Ord, Num, Ix (??), and so on. The worst thing about C is the unecessary abbriviations; let's not copy them, eh?
They're short so they're quick to parse (for a human) and read. They're easy to type. If you have a constraint like (Eq a,Num a,Ord a,Show a,Ix a) you can see all five type classes at a single glance without having to scan your eye across the line. They're highly mnemonic in the sense that once I'd learnt what they meant it became hard to forget them again. What exactly is wrong with them? -- Dan

Dan Piponi wrote:
On Mon, Jan 19, 2009 at 11:33 AM, Andrew Coppin
wrote: My only problem with it is that it's called Bool, while every other programming language on Earth calls it Boolean. (Or at least, the languages that *have* a name for it...)
Python: bool ocaml: bool C++: bool C99: bool C#: bool
Versus Java, Pascal, Smalltalk and Eiffel who all call it Boolean. Oh well. At least it's pretty obvious what it means.
But I'm far more perturbed by names like Eq, Ord, Num, Ix (??), and so on. The worst thing about C is the unecessary abbriviations; let's not copy them, eh?
They're short so they're quick to parse (for a human) and read. They're easy to type. If you have a constraint like (Eq a,Num a,Ord a,Show a,Ix a) you can see all five type classes at a single glance without having to scan your eye across the line. They're highly mnemonic in the sense that once I'd learnt what they meant it became hard to forget them again. What exactly is wrong with them?
Would it really hurt to type a few more keystrokes and say "Equal"? "Ordered"? "Index"? I don't think so. Sure, we don't especially want to end up with classes like StrictlyOrderedAssociativeSet or something, but a few more characters wouldn't exactly kill you. But, again, this is too difficult to change now, so we're stuck with it. PS. Ord implies Eq, so you don't need both in the same constraint. Num implies Show, so you don't need that either. So actually, (Ord a, Num a, Ix a) - or rather, (Ordered a, Number a, Index a) - would do just fine.

On Mon, 2009-01-19 at 20:55 +0000, Andrew Coppin wrote:
Dan Piponi wrote:
On Mon, Jan 19, 2009 at 11:33 AM, Andrew Coppin
wrote: My only problem with it is that it's called Bool, while every other programming language on Earth calls it Boolean. (Or at least, the languages that *have* a name for it...)
Python: bool ocaml: bool C++: bool C99: bool C#: bool
Versus Java, Pascal,
Again, we don't want to imitate these two!
Smalltalk and Eiffel who all call it Boolean. Oh well. At least it's pretty obvious what it means.
But I'm far more perturbed by names like Eq, Ord, Num, Ix (??), and so on. The worst thing about C is the unecessary abbriviations; let's not copy them, eh?
They're short so they're quick to parse (for a human) and read. They're easy to type. If you have a constraint like (Eq a,Num a,Ord a,Show a,Ix a) you can see all five type classes at a single glance without having to scan your eye across the line. They're highly mnemonic in the sense that once I'd learnt what they meant it became hard to forget them again. What exactly is wrong with them?
Would it really hurt to type a few more keystrokes and say "Equal"? "Ordered"? "Index"? I don't think so.
Constantly? Yeah. Commonly used names should be short, or abbreviated. You can't abbreviate type classes.
Sure, we don't especially want to end up with classes like StrictlyOrderedAssociativeSet or something, but a few more characters wouldn't exactly kill you.
But, again, this is too difficult to change now, so we're stuck with it.
PS. Ord implies Eq, so you don't need both in the same constraint. Num implies Show, so you don't need that either. So actually, (Ord a, Num a, Ix a) - or rather, (Ordered a, Number a, Index a) - would do just fine.
    newtype MyFoo = MyWrapsWhatever
        deriving (Eq, Ord, Read, Show, Num, Ix, Data, Typeable)

vs.

    newtype MyFoo = MyWrapsWhatever
        deriving (Equality, Order, Read, Show, Number, Index, Data, Typeable)

Yeah. Count me out.

jcc

On 20 Jan 2009, at 8:33 am, Andrew Coppin wrote:
roconnor@theorem.ca wrote:
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
My only problem with it is that it's called Bool, while every other programming language on Earth calls it Boolean.
(Or at least, the languages that *have* a name for it...)
Algol 68, C99, C++, C#, Standard ML, OCAML, F#, Clean all use 'bool', not 'Boolean'. Of course if you want to go for historical priority, that'd be Fortran's LOGICAL type...

roconnor@theorem.ca schrieb:
On Sun, 18 Jan 2009, Ross Paterson wrote:
Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch.
Sure, it could be smoother, but there's hardly a flood of contributions.
I noticed the Bool datatype isn't well documented. Since Bool is not a common English word, I figured it could use some haddock to help clarify it for newcomers.
The type should be named "Truth".

ross:
On Sat, Jan 17, 2009 at 09:12:32PM -0500, ajb@spamcop.net wrote:
And FWIW, I agree with everyone who has commented that the documentation is inadequate. It'd be nice if there was some way to contribute better documentation without needing checkin access to the libraries.
There is. The current state of the docs may be viewed at
http://www.haskell.org/ghc/dist/current/docs/libraries/
Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch.
Sure, it could be smoother, but there's hardly a flood of contributions.
I imagine if we set up a wiki-like system where the entire hackage docs could be edited, as well as viewed, we would end up with a flood. A modification to haddock perhaps, that sends edits to generated docs to libraries@ ? -- Don

2009/1/18 Don Stewart
ross:
On Sat, Jan 17, 2009 at 09:12:32PM -0500, ajb@spamcop.net wrote:
And FWIW, I agree with everyone who has commented that the documentation is inadequate. It'd be nice if there was some way to contribute better documentation without needing checkin access to the libraries.
There is. The current state of the docs may be viewed at
http://www.haskell.org/ghc/dist/current/docs/libraries/
Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch.
Sure, it could be smoother, but there's hardly a flood of contributions.
I imagine if we set up a wiki-like system where the entire hackage docs could be edited, as well as viewed, we would end up with a flood.
A modification to haddock perhaps, that sends edits to generated docs to libraries@ ?
This has come up many times lately. I've created a ticket for it: http://trac.haskell.org/haddock/ticket/72 If anyone has suggestions for design or implementation of a system like this, don't hesitate to post to this ticket! David

david.waern:
2009/1/18 Don Stewart
: ross:
On Sat, Jan 17, 2009 at 09:12:32PM -0500, ajb@spamcop.net wrote:
And FWIW, I agree with everyone who has commented that the documentation is inadequate. It'd be nice if there was some way to contribute better documentation without needing checkin access to the libraries.
There is. The current state of the docs may be viewed at
http://www.haskell.org/ghc/dist/current/docs/libraries/
Anyone can check out the darcs repos for the libraries, and post suggested improvements to the documentation to libraries@haskell.org (though you have to subscribe). It doesn't even have to be a patch.
Sure, it could be smoother, but there's hardly a flood of contributions.
I imagine if we set up a wiki-like system where the entire hackage docs could be edited, as well as viewed, we would end up with a flood.
A modification to haddock perhaps, that sends edits to generated docs to libraries@ ?
This has come up many times lately. I've created a ticket for it:
http://trac.haskell.org/haddock/ticket/72
If anyone has suggestions for design or implementation of a system like this, don't hesitate to post to this ticket!
Added to the entry on the proposals tracker, http://www.reddit.com/r/haskell_proposals/ If nothing else, this would make a good SoC project. -- Don

* John Goerzen
If you're learning Haskell, which communicates the idea more clearly:
* Appendable
or
* Monoid
I can immediately figure out what the first one means.
I think that's deceptively misleading. Sure, list1 `mappend` list2 is concatenation, of which Appendable is suggestive; but what on earth does it mean to append a number to another number, or append a function to another function? By doing some research, you can find out the answer, but if you start off with a name that means nothing to you, I suspect you'll be less confused than if you start off with a name that seems like it makes sense, but actually doesn't. (Of course, the name of mappend itself doesn't exactly help...)
I guess the bottom line question is: who is Haskell for? Category theorists, programmers, or both? I'd love it to be for both, but I've got to admit that Brian has a point that it is trending to the first in some areas.
I don't really understand why Appendable is specifically a "programmer-friendly" name; it doesn't really have any existing meaning elsewhere in programming languages, for example. -- mithrandi, i Ainil en-Balandor, a faer Ambar
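To make the "append a function to another function" point concrete, here is a small sketch using the Endo wrapper that Data.Monoid already provides: its mappend is function composition and its mempty is id, so nothing list-like is being appended.

    import Data.Monoid (Endo(..), mconcat)

    -- Combine a list of transformations into one, via the Endo monoid.
    applyAll :: [a -> a] -> a -> a
    applyAll fs = appEndo (mconcat (map Endo fs))

    -- applyAll [(+1), (*2)] 5  ==  ((+1) . (*2)) 5  ==  11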

On Thu, 15 Jan 2009, Lennart Augustsson wrote:
If I see Monoid I know what it is, if I didn't know I could just look on Wikipedia.
And if you're a typical programmer who is now learning Haskell, this will likely make you want to run screaming and definitely be hard to understand. We at least need a description that's aimed at people who probably don't consider themselves any flavour of mathematician, however amateur. One that, while giving the definition, concentrates significantly on intuition. -- flippa@flippac.org "I think you mean Philippa. I believe Phillipa is the one from an alternate universe, who has a beard and programs in BASIC, using only gotos for control flow." -- Anton van Straaten on Lambda the Ultimate

On Fri, Jan 16, 2009 at 1:23 PM, Philippa Cowderoy
On Thu, 15 Jan 2009, Lennart Augustsson wrote:
If I see Monoid I know what it is, if I didn't know I could just look on Wikipedia.
And if you're a typical programmer who is now learning Haskell, this will likely make you want to run screaming and definitely be hard to understand. We at least need a description that's aimed at people who probably don't consider themselves any flavour of mathematician, however amateur. One that, while giving the definition, concentrates significantly on intuition.
Wikibooks has a patchy book on Abstract Algebra which seemed quite friendly to me (a non-mathematician and amateur FPer). I take it for granted there will be parts I don't understand, but if I just continue to spot instances in the wild where they come up, then it slowly becomes obvious. Collecting examples of concrete monoids is fairly easy if you read some of the popular Haskell projects: Xmonad, Cabal, etc. I honestly don't see what all the fuss is about. No one's arguing that more documentation is a bad thing. But some people seem to think the mere existence of (a) technical terms or (b) technical terms not invented by programmers is an affront. Cheers, D

jgoerzen:
Hi folks,
Don Stewart noticed this blog post on Haskell by Brian Hurt, an OCaml hacker:
http://enfranchisedmind.com/blog/2009/01/15/random-thoughts-on-haskell/
It's a great post, and I encourage people to read it. I'd like to highlight one particular paragraph:
I'd also recommend yesterday's post, "Why Haskell is beyond ready for prime time", for a few other insights: http://intoverflow.wordpress.com/2009/01/13/why-haskell-is-beyond-ready-for-... (Notably, the joy of hoogle/hayoo library search.)

On Thu, Jan 15, 2009 at 7:34 AM, John Goerzen
I'd be inclined to call it something like "Appendable".
But I don't know what Appendable means. Maybe it means
    class Appendable a where
        append :: a x -> x -> a x
i.e. a container of x's lets you add an x to the end; or maybe it means
    class Appendable a where
        append :: a -> x -> a
i.e. something that you can append anything to; or maybe it means
    class Appendable a where
        append :: a -> a -> a
so you can append any two elements of the same type together. Why use words that are so vague when there's already a word that unambiguously says what the type class looks like? And even worse, why use duplicate terminology to make it even harder to see when mathematicians and computer scientists are talking about the same thing, so widening the divide between two groups of people who have much to share? -- Dan
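For reference, the class being argued over is tiny. This is Data.Monoid as it stood in base at the time; note that what the name buys you beyond the signatures is the laws -- mappend is associative and mempty is its identity -- which no method type alone can express.

    class Monoid a where
        mempty  :: a                -- the identity element
        mappend :: a -> a -> a      -- an associative operation
        mconcat :: [a] -> a
        mconcat = foldr mappend mempty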

John Goerzen wrote:
Haskell developers, stop letting the category theorists name things. Please. I beg of you.
I'd like to echo that sentiment!
He went on to add:
If you're not a category theorist, and you're learning (or thinking of learning) Haskell, don't get scared off by names like "monoid" or "functor". And ignore anyone who starts their explanation with references to category theory - you don't need to know category theory, and I don't think it helps.
I'd echo that one too.
I am constantly shocked and saddened at the Haskell community's attitude here. It seems to boil down to "Why should we make it easier to learn Haskell? If people aren't prepared to learn abstract algebra, category theory, predicate logic and type system theory, why should we bother to help them?" So much for the famously helpful Haskell community. I am seriously beginning to wonder if the people using Haskell actually realise what regular programmers do and don't know about. (You may recall my recent straw poll where 80% of the programmer nerds I asked had no clue what a "coroutine" is or what "existential quantification" means.)

Notice that "monoid" sounds almost *exactly* like "monad". And yet, what you use them for is wildly unrelated. In a similar vein, who here can tell me off the top of their head what the difference between an epimorphism and a hylomorphism is? I've got a brick-thick group theory book sat right next to me and *I* can't even remember! Best of all, if Joe Programmer makes any attempt to look these terms up, the information they get will be almost completely useless for the purposes of writing code or reading somebody else's.

I was especially amused by the assertion that "existential quantification" is a more precise term than "type variable hiding". (The former doesn't even tell you that the feature in question is related to the type system! Even the few people in my poll who knew of the term couldn't figure out how it might be related to Haskell. And one guy argued that "forall" should denote universal rather than existential quantification...)

The sad thing is, it's not actually complicated. The documentation just makes it seem like it is! :-(

Databases are based on the relational algebra. But that doesn't seem to stop them from coming up with names for things that normal humans can understand without first taking a course in relational algebra. (Does the Oracle user guide state that "a relation is simply a subset of the extended Cartesian product of the respective domains of its attributes"? No, I don't *think* so! It says "Oracle manages tables which are made up of rows..." Technically less precise, but vastly easier to comprehend.) Why can't we do the same? If we *must* insist on using the most obscure possible name for everything, can we at least write some documentation that doesn't require a PhD to comprehend?? (Anybody who attempts to argue that "monoid" is not actually an obscure term has clearly lost contact with the real world.)

As somebody else said, it basically comes down to this: Who the hell is Haskell actually "for"? If it's seriously intended to be used by programmers, things need to change. And if things aren't going to change, then let's all stop pretending that Haskell actually cares about real programmers.

Sorry if this sounds like just another clueless rant, but I'm really getting frustrated about all this. Nobody seems to think there's actually a problem here, despite the incontrovertible fact that there is...

PS. As a small aside... Is the Monoid class actually used *anywhere* in all of Haskell? Surely saying that something is a monoid is so vague as to be unhelpful. "The most you can say about almost everything is practically nothing" and all that. All it means is that the type in question has a function that happens to take 2 arguments, and this function happens to have an identity value. How is this information useful? Surely what you'd want to know is what that function *does*?!
And since a given type can only be a Monoid instance in one way, wouldn't passing the function and its identity in as parameters be simpler anyway? The integers form at least two different monoids AFAIK...
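A sketch of how the library already answers the PS (not part of Andrew's message): when one type carries several monoids, Data.Monoid's newtype wrappers let you say which one you mean, so the one-instance-per-type restriction is worked around rather than being a reason to pass the operator and identity explicitly.

    import Data.Monoid (Sum(..), Product(..), mconcat)

    -- The two obvious monoids on the integers, selected by wrapper.
    sumAll, productAll :: [Int] -> Int
    sumAll     xs = getSum     (mconcat (map Sum xs))      -- monoid (0, +)
    productAll xs = getProduct (mconcat (map Product xs))  -- monoid (1, *)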

On Thu, Jan 15, 2009 at 07:46:02PM +0000, Andrew Coppin wrote:
John Goerzen wrote:
If we *must* insist on using the most obscure possible name for everything, can we at least write some documentation that doesn't require a PhD to comprehend?? (Anybody who attempts to argue that "monoid" is not actually an obscure term has clearly lost contact with the real world.)
Several people have suggested this, and I think it would go a long way towards solving the problem. The problem is: this documentation can really only be written by those that understand the concepts, understand how they are used practically, and have the time and inclination to submit patches. Experience suggests there may be no such people out there :-)
As somebody else said, it basically comes down to this: Who the hell is Haskell actually "for"? If it's seriously intended to be used by programmers, things need to change. And if things aren't going to change, then let's all stop pretending that Haskell actually cares about real programmers.
It might surprise you to see me say this, but I don't see this discussion as necessarily a weakness. I know of no other language community out there that has such a strong participation of both academics and applied users. This is a great strength. And, of course, Haskell's roots are firmly in academia.

I think there is a ton of interest in Haskell from the, ahem, "real world" programmer types. In fact, it seems to me that's where Haskell's recent growth has been. There are a lot of things showing up on Hackage relating to networking, Unicode encoding, databases, web apps, and the like.

The nice thing about Haskell is that you get to put the theory in front of a lot of people that would like to use it to solve immediate programming problems. But they will only use it if you can explain it in terms they understand.

There are a number of efforts in that direction: various websites, articles, books, libraries, etc. And I think the efforts are succeeding. But that doesn't mean there is no room for improvement.

-- John

On Thu, Jan 15, 2009 at 12:11 PM, John Goerzen
On Thu, Jan 15, 2009 at 07:46:02PM +0000, Andrew Coppin wrote:
John Goerzen wrote:
can we at least write some documentation that doesn't require a PhD to comprehend? Several people have suggested this, and I think it would go a long way towards solving the problem.
That sounds like a good plan. Which precise bit of documentation should I update? Make a new wiki page? Put it in here: http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html -- Dan

Dan Piponi wrote:
Several people have suggested this, and I think it would go a long way towards solving the problem.
That sounds like a good plan. Which precise bit of documentation should I update? Make a new wiki page? Put it in here: http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html
I think it would be great if the haddock documentation itself were a wiki, so everyone can edit it right in place. Regards, H. Apfelmus

That looks like a freakin' cool idea; however very hard to implement;
so why not write such wikis in predefined places, like,
haskell.org/haskellwiki/Data/Monoid/ and allow haddock to
automatically put links there from the generated documentation? This
would make the documentation (on the wiki) more organized, more
'extensible' and people would know a place where they can surely share
their knowledge in a useful and very findable way.
Are there any drawbacks to this?
2009/1/16 Apfelmus, Heinrich
Dan Piponi wrote:
Several people have suggested this, and I think it would go a long way towards solving the problem.
That sounds like a good plan. Which precise bit of documentation should I update? Make a new wiki page? Put it in here: http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html
I think it would be great if the haddock documentation itself were a wiki, so everyone can edit it right in place.
Regards, H. Apfelmus

On Fri, Jan 16, 2009 at 3:46 AM, Eugene Kirpichov
That looks like a freakin' cool idea; however very hard to implement; so why not write such wikis in predefined places, like, haskell.org/haskellwiki/Data/Monoid/ and allow haddock to automatically put links there from the generated documentation? This would make the documentation (on the wiki) more organized, more 'extensible' and people would know a place where they can surely share their knowledge in a useful and very findable way.
See annocpan http://annocpan.org for prior art.
Are there any drawbacks to this?
2009/1/16 Apfelmus, Heinrich
: Dan Piponi wrote:
Several people have suggested this, and I think it would go a long way towards solving the problem.
That sounds like a good plan. Which precise bit of documentation should I update? Make a new wiki page? Put it in here:
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html
I think it would be great if the haddock documentation itself were a wiki, so everyone can edit it right in place.
Regards, H. Apfelmus

On Thu, 2009-01-15 at 14:11 -0600, John Goerzen wrote:
On Thu, Jan 15, 2009 at 07:46:02PM +0000, Andrew Coppin wrote:
John Goerzen wrote:
If we *must* insist on using the most obscure possible name for everything, can we at least write some documentation that doesn't require a PhD to comprehend?? (Anybody who attempts to argue that "monoid" is not actually an obscure term has clearly lost contact with the real world.)
Several people have suggested this, and I think it would go a long way towards solving the problem. The problem is: this documentation can really only be written by those that understand the concepts, understand how they are used practically, and have the time and inclination to submit patches. Experience suggests there may be no such people out there :-)
As somebody else said, it basically comes down to this: Who the hell is Haskell actually "for"? If it's seriously intended to be used by programmers, things need to change. And if things aren't going to change, then let's all stop pretending that Haskell actually cares about real programmers.
It might surprise you to see me say this, but I don't see this discussion as necessarily a weakness. I know of no other language community out there that has such a strong participation of both academics and applied users. This is a great strength. And, of course, Haskell's roots are firmly in academia.
I think there there is a ton of interest in Haskell from the, ahem, "real world" programmer types. In fact, it seems to me that's where Haskell's recent growth has been. There are a lot of things showing up on Hackage relating to networking, Unicode encoding, databases, web apps, and the like.
The nice thing about Haskell is that you get to put the theory in front of a lot of people that would like to use it to solve immediate programming problems. But they will only use it if you can explain it in terms they understand.
There are plenty of "real world" programmer types who are using these scarily named things, Monoid, Monad, Functor, Existential Quantification. Programmers such as you*. Despite poor documentation, which everyone agrees could be improved, they've somehow managed to understand these things anyway. My impression is that to most of them Monoids, Functors, and Monads are Just Another Interface and Existential Quantification is Just Another Language Feature. There are poorly documented interfaces in every language**. Any "real world" programmer has some (plenty...) of experience dealing with this issue. These programmers do what they need to do to get stuff done. Again, somehow they learn how to use these things without waiting for "us" to provide an "explanation" in "terms they can understand;" too busy trying to get stuff done.
There are a number of efforts in that direction: various websites, articles, books, libraries, etc. And I think the efforts are succeeding. But that doesn't mean there is no room for improvement.
No one doubts that there is room for improvement. However, the direction is better documentation, not different names. Better names is fine, but I have not heard any remotely convincing alternative for any of the above terms. * Or me for that matter. I'm not an academic now and certainly wasn't when I started learning Haskell. I didn't know what a monoid was, had never heard of category theory or monads or functors. I was using monads and functors and monoids in less than a month after I started using Haskell. ** Heck, papers and decades worth of mathematical texts at almost every level is a heck of a lot more documentation than most "poorly documented" interfaces have.

Derek Elkins wrote:
No one doubts that there is room for improvement. However, the direction is better documentation, not different names. Better names is fine, but I have not heard any remotely convincing alternative for any of the above terms.
After thinking about it, I think you are right. But, there is a problem -- nobody is stepping up to write that documentation. If we can't get it, then perhaps we ought to fall back on better names. If we *can* get it, all the better, because these things need good docs, regardless of what they're called. I can see it now: a monoid by any other name would smell as sweet... -- John

On Thu, Jan 15, 2009 at 07:46:02PM +0000, Andrew Coppin wrote:
If we *must* insist on using the most obscure possible name for everything,
I don't think anybody even suggests using obscure names. Some people insist on precise names. The problem is that many Haskell constructs are so abstract and so general that precise names will be obscure to anybody with no background in logic (existential quantification), algebra (monoid) or category theory (monad). This level of abstraction is a great benefit, since it allows reuse of code and concepts, but the problem is internalizing the abstraction and learning to recognize how it works for different concrete data types. As pointed out numerous times, calling Monoids "Appendable" would be wildly misleading. But I think the real problem here is learning and understanding very abstract concepts, not the names.
can we at least write some documentation that doesn't require a PhD to comprehend?
I agree (with everybody) that documentation is lacking. Referring to category theory, logic, or scientific papers is good, but leaving it at that is pure intellectual terrorism. Good documentation should:

1. describe the abstraction, and
2. list instances with examples for each.

For extra credit, also include a section with exercises - and I'm only half joking here.
(Anybody who attempts to argue that "monoid" is not actually an obscure term has clearly lost contact with the real world.)
Anybody who calls Monoids "Appendable" has clearly lost contact with their programming language :-) -k -- If I haven't seen further, it is by standing in the footprints of giants

Ketil Malde
On Thu, Jan 15, 2009 at 07:46:02PM +0000, Andrew Coppin wrote:
If we *must* insist on using the most obscure possible name for everything,
I don't think anybody even suggests using obscure names. Some people insist on precise names.
Ketil, to second you here: "Appendable" *is* an obscure name! Even more than Monoid.

I remember my early CS algebra courses. I met cool animals there: Group, Ring, Vector Space. Those beasts were very strong, but also very calm at the same time. Although I was a bit shy at first, after some work we became friends. When I first saw Monad, Monoid, Functor and others, I wasn't scared. They must be from the same zoo as my old friends! Some work is needed to establish a connection, but it is doable. And it is very rewarding, because then you get very powerful, dependable friends that give you strong guarantees!

Now compare ICollection, IAppendable or the like. These are warm, and fuzzy, and "don't hurt me please", so the guarantees they give depend on mood or something as "intuitive" as the phase of the moon. And don't feed corner cases to them, because they may scratch you!

So: Warm, fuzzy: under-defined, intuitive, sloppy... Cool, strong: well-defined, dependable, concrete...

There are plenty of warm, fuzzy languages out there; if you want Java, you know where to find it. And *real programmers* seem to look for something more these days. I need to sleep well knowing my programs work. I need powerful, strong abstraction. I use Haskell wherever possible because it is based on the strongest thing we have: MATHS! Keep it that way! Monads aren't warm, they are COOL!

-- Gracjan

G'day all.
Quoting Gracjan Polak
I remember my early CS algebra courses. I met cool animals there: Group, Ring, Vector Space. Those beasts were very strong, but also very calm at the same time. Although I was a bit shy at first, after some work we became friends.
I don't know about you, but the word "module" threw me. That is probably the one name from algebra that clashes with computer science too much. Cheers, Andrew Bromage

Ketil Malde wrote:
The problem is that many Haskell constructs are so abstract and so general that precise names will be obscure to anybody with no background in logic (existential quantification), algebra (monoid) or category theory (monad). This level of abstraction is a great benefit, since it allows reuse of code and concepts, but the problem is internalizing the abstraction and learning to recognize how it works for different concrete data types.
Abstraction is a great thing to have. I'd just prefer it to not look so intimidating; the majority of these abstractions aren't actually "complicated" in any way, once you learn what they are...
As pointed out numerouos times, calling Monoids "Appendable" would be wildly misleading. But I think the real problem here is learning and understandig very abstract concepts, not the names.
If you're going to implement an abstraction for monoids, you might as well call it "monoid". On that I agree. I still think "appendable" (where it really *is* used only for appendable collections) is a more useful abstraction to have, since it's more specific. Generalising things is nice, but if you generalise things too far you end up with something too vague to be of practical use.
I agree (with everybody) that documentation is lacking.
If there is one single thing to come out of this giant flamewar, I hope it's better documentation. (I'll even lend a hand myself if I can figure out how...) Clearer documentation can't possibly be a bad thing!
(Anybody who attempts to argue that "monoid" is not actually an obscure term has clearly lost contact with the real world.)
Anybody who calls Monoids "Appendable" has clearly lost contact with their programming language :-)
Calling something "appendable" if it really is a general monoid would be slightly silly, yes.

Andrew Coppin wrote:
Abstraction is a great thing to have. I'd just prefer it to not look so intimidating;
What makes it look intimidating? If the answer is "it looks intimidating because the documentation consists of nothing more than a mathematical term, without a definition, and a reference to a paper", then I agree with you, and it seems so does most everyone else. But if the intimidation factor is coming from preconceptions like "it's mathy, therefore it's scary"; or "it's an unfamiliar term, therefore it's scary", then I think that's something that the reader needs to work on, not the designers and documenters of Haskell. Computer programming is full of terms that need to be learned, and if anything terms like "monoid" are fantastically useful because they're so precisely defined, and are part of a larger well-defined universe. I would have thought that any "true programmer" (like a true Scotsman) could appreciate the separation of concerns and factoring that's gone into abstract algebra. The idea that it's not relevant to programming (an implication that was made earlier) misses a bigger picture. How could a collection of very general structures associated with general operations *not* be relevant to programming? Given that mathematicians have spent centuries honing these useful structures, and given that plenty of applications for them in programming have been identified, it would virtually be a crime not to use them where they make sense. (A crime against... humanity? I look forward to the trials at The Hague of errant programming language and library designers.)
the majority of these abstractions aren't actually "complicated" in any way, once you learn what they are...
Which underscores my question - what's the source of the intimidation, then?
If you're going to implement an abstraction for monoids, you might as well call it "monoid". On that I agree.
Excellent.
I still think "appendable" (where it really *is* used only for appendable collections) is a more useful abstraction to have, since it's more specific. Generalising things is nice, but if you generalise things too far you end up with something too vague to be of practical use.
That's only one side of the story. Quite a few examples of monoid use have been given in this thread. How many of them are actually uses of Appendable, I wonder?

There's an equal and opposite risk of under-generalizing here: if you design something to take an Appendable argument, and if Appendable precludes other kinds of "non-appendable" monoids, you may be precluding certain argument types that would otherwise be perfectly reasonable, and building in restrictions to your code for no good reason - restrictions that don't relate to the actual requirements of the code. Of course, if you're just saying you want Appendable as an alias for Monoid, that's reasonable (I mentioned that possibility in another message), but a similar effect might be achieved by documentation that points out that appendability is one application for monoids.

A more suitable "friendly" synonym for "monoid" might be "combinable", which can more easily be defended: a binary operation combines its arguments by definition, since it turns two arguments into one result. But again, it would make more sense to observe in the documentation that monoids are combinable things, for various reasons that others have already addressed.

I like the reasons that Manuel Chakravarty gave - in part, "the language and the community favours drilling down to the core of a problem and exposing its essence in the bright light of mathematical precision". If anyone finds that scary, my advice to them is to wear sunglasses until they get used to it. In practice, what that means is: don't fuss over the fact that there's a lot of unfamiliar knowledge that seems important -- yes, it is important, but you can use Haskell quite well without knowing it all. I speak from experience, since I'm not a mathematician, let alone a category theorist.

Anton

Anton van Straaten wrote:
Andrew Coppin wrote:
Abstraction is a great thing to have. I'd just prefer it to not look so intimidating;
What makes it look intimidating?
If the answer is "it looks intimidating because the documentation consists of nothing more than a mathematical term, without a definition, and a reference to a paper", then I agree with you, and it seems so does most everyone else.
But if the intimidation factor is coming from preconceptions like "it's mathy, therefore it's scary"; or "it's an unfamiliar term, therefore it's scary", then I think that's something that the reader needs to work on, not the designers and documenters of Haskell.
I guess you're right. A problem I see a lot of [and other people have mentioned this] is that a lot of documentation presents highly abstracted things, and gives *no hint* of why on earth these might possibly be useful for something. (E.g., "coarbitrary". Wuh??) Perhaps fixing this *would* help make Haskell more accessible. (The "other" problem of course is that what documentation does exist is scattered all over the place...) I still think existential quantification is a step too far though. :-P

I still think existential quantification is a step too far though. :-P
Seriously, existential quantification is a REALLY simple concept, that you would learn week two (or maybe three) in any introductory course on logic. In fact, I would argue that far more people probably know what existential quantification is than that know what a monoid is. :-) Cheers, /Niklas

Niklas Broberg wrote:
I still think existential quantification is a step too far though. :-P
Seriously, existential quantification is a REALLY simple concept, that you would learn week two (or maybe three) in any introductory course on logic. In fact, I would argue that far more people probably know what existential quantification is than that know what a monoid is. :-)
Andrew's core objection here seems reasonable to me. It was this:
{-# LANGUAGE ExistentialQuantification #-} is an absurd name and should be changed to something that, at a minimum, tells you it's something to do with the type system.
But I suspect I part company from Andrew in thinking that something like ExistentiallyQuantifiedTypes would be a perfectly fine alternative. Anton

On Fri, 2009-01-16 at 18:14 -0500, Anton van Straaten wrote:
Niklas Broberg wrote:
I still think existential quantification is a step too far though. :-P
Seriously, existential quantification is a REALLY simple concept, that you would learn week two (or maybe three) in any introductory course on logic. In fact, I would argue that far more people probably know what existential quantification is than that know what a monoid is. :-)
Andrew's core objection here seems reasonable to me. It was this:
{-# LANGUAGE ExistentialQuantification #-} is an absurd name and should be changed to something that, at a minimum, tells you it's something to do with the type system.
But I suspect I part company from Andrew in thinking that something like ExistentiallyQuantifiedTypes would be a perfectly fine alternative.
+1 (Although shouldn't it really be ExistentiallyQuantifiedConstructorTypes or something? If GHC ever actually adds first-class existentials, what is Cabal going to call *that* then?) jcc

On Fri, 2009-01-16 at 15:21 -0800, Jonathan Cast wrote:
On Fri, 2009-01-16 at 18:14 -0500, Anton van Straaten wrote:
Niklas Broberg wrote:
I still think existential quantification is a step too far though. :-P
Seriously, existential quantification is a REALLY simple concept, that you would learn week two (or maybe three) in any introductory course on logic. In fact, I would argue that far more people probably know what existential quantification is than that know what a monoid is. :-)
Andrew's core objection here seems reasonable to me. It was this:
{-# LANGUAGE ExistentialQuantification #-} is an absurd name and should be changed to something that, at a minimum, tells you it's something to do with the type system.
But I suspect I part company from Andrew in thinking that something like ExistentiallyQuantifiedTypes would be a perfectly fine alternative.
+1
This focus on names is ridiculous. I agree that good names are beneficial, but they don't have to encode everything about the referent into themselves. Haskell is called "Haskell", not "StaticallyTypedPurelyFunctionalProgrammingLanguage". In this particular case it's absurd: the name is only of mnemonic value; other than that, it could be called FraggleRock. Regardless of the name, you are going to have to look up what it refers to (in the user's guide), or, having already done that earlier, just know what it means.
(Although shouldn't it really be ExistentiallyQuantifiedConstructorTypes or something? If GHC ever actually adds first-class existentials, what is Cabal going to call *that* then?)
FreeExistentials. FirstClassExistentials would also be reasonable. Though renaming the current LANGUAGE tag to LocalExistentialQuantification would be better.

On Sat, Jan 17, 2009 at 12:14 AM, Anton van Straaten
I still think existential quantification is a step too far though. :-P
Seriously, existential quantification is a REALLY simple concept, that you would learn week two (or maybe three) in any introductory course on logic. In fact, I would argue that far more people probably know what existential quantification is than that know what a monoid is. :-)
Andrew's core objection here seems reasonable to me. It was this:
{-# LANGUAGE ExistentialQuantification #-} is an absurd name and should be changed to something that, at a minimum, tells you it's something to do with the type system.
But I suspect I part company from Andrew in thinking that something like ExistentiallyQuantifiedTypes would be a perfectly fine alternative.
Well, I definitely agree to that, but that's not what he wrote in the post I answered. My point was that existential quantification is nowhere near scary. But yes - making the Types part explicit is certainly not a bad idea. +1 for ExistentiallyQuantifiedTypes. Cheers, /Niklas

Anton van Straaten wrote:
Niklas Broberg wrote:
I still think existential quantification is a step too far though. :-P
Seriously, existential quantification is a REALLY simple concept, that you would learn week two (or maybe three) in any introductory course on logic. In fact, I would argue that far more people probably know what existential quantification is than that know what a monoid is. :-)
Andrew's core objection here seems reasonable to me. It was this:
{-# LANGUAGE ExistentialQuantification #-} is an absurd name and should be changed to something that, at a minimum, tells you it's something to do with the type system.
But I suspect I part company from Andrew in thinking that something like ExistentiallyQuantifiedTypes would be a perfectly fine alternative.
I would suggest that ExistentiallyQuantifiedTypeVariables would be an improvement on just ExistentialQuantification - but I'd still prefer the less cryptic HiddenTypeVariables. (Since, after all, that's all this actually does.) Either way, nobody is going to change the name, so why worry? PS. There exist courses on logic? That could be potentially interesting...
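To make the "hidden type variables" reading above concrete, here is a minimal sketch of what the extension actually buys you (the Showable type and its constructor are made up for illustration):
{-# LANGUAGE ExistentialQuantification #-}

-- The type variable 'a' is existentially quantified: it is chosen when a
-- value is built and then "hidden" - it does not appear in the type Showable.
data Showable = forall a. Show a => MkShowable a

-- Values built from different element types can therefore share one list.
stuff :: [Showable]
stuff = [MkShowable (3 :: Int), MkShowable "hello", MkShowable True]

-- Pattern matching brings the hidden type back into scope, but all we know
-- about it is the Show constraint.
render :: Showable -> String
render (MkShowable x) = show x

main :: IO ()
main = mapM_ (putStrLn . render) stuff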

On Sat, 2009-01-17 at 11:07 +0000, Andrew Coppin wrote:
Anton van Straaten wrote:
Niklas Broberg wrote:
I still think existential quantification is a step too far though. :-P
Seriously, existential quantification is a REALLY simple concept, that you would learn week two (or maybe three) in any introductory course on logic. In fact, I would argue that far more people probably know what existential quantification is than that know what a monoid is. :-)
Andrew's core objection here seems reasonable to me. It was this:
{-# LANGUAGE ExistentialQuantification #-} is an absurd name and should be changed to something that, at a minimum, tells you it's something to do with the type system.
But I suspect I part company from Andrew in thinking that something like ExistentiallyQuantifiedTypes would be a perfectly fine alternative.
I would suggest that ExistentiallyQuantifiedTypeVariables would be an improvement on just ExistentialQuantification - but I'd still prefer the less cryptic HiddenTypeVariables. (Since, after all, that's all this actually does.)
Consider the expression (I hate this expression) case error "Urk!" of x -> error "Yak!" When you translate this into System F, you have to come up with a fresh type variable for the type of x, even though that variable is unused in the type of the entire expression. Which is what HiddenTypeVariables brings to my mind every time you use it. jcc

Andrew Coppin
I would suggest that ExistentiallyQuantifiedTypeVariables would be an improvement [...]
That must be a joke. Typing the long extension names in LANGUAGE pragmas over and over again is tiring and annoying enough already. We really don't need even longer names, and your "improvement" fills up almost half of the width of an 80-character terminal. I can't wait for the next Haskell standard, where at last all those extensions are built in. For the remaining extensions, I strongly suggest abbreviations and a more convenient way to specify them. For example, an import-like statement:
uses MPTC
uses FlexInst
instead of:
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
Greets, Ertugrul. -- nightmare = unsafePerformIO (getWrongWife >>= sex) http://blog.ertes.de/

Ertugrul Soeylemez wrote:
Andrew Coppin
wrote: I would suggest that ExistentiallyQuantifiedTypeVariables would be an improvement [...]
That must be a joke. Typing the long extension names in LANGUAGE pragmas over and over again is tiring and annoying enough already. We really don't need even longer names, and your "improvement" fills up almost half of the width of an 80 characters terminal.
Which is why I personally prefer HiddenTypeVariables. (This has the advantage of using only pronounceable English words, which means you can use it when speaking out loud.) But, as I say, nobody is going to rename anything, so it's moot.
I can't wait for the next Haskell standard, where at last all those extensions are built in.
This frightens me. At the moment, I understand how Haskell 98 works. There are lots of extensions out there, but I don't have to care about that because I don't use them. If I read somebody else's code and it contains a LANGUAGE pragma, I can immediately tell that the code won't be comprehensible, so I don't need to waste time trying to read it. But once Haskell' becomes standard, none of this holds any more. Haskell' code will use obscure language features without warning, and unless I somehow learn every extension in the set, I'll never be able to read Haskell again! (One presumes that they won't add any extensions which actually *break* backwards compatibility, so hopefully I can still pretend these troublesome extensions don't exist when writing my own code...)

Andrew Coppin wrote:
I can't wait for the next Haskell standard, where at last all those extensions are built in.
This frightens me.
The example he gave had the "uses" keyword, so I assume it's built in in the same way Perl pragmas are built in. So you can happily ignore code when you see "uses" at the top of the file. ;) Although I could be wrong. Cheers, Cory

----- Original Message ----
From: Andrew Coppin
Which is why I personally prefer HiddenTypeVariables. (This has the advantage of using only pronounceable English words, which means you can use it when speaking out loud.)
Existential - English, easy to pronounce
Quantify - English, easy to pronounce
I know I've been seeing those backwards E's and upside down A's in not-so-advanced Maths courses for a long time (since high school, I'm sure) and I certainly encountered them before 'Boolean'. If you could do a geometry proof in high school, you have the Maths background needed to understand the ideas. (How they apply to types is another story, but the words shouldn't be scary.)
I can't wait for the next Haskell standard, where at last all those extensions are built in.
This frightens me.
At the moment, I understand how Haskell 98 works. There are lots of extensions out there, but I don't have to care about that because I don't use them. If I read somebody else's code and it contains a LANGUAGE pragma, I can immediately tell that the code won't be comprehensible, so I don't need to waste time trying to read it. But once Haskell' becomes standard, none of this holds any more. Haskell' code will use obscure language features without warning, and unless I somehow learn every extension in the set, I'll never be able to read Haskell again! (One presumes that they won't add any extensions which actually *break* backwards compatibility, so hopefully I can still pretend these troublesome extensions don't exist when writing my own code...)
Some of the most useful libraries (e.g. parsec, generics) use these type system extensions (higher rank polymorphism, existentials). It would be great if these could be considered 'standard Haskell'.

Andrew Coppin
Ertugrul Soeylemez wrote:
Andrew Coppin
wrote: I would suggest that ExistentiallyQuantifiedTypeVariables would be an improvement [...]
That must be a joke. Typing the long extension names in LANGUAGE pragmas over and over again is tiring and annoying enough already. We really don't need even longer names, and your "improvement" fills up almost half of the width of an 80 characters terminal.
Which is why I personally prefer HiddenTypeVariables. (This has the advantage of using only pronounceable English words, which means you can use it when speaking out loud.)
But, as I say, nobody is going to rename anything, so it's moot.
Well, yes, unfortunately, unless someone proposes extension renamings together with a long paper about the psychological implications and advantages of using shorter names.
I can't wait for the next Haskell standard, where at last all those extensions are built in.
This frightens me.
At the moment, I understand how Haskell 98 works. There are lots of extensions out there, but I don't have to care about that because I don't use them. If I read somebody else's code and it contains a LANGUAGE pragma, I can immediately tell that the code won't be comprehensible, so I don't need to waste time trying to read it. But once Haskell' becomes standard, none of this holds any more. Haskell' code will use obscure language features without warning, and unless I somehow learn every extension in the set, I'll never be able to read Haskell again! (One presumes that they won't add any extensions which actually *break* backwards compatibility, so hopefully I can still pretend these troublesome extensions don't exist when writing my own code...)
I think the list of accepted extensions is well chosen. And don't worry, the extensions I'm mainly talking about are easy to comprehend and very useful, for example multi-parameter type classes and rank-n types. Greets, Ertugrul. -- nightmare = unsafePerformIO (getWrongWife >>= sex) http://blog.ertes.de/

* Andrew Coppin
A problem I see a lot of [and other people have mentioned this] is that a lot of documentation presents highly abstracted things, and gives *no hint* of why on earth these might possibly be useful for something.
I think this is definitely something that should be addressed by better documentation of some kind. Unfortunately, this is quite possibly the hardest kind of knowledge to put down into words: before you learn the concepts, you don't know them, so you can't write about them, but after you learn them, they seem so obvious that you don't know how to describe them. (At least, this is typically the problem I have; I can answer questions about something easily, maybe even walk someone through understanding it, but I can't draft a document that will describe things adequately to a newbie). This problem is worse in Haskell than other languages, simply because abstractions are used more frequently and pervasively in Haskell. In many other languages, these abstractions are perfectly applicable, but actually encoding them in the language is simply too unwieldy. Thus, while the abstraction may be present as a fuzzy concept at the back of the programmer's mind, or even as a "design pattern", the code people actually work with tends to be at a more concrete level, despite the more limited possibilities of code reuse at this level. This ties in with the complaint that Haskell variable / parameter names aren't descriptive enough. You frequently hear things like "why call it 'xs' instead of 'applicableItems'?"; often, the answer to this is simply that the value in question is something so general that you cannot describe it more specifically than "a list of something or other". Haskell code is being written at a higher level of abstraction than the newcomer is used to, and thus the highly abstract names are mistaken for vague or imprecise names. Now, it's all very well to explain the reasons behind this to the newcomer, but they're still left in a position where they can't find the tools they need to solve a particular problem. They're used to looking for the concrete tools they need to do some task or another, which aren't there; instead, there are all these abstract tools which can perform the concrete task at hand, but what is really needed is help finding the abstract tool for the concrete task at hand, or even abstracting the concrete task at hand, thus making the choice of abstract tool(s) an obvious one. Sure, you can pop into #haskell and hopefully find someone to walk you through the processes until you begin to understand the abstractions yourself, but I think we (I almost hesitate to include myself, given my own relatively miniscule Haskell knowledge) can do better than this in terms of helping people unfamiliar with these concepts. Also, more importantly, I'm referring specifically to teaching *programmers* the concepts; I have no problem with *naming* things based on category theory or abstract algebra or quantum mechanics, but I should not be required to learn half a dozen fields of mathematics or physics in order to *use* things. Writing about how Monads in Haskell relate to Monads in category theory is of interest to category theorists, but isn't something programmers should be reading. Hopefully nothing I've said here comes as a surprise to anyone, and I'd be surprised if there were many serious objections to any of it, but perhaps it does need to be highlighted more prominently as an important area to improve if Haskell is to grow as a programming language. -- mithrandi, i Ainil en-Balandor, a faer Ambar

On Thu, 15 Jan 2009, John Goerzen wrote:
Several people have suggested this, and I think it would go a long way towards solving the problem. The problem is: this documentation can really only be written by those that understand the concepts, understand how they are used practically, and have the time and inclination to submit patches. Experience suggests there may be no such people out there :-)
I'd probably be willing to have a crack given people to report back to (so someone else can comment on whether the docs're any good) and a clear process to follow. How much I'd actually get done's another matter of course, and is also likely to depend on who's willing to talk about docs on IRC! I'm thinking we probably need a #haskell-docs for coordination, there's too much traffic to do it in #haskell itself these days. -- flippa@flippac.org "The reason for this is simple yet profound. Equations of the form x = x are completely useless. All interesting equations are of the form x = y." -- John C. Baez

On Thu, Jan 15, 2009 at 7:46 PM, Andrew Coppin
The sad thing is, it's not actually complicated. The documentation just makes it seem like it is! :-(
This is so true for a heck of a lot of things. Existential quantification being just one of them. Loads of things in Haskell have big powerful (but scary) names which I really think intimidate people, and the situation isn't helped when a lot of tutorials use the theoretical basis for the construct as a starting point, rather than actually describing the construct from the perspective of a programmer first (see Monads). Haskell really isn't that difficult compared to other languages, but people still get the impression that you need to be a big brain on a stick to use it, and terminology is certainly part of the equation. This doesn't mean that making up new words is always better, but we should certainly strive to exploit any opportunity to clarify the issue (this means that haddock comments and language books/tutorials shouldn't refer to academic papers first and foremost, but use common English and practical examples to describe what's being used, and academic nerds can consult the footnotes for their fill of papers containing pages of squiggly symbols!). -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862

Sebastian Sylvan wrote:
On Thu, Jan 15, 2009 at 7:46 PM, Andrew Coppin
mailto:andrewcoppin@btinternet.com> wrote: The sad thing is, it's not actually complicated. The documentation just makes it seem like it is! :-(
This is so true for a heck of a lot of things. Existential quantification being just one of them. Loads of things in Haskell have big powerful (but scary) names which I really think intimidate people, the situation isn't helped when a lot of tutorials use the theoretical basis for the construct as a starting point, rather than actually describing the construct from the perspective of a programmer first (see Monads). Haskell really isn't that difficult compared to other languages, but people still get the impression that you need to be a big brain on a stick to use it, terminology is certainly part of the equation.
This doesn't mean that making up new words is always better, but we should certainly strive to exploit any opportunity to clarify the issue and (this means that haddock comments and language books/tutorials shouldn't refer to academic papers first and foremost, but use common English and practical examples to describe what's being used, and academic nerds can consult the footnotes for their fill of papers containing pages of squiggly symbols!).
I basically agree with most of what you just said. I'm not sure having a Monoid class is actually useful for anything - but if we must have it, there seems to be little better possible name for something so vague. {-# LANGUAGE ExistentialQuantification #-} is an absurd name and should be changed to something that, at a minimum, tells you it's something to do with the type system. (Ideally it would also be pronounceable.) Of course, nobody will take any notice, since changing this would induce mass breakage for all the millions of LoC that already use the old name. I think "documenting" a package by saying "read this academic paper" should be banned. (Most especially if the paper in question isn't even available online and can only be obtained from a reputable university library!!) For example, I was looking at one of the monad transformers (I don't even remember which one now), and the Haddock contained some type signatures and a line saying "read this paper". The paper in question mentioned the transformer in passing as a 5-line example of how to use polymorphism, but *still* without explaining how to actually use it! (I.e., the paper was about polymorphism, and this transformer was just a quick example.) What the hell?? I presume I can call "more documentation please!" without upsetting even the most ardent category theory militant... ;-) Unfortunately, it's not going to write itself, and I have no idea how to solve the problem. (That is, even if I wrote some better documentation myself, I don't know how to submit it to get it into the official package documentation. E.g., Parsec has a great tutorial document, but the Haddock pages are barren. It'd be easy to fix, but I don't know how to submit the updates.)

On Thu, 2009-01-15 at 21:17 +0000, Andrew Coppin wrote:
Sebastian Sylvan wrote:
On Thu, Jan 15, 2009 at 7:46 PM, Andrew Coppin
mailto:andrewcoppin@btinternet.com> wrote: The sad thing is, it's not actually complicated. The documentation just makes it seem like it is! :-(
This is so true for a heck of a lot of things. Existential quantification being just one of them. Loads of things in Haskell have big powerful (but scary) names which I really think intimidate people, the situation isn't helped when a lot of tutorials use the theoretical basis for the construct as a starting point, rather than actually describing the construct from the perspective of a programmer first (see Monads). Haskell really isn't that difficult compared to other languages, but people still get the impression that you need to be a big brain on a stick to use it, terminology is certainly part of the equation.
This doesn't mean that making up new words is always better, but we should certainly strive to exploit any opportunity to clarify the issue and (this means that haddock comments and language books/tutorials shouldn't refer to academic papers first and foremost, but use common English and practical examples to describe what's being used, and academic nerds can consult the footnotes for their fill of papers containing pages of squiggly symbols!).
I basically agree with most of what you just said.
I'm not sure having a Monoid class is actually useful for anything - but if we must have it, there seems to be little better possible name for something so vague.
{-# LANGUAGE ExistentialQuantification #-} is an absurd name and should be changed to something that, at a minimum, tells you it's something to do with the type system. (Ideally it would also be pronounceable.) Of course, nobody will take any notice, since changing this would induce mass breakage for all the millions of LoC that already use the old name.
I think "documenting" a package by saying "read this academic paper" should be banned. (Most especially if the paper in question isn't even available online and can only be obtained from a reputable university library!!) For example, I was looking at one of the monad transformers (I don't even remember which one now), and the Haddoc contained some type signatures and a line saying "read this paper". The paper in question mentioned the transformer in passing as a 5-line example of how to use polymorphism, but *still* without explaining how to actually use it! (I.e., the paper was about polymorphism, and this transformer was just a quick example.) What the hell??
I presume I can call "more documentation please!" without upsetting even the most ardent category theory militant... ;-)
But you don't seem to be capable of separating your valid complaints from your invalid ones. Everyone wants the Haddock documentation to be maximally useful. But there should never be any confusion between *defining* a term used in a library and *choosing* that term. They are simply different activities, and neither can be a substitute for the other. jcc

I'm not sure having a Monoid class is actually useful for anything - but if we must have it, there seems to be little better possible name for something so vague.
IMO the Monoid class is useful since, if you define mempty and mappend, you get mconcat for free. I don't see what the problem is. Most people will accept Functor, as it is used a lot. Monoid might be less used, but if you reject it, then on the same principle you might just as well reject Functor, and any other type class.
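As a concrete illustration of the "mconcat for free" point above, here is a minimal sketch written against the Monoid class as it stood at the time of this thread (mempty plus mappend; the MaxInt type is made up, and newer GHCs would additionally want a Semigroup instance):
import Data.Monoid (Monoid(..))

-- A hypothetical wrapper whose monoid operation is "take the larger Int".
newtype MaxInt = MaxInt Int deriving (Show, Eq)

instance Monoid MaxInt where
  mempty                        = MaxInt minBound
  mappend (MaxInt a) (MaxInt b) = MaxInt (max a b)

-- mconcat comes for free from the class default: mconcat = foldr mappend mempty
largest :: [MaxInt] -> MaxInt
largest = mconcat

-- largest [MaxInt 3, MaxInt 7, MaxInt 5]  ==  MaxInt 7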

On Thu, 2009-01-15 at 19:46 +0000, Andrew Coppin wrote:
PS. As a small aside... Is the Monoid class actually used *anywhere* in all of Haskell?
Yes. They're used quite a lot in Cabal. Package databases are monoids. Configuration files are monoids. Command line flags and sets of command line flags are monoids. Package build information is a monoid. It is also used in the Foldable class which is a nice interface for traversing/visiting structures. Binary serialisation is also a monoid. Duncan

duncan.coutts:
On Thu, 2009-01-15 at 19:46 +0000, Andrew Coppin wrote:
PS. As a small aside... Is the Monoid class actually used *anywhere* in all of Haskell?
Yes.
They're used quite a lot in Cabal. Package databases are monoids. Configuration files are monoids. Command line flags and sets of command line flags are monoids. Package build information is a monoid.
It is also used in the Foldable class which is a nice interface for traversing/visiting structures. Binary serialisation is also a monoid.
Also, xmonad configuration hooks are monoidal. So all those xmonad users gluing together keybindings are using the Monoid class. -- Don

I think perhaps the correct question here is not "how many instances of
Monoid are there?", but "how many functions are written that can use an
arbitrary Monoid". E.g., the fact that there are a lot of instances of Monad
doesn't make it useful. There are a lot of instances of Monad because it's
useful to have instances of Monad. Why? Because of
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad.htm... !
Look at all the cool stuff you can automagically do with your type just
because it's an instance of Monad! I think that's the point. What can you do
with arbitrary Monoids? Not much, as evidenced by
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html
On Thu, Jan 15, 2009 at 3:51 PM, Don Stewart
duncan.coutts:
On Thu, 2009-01-15 at 19:46 +0000, Andrew Coppin wrote:
PS. As a small aside... Is the Monoid class actually used *anywhere* in all of Haskell?
Yes.
They're used quite a lot in Cabal. Package databases are monoids. Configuration files are monoids. Command line flags and sets of command line flags are monoids. Package build information is a monoid.
It is also used in the Foldable class which is a nice interface for traversing/visiting structures. Binary serialisation is also a monoid.
Also, xmonad configuration hooks are monoidal. So all those xmonad users gluing together keybindings are using the Monoid class.
-- Don

On Thu, 2009-01-15 at 16:03 -0500, Andrew Wagner wrote:
I think perhaps the correct question here is not "how many instances of Monoid are there?", but "how many functions are written that can use an arbitrary Monoid". E.g., the fact that there are a lot of instances of Monad doesn't make it useful. There are a lot of instances of Monad because it's useful to have instances of Monad. Why? Because of http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad.htm... ! Look at all the cool stuff you can automagically do with your type just because it's an instance of Monad! I think that's the point. What can you do with arbitrary Monoids? Not much, as evidenced by http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html
Data.Foldable has several functions that use an arbitrary monoid. A new API I've been working on for manipulating cabal files uses a tree of values of any monoid type. Sets of installation paths is a monoid and is parametrised by another monoid type (so we can have both sets of file paths and path templates). A similar point applies for package indexes. Most of the utility functions for handling command line arguments in Cabal are parameterised by the monoid, because different command line flags are different kinds of monoid. Some are list monoids, others are first / last style monoids. But it's not just the ability to write generic functions that is relevant. By making a type an instance of Monoid instead of exporting emptyFoo, joinFoo functions it makes the API clearer because it shows that we are re-using an existing familiar concept rather than inventing a new one. It also means the user already knows that joinFoo must be associative and have unit emptyFoo without having to read the documentation. Duncan
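A small sketch of the Data.Foldable point above (using only base; the example functions are made up): foldMap is one function that works for any Foldable structure and any choice of monoid.
import Data.Foldable (foldMap)
import Data.Monoid (Sum(..), All(..))

-- Same traversal, two different monoids.
totalLength :: [String] -> Int
totalLength = getSum . foldMap (Sum . length)

allEven :: [Int] -> Bool
allEven = getAll . foldMap (All . even)

-- totalLength ["ab", "cde"]  ==  5
-- allEven [2, 4, 5]          ==  False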

Andrew Wagner wrote:
I think perhaps the correct question here is not "how many instances of Monoid are there?", but "how many functions are written that can use an arbitrary Monoid". E.g., the fact that there are a lot of instances of Monad doesn't make it useful. There are a lot of instances of Monad because it's useful to have instances of Monad. Why? Because of http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad.htm... ! Look at all the cool stuff you can automagically do with your type just because it's an instance of Monad! I think that's the point. What can you do with arbitrary Monoids? Not much, as evidenced by http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Monoid.html
One example where Monoids (in full generality) are useful is that of measurements in the Data.Sequence paper (which is sadly not implemented in the library, although it is used to maintain the length for efficient indexing), http://www.soi.city.ac.uk/~ross/papers/FingerTree.html
The concept applies to any tree that represents an ordered list. The basic idea is that given a measurement for single elements,
class Monoid v => Measured a v where
    measure :: a -> v
we can annotate a tree with cached measurements of the corresponding sequences,
data Tree a v = Empty | Leaf v a | Node v (Tree a v) (Tree a v)
measureTree :: Measured a v => Tree a v -> v
measureTree Empty        = mempty
measureTree (Leaf v _)   = v
measureTree (Node v _ _) = v
which can be calculated easily by smart constructors:
leaf :: Measured a v => a -> Tree a v
leaf a = Leaf (measure a) a
node :: Measured a v => Tree a v -> Tree a v -> Tree a v
node l r = Node (measureTree l `mappend` measureTree r) l r
Because v is a monoid, the construction satisfies the law
measureTree = mconcat . map measure . toList
where
toList Empty        = []
toList (Leaf _ a)   = [a]
toList (Node _ l r) = toList l ++ toList r
All the usual efficient tree operations - insertion, deletion, splitting, concatenation, and so on - will continue to work, if the cached values are ignored on pattern matching and the smart constructors are used for constructing the new trees. measure or `mappend` will be called for each smart constructor use - if they take constant time, the complexity of the tree operations doesn't change.
Applications include:
- finding and maintaining the sum of any substring of the sequence
- maintaining the minimum and maximum of the list elements
- maintaining the maximal sum of any substring of the sequence (this can be done by measuring four values for each subtree: 1. the sum of all elements of the sequence, 2. the maximum sum of any prefix of the sequence, 3. the maximum sum of any suffix of the sequence, 4. the maximum sum of any substring of the sequence)
I also found the idea useful for http://projecteuler.net/index.php?section=problems&id=220 starting out with
-- L system basis
class Monoid a => Step a where
    l :: a
    r :: a
    f :: a
and then providing a few instances for Step, one of which was a binary tree with two measurements. Bertram

Graphic composition using painters algorithm can be seen as a monoid.
data Graphic = Empty
| Graphic `Over` Graphic
| Ellipse Bounds
| ....
instance Monoid Graphic where
mempty = Empty
mappend = Over
So all functions that operate on monoids can be used on Graphic as well,
like mconcat that converts a [Graphic] into a Graphic
On Thu, Jan 15, 2009 at 9:51 PM, Don Stewart
duncan.coutts:
On Thu, 2009-01-15 at 19:46 +0000, Andrew Coppin wrote:
PS. As a small aside... Is the Monoid class actually used *anywhere* in all of Haskell?
Yes.
They're used quite a lot in Cabal. Package databases are monoids. Configuration files are monoids. Command line flags and sets of command line flags are monoids. Package build information is a monoid.
It is also used in the Foldable class which is a nice interface for traversing/visiting structures. Binary serialisation is also a monoid.
Also, xmonad configuration hooks are monoidal. So all those xmonad users gluing together keybindings are using the Monoid class.
-- Don

Duncan Coutts wrote:
On Thu, 2009-01-15 at 19:46 +0000, Andrew Coppin wrote:
PS. As a small aside... Is the Monoid class actually used *anywhere* in all of Haskell?
Yes.
They're used quite a lot in Cabal. Package databases are monoids. Configuration files are monoids. Command line flags and sets of command line flags are monoids. Package build information is a monoid.
OK, well then my next question would be "in what way is defining configuration files as a monoid superior to, uh, not defining them as a monoid?" What does it allow you to do that you couldn't otherwise? I'm not seeing any obvious advantage, but you presumably did this for a reason...
It is also used in the Foldable class which is a nice interface for traversing/visiting structures. Binary serialisation is also a monoid.
Foldable I'm vaguely familiar with. Its utility is more apparent.

On Thu, 2009-01-15 at 21:21 +0000, Andrew Coppin wrote:
OK, well then my next question would be "in what way is defining configuration files as a monoid superior to, uh, not defining them as a monoid?" What does it allow you to do that you couldn't otherwise? I'm not seeing any obvious advantage, but you presumably did this for a reason...
[ I know I'm repeating myself from elsewhere in this thread but this is the better question for the answer :-) ] By making a type an instance of Monoid instead of exporting emptyFoo, joinFoo functions it makes the API clearer because it shows that we are re-using an existing familiar concept rather than inventing a new one. It also means the user already knows that joinFoo must be associative and have unit emptyFoo without having to read the documentation. Perhaps it's what OO programmers would call a design pattern. Identify a pattern, give it a name and then when the pattern crops up again (and again) then the reader of the code will have an easier time because they are already familiar with that named pattern. Of course the fact that we can occasionally use the pattern to parametrise and write more re-usable code is a bonus. Duncan

Duncan Coutts wrote:
By making a type an instance of Monoid instead of exporting emptyFoo, joinFoo functions it makes the API clearer because it shows that we are re-using an existing familiar concept rather than inventing a new one. It also means the user already knows that joinFoo must be associative and have unit emptyFoo without having to read the documentation.
I don't know about you, but rather than knowing that joinFoo is associative, I'd be *far* more interested in finding out what it actually _does_. Knowing that it's a monoid doesn't really tell me anything useful. A monoid can be almost anything. As an aside, the integers form two different monoids. Haskell can't [easily] handle that. Does anybody know of a language that can?

On Thu, Jan 15, 2009 at 5:32 PM, Andrew Coppin
As an aside, the integers form two different monoids. Haskell can't [easily] handle that. Does anybody know of a language that can?
Some of the ML-derived languages can do that. Essentially, your code
takes another module which implements a monoid as an argument.
The catch is that you have to explicitly provide the monoid
implementation in order to use your code.
--
Dave Menendez

On Thursday 15 January 2009 6:21:28 pm David Menendez wrote:
On Thu, Jan 15, 2009 at 5:32 PM, Andrew Coppin
wrote: As an aside, the integers form two different monoids. Haskell can't [easily] handle that. Does anybody know of a language that can?
Some of the ML-derived languages can do that. Essentially, your code takes another module which implements a monoid as an argument.
The catch is that you have to explicitly provide the monoid implementation in order to use your code.
You can do that in Haskell, as well, although it will end up uglier than ML. You can write your own dictionary type:
data Monoid m = Monoid { unit :: m, bin :: m -> m -> m }
And pass that around:
twice :: Monoid m -> m -> m
twice mon m = bin mon m m
And even manually simulate locally opening the dictionary with a where clause:
twice mon m = m ++ m
  where (++) = bin mon
This is, after all, what GHC translates type classes into behind the scenes (although that isn't necessarily how they must be implemented). Some folks even argue that type classes are overused and this style should be significantly more common. -- Dan
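A self-contained sketch of the dictionary-passing idea above, showing the earlier point that the integers carry two different monoids: with explicit dictionaries they are simply two values (the record and its names are made up, and renamed here to avoid clashing with the Monoid class):
data MonoidDict m = MonoidDict { unit :: m, bin :: m -> m -> m }

twice :: MonoidDict m -> m -> m
twice dict x = bin dict x x

-- The two integer monoids, as two ordinary values passed explicitly:
sumMonoid, productMonoid :: MonoidDict Integer
sumMonoid     = MonoidDict { unit = 0, bin = (+) }
productMonoid = MonoidDict { unit = 1, bin = (*) }

-- twice sumMonoid 3     == 6
-- twice productMonoid 3 == 9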

On Thu, 15 Jan 2009, Andrew Coppin wrote:
I don't know about you, but rather than knowing that joinFoo is associative, I'd be *far* more interested in finding out what it actually _does_.
A good many descriptions won't tell you whether it's associative though, and sometimes you need to know - for example, are foldl and foldr (denotationally) equivalent with this function? That is, can you just swap which function you call without any further checking?
As an aside, the integers form two different monoids. Haskell can't [easily] handle that. Does anybody know of a language that can?
There're many ways of doing it, the question's what you lose in the process. Usually you have to explicitly state which monoid you're using in each and every place, and there has to be a means for types that're based around (say) a monoid to state which monoid it is they're based around (this one's more likely to crop up with orderings). Haskell effectively dodges a limited form of dependent typing by being able to deduce that directly from the types involved. -- flippa@flippac.org "The reason for this is simple yet profound. Equations of the form x = x are completely useless. All interesting equations are of the form x = y." -- John C. Baez
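A tiny illustration of the foldl/foldr point above: with an associative operation and its identity the two folds agree on finite lists, while with a non-associative operation they need not.
-- (+) is associative with identity 0, so both folds give the same answer.
sumR, sumL :: [Integer] -> Integer
sumR = foldr (+) 0    -- sumR [1,2,3] == 1 + (2 + (3 + 0)) == 6
sumL = foldl (+) 0    -- sumL [1,2,3] == ((0 + 1) + 2) + 3 == 6

-- (-) is not associative, and the two folds disagree.
subR, subL :: [Integer] -> Integer
subR = foldr (-) 0    -- subR [1,2,3] == 1 - (2 - (3 - 0)) ==  2
subL = foldl (-) 0    -- subL [1,2,3] == ((0 - 1) - 2) - 3 == -6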

On Thu, Jan 15, 2009 at 5:27 PM, Duncan Coutts
On Thu, 2009-01-15 at 21:21 +0000, Andrew Coppin wrote:
OK, well then my next question would be "in what way is defining configuration files as a monoid superior to, uh, not defining them as a monoid?" What does it allow you to do that you couldn't otherwise? I'm not seeing any obvious advantage, but you presumably did this for a reason...
[ I know I'm repeating myself from elsewhere in this thread but this is the better question for the answer :-) ]
By making a type an instance of Monoid instead of exporting emptyFoo, joinFoo functions it makes the API clearer because it shows that we are re-using an existing familiar concept rather than inventing a new one. It also means the user already knows that joinFoo must be associative and have unit emptyFoo without having to read the documentation.
I assume these are all documented where the type is defined? One
disadvantage of Monoid compared to Monad is that you really need to
explain what operation your Monoid instance performs.
For example, the documentation for Maybe describes what its Monad
instance does, but not its Monoid instance.
I don't think any of the instances for [] are documented. (Admittedly,
this is difficult, since [] is not actually exported from anywhere.)
--
Dave Menendez

On 15/01/2009, at 23:27, Duncan Coutts wrote:
On Thu, 2009-01-15 at 21:21 +0000, Andrew Coppin wrote:
OK, well then my next question would be "in what way is defining configuration files as a monoid superior to, uh, not defining them as a monoid?" What does it allow you to do that you couldn't otherwise? I'm not seeing any obvious advantage, but you presumably did this for a reason...
[ I know I'm repeating myself from elsewhere in this thread but this is the better question for the answer :-) ]
By making a type an instance of Monoid instead of exporting emptyFoo, joinFoo functions it makes the API clearer because it shows that we are re-using an existing familiar concept rather than inventing a new one. It also means the user already knows that joinFoo must be associative and have unit emptyFoo without having to read the documentation.
Perhaps it's what OO programmers would call a design pattern. Identify a pattern, give it a name and then when the pattern crops up again (and again) then the reader of the code will have an easier time because they are already familiar with that named pattern.
Exactly, documenting knowledge was one of the benefits of design patterns. Monoid looks like the Composite pattern, one of the original GoF patterns. Is Composite a better name for Monoid? I guess that when the GoF folks were writing the book they had to come up with quite a few names, and some came out better than others. If anything, the Haskell approach is more consistent.

2009/1/15 Andrew Coppin
OK, well then my next question would be "in what way is defining configuration files as a monoid superior to, uh, not defining them as a monoid?" What does it allow you to do that you couldn't otherwise? I'm not seeing any obvious advantage, but you presumably did this for a reason...
I can't speak from the perspective of the Cabal developers, but combining configurations with partial information using a monoid operation is generally a good way to structure things. Basically, this would be analogous to the way that the First monoid (or the Last monoid) works, but across a number of fields. You have an empty or default configuration which specifies nothing that serves as the identity, and then a way of layering choices together, which is the monoid operation. - Cale

On Thu, 2009-01-15 at 18:41 -0500, Cale Gibbard wrote:
2009/1/15 Andrew Coppin
: OK, well then my next question would be "in what way is defining configuration files as a monoid superior to, uh, not defining them as a monoid?" What does it allow you to do that you couldn't otherwise? I'm not seeing any obvious advantage, but you presumably did this for a reason...
I can't speak from the perspective of the Cabal developers, but combining configurations with partial information using a monoid operation is generally a good way to structure things. Basically, this would be analogous to the way that the First monoid (or the Last monoid) works, but across a number of fields. You have an empty or default configuration which specifies nothing that serves as the identity, and then a way of layering choices together, which is the monoid operation.
Exactly. Some fields are the Last monoid (we call it Flag) and some are the list monoid. Whole sets of such settings are monoids point-wise. It is indeed great for combining/overriding settings from defaults, config files and the command line. Duncan
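A hedged sketch of the layering described above (the Config type and its fields are invented, not Cabal's actual types, and it is written against the Monoid class of the time, before the Semigroup split): each field is a "last setting wins" monoid, whole configurations combine point-wise, and layering defaults, config file and command line is just mappend.
import Data.Monoid (Monoid(..), Last(..))

-- Each field records "the last value anyone set, if any".
data Config = Config
  { verbosity :: Last Int
  , prefix    :: Last FilePath
  } deriving Show

instance Monoid Config where
  mempty      = Config mempty mempty
  mappend a b = Config (verbosity a `mappend` verbosity b)
                       (prefix    a `mappend` prefix    b)

defaults, fromFile, fromCmdLine :: Config
defaults    = Config (Last (Just 1)) (Last (Just "/usr/local"))
fromFile    = Config (Last Nothing)  (Last (Just "/opt/foo"))
fromCmdLine = Config (Last (Just 3)) (Last Nothing)

-- Later layers override earlier ones, field by field:
-- defaults `mappend` fromFile `mappend` fromCmdLine
--   == Config { verbosity = Last (Just 3), prefix = Last (Just "/opt/foo") }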

Andrew Coppin wrote:
Duncan Coutts wrote:
[Monoids are] used quite a lot in Cabal. Package databases are monoids. Configuration files are monoids. Command line flags and sets of command line flags are monoids. Package build information is a monoid.
OK, well then my next question would be "in what way is defining configuration files as a monoid superior to, uh, not defining them as a monoid?" What does it allow you to do that you couldn't otherwise? I'm not seeing any obvious advantage, but you presumably did this for a reason...
It makes those things generically combinable. Anton

On Thu, Jan 15, 2009 at 12:38 PM, Duncan Coutts wrote:
On Thu, 2009-01-15 at 19:46 +0000, Andrew Coppin wrote:
PS. As a small aside... Is the Monoid class actually used *anywhere* in all of Haskell?
Yes. They're used quite a lot in Cabal. Package databases are monoids. Configuration files are monoids. Command line flags and sets of command line flags are monoids. Package build information is a monoid.
It is also used in the Foldable class which is a nice interface for traversing/visiting structures. Binary serialisation is also a monoid.
The Writer Monad requires that you give it a Monoid for it to do its work properly.
Duncan

"Andrew" == Andrew Coppin
writes:
Andrew> If we *must* insist on using the most obscure possible name for Andrew> everything, can we at least write some documentation that Andrew> doesn't require a PhD to comprehend?? (Anybody who attempts to Andrew> argue that "monoid" is not actually an obscure term has clearly Andrew> lost contact with the real world.) *thumb up* Let the elitists enjoy in obscure terminology, but pls. write docs for programmers (with examples included). Sincerely, Gour -- Gour | Zagreb, Croatia | GPG key: C6E7162D ----------------------------------------------------------------

Maybe you can explain that again? I see how the subset of Kleisli arrows (a -> m a) forms a monoid (a, return . id, >>=), but what to do with (a -> m b)? (>>=) is not closed under this larger set. Dan
Miguel Mitrofanov wrote:
Notice that "monoid" sounds almost *exactly* like "monad". And yet, what you use them for is wildly unrelated.
Well, monads are monoids. I remember explaining that to you...

On 16 Jan 2009, at 01:10, Dan Weston wrote:
Maybe you can explain that again?
Sure. Consider the following setting: a category C and a bifunctor T : C x C -> C, which is associative and has a (left and right) unit I. This is what is called a "monoidal category". A "monoid" is an object X in C with two morphisms: I -> X and T(X, X) -> X, satisfying two relatively simple conditions (I don't want to draw commutative diagrams).
If your category is a category of sets, and T is a cartesian product, then you have ordinary monoids (I is a one-element set, first morphism is a unit of a monoid, and second morphism is monoid multiplication). If, however, your category is a category of endofunctors of some category D (that is, functors D -> D), and T is composition, then our "monoids" become monads on D: I is an identity functor, first morphism is "return", and second one is "join".
I see how the subset of Kleisli arrows (a -> m a) forms a monoid (a, return . id, >>=), but what to do with (a -> m b)? (>>=) is not closed under this larger set.
Dan
Miguel Mitrofanov wrote:
Notice that "monoid" sounds almost *exactly* like "monad". And yet, what you use them for is wildly unrelated. Well, monads are monoids. I remember explaining you that...

On Thu, Jan 15, 2009 at 2:24 PM, Miguel Mitrofanov
If, however, your category is a category of endofunctors of some category D (that is, functors D -> D), and T is composition, then our "monoids" become monads on D: I is an identity functor, first morphism is "return", and second one is "join".
You can see this more concretely in Haskell code here: http://sigfpe.blogspot.com/2008/11/from-monoids-to-monads.html (This probably ought to be in a separate thread.) -- Dan
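For the Haskell-inclined, a rough sketch of the correspondence described above, with functor composition playing the role of the tensor and Identity the unit (the newtypes are spelled out rather than imported, and the list monad is used as the example):
newtype Identity a    = Identity { runIdentity :: a }
newtype Compose f g a = Compose  { getCompose  :: f (g a) }

-- The two "monoid" morphisms for the list monad:
--   unit : Identity ~> []        (this is return)
--   mult : Compose [] [] ~> []   (this is join)
unitList :: Identity a -> [a]
unitList (Identity x) = [x]

multList :: Compose [] [] a -> [a]
multList (Compose xss) = concat xss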

off the top of their head what the difference between an epimorphism and a hylomorphism is?
They're not even from the same branch of mathematics. Epimorphisms are defined in category theory, as arrows which can be cancelled when they appear on the right of a composite, that is, if f is an epimorphism, and g . f = h . f, then g = h. Such arrows are somewhat comparable to surjective functions. Hylomorphisms are from recursion theory. They are the composite of an anamorphism (which builds up a recursive structure from an initial seed) with a catamorphism, (which replaces the constructors in that recursive structure with other functions). Terminology has value, in that it allows you to see things in a new way which is clearer than what could otherwise be achieved. Any programmer worth their salt should be comfortable absorbing a new term, the same way they learn a new library function. We should remember that Haskell's beauty is not an accident. It is proportional to the amount of effort which went into building the solid mathematical foundations describing its semantics, and designing a language which reflected those semantics as clearly as possible. Discarding those foundations in an attempt to get more users is a price I would personally never want to see us pay. - Cale
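Since hylomorphisms came up, here is a minimal sketch of one (the function names are made up): an anamorphism unfolds a seed into a list-shaped structure and a catamorphism folds it back down, fused here into a single recursion.
-- coalg says how to unfold one step from a seed; alg says how to fold one
-- element into the result; z is the result for the empty structure.
hylo :: (b -> Maybe (a, b)) -> (a -> c -> c) -> c -> b -> c
hylo coalg alg z = go
  where
    go seed = case coalg seed of
      Nothing         -> z
      Just (a, seed') -> alg a (go seed')

-- factorial as a hylomorphism: unfold n, n-1, ..., 1 and multiply them up
-- (assumes a non-negative argument).
factorial :: Integer -> Integer
factorial = hylo step (*) 1
  where
    step 0 = Nothing
    step n = Just (n, n - 1)

-- factorial 5 == 120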

On Thu, 15 Jan 2009, Andrew Coppin wrote:
I was especially amused by the assertion that "existential quantification" is a more precise term than "type variable hiding". (The former doesn't even tell you that the feature in question is related to the type system! Even the few people in my poll who knew of the term couldn't figure out how it might be related to Haskell. And one guy argued that "forall" should denote universal rather than existential quantification...)
This one's a particularly awkward special case. The original syntax for writing existential quantifications had, as a major goal, looking like existing datatype declarations, and this turned out to be just the wrong thing - GADTs made this rather more clear, and with the new syntax it should be much easier to explain why the forall keyword ends up meaning that. The first word that comes to mind for the old syntax... well, starts with an F. These things happen when you use research concepts though, and I can't see how at the time anyone could have been expected to do any better. As for "what's it got to do with types?" - well, that's a Curry-Howard thing. If I ever find myself in the situation of documenting a typed FPL for ordinary programmers, briefly explaining the relationship between type systems and logics is going to happen very early on indeed! -- flippa@flippac.org There is no magic bullet. There are, however, plenty of bullets that magically home in on feet when not used in exactly the right circumstances.
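A small sketch of the syntactic point made above (both declarations define equivalent types; the names are invented): in the old form the forall sits oddly on the data declaration, while the GADT form shows directly that the "existential" type variable is just one that does not appear in the result type.
{-# LANGUAGE ExistentialQuantification, GADTs #-}

-- Old-style declaration: 'forall' keyword in the data declaration.
data Box = forall a. Show a => Box a

-- GADT-style declaration of an equivalent type: the constructor has an
-- ordinary polymorphic type, and 'a' simply does not occur in 'Box2'.
data Box2 where
  MkBox2 :: Show a => a -> Box2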

On Thu, 2009-01-15 at 09:34 -0600, John Goerzen wrote:
Hi folks,
Don Stewart noticed this blog post on Haskell by Brian Hurt, an OCaml hacker:
http://enfranchisedmind.com/blog/2009/01/15/random-thoughts-on-haskell/
It's a great post, and I encourage people to read it. I'd like to highlight one particular paragraph:
One thing that does annoy me about Haskell- naming. Say you've noticed a common pattern, a lot of data structures are similar to the difference list I described above, in that they have an empty state and the ability to append things onto the end. Now, for various reasons, you want to give this pattern a name using on Haskell's tools for expressing common idioms as general patterns (type classes, in this case). What name do you give it? I'd be inclined to call it something like "Appendable". But no, Haskell calls this pattern a "Monoid". Yep, that's all a monoid is- something with an empty state and the ability to append things to the end. Well, it's a little more general than that, but not much. Simon Peyton Jones once commented that the biggest mistake Haskell made was to call them "monads" instead of "warm, fluffy things". Well, Haskell is exacerbating that mistake. Haskell developers, stop letting the category theorists name things. Please. I beg of you.
I'd like to echo that sentiment!
No. Never. We will fight in the mailing lists. We will fight in the blog posts. We will never surrender. Where, in the history of western civilization, has there ever been an engineering discipline whose adherents were permitted to remain ignorant of the basic mathematical terminology and methodology that their enterprise is founded on? Why should software engineering be the lone exception? No one may be a structural engineer, and remain ignorant of physics. No one may be a chemical engineer, and remain ignorant of chemistry. Why on earth should any one be permitted to be a software engineer, and remain ignorant of computing science? jcc

Jonathan Cast wrote:
Where, in the history of western civilization, has there ever been an engineering discipline whose adherents were permitted to remain ignorant of the basic mathematical terminology and methodology that their enterprise is founded on? Why should software engineering be the lone exception?
No one may be a structural engineer, and remain ignorant of physics. No one may be a chemical engineer, and remain ignorant of chemistry. Why on earth should any one be permitted to be a software engineer, and remain ignorant of computing science?
Indeed. Because abstract algebra is highly relevant to computer programming. Oh, wait... Many people complain that too many "database experts" don't know the first thing about basic normalisation rules, SQL injection attacks, why you shouldn't use cursors, and so forth. But almost nobody complains that database experts don't know set theory or relational algebra. Why should programming be any different? Don't get me wrong, there are mathematical concepts that are relevant to computing, and we should encourage people to learn about them. But you really *should not* need to do an undergraduate course in mathematical theory just to work out how to concat two lists. That's absurd. Some kind of balance needs to be found.

On Thu, 2009-01-15 at 21:29 +0000, Andrew Coppin wrote:
Jonathan Cast wrote:
Where, in the history of western civilization, has there ever been an engineering discipline whose adherents were permitted to remain ignorant of the basic mathematical terminology and methodology that their enterprise is founded on? Why should software engineering be the lone exception?
No one may be a structural engineer, and remain ignorant of physics. No one may be a chemical engineer, and remain ignorant of chemistry. Why on earth should any one be permitted to be a software engineer, and remain ignorant of computing science?
Indeed. Because abstract algebra is highly relevant to computer programming. Oh, wait...
Beg pardon? That was an argument? I'm sorry, but I can't infer your middle term.
Many people complain that too many "database experts" don't know the first thing about basic normalisation rules, SQL injection attacks, why you shouldn't use cursors, and so forth. But almost nobody complains that database experts don't know set theory or relational algebra.
I didn't know this. I intend to start. But, in any case, you picked your counter-example *from within software engineering*, at least as broadly understood. My claim is that the computer industry as a whole is *sick*, that we are simply going about this enterprise of dealing with these (memory-limited) universal Turing machines (= implementations of lambda calculus = universal recursive functions) *wrong*. More cases of this, within the computer industry, reinforce my claim, rather than weaken it.
Don't get me wrong, there are mathematical concepts that are relevant to computing,
You mean like monads?
and we should encourage people to learn about them. But you really *should not* need to do an undergraduate course in mathematical theory just to work out how to concat two lists.
Look, if you want (++), you know where to find it. Or are you complaining that you shouldn't have to study mathematics to understand what (++) and, say, the choice operation on events, have in common? jcc

On Thu, Jan 15, 2009 at 1:29 PM, Andrew Coppin
But you really *should not* need to do an undergraduate course in mathematical theory just to work out how to concat two lists. That's absurd. Some kind of balance needs to be found.
Balance is good, but it's hard to find a balance when people exaggerate so much. Firstly: you don't need monoids to concat two lists. You need monoids when you want to abstract the operation of concatting two lists so that the same code can be reused in other ways. A good example is the writer monad. The author of that monad could have made it work just with strings. But one of the coolest things about Haskell is the way it's so amenable to micro-refactoring. By realising there's a bunch of other things the Writer monad can do using *exactly the same implementation*, you get reusability. If you don't want this kind of reusability you may be better off with C or Fortran. Secondly: you don't need an "undergraduate course" to understand monoids. A monoid is just a collection of things with an operation allowing you to combine two things to get another one, and an element that acts like 'nothing', so that when you combine it with other elements it leaves them unchanged (plus one additional simple condition). This would be the first 30 seconds of a course on monoids that presupposes nothing more than a naive idea of what a set is. The only thing that's difficult about monoids is that it's a new word. There's little 'theory' involved. Your talk of undergraduate courses to concat two lists isn't grounded in any kind of reality, muddies the waters, and probably scares people away from Haskell by giving the impression that it's much harder than it is. -- Dan
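To make the reuse point concrete, here is a minimal sketch using the mtl Writer monad; logTwice is an invented name, and the two runs differ only in which monoid is being accumulated:

    import Control.Monad.Writer
    import Data.Monoid (Sum(..))

    -- One generic helper, reused with two different monoids.
    logTwice :: Monoid w => w -> Writer w ()
    logTwice w = tell w >> tell w

    asString :: ((), String)
    asString = runWriter (logTwice "hello ")   -- ((), "hello hello ")

    asCount :: ((), Sum Int)
    asCount = runWriter (logTwice (Sum 1))     -- ((), Sum 2)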

Your talk of undergraduate courses to concat two lists isn't grounded in any kind of reality, muddies the waters, and probably scares people away from Haskell by giving the impression that it's much harder than it is.
I've been studying Haskell a bit to understand and learn more about functional programming (I'm using F#). I have to say, the scariest thing I've faced was exactly what you say. Everything I read built "monads" up to be this ungraspable thing like quantum mechanics. Even after I actually understood it well enough, I kept thinking I must be missing something because there's so much fuss about it. -Michael

On Thu, Jan 15, 2009 at 7:02 PM, Michael Giagnocavo
Your talk of undergraduate courses to concat two lists isn't grounded in any kind of reality, muddies the waters, and probably scares people away from Haskell by giving the impression that it's much harder than it is.
I've been studying Haskell a bit to understand and learn more about functional programming (I'm using F#). I have to say, the scariest thing I've faced was exactly what you say. Everything I read built "monads" up to be this ungraspable thing like quantum mechanics.
Yeah, monad is on the same level as quantum mechanics. Both are equally simple and popularly construed as ungraspable. (However to grasp monads easily you need a background in FP; to grasp QM easily you need a background in linear algebra) Luke

Richard Feinman once said: "if someone says he understands quantum mechanics, he doesn't understand quantum mechanics". But what did he know... Luke Palmer wrote:
On Thu, Jan 15, 2009 at 7:02 PM, Michael Giagnocavo
wrote: Your talk of undergraduate courses to concat two lists isn't grounded in any kind of reality, muddies the waters, and probably scares people away from Haskell by giving the impression that it's much harder than it is.
I've been studying Haskell a bit to understand and learn more about functional programming (I'm using F#). I have to say, the scariest thing I've faced was exactly what you say. Everything I read built "monads" up to be this ungraspable thing like quantum mechanics.
Yeah, monad is on the same level as quantum mechanics. Both are equally simple and popularly construed as ungraspable.
(However to grasp monads easily you need a background in FP; to grasp QM easily you need a background in linear algebra)
Luke

Dan Weston wrote:
Richard Feinman once said: "if someone says he understands quantum mechanics, he doesn't understand quantum mechanics".
But what did he know...
Well, I am a physicist, and Feynman (with a y, not an i) is not talking about the linear algebra. Of course, linear algebra [1] is used here as a vector space [2]. The tricky thing is that humans then "measure" the state, and this is the confusing step that causes Feynman to say that no one understands it. But the measurement step and how it interacts with the vector space can be approximated by an algorithm [3] using ExistentialQuantification and Arrows. [1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/numeric-quest http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hmatrix http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Vec http://hackage.haskell.org/cgi-bin/hackage-scripts/package/blas [2] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/vector-space [3] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/quantum-arrow -- Chris

On Thu, 15 Jan 2009 13:21:57 -0800, you wrote:
Where, in the history of western civilization, has there ever been an engineering discipline whose adherents were permitted to remain ignorant of the basic mathematical terminology and methodology that their enterprise is founded on?
Umm, all of them?
No one may be a structural engineer, and remain ignorant of physics. No one may be a chemical engineer, and remain ignorant of chemistry. Why on earth should any one be permitted to be a software engineer, and remain ignorant of computing science?
Do you know any actual working structural or chemical engineers? Most engineering disciplines require a basic grasp of the underlying theory, yes, but not much beyond that. Pretty much everything else is covered by rules (either rules of thumb or published standards). Show me an electrical engineer who can explain the physics of a pn junction and how it acts as a rectifier, or a civil engineer who can explain why the stress/strain curve of a steel beam has the shape that it does, or a chemical engineer who can explain molecular orbital theory. Those kinds of engineers do exist, of course, but they are few and far between. If you aim your product only at the kinds of engineers who _can_ do those things, you will be reaching a tiny, tiny fraction of the overall population. Steve Schafer Fenestra Technologies Corp. http://www.fenestra.com/

On Thu, 2009-01-15 at 17:06 -0500, Steve Schafer wrote:
On Thu, 15 Jan 2009 13:21:57 -0800, you wrote:
Where, in the history of western civilization, has there ever been an engineering discipline whose adherents were permitted to remain ignorant of the basic mathematical terminology and methodology that their enterprise is founded on?
Umm, all of them?
Really. So the engineer who designed the apartment building I'm in at the moment didn't know any physics, thought `tensor' was a scary math term irrelevant to practical, real-world engineering, and will only read books on engineering that replace the other scary technical term `vector' with point-direction-value-thingy? I think I'm going to sleep under the stars tonight...
No one may be a structural engineer, and remain ignorant of physics. No one may be a chemical engineer, and remain ignorant of chemistry. Why on earth should any one be permitted to be a software engineer, and remain ignorant of computing science?
Do you know any actual working structural or chemical engineers?
Um, no. I try to avoid people as much as possible; computers at least make sense. Also anything else to do with the real world :)
Most engineering disciplines require a basic grasp of the underlying theory, yes, but not much beyond that.
Perhaps I should have said `completely ignorant'? Or do you think that join . join = join . fmap join is of the same level of theoretical depth as quantum orbital mechanics?
Pretty much everything else is covered by rules (either rules of thumb or published standards).
Show me an electrical engineer who can explain the physics of a pn junction and how it acts as a rectifier, or a civil engineer who can explain why the stress/strain curve of a steel beam has the shape that it does,
Again, do engineers know *what* stress is? Do they understand terms like `tensor'? Those things are the rough equivalents of terms like `monoid'. jcc

On Thu, Jan 15, 2009 at 10:18 PM, Jonathan Cast
On Thu, 2009-01-15 at 17:06 -0500, Steve Schafer wrote:
On Thu, 15 Jan 2009 13:21:57 -0800, you wrote:
Where, in the history of western civilization, has there ever been an engineering discipline whose adherents were permitted to remain ignorant of the basic mathematical terminology and methodology that their enterprise is founded on?
Umm, all of them?
Really. So the engineer who designed the apartment building I'm in at the moment didn't know any physics, thought `tensor' was a scary math term irrelevant to practical, real-world engineering, and will only read books on engineering that replace the other scary technical term `vector' with point-direction-value-thingy? I think I'm going to sleep <snip> It feels like this conversation is going in circles. What I'm taking away from the two very different arguments being made is that
1) math terms have their place when they describe the concept very precisely, e.g. monoid; 2) the Haskell docs _don't_ do a good enough job of giving intuition for what math terms mean. If we fix #2, then #1 is no longer a problem, yes? For you folks who work on GHC, is it acceptable to open tickets for poor documentation of modules in base? I think leaving the documentation to the tragedy of the commons isn't the best move, but if even a few of us could remember to open tickets when new Haskell'ers complain about something being confusing then it could be on _someone's_ docket. Cheers, Creighton

2) the Haskell docs _don't_ do good enough a job at giving intuition for what math terms mean
If we fix #2, then #1 is no longer a problem, yes?
For you folks who work on GHC, is it acceptable to open tickets for poor documentation of modules in base? I think leaving the documentation to the tragedy of the commons isn't the best move, but if even a few of us could remember to open tickets when new Haskell'ers complain about something being confusing then it could be on _someone's_ docket.
I can't find the thread at the moment, but this has been discussed before, and my recollection is that wikis were to be used to accumulate documentation comments and updates (there also was some discussion about the best format for comment patches, but getting content was thought to be more important). So, there would be a subset of the Haskell wiki for the base library, and package-specific wiki locations for their documentations. Haddock already seems to provide the necessary support: http://www.haskell.org/haddock/doc/html/invoking.html --comments-base=URL , --comments-module=URL , --comments-entity=URL Include links to pages where readers may comment on the documentation. This feature would typically be used in conjunction with a Wiki system. Use the --comments-base option to add a user comments link in the header bar of the contents and index pages. Use the --comments-module to add a user comments link in the header bar of each module page. Use the --comments-entity option to add a comments link next to the documentation for every value and type in each module. In each case URL is the base URL where the corresponding comments page can be found. For the per-module and per-entity URLs the same substitutions are made as with the --source-module and --source-entity options above. For example, if you want to link the contents page to a wiki page, and every module to subpages, you would say haddock --comments-base=url --comments-module=url/%M So it seems it is only a question of actually using these options with suitably prepared per-package documentation wikis, and improving the documentation or asking for clarifications would be almost as easy as emailing here!-) And if everyone who stumbles over a documentation problem puts the solution on the wiki, documentation should improve quickly (there is still the issue of selecting wiki improvement suggestions for inclusion in the "real" documentation). Does anyone know why these options are not in use already? Claus

On Fri, Jan 16, 2009 at 5:39 AM, Creighton Hogg
For you folks who work on GHC, is it acceptable to open tickets for poor documentation of modules in base? I think leaving the documentation to the tragedy of the commons isn't the best move, but if even a few of us could remember to open tickets when new Haskell'ers complain about something being confusing then it could be on _someone's_ docket.
I second that. Upon reading this thread, I asked myself : what's a monoid ? I had no idea. I read some posts, then google "haskell monoid". The first link leads me to Data.Monoid which starts with << Description The Monoid class with various general-purpose instances. Inspired by the paper /Functional Programming with Overloading and Higher-Order Polymorphism/, Mark P Jones (http://citeseer.ist.psu.edu/jones95functional.html) Advanced School of Functional Programming, 1995.
Before going further, I click on the link and I'm on citeseer. The abstract talks about the Hindley/Milner type system, but no mention of monoid. I download the pdf, and search for monoid in acrobat reader. No matches. I read further on Data.Monoid... << The monoid class. A minimal complete definition must supply mempty and mappend, and these should satisfy the monoid laws."
The laws are not mentioned. I learn that there are 3 operations on monoids: mappend, which is an associative operation; mempty, which is an identity of mappend; and mconcat, which "folds a list using the monoid", which I think I understand this way: mempty will be the seed of the fold, and mappend the function called for each item. The module defines the dual of a monoid without explaining much, and the "monoid of endomorphisms under composition" (another term to look up). In fact I realise many monoids are defined, and I don't know what they are useful for. The next few pages google gives me are about monads. Then there's some blog posts by sigfpe, which I'm not going to read because they're often way too complicated for me to understand. Actually I still read it and there I find what I think is a monoid law: << They are traditionally sets equipped with a special element and a binary operator so that the special element acts as an identity for the binary operator, and where the binary operator is associative. We expect type signatures something like one :: m and mult :: m -> m -> m so that, for example, m (m a b) c == m a (m b c).
I don't get it right away, though, and the rest is code that I skip because I just want info on monoids. Another page: MonadPlus vs Monoids... Still not the basic info that I'd love to find. I am now at the end of the first page of google results, and I don't have any clue about: - what are the laws of a monoid besides having an associative operation and an identity? - what is the point of a monoid other than being a generalisation/abstraction? What kind of uses does this particular generalisation bring me? Part of the problem is that something like a monoid is so general that I can't wrap my head around why the abstraction goes so far. For example, the writer monad works with a monoid; using the writer monad with strings makes sense because the mappend operation for lists is (++), but why should I care that I can use the writer monad with numbers, which it will sum? (if I understood correctly!) I don't care about the name, it's ok for me that the name mathematicians defined is used, but there are about two categories of people using haskell and I would love each concept to be adequately documented for everyone: - real-world oriented programming documentation with usefulness and examples for the non-mathematician - the mathematical concepts and research papers for the mathematicians, for those who want/need to go further. As someone mentioned, the documentation can't really be done by someone that doesn't fully grok the concepts involved.
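For what it's worth, the operations being puzzled over here can be explored directly in GHCi; the following is only a sketch, with results shown as comments and the exN names invented for illustration:

    import Data.Monoid

    ex1 = mempty :: [Int]                        -- []
    ex2 = [1,2] `mappend` [3]                    -- [1,2,3]
    ex3 = mconcat [[1],[2,3],[]]                 -- [1,2,3]; mconcat = foldr mappend mempty

    -- Dual just flips the arguments of mappend:
    ex4 = getDual (Dual [1] `mappend` Dual [2])  -- [2,1]

    -- Endo is the "monoid of endomorphisms under composition":
    ex5 = appEndo (Endo (+1) `mappend` Endo (*2)) 3   -- 7, i.e. ((+1) . (*2)) 3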

On Fri, 2009-01-16 at 14:16 +0100, david48 wrote:
Upon reading this thread, I asked myself : what's a monoid ? I had no idea. I read some posts, then google "haskell monoid".
The first link leads me to Data.Monoid which starts with
<< Description The Monoid class with various general-purpose instances.
Inspired by the paper /Functional Programming with Overloading and Higher-Order Polymorphism/, Mark P Jones (http://citeseer.ist.psu.edu/jones95functional.html) Advanced School of Functional Programming, 1995.
Ross just updated the documentation for the Monoid module. Here is how it reads now: The module header now reads simply: A class for monoids (types with an associative binary operation that has an identity) with various general-purpose instances. Note, no links to papers. And the Monoid class has: The class of monoids (types with an associative binary operation that has an identity). The method names refer to the monoid of lists, but there are many other instances. Minimal complete definition: 'mempty' and 'mappend'. Some types can be viewed as a monoid in more than one way, e.g. both addition and multiplication on numbers. In such cases we often define @newtype@s and make those instances of 'Monoid', e.g. 'Sum' and 'Product'. If you or anyone else has further concrete suggestions / improvements then post them here now! :-) Duncan
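For readers wondering what the Sum and Product newtypes mentioned there buy you, a small usage sketch (totalAndProduct is an invented name):

    import Data.Monoid (Sum(..), Product(..), mconcat)

    totalAndProduct :: [Int] -> (Int, Int)
    totalAndProduct xs =
      ( getSum     (mconcat (map Sum     xs))    -- the (+) monoid, identity 0
      , getProduct (mconcat (map Product xs)) )  -- the (*) monoid, identity 1

    -- totalAndProduct [1,2,3,4] == (10, 24)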

On Fri, 16 Jan 2009, Duncan Coutts wrote:
If you or anyone else has further concrete suggestions / improvements then post them here now! :-)
Spell out what associativity means and what it means for that operation to have an identity. List a few examples (stating that they're not all instances), including the * and + ones for arithmetic (use this to explain the issue with multiple monoids existing on one type, possibly use it to mention newtypes and newtype deriving as a workaround). Mention the intuition of monoids as abstract 'sequences', in which the sequence itself may be hidden after evaluation or be irrelevant because the operation commutes - the two arithmetic monoids are a good example in both cases. Bonus points for explaining the relationship between monoids and folds somewhere. -- flippa@flippac.org 'In Ankh-Morpork even the shit have a street to itself... Truly this is a land of opportunity.' - Detritus, Men at Arms
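On the monoid/fold relationship asked for above: Data.Foldable's foldMap is precisely "map every element into a monoid, then combine the results with mappend". A small sketch, with totalLength invented as an example:

    import Data.Foldable (foldMap)
    import Data.Monoid (Sum(..))

    -- For lists, foldMap f = mconcat . map f.
    totalLength :: [String] -> Int
    totalLength = getSum . foldMap (Sum . length)

    -- totalLength ["ab", "cde"] == 5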

Philippa Cowderoy wrote:
On Fri, 16 Jan 2009, Duncan Coutts wrote:
If you or anyone else has further concrete suggestions / improvements then post them here now! :-)
Spell out what associativity means
It probably makes sense to do as Jeremy Shaw suggests and explicitly list the monoid laws, which would include the associative equality, but there really shouldn't be any other text in the definition of Monoid devoted to explaining what associativity means. Instead, linking words like "associative" to a definition in a glossary would make sense. Anton

Anton van Straaten wrote:
It probably makes sense to do as Jeremy Shaw suggests and explicitly list the monoid laws, which would include the associative equality, but there really shouldn't be any other text in the definition of Monoid devoted to explaining what associativity means. Instead, linking words like "associative" to a definition in a glossary would make sense.
I don't know - associativity is almost the only property a monoid has. (Obviously the other one is an identity element.) Either way, wherever the description gets put, just saying "associativity means that (x + y) + z = x + (y + z)" is insufficient. Sure, that's the *definition* of what it is, but we should point out that "associativity means that the ordering of the operations does not affect the result" or something. Something that's intuitive. (The tricky part, of course, is explaining how associative /= commutative.)

2009/1/16 Andrew Coppin
Either way, wherever the description gets put, just saying "associativity means that (x + y) + z = x + (y + z)" is insufficient. Sure, that's the *definition* of what it is, but we should point out that "associativity means that the ordering of the operations does not affect the result" or something. Something that's intuitive. (The tricky part, of course, is explaining how associative /= commutative.)
How about "associativity means that how you pair up the operations does not affect the result"? Paul.

On Fri, Jan 16, 2009 at 12:09 PM, Paul Moore
How about "associativity means that how you pair up the operations does not affect the result"?
I think a better way is this: If you have an element of a monoid, a, there are two ways to combine it with another element, on the left or on the right. You get a `mappend` x or x `mappend` a. Now suppose you're going to combine a with x on the left and y on the right. Associativity means it doesn't matter which you do first. You can think of each element of a monoid as having two sides. The idea is that the left side and right side are independent things that don't interfere with each other. For example, adding some stuff at the beginning of a list, and adding some stuff at the end of a list, don't affect each other, and it doesn't matter which you do first. That's the idea that associativity captures. -- Dan
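Spelled out for lists, with some arbitrary illustrative values, the two orders of combination look like this:

    a, x, y :: [Int]
    a = [3,4]   -- the element "in the middle"
    x = [1,2]   -- stuff combined on the left
    y = [5,6]   -- stuff combined on the right

    leftFirst, rightFirst :: [Int]
    leftFirst  = (x ++ a) ++ y   -- attach x on the left first
    rightFirst = x ++ (a ++ y)   -- attach y on the right first

    -- Associativity: leftFirst == rightFirst == [1,2,3,4,5,6]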

Dan Piponi wrote:
On Fri, Jan 16, 2009 at 12:09 PM, Paul Moore
wrote: How about "associativity means that how you pair up the operations does not affect the result"?
I think a better way is this: If you have an element of a monoid, a, there are two ways to combine it with another element, on the left or on the right. You get
a `mappend` x or x `mappend` a.
Now suppose you're going to combine a with x on the left and y on the right. Associativity means it doesn't matter which you do first.
You can think of each element of a monoid as having two sides. The idea is that the left side and right side are independent things that don't interfere with each other. For example, adding some stuff at the beginning of a list, and adding some stuff at the end of a list, don't affect each other, and it doesn't matter which you do first.
That's the idea that associativity captures. -- Dan
Indeed, that's the idea that associativity captures; but explicitly pointing out that the left and the right side are their own bubbles is a bit misleading: addition is associative, but there is no "left and right." I think a better wording is: "If you have an element of a monoid, a, there are two ways to combine it with another element, on the left or on the right. You get a `mappend` x or x `mappend` a. Now suppose you're going to combine a with x on the left and y on the right. Associativity means it doesn't matter which you do first. So x `mappend` (a `mappend` y) = (x `mappend` a) `mappend` y, but as we've pointed out, a `mappend` x is not necessarily the same as x `mappend` a, although a specific monoid might have them be equal, for example Int (where mappend is *)." Is that better? Cory
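One way to see the associative-versus-commutative distinction in code: list mappend is associative but not commutative, while Product (multiplication) happens to be both. The two Bool names below are invented for the example:

    import Data.Monoid (Product(..), mappend)

    listsCommute :: Bool
    listsCommute = ("ab" `mappend` "cd") == ("cd" `mappend` "ab")   -- False

    productCommutes :: Bool
    productCommutes = (Product 6 `mappend` Product 7)
                   == (Product 7 `mappend` Product 6)               -- True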

Hello, Personally, I would like to see the laws listed more explicitly. Something like: -- The Monoid Laws: -- -- 1. Associative: -- -- x `mappend` (y `mappend` z) == (x `mappend` y) `mappend` z -- -- 2. Left Identity: -- -- mempty `mappend` y == y -- -- 3. Right identity: -- -- x `mappend` mempty == x (Actually, what I'd really like to see is the laws provided as QuickCheck properties. I know there is a project doing this already.) j. At Fri, 16 Jan 2009 13:39:10 +0000, Duncan Coutts wrote:
On Fri, 2009-01-16 at 14:16 +0100, david48 wrote:
Upon reading this thread, I asked myself : what's a monoid ? I had no idea. I read some posts, then google "haskell monoid".
The first link leads me to Data.Monoid which starts with
<< Description The Monoid class with various general-purpose instances.
Inspired by the paper /Functional Programming with Overloading and Higher-Order Polymorphism/, Mark P Jones (http://citeseer.ist.psu.edu/jones95functional.html) Advanced School of Functional Programming, 1995.
Ross just updated the documentation for the Monoid module. Here is how it reads now:
The module header now reads simply:
A class for monoids (types with an associative binary operation that has an identity) with various general-purpose instances.
Note, no links to papers.
And the Monoid class has:
The class of monoids (types with an associative binary operation that has an identity). The method names refer to the monoid of lists, but there are many other instances.
Minimal complete definition: 'mempty' and 'mappend'.
Some types can be viewed as a monoid in more than one way, e.g. both addition and multiplication on numbers. In such cases we often define @newtype@s and make those instances of 'Monoid', e.g. 'Sum' and 'Product'.
If you or anyone else has further concrete suggestions / improvements then post them here now! :-)
Duncan

On Fri, Jan 16, 2009 at 8:39 AM, Duncan Coutts
Ross just updated the documentation for the Monoid module. Here is how it reads now:
The module header now reads simply:
A class for monoids (types with an associative binary operation that has an identity) with various general-purpose instances.
Note, no links to papers.
And the Monoid class has:
The class of monoids (types with an associative binary operation that has an identity). The method names refer to the monoid of lists, but there are many other instances.
Minimal complete definition: 'mempty' and 'mappend'.
Some types can be viewed as a monoid in more than one way, e.g. both addition and multiplication on numbers. In such cases we often define @newtype@s and make those instances of 'Monoid', e.g. 'Sum' and 'Product'.
If you or anyone else has further concrete suggestions / improvements then post them here now! :-)
A reference to the writer monad and to Data.Foldable might be helpful.
So far as I know they are the only uses of the Monoid abstraction in
the standard libraries.
It's probably a good idea to explicitly state the three monoid laws.
It would be nice to explain what operations have been chosen for the
Monoid instances of Prelude data types. (Maybe this belongs in the
Prelude documentation.)
I'd add a reminder that if you're defining a type with a Monoid
instance, your documentation should explain what the instance does.
--
Dave Menendez

On Fri, Jan 16, 2009 at 12:00:40PM -0500, David Menendez wrote:
A reference to the writer monad and to Data.Foldable might be helpful. So far as I know they are the only uses of the Monoid abstraction in the standard libraries.
It's probably a good idea to explicitly state the three monoid laws.
I've added the laws.
It would be nice to explain what operations have been chosen for the Monoid instances of Prelude data types. (Maybe this belongs in the Prelude documentation.)
The right place for that is the instances, as soon as Haddock starts showing instance comments (http://trac.haskell.org/haddock/ticket/29). Then this information will be listed under both the type and the class.
I'd add a reminder that if you're defining a type with a Monoid instance, your documentation should explain what the instance does.
When that Haddock enhancement is done, this will be general advice for instances of all classes.

On Fri, Jan 16, 2009 at 12:19 PM, Ross Paterson
On Fri, Jan 16, 2009 at 12:00:40PM -0500, David Menendez wrote:
It would be nice to explain what operations have been chosen for the Monoid instances of Prelude data types. (Maybe this belongs in the Prelude documentation.)
The right place for that is the instances, as soon as Haddock starts showing instance comments (http://trac.haskell.org/haddock/ticket/29). Then this information will be listed under both the type and the class.
In the case of Monoid, I think it would be best to have the documentation now, and then attach them to the instances later, once Haddock supports that.
I'd add a reminder that if you're defining a type with a Monoid instance, your documentation should explain what the instance does.
When that Haddock enhancement is done, this will be general advice for instances of all classes.
Sure, but it's especially important for Monoid.
--
Dave Menendez

Hello david48, Friday, January 16, 2009, 4:16:51 PM, you wrote:
Upon reading this thread, I asked myself : what's a monoid ? I had no idea. I read some posts, then google "haskell monoid".
it would be interesting to google "C++ class" or "Lisp function" and compare experience :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Fri, Jan 16, 2009 at 3:10 PM, Bulat Ziganshin
Hello david48,
Friday, January 16, 2009, 4:16:51 PM, you wrote:
Upon reading this thread, I asked myself : what's a monoid ? I had no idea. I read some posts, then google "haskell monoid".
it would be interesting to google "C++ class" or "Lisp function" and compare experience :)
The first link for "C++ class" I find on google is the wikipedia article, which I find understandable, with examples and explanations that relate to programming. OTOH, the wikipedia article for monoid is less easy (for me), though now I can follow the first paragraphs. But I don't find on the page how/why/where it relates to programming.

On Sat, Jan 17, 2009 at 1:41 AM, david48
wrote:
On Fri, Jan 16, 2009 at 3:10 PM, Bulat Ziganshin
wrote: Hello david48,
Friday, January 16, 2009, 4:16:51 PM, you wrote:
Upon reading this thread, I asked myself : what's a monoid ? I had no idea. I read some posts, then google "haskell monoid".
it would be interesting to google "C++ class" or "Lisp function" and compare experience :)
The first link for C++ class I find on google is the wikipedia article which I find understandable, has examples and explanations that relate to programming. OTOH, the wikipedia article for monoid is less easy (for me), though now I can follow the first paragraphs. But I don't find on the page how/why/where it relates to programming.
So you're saying it should be better documented in Haskell what a Monoid is. Did you say you searched for "C++ class" why not "Haskell Monoid" then? The first correct google hit that didn't think I meant Monads, takes you straight to the GHC documentation for Data.Monoid.

On Sat, Jan 17, 2009 at 4:08 PM, David Leimbach
So you're saying it should be better documented in Haskell what a Monoid is. Did you say you searched for "C++ class" why not "Haskell Monoid" then? The first correct google hit that didn't think I meant Monads, takes you straight to the GHC documentation for Data.Monoid.
Read my first post on the thread, that's exactly what I did (and then complained that the doc for Data.Monoid was close to useless).

On Sat, Jan 17, 2009 at 9:16 AM, david48
wrote:
On Sat, Jan 17, 2009 at 4:08 PM, David Leimbach
wrote: So you're saying it should be better documented in Haskell what a Monoid is. Did you say you searched for "C++ class" why not "Haskell Monoid" then? The first correct google hit that didn't think I meant Monads, takes you straight to the GHC documentation for Data.Monoid.
Read my first post on the thread, that's exactly what I did ( and then complained that the doc for Data.Monoid was close to useless )
Sorry missed it! This is an exceptionally long thread! :-) I agree Data.Monoid's docs don't give you much to work with.

On Sat, 2009-01-17 at 13:36 -0800, David Leimbach wrote:
On Sat, Jan 17, 2009 at 9:16 AM, david48
wrote: On Sat, Jan 17, 2009 at 4:08 PM, David Leimbach wrote: > So you're saying it should be better documented in Haskell what a Monoid is. > Did you say you searched for "C++ class" why not "Haskell Monoid" then? > The first correct google hit that didn't think I meant Monads, takes you > straight to the GHC documentation for Data.Monoid.
Read my first post on the thread, that's exactly what I did ( and then complained that the doc for Data.Monoid was close to useless )
Sorry missed it! This is an exceptionally long thread! :-) I agree Data.Monoid's docs don't give you much to work with.
Do you think they look better now: http://www.haskell.org/ghc/dist/current/docs/libraries/base/Data-Monoid.html Any other improvements you'd make? Duncan

Hello, Just some minor suggestions and comments: The description might read better as two sentences: A class for monoids with various general-purpose instances. Monoids are types with an associative binary operation that has an identity. One thing that I think is a bit unclear from that description is the fact that it does not matter *what* the binary operation does, as long as the laws are followed. That is the whole point of the monoid class -- you use it when you only care about the laws, not the specific operation... For the laws, it would be nice to label each rule, something like * mappend mempty x = x -- Left Identity * mappend x mempty = x -- Right Identity * mappend x (mappend y z) = mappend (mappend x y) z -- Associative * mconcat = foldr mappend mempty -- Not sure what to call this. Perhaps it's an axiom? See the Applicative class for a formatting example: http://www.haskell.org/ghc/dist/current/docs/libraries/base/Control-Applicat... As an expert, seeing the name is faster than reverse engineering the meaning of each law. I also suspect many people have never heard of the concept of an identity element (I am pretty sure I hadn't when I first started Haskell). So, I think it would be nice to tie together the concepts mentioned in the description with the actual laws so that the link is explicit. j. At Sun, 18 Jan 2009 13:57:07 +0000, Duncan Coutts wrote:
On Sat, 2009-01-17 at 13:36 -0800, David Leimbach wrote:
On Sat, Jan 17, 2009 at 9:16 AM, david48
wrote: On Sat, Jan 17, 2009 at 4:08 PM, David Leimbach wrote: > So you're saying it should be better documented in Haskell what a Monoid is. > Did you say you searched for "C++ class" why not "Haskell Monoid" then? > The first correct google hit that didn't think I meant Monads, takes you > straight to the GHC documentation for Data.Monoid.
Read my first post on the thread, that's exactly what I did ( and then complained that the doc for Data.Monoid was close to useless )
Sorry missed it! This is an exceptionally long thread! :-) I agree Data.Monoid's docs don't give you much to work with.
Do you think they look better now:
http://www.haskell.org/ghc/dist/current/docs/libraries/base/Data-Monoid.html
Any other improvements you'd make?
Duncan

On Thu, 2009-01-22 at 11:32 -0600, Jeremy Shaw wrote:
Hello,
Just some minor suggestions and comments:
The description might read better as two sentences:
A class for monoids with various general-purpose instances. Monoids are types with an associative binary operation that has an identity.
One thing that I think is a bit unclear from that description is the fact that it does not matter *what* the binary operation does, as long as the laws are followed. That is the whole point of the monoid class -- you use it when you only care about the laws, not the specific operation...
For the laws, it would be nice to label each rule, something like
* mappend mempty x = x -- Left Identity * mappend x mempty = x -- Right Identity * mappend x (mappend y z) = mappend (mappend x y) z -- Associative
* mconcat = foldr mappend mempty -- Not sure what to call this. Perhaps it's an axiom?
This is just a definition, both actually and nominally.
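The QuickCheck idea mentioned earlier might look roughly like the sketch below; the property names are invented here, and the laws are stated for one concrete monoid ([Int]) rather than polymorphically:

    import Test.QuickCheck
    import Data.Monoid (mempty, mappend)

    prop_leftIdentity :: [Int] -> Bool
    prop_leftIdentity x = mempty `mappend` x == x

    prop_rightIdentity :: [Int] -> Bool
    prop_rightIdentity x = x `mappend` mempty == x

    prop_associative :: [Int] -> [Int] -> [Int] -> Bool
    prop_associative x y z =
      x `mappend` (y `mappend` z) == (x `mappend` y) `mappend` z

    main :: IO ()
    main = do
      quickCheck prop_leftIdentity
      quickCheck prop_rightIdentity
      quickCheck prop_associative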

On Fri, 2009-01-16 at 14:16 +0100, david48 wrote:
Part of the problem is that something like a monoid is so general that I can't wrap my head around why going so far in the abstraction. For example, the writer monad works with a monoid; using the writer monad with strings makes sense because the mappend operation for lists is (++), now why should I care that I can use the writer monad with numbers which it will sum ?
To accumulate a running count, maybe? A fairly common pattern for counting in imperative languages is int i = 0; while (<get a value>) i += <count of something in value> Using the writer monad, this turns into execWriter $ mapM_ (tell . countFunction) $ getValues jcc
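A compilable version of that sketch, using the Sum monoid from Data.Monoid and the mtl Writer monad; getValues and countFunction are stand-ins invented for the example:

    import Control.Monad.Writer
    import Data.Monoid (Sum(..))

    getValues :: [String]               -- stand-in for "<get a value>"
    getValues = ["foo", "bar", "bazz"]

    countFunction :: String -> Sum Int  -- stand-in for "<count of something in value>"
    countFunction = Sum . length

    runningCount :: Int
    runningCount = getSum . execWriter $ mapM_ (tell . countFunction) getValues
    -- runningCount == 10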

On Fri, Jan 16, 2009 at 4:04 PM, Jonathan Cast
On Fri, 2009-01-16 at 14:16 +0100, david48 wrote:
Part of the problem is that something like a monoid is so general that I can't wrap my head around why going so far in the abstraction. For example, the writer monad works with a monoid; using the writer monad with strings makes sense because the mappend operation for lists is (++), now why should I care that I can use the writer monad with numbers which it will sum ?
To accumulate a running count, maybe? A fairly common pattern for counting in imperative languages is
int i = 0; while (<get a value>) i+= <count of something in value>
Using the writer monad, this turns into
execWriter $ mapM_ (tell . countFunction) $ getValues
Well, thank you for the example. If I may ask something: why would I need to write a running count this way instead of, for example, a non-monadic fold, which would probably result in clearer and faster code (IMHO)?

On Sat, Jan 17, 2009 at 1:47 AM, david48
why would I need to write a running count this way instead of, for example, a non monadic fold, which would probably result in clearer and faster code?
Maybe my post here will answer some questions like that: http://sigfpe.blogspot.com/2009/01/haskell-monoids-and-their-uses.html -- Dan

On Sat, Jan 17, 2009 at 11:19 PM, Dan Piponi
On Sat, Jan 17, 2009 at 1:47 AM, david48
wrote:
why would I need to write a running count this way instead of, for example, a non monadic fold, which would probably result in clearer and faster code?
Maybe my post here will answer some questions like that: http://sigfpe.blogspot.com/2009/01/haskell-monoids-and-their-uses.html
Just wow. Very, very nice post; one to keep in the wikis. Thank you *very* much, Dan, for writing this.

On Sun, 18 Jan 2009 08:51:10 +0100
david48
On Sat, Jan 17, 2009 at 11:19 PM, Dan Piponi
wrote: On Sat, Jan 17, 2009 at 1:47 AM, david48
wrote: why would I need to write a running count this way instead of, for example, a non monadic fold, which would probably result in clearer and faster code?
Maybe my post here will answer some questions like that: http://sigfpe.blogspot.com/2009/01/haskell-monoids-and-their-uses.html
Just wow. Very very nice post. one to keep in the wikis. Thank you *very* much, Dan, for writing this.
Seconded. And I hope, Dan, that you will find time at some point to write about those other things you said at the end that you didn't have time to write about! -- Robin

On Sat, 2009-01-17 at 10:47 +0100, david48 wrote:
On Fri, Jan 16, 2009 at 4:04 PM, Jonathan Cast
wrote: On Fri, 2009-01-16 at 14:16 +0100, david48 wrote:
Part of the problem is that something like a monoid is so general that I can't wrap my head around why going so far in the abstraction. For example, the writer monad works with a monoid; using the writer monad with strings makes sense because the mappend operation for lists is (++), now why should I care that I can use the writer monad with numbers which it will sum ?
To accumulate a running count, maybe? A fairly common pattern for counting in imperative languages is
int i = 0; while (<get a value>) i+= <count of something in value>
Using the writer monad, this turns into
execWriter $ mapM_ (tell . countFunction) $ getValues
well thank you for the example, if I may ask something: why would I need to write a running count this way instead of, for example, a non monadic fold, which would probably result in clearer and faster code (IMHO) ?
I agree with you, for this special case. (Did I remember to post the simpler solution: sum $ map countFunction $ getValues somewhere in this thread?) But, just like the (utterly useless) C++ example translated to Haskell in another thread, the monadic form provides a framework you can fill out with larger code fragments. So if the while loop above was replaced with a larger control structure, maybe recursion over a custom tree type, then standard recursion operators, such as folds, may be inapplicable. In that case, moving to a Writer monad can get you some of the advantage back, so you don't end up passing your accumulator around everywhere by hand. jcc
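A sketch of that situation, where the traversal is recursion over a custom tree rather than a list, and the Writer monad keeps the accumulator out of the argument lists; the Tree type and countLeaves are invented for the example:

    import Control.Monad (when)
    import Control.Monad.Writer
    import Data.Monoid (Sum(..))

    data Tree a = Leaf a | Branch (Tree a) (Tree a)

    -- Count the leaves satisfying a predicate, without threading an
    -- accumulator argument through the recursion by hand.
    countLeaves :: (a -> Bool) -> Tree a -> Int
    countLeaves p = getSum . execWriter . go
      where
        go (Leaf x)     = when (p x) (tell (Sum 1))
        go (Branch l r) = go l >> go r

    -- countLeaves even (Branch (Leaf 1) (Branch (Leaf 2) (Leaf 4))) == 2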

david48 wrote:
I don't care about the name, it's ok for me that the name mathematicians defined is used, but there are about two categories of people using haskell and I would love that each concept would be adequately documented for everyone: - real-world oriented programming documentation with usefulness and examples for the non mathematician - the mathematics concepts and research papers for the mathematicians for those who want/need to go further
As someone mentionned, the documentation can't really be done by someone that doesn't fully grok the concepts involved.
Good account of the current documentation situation. Hm, what about the option of opening Bird's "Introduction to Functional Programming using Haskell" in the section about fold? Monoid is on page 62 in the translated copy I've got here. Does Hutton's book mention them? Real World Haskell? I don't think that I would try to learn a programming language, for example Python, without obtaining a paper book on it. Regards, H. Apfelmus

On Fri, Jan 16, 2009 at 10:28 PM, Apfelmus, Heinrich
david48 wrote:
I don't care about the name, it's ok for me that the name mathematicians defined is used, but there are about two categories of people using haskell and I would love that each concept would be adequately documented for everyone: - real-world oriented programming documentation with usefulness and examples for the non mathematician - the mathematics concepts and research papers for the mathematicians for those who want/need to go further
As someone mentionned, the documentation can't really be done by someone that doesn't fully grok the concepts involved.
Good account of the current documentation situation.
Hm, what about the option of opening Bird's "Introduction to Functional Programming using Haskell" in the section about fold? Monoid is on page 62 in the translated copy I've got here.
I don't have this book. I have real world haskell and purely functional data structures though.
Does Hutton's book mention them? Real World Haskell?
the first time it is mentioned in RWH according to the index is page 266 where we read "We forgot to test the Monoid instance" ... "...of Monoid, which is the class of types that support appending and empty elements:" Appending.... :) On the other hand, on page 320 there is a nice explanation of Monoid, and on page 380, which isn't mentioned in the index, there might be the first time one can understand why the writer monad works with monoids instead of lists: to be able to use better suited data types for appending. All of this is still lacking the great why: why/how an abstraction so generic can be useful. I'm starting to believe that the only reason to make a datatype an instance of Monoid is... why not! since it's not hard to find an associative operation and a neutral element.
I don't think that I would try to learn a programming language, for example Python, without obtaining a paper book on it.
I would, if the online documentation makes it possible, and then I would buy a paper book later, to go further or for reference. That's how I learned Haskell, and much later I've bought my first book.

The great "that's why" is as follows: when you have an abstraction,
then it is sufficient to hold the abstraction in mind instead of the
whole concrete implementation. That's the whole purpose of
abstraction, after all, be it maths or programming.
Let me illustrate this.
Suppose you are developing a library that, for instance, has a notion
of "settings" and is able to combine two "settings" with several
strategies for resolving conflicts between settings with duplicate
keys: take the first setting, the second, none, or make a setting with
multiple values. For example, an alternative GetOpt.
Suppose you don't know what a monoid is and don't even know that such
an abstraction exists.
Now, when you're reasoning about the library, you have to think "If I
combine 3 settings, should the order of combination matter? Hmm, would
be nice if it didn't. Also, what should I return if someone queries
for a non-existent key in the settings? Should I return an empty list,
or a Nothing, or throw an error, or what? Do empty settings make
sense?" etc. If you're smart and lucky, you will most probably get
most things right and unconsciously create a settings monoid.
Now, if you know what a monoid is, you immediately recognize that your
settings should be a monoid by nature, and now you have absolutely no
doubt that you should make the combining operation associative and
provide a unit; and you use this monoid abstraction all the time you
are designing this library. Now, you don't think about whether you
should throw an error or return a Nothing for an empty key; but
instead you think about which result would behave like a unit for the
monoid, being motivated by mathematical principles rather than pure
intuition.
You end up designing a mathematically sound library whose principles
make sense and has no design flaws, at least in the mathematically
sound part, even if you never actually use the word "monoid" in the
documentation.
Also, read this post by sigfpe that motivated me to learn abstract
algebra in depth (I am yet in the beginning, however), and, overall,
this is a breathtaking post:
http://sigfpe.blogspot.com/2008/11/approach-to-algorithm-parallelisation.htm...
- this is where I started to appreciate the power of mathematical
abstractions even more.
2009/1/17 david48
All of this is still lacking the great why : why/how an abstraction so generic can be useful. I'm starting to believe that the only reason to make a datatype an instance of Monoid is... why not ! since it's not hard to find an associative operation and a neutral element.
-- Eugene Kirpichov
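A rough sketch of the settings example described above, keeping just one conflict-resolution strategy ("later settings win"); every name here is invented, and the instance is written against the Monoid class as it stood at the time (a current base would also want a Semigroup instance):

    import qualified Data.Map as Map
    import Data.Map (Map)
    import Data.Monoid (Monoid(..))

    -- Other strategies (keep the first value, collect all values, ...)
    -- would be further newtypes over the same Map.
    newtype Settings = Settings (Map String String)
      deriving Show

    instance Monoid Settings where
      mempty = Settings Map.empty
      -- Map.union is left-biased, so putting b first makes the right-hand
      -- (later) settings win on duplicate keys; this is associative, and
      -- the empty map is its identity.  (With a modern base, this mappend
      -- would live in a Semigroup instance instead.)
      Settings a `mappend` Settings b = Settings (Map.union b a)

    lookupSetting :: String -> Settings -> Maybe String
    lookupSetting k (Settings m) = Map.lookup k m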

david48 wrote:
Apfelmus, Heinrich wrote:
Hm, what about the option of opening Bird's "Introduction to Functional Programming using Haskell" in the section about fold? Monoid is on page 62 in the translated copy I've got here.
I don't think that I would try to learn a programming language, for example Python, without obtaining a paper book on it.
I would, if the online documentation makes it possible, and then I would buy a paper book later, to go further or for reference. That's how I learned Haskell, and much later I've bought my first book.
Interesting, I wouldn't want to miss actual paper when learning difficult topics. Also, some great resources like the contents of Bird's book just aren't available online ;). I'd recommend borrowing it from a library, though; the current amazon price is quite outrageous. Regards, apfelmus -- http://apfelmus.nfshost.com

david48 wrote:
On the other hand, on page 320 there is a nice explanation of Monoid, and on page 380, which isn't mentioned in the index, there might be the first time one can understand why the writer monad works with monoids instead of lists: to be able to use better suited data types for appending.
(I too usually use the monoid instance mainly for difference lists.)
All of this is still lacking the great why : why/how an abstraction so generic can be useful. I'm starting to believe that the only reason to make a datatype an instance of Monoid is... why not ! since it's not hard to find an associative operation and a neutral element.
As Bertram Felgenhauer has already mentioned, a very powerful application of monoids is 2-3 finger trees: Ralf Hinze and Ross Patterson. Finger trees: a simple general-purpose data structure. http://www.soi.city.ac.uk/~ross/papers/FingerTree.html Basically, they allow you to write fast implementations for pretty much every abstract data type mentioned in Okasaki's book "Purely Functional Data Structures". For example, you can do sequences, priority queues, search trees and priority search queues. Moreover, any fancy and custom data structures like interval trees or something for stock trading are likely to be implementable in this framework as well. How can one tree be useful for so many different data structures? The answer: *monoids*! Namely, the finger tree works with elements that are related to a monoid, and all the different data structures mentioned above arise from different choices for this monoid. Let me explain this monoid magic, albeit not in this message which would become far too long, but at http://apfelmus.nfshost.com/monoid-fingertree.html Regards, apfelmus -- http://apfelmus.nfshost.com
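A toy, size-only version of the idea in that write-up: cache a monoidal "measure" at every branch, so that a branch's annotation is the mappend of its children's. The real finger tree generalises this; the code below is only a sketch with invented names:

    import Data.Monoid (Sum(..), mappend)

    -- A binary tree caching a monoidal measure at every node.
    data Tree v a = Leaf v a
                  | Branch v (Tree v a) (Tree v a)

    measure :: Tree v a -> v
    measure (Leaf v _)     = v
    measure (Branch v _ _) = v

    leaf :: a -> Tree (Sum Int) a
    leaf = Leaf (Sum 1)            -- each leaf counts as one element

    branch :: Monoid v => Tree v a -> Tree v a -> Tree v a
    branch l r = Branch (measure l `mappend` measure r) l r

    size :: Tree (Sum Int) a -> Int
    size = getSum . measure

    -- size (branch (leaf 'a') (branch (leaf 'b') (leaf 'c'))) == 3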

A very nice writeup about the use of monoid with finger tree.
But please, use the names of the monoid operations that the rest of
the Haskell libraries use.
By using different names you are just confusing readers (even if you
don't like the standard names).
Also, you can replace Infinity by maxBound.
-- Lennart
On Tue, Jan 20, 2009 at 3:42 PM, Heinrich Apfelmus
david48 wrote:
On the other hand, on page 320 there is a nice explanation of Monoid, and on page 380, which isn't mentionned in the index, there might be the first time one can understand why the writer monad works with monoids instead of lists: to be able to use better suited data types for appending.
(I too usually use the monoid instance mainly for difference lists.)
All of this is still lacking the great why : why/how an abstraction so generic can be useful. I'm starting to believe that the only reason to make a datatype an instance of Monoid is... why not ! since it's not hard to find an associative operation and a neutral element.
As Bertram Felgenhauer has already mentioned, a very powerful application of monoids are 2-3 finger trees
Ralf Hinze and Ross Patterson. Finger trees: a simple general-purpose data structure. http://www.soi.city.ac.uk/~ross/papers/FingerTree.html
Basically, they allow you to write fast implementations for pretty much every abstract data type mentioned in Okasaki's book "Purely Functional Data Structures". For example, you can do sequences, priority queues, search trees and priority search queues. Moreover, any fancy and custom data structures like interval trees or something for stock trading are likely to be implementable in this framework as well.
How can one tree be useful for so many different data structures? The answer: *monoids*! Namely, the finger tree works with elements that are related to a monoid, and all the different data structures mentioned above arise by different choices for this monoid.
Let me explain this monoid magic, albeit not in this message which would become far too long, but at
http://apfelmus.nfshost.com/monoid-fingertree.html
Regards, apfelmus
-- http://apfelmus.nfshost.com

Lennart Augustsson wrote:
A very nice writeup about the use of monoid with finger tree.
Thanks :)
But please, use the names of the monoid operations that the rest of the Haskell libraries use. By using different names you are just confusing readers (even if you don't like the standard names).
True. Unfortunately, mappend is not an option because it's way too long; I barely got away with writing out measure. There is ++ but it's already taken. Alternatively, I could opt for unicode ⊕ and at least match the paper. Thoughts?
Also, you can replace Infinity by maxBound.
Good idea, thanks. Regards, apfelmus -- http://apfelmus.nfshost.com

On Tue, Jan 20, 2009 at 3:42 PM, Heinrich Apfelmus
Let me explain this monoid magic, albeit not in this message which would become far too long, but at
That is a very nice summary! I did my own investigation of fingertrees recently [1] and also came to the conclusion that the function names for Monoid really are the ugliest, most impractical things for such a beautiful, simple concept. Ah, if only we could go back in time :-) [1]: http://www.dougalstanton.net/blog/index.php/2008/12/12/a-brief-look-at-finge... Cheers, D

On Thu, Jan 15, 2009 at 10:39:18PM -0600, Creighton Hogg wrote:
For you folks who work on GHC, is it acceptable to open tickets for poor documentation of modules in base?
Personally, I don't think that doing so would make it more likely that someone would actually write the documentation; it would just be another ticket lost in the noise. The best way to get better docs would be to create a wiki page with proposed docs, and send a URL to the libraries list and solicit improvements, in my opinion. While you may say that people asking for docs for X don't know enough to write them, I would claim that they normally manage to use X in their program shortly afterwards, and could thus at least put together a tiny example of what X can be used for (in English) and how to use it (in Haskell). These initial drafts don't have to be perfect, or even correct, as the libraries list can refine them, but someone does need to put the effort into picking a good, small example, getting the phrasing nice, etc. Once the list has settled on good docs, then filing a ticket with the docs attached is definitely useful. Thanks Ian

On Thu, 15 Jan 2009 20:18:50 -0800, you wrote:
Really. So the engineer who designed the apartment building I'm in at the moment didn't know any physics, thought `tensor' was a scary math term irrelevant to practical, real-world engineering, and will only read books on engineering that replace the other scary technical term `vector' with point-direction-value-thingy? I think I'm going to sleep under the stars tonight...
As a rule, buildings are designed by architects, whose main job is to ensure that they follow the requirements set by the relevant building code (e.g., the International Building Code, used in most of the United States and a few other places). Of course, an experienced architect has most of that stuff in his/her brain already, and doesn't need to constantly refer to the code books. A jurisdiction may require that the architect's design be signed off by one or more engineers. This is almost always the case for public buildings and multi-unit housing, and almost always not the case for single-unit housing. But if the building is a run-of-the-mill design, then the engineer checking it is unlikely to use anything beyond simple algebra. It's only in case of unusual structures and one-offs (skyscrapers, most anything built in Dubai these days, etc.) that engineers will really get down and dirty with the math. And yes, most professional engineers would not be able to do that kind of work without some kind of refresher, not so much because they never learned it, but because they haven't used it in so long.
Um, no. I try to avoid people as much as possible; computers at least make sense. Also anything else to do with the real world :)
Well, that explains it then...
Again, do engineers know *what* stress is? Do they understand terms like `tensor'? Those things are the rough equivalents of terms like `monoid'.
Stress, probably, at least in basic terms. Tensor, probably not. Steve Schafer Fenestra Technologies Corp. http://www.fenestra.com/

Steve Schafer
But if the building is a run-of-the-mill design, then the engineer checking it is unlikely to use anything beyond simple algebra. It's only in case of unusual structures and one-offs (skyscrapers, most anything built in Dubai these days, etc.) that engineers will really get down and dirty with the math.
Heh, nice analogy which I suspect many math-happy Haskell programmers will be happy to embrace. (Who wants to churn out run-of-the-mill mass-market stuff from the assembly line anyway?) -k -- If I haven't seen further, it is by standing in the footprints of giants

John Goerzen wrote:
[...]
One thing that does annoy me about Haskell- naming.
I'm fine with the current names. However, I would like to see better documentation and examples. You can't just say in the documentation: this is xxx from the yyy branch of mathematics, see this paper. You should explain how (and why) to use xxx.
[...]
Regards Manlio Perillo
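For what it's worth, a sketch of the example-first style of documentation being asked for, using only what Data.Monoid already provides (the value names below are invented for illustration):

    import Data.Monoid (Monoid(..), Sum(..))

    -- Lists (including Strings) form a monoid under concatenation,
    -- with the empty list as identity:
    greeting :: String
    greeting = mconcat ["Hello", ", ", "world"]        -- "Hello, world"

    -- Numbers form a monoid under addition once wrapped in Sum,
    -- with 0 as identity:
    total :: Int
    total = getSum (mconcat (map Sum [1, 2, 3, 4]))    -- 10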

On 15 Jan 2009, at 16:34, John Goerzen wrote:
[snip] Sorry, I'm not going to refer to that paragraph; instead, I'm going to point out how depressing it is that the message we're getting across to new Haskellers is that "Monads, and variations on monads and extensions to monads and operations on monads, are the primary way Haskell combines code". We have loads of beautiful ways of combining code (not least, of course, simple application), so why is it that Monad is getting singled out as the one that we must use for everything? My personal suspicion on this one is that Monad is the one that makes concessions to imperative programmers, by one of its main combinators (>>=) having the type (>>=) :: (Monad m) => m a -> (a -> m b) -> m b, and not the much nicer type (Monad m) => (a -> m b) -> (m a -> m b).
Bob
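As a side note, the "nicer" shape Bob describes is essentially (=<<), which is already in the Prelude; a minimal sketch, with halve invented purely for the example:

    -- (=<<) :: Monad m => (a -> m b) -> m a -> m b
    -- i.e. it extends an (a -> m b) to an (m a -> m b).
    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    twice :: Maybe Int
    twice = halve =<< halve =<< Just 12    -- Just 3

    stuck :: Maybe Int
    stuck = halve =<< halve =<< Just 6     -- Nothing, because 3 is odd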

Thanks, Bob! I'm with you on both counts: Monad is misrepresented as central in
code composition; and (Monad m) => (a -> m b) -> (m a -> m b) is a much
nicer type (for monadic extension), only in part because it encourages
retraining away from sequential thinking. I encountered this nicer
formulation only recently, and am glad to finally understand why I've been
so uncomfortable with the type of (>>=).
- Conal
2009/1/15 Thomas Davie
[...]
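To illustrate the non-sequential reading Conal alludes to, here is a tiny sketch built around Kleisli composition, (<=<) from Control.Monad; the lookup functions are invented for the example:

    import Control.Monad ((<=<))

    lookupId :: String -> Maybe Int
    lookupId "alice" = Just 1
    lookupId _       = Nothing

    lookupScore :: Int -> Maybe Double
    lookupScore 1 = Just 9.5
    lookupScore _ = Nothing

    -- The pipeline is assembled by composing (a -> m b) functions,
    -- rather than by sequencing statements with (>>=):
    scoreByName :: String -> Maybe Double
    scoreByName = lookupScore <=< lookupId

    -- scoreByName "alice" == Just 9.5
    -- scoreByName "bob"   == Nothing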

It's a criticism already voiced by the great David Bowie:
"My Brain Hurt like a warehouse, it had no room to spare
I had to cram so many things to store everything in there"
Immanuel
On Thu, Jan 15, 2009 at 4:34 PM, John Goerzen
[...]

On Thu, 15 Jan 2009, John Goerzen wrote:
One thing that does annoy me about Haskell- naming. [...]
I risk repeating someone's point, since I have not read the entire thread... What I don't like about the Monoid class is that its members are named "mempty" and "mappend". It could be either (also respecting qualified import) Monoid(identity, op) or Appendable(empty, append), where only the first one seems reasonable, since the Sum monoid and its friends do not append anything.
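To make that last point concrete, a short sketch with the Sum and Product wrappers from Data.Monoid, where "append" would be a misleading reading of the operation:

    import Data.Monoid (Monoid(..), Sum(..), Product(..))

    -- For Sum the identity is 0 and the operation is addition:
    sumExample :: Sum Int
    sumExample = Sum 2 `mappend` Sum 3               -- Sum {getSum = 5}

    -- For Product the identity is 1 and the operation is multiplication:
    productExample :: Product Int
    productExample = Product 2 `mappend` Product 3   -- Product {getProduct = 6}

    -- Nothing is appended in either case, which is why neutral names
    -- such as identity/op read better than empty/append here.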

On Tue, 2009-01-20 at 23:41 +0100, Henning Thielemann wrote:
On Thu, 15 Jan 2009, John Goerzen wrote:
One thing that does annoy me about Haskell- naming. [...]
I risk repeating someone's point, since I have not read the entire thread... What I don't like about the Monoid class is that its members are named "mempty" and "mappend". It could be either (also respecting qualified import) Monoid(identity, op)
+1 If we're going to change any names in the standard library at all, this is the change we should make. jcc
participants (74)
- ajb@spamcop.net
- Andrei Formiga
- Andrew Coppin
- Andrew Wagner
- Anton van Straaten
- Apfelmus, Heinrich
- Benja Fallenstein
- Bertram Felgenhauer
- Bulat Ziganshin
- Cale Gibbard
- ChrisK
- Claus Reinke
- Conal Elliott
- Cory Knapp
- Creighton Hogg
- Dan Doel
- Dan Piponi
- Dan Weston
- Daniel Fischer
- David Fox
- David Leimbach
- David Menendez
- David Waern
- david48
- Derek Elkins
- Don Stewart
- Dougal Stanton
- Drew Vogel
- Duncan Coutts
- Ertugrul Soeylemez
- Eugene Kirpichov
- George Pollard
- Gour
- Gracjan Polak
- Heinrich Apfelmus
- Henning Thielemann
- Henning Thielemann
- Ian Lynagh
- Immanuel Litzroth
- Jeremy Shaw
- John A. De Goes
- John Goerzen
- Jonathan Cast
- Ketil Malde
- Lennart Augustsson
- Luke Palmer
- mail@justinbogner.com
- Manlio Perillo
- Manuel M T Chakravarty
- Max Rabkin
- Michael Giagnocavo
- Miguel Mitrofanov
- Nathan Bloomfield
- Niklas Broberg
- Paul Moore
- pepe
- Peter Verswyvelen
- Philippa Cowderoy
- Richard O'Keefe
- Robert Greayer
- Robin Green
- roconnor@theorem.ca
- Ross Mellgren
- Ross Paterson
- Ryan Ingram
- Sebastian Sylvan
- Sittampalam, Ganesh
- Sterling Clover
- Steve Schafer
- Thomas Davie
- Thomas DuBuisson
- Thorkil Naur
- Tristan Seligmann
- Wouter Swierstra