Re: Monad of no `return` Proposal (MRP): Moving `return` out of `Monad`

On other social media forums, I am seeing educators who use Haskell as a vehicle for their main work, but would not consider themselves Haskell researchers, and certainly do not have the time to follow Haskell mailing lists, who are beginning to say that these kinds of annoying breakages to the language, affecting their research and teaching materials, are beginning to disincline them to continue using Haskell. They are feeling like they would be less well disposed to reviewing papers that use Haskell, and less well disposed to collaborating on writing papers that involve Haskell.

Please can the Haskell Prime Committee take into account the views of such "peripheral" users of the language, who, after all, form some measure of its recent "success". Haskell is a real-world tool for many people, and breakage of their code, and of their sources of information about Haskell, is a powerful disincentive to continue with it.

Regards, Malcolm
On 5 Oct 2015, at 10:05, Malcolm Wallace wrote:
I am also a strong -1 on small changes that break huge numbers of things for somewhat trivial benefits.
Regards, Malcolm
On 2 Oct 2015, at 11:09, Henrik Nilsson wrote:
Hi all,
I have discussed the monad of no return proposal with my colleague Graham Hutton: a long-standing member of the Haskell community, well-known researcher, some 20 years of experience of teaching Haskell to undergraduate students, and author of one of the most successful introductory Haskell textbooks there are.
The upshot of this e-mail is a strong collective -2 from us both on this particular proposal, and a general call for much more caution when it comes to breaking changes that are not critically important.
First, on a general note, there has recently been a flurry of breaking changes to the (de facto) Haskell standards. In many cases for very good reasons, but sometimes it seems more in a quest for perfection without due consideration for the consequences. It used to be the case that breaking changes were very rare indeed. And for good reason.
Right now, the main "measure of breakage" in the on-line discussions seems to be how many packages on Hackage break. Which of course makes sense: the Hackage repository is very important, and such a measure is objective, as far as it goes.
But we should not forget that breakage can go far beyond Hackage. For starters, there is *lots* of code that is not on Hackage, yet critically important to its users, however many or few they are. There are hundreds of thousands of copies of books out there that may break, in the sense that their examples may no longer work. And they cannot be changed. There are countless research papers that may break in the same way. Every single institution that uses Haskell in its teaching may have to update its teaching materials (slides, code, lab instructions) for every module that teaches or uses Haskell. And last but not least, what countless programmers and students have learned about Haskell over decades, most of whom are *not* power users who can take these changes in their stride, may also break.
Now, of course a language has to evolve, and sometimes breaking backwards compatibility is more or less essential for the long-term benefit of the language. But we should not let perfection be the enemy of the good.
As to this particular proposal, the monad of no return, it does not seem essential to us, but mostly motivated by a quest for "perfection" as defined by a very knowledgeable but, in relative terms, small group of people.
One argument put forward was that applicative code that uses "return" instead of "pure" should get a less constrained type. But such code is relatively new. The methods of the Monad class have been return and bind for some 25 years. So indeed, as Henning Thielemann wrote: why should not the newer code be adapted and use "pure" instead? In fact, the use of "pure" in such code could serve as a quite useful cue that the code is applicative rather than monadic, especially where applicative do is employed.
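Concretely, the typing difference at issue can be seen in a two-line example (a minimal sketch, assuming a post-AMP Prelude where pure is in scope by default; the names are illustrative):

-- Written with return, this helper picks up a Monad constraint,
-- even though nothing monadic happens here:
wrapM :: Monad m => a -> m [a]
wrapM x = return [x]

-- The same definition written with pure needs only Applicative,
-- so it also works for applicatives that are not monads:
wrapA :: Applicative f => a -> f [a]
wrapA x = pure [x]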
Another reason put forward is support for the applicative do syntax. But that is, in standard terms, only a future possibility at this point. Further, the syntax is arguably rather dubious anyway, as it suggests, in particular to someone with an imperative background, a sequential reading, which is exactly what applicative is not and why it is useful. So from that perspective, using the applicative operators directly, without any syntactic sugar, actually amounts to a much more transparent and honest syntax, even if a bit more verbose in some cases.
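For comparison, here are the two styles side by side, sketched with the ApplicativeDo extension that later shipped in GHC 8.0 (function names are illustrative):

{-# LANGUAGE ApplicativeDo #-}

-- With ApplicativeDo, this do-block is desugared to applicative
-- operators and needs only Applicative f, yet it *reads* as if the
-- two actions were run in sequence:
pairedDo :: Applicative f => f a -> f b -> f (a, b)
pairedDo fa fb = do
  a <- fa
  b <- fb
  pure (a, b)

-- The direct operator style makes the absence of sequencing visible:
pairedOps :: Applicative f => f a -> f b -> f (a, b)
pairedOps fa fb = (,) <$> fa <*> fb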
The bottom line is that it is premature to put forward support for the applicative do syntax as a rationale for a non-essential breaking change.
Best regards,
Henrik Nilsson and Graham Hutton
-- Henrik Nilsson School of Computer Science The University of Nottingham nhn@cs.nott.ac.uk

Hi,
As a person who has used Haskell in all three capacities (for scientific
research, for commercial purposes, and to introduce others to the benefits of
pure and strongly typed programming), I must voice support for
this change:
1. Orthogonal type classes are easier to explain.
2. Gradual improvements help us to generalize further, and this in turn
makes education easier.
3. Gradual changes that break only a little help to prevent both
stagnation (FORTRAN) and big breakage (py3k). That keeps us excited.
That would also call for splitting type classes into their orthogonal
elements: return, ap, and bind each getting a basic type class of its own.
So: +1, but only if it is possible to have a compatibility mode. I believe
that rebindable syntax should otherwise allow us to make our own prelude, if
we want such a split. Then we could test it well before it is used by the
base library.

That said, I would appreciate a Haskell2010 option, just like there was a
Haskell98 option, so that we can compile old programs without changes, even
by using some Compat version of the standard library. Would that satisfy the
need for stability?
PS And since all experts were beginners some time ago, I beg that we do not
call them "peripheral".
--
Best regards
Michał
On Monday, 5 October 2015, Malcolm Wallace wrote:
> On other social media forums, I am seeing educators who use Haskell as a vehicle for their main work (...)

Hello all. I write this to be a little provocative, but …

It’s really interesting to have this discussion, which pulls in all sorts of well-made points about orthogonality, teaching, the evolution of the language and so on, but it simply goes to show that the process of evolving Haskell is profoundly broken. Other languages do evolve, but in a managed and reflective way. Simply throwing in changes that would have a profound impact on systems that are commercially and potentially safety critical in an à la carte, offhand way seems like a breakdown of the collective responsibility of the Haskell community to its users and, indirectly, to its future.

If we make claims - I believe rightly - that Haskell is hitting the mainstream, then we need to think about all changes in terms of the costs and benefits of each of them in the widest possible sense. There’s an old fashioned maxim that sums this up in a pithy way: “if it ain’t broke, don’t fix it”.

Simon Thompson
On 5 Oct 2015, at 10:47, Michał J Gajda wrote:
> As a person who has used Haskell in all three capacities (for scientific research, for commercial purposes, and to introduce others to the benefits of pure and strongly typed programming), I must voice support for this change (...)
Simon Thompson | Professor of Logic and Computation | School of Computing | University of Kent | Canterbury, CT2 7NF, UK | s.j.thompson@kent.ac.uk | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt

On 05/10/15 11:59, Simon Thompson wrote:
> There’s an old fashioned maxim that sums this up in a pithy way: “if it ain’t broke, don’t fix it”.

But... it *is* broken.
-- Alexander alexander@plaimi.net https://secure.plaimi.net/~alexander

On Mon, Oct 5, 2015 at 8:16 AM, Alexander Berntsen wrote:
> But... it *is* broken.
Somehow, we managed to use Monad before this. That does not sound "broken".

-- brandon s allbery kf8nh | sine nomine associates | allbery.b@gmail.com | ballbery@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad | http://sinenomine.net

On 05/10/15 15:16, Brandon Allbery wrote:
> Somehow, we managed to use Monad before this. That does not sound "broken".

Just because something is broken does not mean that it is not possible to use it. We use broken things and are subject to broken systems all day every day.
You may of course argue that it is fine for it to be broken.

-- Alexander alexander@plaimi.net https://secure.plaimi.net/~alexander

"Broken" here is hyperbole for "can be significantly improved for very little penalty." On 05/10/15 23:16, Brandon Allbery wrote:
On Mon, Oct 5, 2015 at 8:16 AM, Alexander Berntsen
mailto:alexander@plaimi.net> wrote: On 05/10/15 11:59, Simon Thompson wrote: > There’s an old fashioned maxim that sums this up in a pithy way: > “if it ain’t broke, don’t fix it”. But... it *is* broken.
Somehow, we managed to use Monad before this. That does not sound "broken".
-- brandon s allbery kf8nh sine nomine associates allbery.b@gmail.com mailto:allbery.b@gmail.com ballbery@sinenomine.net mailto:ballbery@sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net
_______________________________________________ Haskell-prime mailing list Haskell-prime@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime

2015-10-05 11:59 GMT+02:00 Simon Thompson:
> [...] It’s really interesting to have this discussion, which pulls in all sorts of well-made points about orthogonality, teaching, the evolution of the language and so on, but it simply goes to show that the process of evolving Haskell is profoundly broken. [...]

I wouldn't necessarily call the process "broken", but it's a bit annoying: because of the constant flux of minor changes in the language and the libraries, I've reached the stage where I'm totally unable to tell whether my code will work for the whole GHC 7.x series. The only way I can see is doing heavy testing on Travis CI and littering the code with #ifdefs after compilation failures. (BTW, fun exercise: try using (<>) and/or (<$>) in conjunction with -Wall. Bonus points for keeping the #ifdefs centralized. No clue how to do that...)

This is less than satisfactory IMHO, and I would really prefer some other mode for introducing such changes: perhaps these should be bundled and released e.g. every 2 years as Haskell2016, Haskell2018, etc. This way, stuff which belongs together (AMP, FTP, kicking out return, etc.) comes in slightly larger but more sensible chunks.

Don't get me wrong: most of the proposed changes are OK in themselves and should be done; it's only the way they are introduced that should be improved...

On 2015-10-05 at 15:27:53 +0200, Sven Panne wrote:
> I wouldn't necessarily call the process "broken", but it's a bit annoying: [...] I would really prefer some other mode for introducing such changes: perhaps these should be bundled and released e.g. every 2 years as Haskell2016, Haskell2018, etc. [...]
> Don't get me wrong: most of the proposed changes are OK in themselves and should be done; it's only the way they are introduced that should be improved...
I think that part of the reason we have seen these changes occur in a "constant flux" rather than in bigger coordinated chunks is that faith in the Haskell Report process was (understandably) abandoned. And without the Haskell Report as some kind of "clock generator" with which to align/bundle related changes into logical units, changes occur whenever they're proposed and agreed upon (which may take several attempts, as we've seen with the AMP and others).

I hope that the current attempt to revive the Haskell Prime process will give us a chance to clean up the unfinished intermediate `base-4.8` situation we're left with now after AMP, FTP et al., as the next Haskell Report revision provides us with a milestone to work towards. That being said, there's also the desire to have changes field-tested by a wide audience on a wide range of code before integrating them into a Haskell Report. Also, I'm not sure there would be fewer complaints if AMP/FTP/MFP/MRP/etc. were switched on all at once as part of a new Haskell Report in e.g. `base-5.0`, breaking almost *every* single package out there at once.

For language changes we have a great way to field-test new extensions before integrating them into the Report, via `{-# LANGUAGE #-}` pragmas, in a nicely modular and composable way (i.e. a package enabling a certain pragma doesn't require other packages to use it as well) which has proven to be quite popular. However, for the library side we lack a comparable mechanism at this point. The closest we have, for instance, to support an isolated Haskell2010 legacy environment, is to use RebindableSyntax, which IMO isn't good enough in its current form[1] (a minimal sketch of what such use-site rebinding looks like follows at the end of this message).

And then there's the question whether we want a Haskell2010 legacy environment that's isolated or rather shares the types & typeclasses w/ `base`. If we require sharing types and classes, then we may need some facility to implicitly instantiate new superclasses (e.g. implicitly define Functor and Applicative if only a Monad instance is defined). If we don't want to share types & classes, we run into the problem that we can't easily mix packages which depend on different revisions of the standard library (e.g. one using `base-4.8` and others which depend on a legacy `haskell2010` base-API). One way to solve this could be to mutually exclude depending on both `base-4.8` and `haskell2010` in the same install-plan (assuming `haskell2010` doesn't itself depend on `base-4.8`).

In any case, I think we will have to think hard about how to address language/library change management in the future, especially if the Haskell code-base continues to grow. Even just migrating the code base between Haskell Report revisions is a problem; an extreme example is the Python 2->3 transition, which the Python ecosystem is still suffering from today (afaik). Ideas welcome!

[1]: IMO, we need something to be used at the definition site providing desugaring rules, rather than requiring the use-site to enable a generalised desugaring mechanism; I've been told that Agda has an interesting solution to this in its base libraries, via {-# LANGUAGE BUILTIN ... #-} pragmas.

Regards, H.V.Riedel
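As a rough illustration of the use-site mechanism criticised in [1], here is a minimal RebindableSyntax sketch (the module and its definitions are illustrative, simply delegating to the standard library):

{-# LANGUAGE RebindableSyntax #-}
-- RebindableSyntax implies NoImplicitPrelude, so the Prelude is
-- re-imported explicitly, minus the names being redefined; every
-- do-block in this module then desugars to whatever (>>=), (>>)
-- and return are in scope at the use site.
module LegacyDo where

import Prelude hiding (return, (>>), (>>=))
import qualified Prelude as P

(>>=) :: Monad m => m a -> (a -> m b) -> m b
(>>=) = (P.>>=)

(>>) :: Monad m => m a -> m b -> m b
(>>) = (P.>>)

-- A legacy-flavoured return that only demands Applicative:
return :: Applicative f => a -> f a
return = pure

example :: IO ()
example = do
  putStrLn "this do-block uses the local (>>) and return"
  return ()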

On October 5, 2015 at 6:00:00 AM, Simon Thompson (s.j.thompson@kent.ac.uk) wrote:
> Hello all. I write this to be a little provocative, but …
> It’s really interesting to have this discussion, which pulls in all sorts of well-made points about orthogonality, teaching, the evolution of the language and so on, but it simply goes to show that the process of evolving Haskell is profoundly broken.
> Other languages do evolve, but in a managed and reflective way. Simply throwing in changes that would have a profound impact on systems that are commercially and potentially safety critical in an à la carte, offhand way seems like a breakdown of the collective responsibility of the Haskell community to its users and, indirectly, to its future.
Hi Simon. I do in fact think this is provocative :-P

I want to object here to your characterization of what has been going on as “simply throwing in changes”. The proposal seems very well and carefully worked through to provide a good migration strategy, even planning to alter the source of GHC to ensure that adequate hints are given for the indefinite transition period.

I also want to object to the idea that these changes would have “a profound impact on systems”. As it stands, and I think this is an important criterion in any change, when “phase 2” goes into effect, code that has compiled before may cease to compile until a minor change is made. However, code that continues to compile will continue to compile with the same behavior.

Now as to the process itself: this is a change to the core libraries. It has been proposed on the libraries list, which seems appropriate, and a vigorous discussion has ensued. This seems like a pretty decent process to me thus far. Do you have a better one in mind?

—Gershom

P.S. As a general point, I sympathize with concerns about breakage resulting from this, but I also think that the migration strategy proposed is good, and if people are concerned about breakage I think it would be useful if they could explain where they feel the migration strategy is insufficient to allay their concerns.

I would like to suggest that the bar for breaking all existing libraries, books, papers, and lecture notes should be very high; and that the benefit associated with such a breaking change should be correspondingly huge.

This proposal falls far short of both bars, to the extent that I am astonished and disappointed it is being seriously discussed – and to general approval, no less – on a date other than April 1. Surely some design flaws have consequences so small that they are not worth fixing.

I'll survive if it goes through, obviously, but it will commit me to a bunch of pointless make-work and compatibility ifdefs. I've previously expressed my sense that cross-version compatibility is a big tax on library maintainers. This proposal does not give me confidence that this cost is being taken seriously.

Thanks, Bryan.
On Oct 5, 2015, at 7:32 AM, Gershom B wrote:
> I want to object here to your characterization of what has been going on as “simply throwing in changes”. The proposal seems very well and carefully worked through to provide a good migration strategy (...)

On October 5, 2015 at 10:59:35 AM, Bryan O'Sullivan (bos@serpentine.com) wrote:
> I would like to suggest that the bar for breaking all existing libraries, books, papers, and lecture notes should be very high; and that the benefit associated with such a breaking change should be correspondingly huge.
My understanding of the argument here, which seems to make sense to me, is that the AMP already introduced a significant breaking change with regards to monads. Books and lecture notes have already not caught up to this, by and large. Hence, by introducing a further change, which _completes_ the general AMP project, then by the time books and lecture notes are all updated, they will be able to tell a much nicer story than the current one.

As for libraries, it has been pointed out, I believe, that without CPP one can write instances compatible with AMP, and also with AMP + MRP (see the sketch following this message). One can also write code, sans CPP, compatible with pre- and post-AMP. So the reason for choosing not to do MRP simultaneously with AMP was precisely to allow a gradual migration path where, sans CPP, people could write code compatible with the last three versions of GHC, as the general criterion has been.

So without arguing the necessity or not, I just want to weigh in with a technical opinion that if this goes through, my _estimation_ is that there will be a smooth and relatively painless migration period, the sky will not fall, good teaching material will remain good, those libraries that bitrot will tend to do so for a variety of reasons more significant than this, etc.

It is totally reasonable to have a discussion on whether this change is worth it at all. But let’s not overestimate the cost of it just to further tip the scales :-)

—gershom
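A minimal sketch of the CPP-free instance style referred to above (the Box type is illustrative):

-- This compiles under AMP (GHC 7.10's Applicative => Monad
-- hierarchy) and would remain valid under MRP, because the Monad
-- instance never defines return: post-AMP, return defaults to pure,
-- and MRP would eventually remove it from the class altogether.
data Box a = Box a

instance Functor Box where
  fmap f (Box a) = Box (f a)

instance Applicative Box where
  pure = Box
  Box f <*> Box a = Box (f a)

instance Monad Box where
  Box a >>= f = f a
  -- no return definition here, by design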

2015-10-05 17:09 GMT+02:00 Gershom B:
> [...] As for libraries, it has been pointed out, I believe, that without CPP one can write instances compatible with AMP, and also with AMP + MRP. One can also write code, sans CPP, compatible with pre- and post-AMP. [...]
Nope, at least not if you care about -Wall: if you take e.g. (<$>), which is now part of the Prelude, you can't simply import some compatibility module, because GHC might tell you (rightfully) that the import is redundant, since (<$>) is already visible through the Prelude. So you'll have to use CPP to avoid that import on base >= 4.8, be it from Data.Functor, Control.Applicative, or some compat-* module. And you'll have to use CPP in each and every module using (<$>) then, unless I miss something obvious. AFAICT all transitioning guides ignore -Wall and friends...
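A sketch of the per-module CPP dance being described, using the MIN_VERSION_base macro that Cabal generates when building a package (module and function names are illustrative):

{-# LANGUAGE CPP #-}
module Example where

-- On base >= 4.8 (GHC 7.10), (<$>) is exported by the Prelude, and
-- an explicit import would trigger a redundant-import warning under
-- -Wall; on older base it must be imported, hence the guard, which
-- has to be repeated in every module that uses (<$>):
#if !MIN_VERSION_base(4,8,0)
import Control.Applicative ((<$>))
#endif

double :: Maybe Int -> Maybe Int
double m = (* 2) <$> m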

Sven Panne writes:
> If you take e.g. (<$>), which is now part of the Prelude, you can't simply import some compatibility module, because GHC might tell you (rightfully) that the import is redundant, because (<$>) is already visible through the Prelude.
Yes, the proper solution is slightly more complicated than just importing Prelude.Compat [1]. You also have to specify "import Prelude ()" -- or compile with NoImplicitPrelude, if you prefer that kind of thing.

Best regards, Peter

[1] http://hackage.haskell.org/package/base-compat-0.8.2/docs/Prelude-Compat.htm...
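In code, the suggested combination looks like this (a minimal sketch, using Prelude.Compat from the base-compat package referenced in [1]):

-- Hide the implicit Prelude entirely, then take everything from
-- Prelude.Compat, which presents a uniform post-AMP API across base
-- versions, so there is no redundant import for -Wall to flag:
import Prelude ()
import Prelude.Compat

main :: IO ()
main = print ((* 2) <$> Just (21 :: Int))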

I have used Haskell for teaching for years. Until recently, I taught the hierarchy using a semigroup model (a rough sketch of this split follows at the end of this message):

Functor <- Apply <- Applicative
Apply <- Bind <- Monad

https://github.com/NICTA/course/tree/ee8d1a294137c157c13740ac99a23a5dd5870b4...

I did this because it means curious students don't need to receive an explanation of the historical accidents of Haskell, as they would have had I modelled it in accordance with pre-AMP Haskell. Doing so has caused confusion (I tried it, many years ago). Since the AMP implementation, I have changed the hierarchy to be the same as Haskell proper, since the benefit has become worth it. https://github.com/NICTA/course/issues/164

I am in favour of breaking changes, as hard as you can, if they achieve a progressive benefit, even a small one, which this does. It also achieves an incidental benefit in the context of teaching.

As it stands, for industry use I remove Prelude and model all this properly anyway, e.g. so that do-notation does not require return. It is a significant effort, but well worth it. The penalty for having to do this is often under-estimated. "Breaking changes" is over-stated.

Patience. I am waiting.
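Roughly, the split described above looks like this (a sketch only, using the naming conventions of the semigroupoids package rather than the course's actual code):

{-# LANGUAGE NoImplicitPrelude #-}
-- Apply sits between Functor and Applicative (apply without pure),
-- and Bind sits between Apply and Monad (bind without return):
module Hierarchy where

class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Apply f where
  (<.>) :: f (a -> b) -> f a -> f b

class Apply f => Applicative f where
  pure :: a -> f a

class Apply f => Bind f where
  (>>-) :: f a -> (a -> f b) -> f b

class (Applicative f, Bind f) => Monad f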
On 05/10/15 19:30, Malcolm Wallace wrote:
> On other social media forums, I am seeing educators who use Haskell as a vehicle for their main work, but would not consider themselves Haskell researchers, and certainly do not have the time to follow Haskell mailing lists, who are beginning to say that these kinds of annoying breakages to the language, affecting their research and teaching materials, are beginning to disincline them to continue using Haskell. (...)
participants (11)

- Alexander Berntsen
- Brandon Allbery
- Bryan O'Sullivan
- Gershom B
- Herbert Valerio Riedel
- Malcolm Wallace
- Michał J Gajda
- Peter Simons
- Simon Thompson
- Sven Panne
- Tony Morris