Re: [Haskell-cafe] Monad of no `return` Proposal (MRP): Moving `return` out of `Monad`

Hello all. I write this to be a little provocative, but … It’s really interesting to have this discussion, which pulls in all sorts of well-made points about orthogonality, teaching, the evolution of the language and so on, but it simply goes to show that the process of evolving Haskell is profoundly broken. Other languages do evolve, but in a managed and reflective way. Simply throwing in changes that would have a profound impact on systems that are commercially and potentially safety critical in an à la carte, offhand way seems like a breakdown of the collective responsibility of the Haskell community to its users and, indirectly, to its future. If we make claims - I believe rightly - that Haskell is hitting the mainstream, then we need to think about all changes in terms of the costs and benefits of each of them in the widest possible sense. There’s an old-fashioned maxim that sums this up in a pithy way: “if it ain’t broke, don’t fix it”. Simon Thompson
On 5 Oct 2015, at 10:47, Michał J Gajda
wrote: Hi,
As a person who has used Haskell in all three capacities (for scientific research, for commercial purposes, and to introduce others to the benefits of pure and strongly typed programming), I must add a supportive voice for this change:
1. Orthogonal type classes are easier to explain.
2. Gradual improvements help us to generalize further, and this in turn makes education easier.
3. Gradual changes that break only a little help to prevent both stagnation (FORTRAN) and big breakage (py3k). That keeps us excited.
That would also call for splitting type classes into their orthogonal elements: return, ap, and bind each getting a basic type class of their own.
So: +1, but only if it is possible to have a compatibility mode. I believe that rebindable syntax should otherwise allow us to make our own prelude, if we want such a split. Then we could test it well before it is used by the base library.
That said, I would appreciate a Haskell2010 option just like Haskell98 was, so that we can compile old programs without changes, even by using some Compat version of the standard library. Would that satisfy the need for stability?
PS And since all experts were beginners some time ago, I beg that we do not call them "peripheral". -- Best regards Michał
On Monday, 5 October 2015, Malcolm Wallace
wrote: On other social media forums, I am seeing educators who use Haskell as a vehicle for their main work, but would not consider themselves Haskell researchers, and certainly do not have the time to follow Haskell mailing lists, who are beginning to say that these kinds of annoying breakages to the language, affecting their research and teaching materials, are beginning to disincline them to continue using Haskell. They are feeling like they would be (...) -- Regards, Michał
Simon Thompson | Professor of Logic and Computation School of Computing | University of Kent | Canterbury, CT2 7NF, UK s.j.thompson@kent.ac.uk | M +44 7986 085754 | W www.cs.kent.ac.uk/~sjt

2015-10-05 11:59 GMT+02:00 Simon Thompson
[...] It’s really interesting to have this discussion, which pulls in all sorts of well-made points about orthogonality, teaching, the evolution of the language and so on, but it simply goes to show that the process of evolving Haskell is profoundly broken. [...]
I wouldn't necessarily call the process "broken", but it's a bit annoying: Because of the constant flux of minor changes in the language and the libraries, I've reached the stage where I'm totally unable to tell if my code will work for the whole GHC 7.x series. The only way I see is doing heavy testing on Travis CI and littering the code with #ifdefs after compilation failures. (BTW: Fun exercise: Try using (<>) and/or (<$>) in conjunction with -Wall. Bonus points for keeping the #ifdefs centralized. No clue how to do that...)

This is less than satisfactory IMHO, and I would really prefer some other mode for introducing such changes: Perhaps these should be bundled and released e.g. every 2 years as Haskell2016, Haskell2018, etc. This way some stuff which belongs together (AMP, FTP, kicking out return, etc.) comes in slightly larger, but more sensible chunks. Don't get me wrong: Most of the proposed changes in themselves are OK and should be done; it's only the way they are introduced that should be improved...
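One common answer to the "keeping the #ifdefs centralized" question above is to funnel all version-dependent imports through a single project-local shim module, so the rest of the code never touches CPP. A minimal, untested sketch, assuming the package is built with Cabal so the MIN_VERSION_base macro is defined (the module name MyPrelude is made up for illustration):

    {-# LANGUAGE CPP #-}
    -- Project-local shim: all version-dependent imports live here, and
    -- every other module imports MyPrelude instead of juggling CPP itself.
    module MyPrelude
      ( (<>)
      , (<$>)
      , Applicative(..)
      ) where

    import Data.Monoid ((<>))

    #if MIN_VERSION_base(4,8,0)
    -- base >= 4.8 (GHC 7.10): Applicative and (<$>) are already in the Prelude.
    import Prelude (Applicative(..), (<$>))
    #else
    import Control.Applicative (Applicative(..), (<$>))
    #endif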

On Mon, Oct 5, 2015 at 06:43-0700, mantkiew wrote:
Well, there are the *-compat packages: base-compat, transformers-compat, mtl-compat, etc. They do centralize the #ifdefs and give you compatibility with GHC 7.*. I recently adopted the last two and they work like a charm. I have yet to adopt base-compat, so I don't know what the experience is with it.
Hang on a moment, are you saying that all the people writing to argue that these changes would require them to write dozens more #ifdef's actually don't have to write any at all? I never knew what the *-compat packages were all about. If that's what they're designed to do, I have a feeling they have not gotten *nearly* enough exposure. [Apologies for possible cross-posting; this thread jumped into my inbox from I-know-not-where and already has half a dozen CCs attached.]
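To make the base-compat approach concrete: its Prelude.Compat module backports the post-AMP Prelude, so a module can get (<$>), (<*>), pure and friends uniformly across GHC versions without any CPP of its own. A minimal, untested sketch (Example and pairSum are invented names):

    {-# LANGUAGE NoImplicitPrelude #-}
    module Example where

    -- Prelude.Compat (from base-compat) stands in for the newest Prelude,
    -- so this compiles the same way on older GHCs without #ifdefs here.
    import Prelude.Compat

    pairSum :: Maybe Int -> Maybe Int -> Maybe Int
    pairSum mx my = (+) <$> mx <*> my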

On Mon, Oct 5, 2015 at 3:18 PM, Bryan Richter wrote:
Hang on a moment, are you saying that all the people writing to argue that these changes would require them to write dozens more #ifdef's actually don't have to write any at all?
Um, no, it usually isn't anything like that. Here's a sampling of some of the things I've used CPP for in the past few years:

- After GHC 7.4, when using a newtype in FFI imports you need to import the constructor, i.e. "import Foreign.C.Types(CInt(..))" --- afaik CPP is the only way to shut up warnings everywhere (see the sketch after this list)
- defaultTimeLocale moved from System.Locale to Data.Time.Format in time-1.5 (no compat package for this, afaik)
- one of many various changes to Typeable in the GHC 7.* series (deriving works better now, mkTyCon vs mkTyCon3, etc)
- Do I have to hide "catch" from Prelude, or not? It got moved, and "hiding" gives an error if the symbol you're trying to hide is missing. Time to break out the CPP (and curse myself for not just using the qualified import in the first place)
- Do I get monoid functions from Prelude or from Data.Monoid? Same w/ Applicative, Foldable, Word. I don't know where anything is supposed to live anymore, or which sequence of imports will shut up spurious warnings on all four versions of GHC I support, so the lowest-friction fix is: break out the #ifdef spackle
- ==# and friends return Int# instead of Bool after GHC 7.8.1
- To use functions like "tryReadMVar", "unsafeShiftR", and "atomicModifyIORef'" that are in recent base versions but not older ones (this is a place where CPP use is actually justified)
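The first item above, for instance, typically ends up with a guard of roughly this shape. A rough, untested sketch only: the exact version boundary and which side warns may differ across GHC releases, and c_abs is a made-up binding.

    {-# LANGUAGE CPP #-}
    {-# LANGUAGE ForeignFunctionInterface #-}
    module FFIExample (c_abs) where

    -- Newer GHCs want the newtype constructor in scope for the foreign
    -- import, while importing it where it is not needed can trigger
    -- warnings, hence the version guard.
    #if __GLASGOW_HASKELL__ >= 704
    import Foreign.C.Types (CInt (..))
    #else
    import Foreign.C.Types (CInt)
    #endif

    foreign import ccall unsafe "abs"
      c_abs :: CInt -> CInt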
--
Gregory Collins

On 6 October 2015 at 11:40, Gregory Collins
defaultTimeLocale moved from System.Locale to Data.Time.Format in time-1.5 (no compat package for this, afaik)
http://hackage.haskell.org/package/time-locale-compat -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com http://IvanMiljenovic.wordpress.com
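For completeness, usage of that package looks roughly like this; a small untested sketch, assuming the module it exposes is Data.Time.Locale.Compat (Timestamp and printTimestamp are invented names):

    module Timestamp (printTimestamp) where

    import Data.Time (formatTime, getCurrentTime)
    -- One import that works whether defaultTimeLocale lives in
    -- System.Locale (time < 1.5) or Data.Time.Format (time >= 1.5).
    import Data.Time.Locale.Compat (defaultTimeLocale)

    printTimestamp :: IO ()
    printTimestamp = do
      now <- getCurrentTime
      putStrLn (formatTime defaultTimeLocale "%Y-%m-%d %H:%M:%S" now)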

On Mon, Oct 05, 2015 at 05:40:43PM -0700, Gregory Collins wrote:
On Mon, Oct 5, 2015 at 3:18 PM, Bryan Richter wrote:
Hang on a moment, are you saying that all the people writing to argue that these changes would require them to write dozens more #ifdef's actually don't have to write any at all?
Um, no, it usually isn't anything like that. Here's a sampling of some of the things I've used CPP for in the past few years:
- After GHC 7.4, when using a newtype in FFI imports you need to import the constructor, i.e. "import Foreign.C.Types(CInt(..))" --- afaik CPP is the only way to shut up warnings everywhere
- defaultTimeLocale moved from System.Locale to Data.Time.Format in time-1.5 (no compat package for this, afaik)
- one of many various changes to Typeable in the GHC 7.* series (deriving works better now, mkTyCon vs mkTyCon3, etc)
- Do I have to hide "catch" from Prelude, or not? It got moved, and "hiding" gives an error if the symbol you're trying to hide is missing. Time to break out the CPP (and curse myself for not just using the qualified import in the first place)
- Do I get monoid functions from Prelude or from Data.Monoid? Same w/ Applicative, Foldable, Word. I don't know where anything is supposed to live anymore, or which sequence of imports will shut up spurious warnings on all four versions of GHC I support, so the lowest-friction fix is: break out the #ifdef spackle
- ==# and friends return Int# instead of Bool after GHC 7.8.1
- To use functions like "tryReadMVar", "unsafeShiftR", and "atomicModifyIORef'" that are in recent base versions but not older ones (this is a place where CPP use is actually justified)
In fact I think all of these apart from the FFI one could be solved with a -compat package, could they not?

On Tue, Oct 6, 2015 at 1:39 PM, Tom Ellis < tom-lists-haskell-cafe-2013@jaguarpaw.co.uk> wrote:
In fact I think all of these apart from the FFI one could be solved with a -compat package, could they not?
Who cares? In practice, the programs break and I have to fix them. Most of the time, CPP is the lowest-friction solution -- if I rely on a -compat package, first I have to know it exists and that I should use it to fix my compile error, and then I've added an additional non-platform dependency that I'm going to have to go back and clean up in 18 months. Usually, to be honest, *actually* the procedure is that the new RC comes out and I get github pull requests from hvr@ :-) :-)

In response to the other person who asked "why do you want to support so many GHC versions anyways?" --- because I don't hate my users, and don't want to force them to run on the upgrade treadmill if they don't have to? Our policy is to support the last 4 major GHC versions (or 2 years, whichever is shorter). And if we support a version of GHC, I want our libraries to compile on it without warnings, I don't think that should mystify anyone.
--
Gregory Collins

Hello all,

I agree with Henrik, I'm very keen on giving the new Haskell committee a shot. While some may not think that Haskell2010 was a success, I think it would be difficult to argue that Haskell98 was anything but a resounding success (even if you don't think the language was what it could have been!). Haskell98 stabilized the constant changes of the preceding 7 years. The stability brought with it books and courses, and the agreed-upon base of the language allowed _research_ to flourish as well. Having an agreed base allowed the multiple implementations to experiment with different methods of implementing what the standard laid out.

Many of us here learned from those texts or those courses. It's easy online to say that materials being out of date isn't a big deal, but it can turn people off the language when the code they paste into ghci doesn't work. We use Haskell for the compilers course at York; Haskell is the means, not the end, so having to update the materials frequently is a significant cost. It can be difficult to defend the choice of using Haskell when so much time is spent on something that 'isn't the point' of the course.

Does that mean that we should never change the language? Of course not, but this constant flux within Haskell is very frustrating. Maybe Haskell2010 wasn't what everyone wanted it to be, but that does not mean the _idea_ of a committee is without merit. Having controlled, periodic changes that are grouped together and thought through as a coherent whole is a very useful thing. One of the insights of the original committee was that there would always be one chair at any point in time. The chair of the committee had final say on any issue. This helped keep the revisions coherent and ensured that Haskell made sense as a whole.

Lastly, I'd like to quote Prof. Runciman from almost exactly 22 years ago when the issue of incompatible changes came up. His thoughts were similar to Johan's:

On 1993-10-19 at 14:12:30 +0100, Colin Runciman wrote:
As a practical suggestion, if any changes for version 1.3 could make some revision of a 1.2 programs necessary, let's have a precise stand-alone specification of these revisions and how to make them. It had better be short and simple. Many would prefer it to be empty. Perhaps it should be implemented in Haskell compilers?
Overall I don't see the rush for these changes; let's try putting our faith in a new Haskell committee, whoever it is comprised of.
Best wishes,
José Manuel
P.S. A year ago Prof. Hinze sent me some Miranda code of his from 1995 as I
was studying his thesis. I was able to run the code without issue, allowing
me to be more productive in my research ;-)
On Tue, Oct 6, 2015 at 2:29 PM, Gregory Collins
On Tue, Oct 6, 2015 at 1:39 PM, Tom Ellis < tom-lists-haskell-cafe-2013@jaguarpaw.co.uk> wrote:
In fact I think all of these apart from the FFI one could be solved with a -compat package, could they not?
Who cares? In practice, the programs break and I have to fix them. Most of the time, CPP is the lowest-friction solution -- if I rely on a -compat package, first I have to know it exists and that I should use it to fix my compile error, and then I've added an additional non-platform dependency that I'm going to have to go back and clean up in 18 months. Usually, to be honest, *actually* the procedure is that the new RC comes out and I get github pull requests from hvr@ :-) :-)
In response to the other person who asked "why do you want to support so many GHC versions anyways?" --- because I don't hate my users, and don't want to force them to run on the upgrade treadmill if they don't have to? Our policy is to support the last 4 major GHC versions (or 2 years, whichever is shorter). And if we support a version of GHC, I want our libraries to compile on it without warnings, I don't think that should mystify anyone.
-- Gregory Collins

José Manuel Calderón Trilla
Many of us here learned from those texts or those courses. It's easy online to say that materials being out of date isn't a big deal, but it can turn people off the language when the code they paste into ghci doesn't work.
FWIW, I am a newbie, and currently learning a lot from the web. I've had a different feeling. Whenever I read something like "This is historical, and should have been fixed, but isn't yet", it was these sentences that almost turned me off the language, because I had a feeling that there is some stagnation going on.
From the POV of a newbie, I am all for fixing historical hiccups. However, I realize that I have different priorities compared to long-time users and people needing to maintain a lot of existing code.
Just my 0.01€ -- CYa, ⡍⠁⠗⠊⠕

Many of my students are distracted by this too. This is why it is turned off and a practical type-class hierarchy is used to demonstrate concepts. As for use in production, it's been in use for coming up on 10 years. I will wait. On 08/10/15 18:21, Mario Lang wrote:
José Manuel Calderón Trilla
writes: Many of us here learned from those texts or those courses. It's easy online to say that materials being out of date isn't a big deal, but it can turn people off the language when the code they paste into ghci doesn't work.
FWIW, I am a newbie, and currently learning a lot from the web. I've had a different feeling. Whenever I read something like "This is historical, and should have been fixed, but isn't yet", it was these sentences that almost turned me off the language, because I had a feeling that there is some stagnation going on.
From the POV of a newbie, I am all for fixing historical hiccups. However, I realize that I have different priorities compared to long-time users and people needing to maintain a lot of existing code.
Just my 0.01€

(replying to no one in particular) This problem isn't specific to Haskell. In every other language, you have projects that support major versions of toolkits, compilers, libraries and whatnot. And there's already a tool for it: git. Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.

On Wed, Oct 07, 2015 at 01:34:21PM -0400, Michael Orlitzky wrote:
Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.
I suspect `cabal install your-library` without CPP would explode in the face of (some of) your users though.

On 10/07/2015 01:48 PM, Francesco Ariis wrote:
On Wed, Oct 07, 2015 at 01:34:21PM -0400, Michael Orlitzky wrote:
Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.
I suspect `cabal install your-library` without CPP would explode in the face of (some of) your users though.
The different branches would have different cabal files saying with which version of GHC they work. If the user has ghc-7.8 installed, then cabal-install (or at least, any decent package manager) should pick the latest version supporting ghc-7.8 to install.

On 07.10 13:34, Michael Orlitzky wrote:
(replying to no one in particular)
This problem isn't specific to Haskell. In every other language, you have projects that support major versions of toolkits, compilers, libraries and whatnot. And there's already a tool for it: git.
Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.
Isn't this hard to do correctly for libraries with Hackage and Cabal and narrowly versioned dependencies and deep import graphs? E.g. when adding a new feature to the library and merging it back to the ghc-7.8 branch the versions needed for features vs compiler support could end up causing complex dependency clauses for users of such libraries. - Taru Karttunen

On 10/07/2015 02:02 PM, Taru Karttunen wrote:
Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.
Isn't this hard to do correctly for libraries with Hackage and Cabal and narrowly versioned dependencies and deep import graphs?
It can be...
E.g. when adding a new feature to the library and merging it back to the ghc-7.8 branch the versions needed for features vs compiler support could end up causing complex dependency clauses for users of such libraries.
but can you think of an example where you don't have the same problem with #ifdef? If I need to use a new version of libfoo and the new libfoo only supports ghc-7.10, then I'm screwed either way right?

This sounds like the right approach. To whatever degree existing
tooling makes this difficult, maybe we can fix existing tooling?
On Wed, Oct 7, 2015 at 10:34 AM, Michael Orlitzky
(replying to no one in particular)
This problem isn't specific to Haskell. In every other language, you have projects that support major versions of toolkits, compilers, libraries and whatnot. And there's already a tool for it: git.
Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.

Michael Orlitzky
(replying to no one in particular)
This problem isn't specific to Haskell. In every other language, you have projects that support major versions of toolkits, compilers, libraries and whatnot. And there's already a tool for it: git.
Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.
I don't find this option terribly appealing. As a maintainer I would far prefer maintaining a bit of CPP than a proliferation of branches, with all of the cherry-picking and potential code divergence that they bring. Really though, the point I've been trying to make throughout is that I think that much of the CPP that we are currently forced to endure isn't even strictly necessary. With a few changes to our treatment of warnings and some new pragmas (aimed at library authors), we can greatly reduce the impact that library interface changes have on users. Cheers, - Ben

On Oct 8, 2015, at 9:55 AM, Ben Gamari
With a few changes to our treatment of warnings and some new pragmas (aimed at library authors), we can greatly reduce the impact that library interface changes have on users.
My loose following of the interweaved threads has led me to this same conclusion. Have you paid close enough attention to list exactly what these changes should be? I have not. But I'd love to find a general solution to the migration problem so that we can continue to tinker with our beloved language without fear of flames burning down the house. Richard

Hi, Am Donnerstag, den 08.10.2015, 13:05 -0400 schrieb Richard Eisenberg:
On Oct 8, 2015, at 9:55 AM, Ben Gamari
wrote: With a few changes to our treatment of warnings and some new pragmas (aimed at library authors), we can greatly reduce the impact that library interface changes have on users.
My loose following of the interweaved threads has led me to this same conclusion. Have you paid close enough attention to list exactly what these changes should be? I have not. But I'd love to find a general solution to the migration problem so that we can continue to tinker with our beloved language without fear of flames burning down the house.
how willing are we to make the compiler smarter and more complicated to make sure old code does what it originally meant to do, as long as it is among the (large number close to 100)% common case?

For example, in this case (as was suggested in some trac ticket, I believe), ignore a method definition for a method that
 * is no longer in the class,
 * where a normal function is in scope, and
 * these are obviously equivalent,
where obvious may mean various things, depending on our needs.

One could take this further: If the compiler sees a set of Applicative and Monad instances for the same type in the same module, and it has "return = something useful" and "pure = return", can’t it just do what we expect everyone to do manually and change that to "pure = something useful" and remove return?

In other words, can we replace pain and hassle for a lot of people by implementation work (and future maintenance cost) for one or a few people? Or will that lead to another hell where code no longer means what it says?

Greetings, Joachim -- Joachim “nomeata” Breitner mail@joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata@joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata@debian.org
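For concreteness, here is the shape of code in question, with a toy Box monad standing in for a real one; a sketch only of what people write today and of what the compiler would, under this idea, effectively rewrite it to, not actual GHC behaviour:

    module BoxExample where

    import Control.Monad (ap, liftM)

    newtype Box a = Box a

    instance Functor Box where
      fmap = liftM

    instance Applicative Box where
      pure  = return      -- the redundant "pure = return" mentioned above
      (<*>) = ap

    instance Monad Box where
      return x    = Box x -- the "something useful"
      Box x >>= k = k x

    -- What the rewrite would leave behind once return is gone from Monad:
    --
    --   instance Applicative Box where
    --     pure x  = Box x
    --     (<*>)   = ap
    --
    --   instance Monad Box where
    --     Box x >>= k = k x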

I like the idea of a separate translator that understands how to make the obvious changes required by such minor improvements (remove/add definitions of return, remove/add imports of Applicative, etc), and having that applied by cabal when the code is built.
Then library authors could write normal code conforming to *one* release, instead of figuring out the clever import gymnastics or CPP required to code simultaneously against several almost-compatible interfaces (or forking).
Some metadata in a cabal file could hopefully be all you need to allow your code to be adjustable forward or backward a few dialects.
On 9 October 2015 5:05:43 am AEDT, Joachim Breitner
Hi,
Am Donnerstag, den 08.10.2015, 13:05 -0400 schrieb Richard Eisenberg:
On Oct 8, 2015, at 9:55 AM, Ben Gamari
wrote: With a few changes to our treatment of warnings and some new pragmas (aimed at library authors), we can greatly reduce the impact that library interface changes have on users.
My loose following of the interweaved threads has led me to this same
conclusion. Have you paid close enough attention to list exactly what
these changes should be? I have not. But I'd love to find a general solution to the migration problem so that we can continue to tinker with our beloved language without fear of flames burning down the house.
how willing are we to make the compiler smarter and more complicated to make sure old code does what it originally meant to do, as long as it is among the (large number close to 100)% common case?
For example, in this case (as was suggested in some trac ticket, I believe), ignore a method definition for a method that * is no longer in the class and * where a normal function is in scope and * these are obviously equivalent where obvious may mean various things, depending on our needs.
One could think this further: If the compiler sees a set of Applicative and Monad instances for the same type in the same module, and it has "return = something useful" and "pure = return", can’t it just do what we expect everyone to do manually and change that to "pure = something useful" and remove return?
In other words, can we replace pain and hassle for a lot of people by implementation work (and future maintenance cost) for one or a few people?
Or will that lead to another hell where code does no longer mean what it says?
Greetings, Joachim
-- Joachim “nomeata” Breitner mail@joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata@joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata@debian.org

On 9/10/2015, at 7:05 am, Joachim Breitner
For example, in this case (as was suggested in some trac ticket, I believe), ignore a method definition for a method that * is no longer in the class and * where a normal function is in scope and * these are obvious equivalent where obvious may mean various things, depending on our needs. ... Or will that lead to another hell where code does no longer mean what it says?
One help for avoiding that hell is for the compiler never to deviate from the apparent semantics of the code without SHOUTING about what it's doing. There is also a big difference between the compiler adapting code written for an old interface and tampering with code written for a new interface, even if both involve the same actions on the compiler's part. The former may be helpful, the latter not.

I have long felt that version and dependency information should be in a source file. Arguably, the various language pragmas sort of do that for *language* aspects. When the compiler *knows* what language version a module was written for and which library versions, there are many possibilities:
- refuse to compile code that's too old
- quietly compile the code under the old rules
- noisily compile the code under the old rules
- quietly adapt the code to the new rules
- noisily adapt the code to the new rules

Speaking only for myself, dealing with the change in my own code is not going to be a big deal, because it affects the monads I *define*, not the monads I *use*. The main thing I need is very clear warning messages from the compiler.
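To make that last point concrete: a monad defined against the new hierarchy needs no return at all, since return becomes (and, since GHC 7.10, already defaults to) a synonym for pure. A toy sketch; the Logger type is made up for illustration:

    module LoggerExample where

    import Control.Monad (ap)

    newtype Logger a = Logger (a, [String])

    instance Functor Logger where
      fmap f (Logger (a, w)) = Logger (f a, w)

    instance Applicative Logger where
      pure a = Logger (a, [])
      (<*>)  = ap

    instance Monad Logger where
      -- No definition of return: it defaults to pure, which is all MRP
      -- ultimately asks of instance authors.
      Logger (a, w) >>= k = let Logger (b, w') = k a in Logger (b, w ++ w')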

On Oct 8, 2015, at 1:05 PM, Richard Eisenberg
My loose following of the interweaved threads has led me to this same conclusion. Have you paid close enough attention to list exactly what these changes should be? I have not. But I'd love to find a general solution to the migration problem so that we can continue to tinker with our beloved language without fear of flames burning down the house.
I should have been more explicit. I was more thinking of a multi-tiered warning system, where we decide, as a community, not to be embarrassed by warnings of severity less than X. When putting a DEPRECATED pragma on a definition, we could then give an indication of how soon we expect the definition to be gone. I was not thinking of the sort of "smart" behavior that Joachim wrote about. I want my programs to do exactly what I say -- compilers should only be so clever. Richard
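As a baseline, today's DEPRECATED pragma already carries only a free-text message, so a removal timeline can be stated there by convention; the tiered severities themselves would be new. A tiny sketch with invented names:

    module Sketch (oldName, newName) where

    -- Today's mechanism: one untiered pragma with a free-text message.
    {-# DEPRECATED oldName "Use newName instead; planned for removal in two major releases" #-}
    oldName :: Int -> Int
    oldName = newName

    newName :: Int -> Int
    newName = (+ 1)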

I would like to throw something into the discussion. In UHC's build system a tool called Shuffle is used: http://foswiki.cs.uu.nl/foswiki/Ehc/ShuffleDocumentation

It has three nice properties which I think could fit well with the problem:
- You can define variants of code with given numbers and names, and then ask the tool to produce the output file. Something a bit better than CPP itself.
- It understands some Haskell semantics. In particular, you can state functions imported or exported by each chunk (see http://foswiki.cs.uu.nl/foswiki/Ehc/ShuffleDocumentation#A_1.2_Output_specif...) and the tool takes care of building up the module declaration on top of the file.
- It ships with a hook to integrate with Cabal (https://hackage.haskell.org/package/shuffle-0.1.3.3/docs/Distribution-Simple...).

Just my two cents.
2015-10-09 3:31 GMT+02:00 Richard Eisenberg
On Oct 8, 2015, at 1:05 PM, Richard Eisenberg
wrote: My loose following of the interweaved threads has led me to this same
conclusion. Have you paid close enough attention to list exactly what these changes should be? I have not. But I'd love to find a general solution to the migration problem so that we can continue to tinker with our beloved language without fear of flames burning down the house.
I should have been more explicit. I was more thinking of a multi-tiered warning system, where we decide, as a community, not to be embarrassed by warnings of severity less than X. When putting a DEPRECATED pragma on a definition, we could then give an indication of how soon we expect the definition to be gone.
I was not thinking of the sort of "smart" behavior that Joachim wrote about. I want my programs to do exactly what I say -- compilers should only be so clever.
Richard

On 10/08/2015 09:55 AM, Ben Gamari wrote:
Michael Orlitzky
writes: (replying to no one in particular)
This problem isn't specific to Haskell. In every other language, you have projects that support major versions of toolkits, compilers, libraries and whatnot. And there's already a tool for it: git.
Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.
I don't find this option terribly appealing. As a maintainer I would far prefer maintaining a bit of CPP than a proliferation of branches, with all of the cherry-picking and potential code divergence that they bring.
It's really not that bad. You only need a separate branch when things are actually incompatible, so while you may support four versions of GHC, you might only need two branches. And the difference between the branches should be exactly what you have wrapped in an #ifdef right now:

a) If you're modifying something in an #ifdef, then the change only goes in one branch, and you don't have to cherry-pick anything.

b) If the change wouldn't fall within an #ifdef, then it affects code common to all your branches, and the merge will be trivial.

It's annoying to have to do that for every change, so don't. Keep a pile of common changes in your master branch, and then rebase/merge the other branches right before you make a release.

Right. With a nest of #ifdefs, you still have the same number of branches (at least - there may be some combinations of options that you don't support and it won't be obvious), they're just implicit and harder to work with.
On Thu, Oct 8, 2015 at 11:52 AM, Michael Orlitzky
On 10/08/2015 09:55 AM, Ben Gamari wrote:
Michael Orlitzky
writes: (replying to no one in particular)
This problem isn't specific to Haskell. In every other language, you have projects that support major versions of toolkits, compilers, libraries and whatnot. And there's already a tool for it: git.
Instead of using #ifdef to handle four different compilers, keep a branch for each. Git is designed to make this easy, and it's usually trivial to merge changes from the master branch back into e.g. the ghc-7.8 branch. That way the code in your master branch stays clean. When you want to stop supporting an old GHC, delete that branch.
I don't find this option terribly appealing. As a maintainer I would far prefer maintaining a bit of CPP than a proliferation of branches, with all of the cherry-picking and potential code divergence that they bring.
It's really not that bad. You only need a separate branch when things are actually incompatible, so while you may support four versions of GHC, you might only need two branches.
And the difference between the branches should be exactly what you have wrapped in an #ifdef right now:
a) If you're modifying something in an #ifdef, then the change only goes in one branch, and you don't have to cherry-pick anything.
b) If the change wouldn't fall within an #ifdef, then it affects code common to all your branches, and the merge will be trivial.
It's annoying to have to do that for every change, so don't. Keep a pile of common changes in your master branch, and then rebase/merge the other branches right before you make a release.

On 08.10 14:12, David Thomas wrote:
Right. With a nest of #ifdefs, you still have the same number of branches (at least - there may be some combinations of options that you don't support and it won't be obvious), they're just implicit and harder to work with.
Have you got an example of a library that works with branches for GHC-versions while maintaining feature-parity across branches with a versioning scheme that works in practice with Hackage? - Taru Karttunen

No, and I'm not sure just how well existing Hackage tooling/process matches the workflow (due mostly to ignorance of existing Hackage tooling/process). To the degree that there's a mismatch, it may have reason sufficient to abandon the approach - or it may suggest improvements to tooling/process.
On Thu, Oct 8, 2015 at 3:05 PM, Taru Karttunen
On 08.10 14:12, David Thomas wrote:
Right. With a nest of #ifdefs, you still have the same number of branches (at least - there may be some combinations of options that you don't support and it won't be obvious), they're just implicit and harder to work with.
Have you got an example of a library that works with branches for GHC-versions while maintaining feature-parity across branches with a versioning scheme that works in practice with Hackage?
- Taru Karttunen

2015-10-09 2:34 GMT+02:00 David Thomas
No, and I'm not sure just how well existing Hackage tooling/process matches the workflow (due mostly to ignorance of existing Hackage tooling/process). To the degree that there's a mismatch, it may have reason sufficient to abandon the approach - or it may suggest improvements to tooling/process.
To be honest, I can't really see how git can help with the versioning issue at all. Let's think backwards: In the end, you must give your compiler a single version of your code to work with. The common solution for this is to run something before the actual compilation stage (preprocessing), which picks the right parts from a single source. Let's assume that we don't want this preprocessing step; then you still have to give your compiler a single correct version of your sources. How do you want to accomplish that? Shipping separate source packages (be it via Hackage or through some other means) for each version?

This would be maintenance hell and much work, especially when there is no 1:1 correspondence between branches and compilers/libraries (as was proposed above). Furthermore, having to remember which stuff has been merged where, solving merge conflicts, introducing new branches when compilers/libraries change, etc. etc. requires extensive bookkeeping beyond git, to such an extent that larger companies normally have several people working full-time on such non-technical stuff. This is definitely not the way to go when you want to encourage people to work on their free time in Haskell.

In a nutshell: IMHO having #ifdefs in the code is the lesser evil. If somebody has a better, actually working idea, he can probably become a millionaire quickly by selling this to the industry...

On 10/09/2015 03:29 AM, Sven Panne wrote:
2015-10-09 2:34 GMT+02:00 David Thomas
wrote: No, and I'm not sure just how well existing Hackage tooling/process matches the workflow (due mostly to ignorance of existing Hackage tooling/process). To the degree that there's a mismatch, it may have reason sufficient to abandon the approach - or it may suggest improvements to tooling/process.
To be honest, I can't really see how git can help with the versioning issue at all. Let's think backwards: In the end, you must give your compiler a single version of your code to work with. The common solution for this is to run something before the actual compilation stage (preprocessing), which picks the right parts from a single source. Let's assume that we don't want this preprocessing step; then you still have to give your compiler a single correct version of your sources. How do you want to accomplish that? Shipping separate source packages (be it via Hackage or through some other means) for each version?
Via Hackage. Let's say my-package is at version 1.7.2 as far as features and bug fixes go; Hackage would then host my-package-1.7.2.708 and my-package-1.7.2.710. The former is uploaded by running cabal upload on branch ghc-7.8 of the source repository, the latter on branch ghc-7.10. The main difference between the branches would be in the .cabal file:

on ghc-7.8 branch

    version: my-package-1.7.2.708
    build-depends: ghc <= 7.8.*,

on ghc-7.10 branch

    version: my-package-1.7.2.710
    build-depends: ghc == 7.10.*,

You can do this with no extra tooling, the only extra work is to run seven commands instead of one at publishing time:

    git checkout ghc-7.8
    git merge master
    cabal upload
    git checkout ghc-7.10
    git merge master
    cabal upload
    git checkout master

Obviously it would be better to script this, or even to add support directly to cabal-install and stack.
This would be maintenance hell and much work, especially when there is no 1:1 correspondence between branches and compilers/libraries (as was proposed above). Furthermore, having to remember which stuff has been merged where, solving merge conflicts, introducing new branches when compilers/libraries change, etc. etc. requires extensive bookkeeping beyond git, to such an extent that larger companies normally have several people working full-time on such non-technical stuff. This is definitely not the way to go when you want to encourage people to work on their free time in Haskell.
I really don't see what could possibly cause all that complexity. Conflicts between different GHC branches and master development branch should be minimal, with any care.
In a nutshell: IMHO having #ifdefs in the code is the lesser evil. If somebody has a better, actually working idea, he can probably become a millionaire quickly by selling this to the industry...
The industry can be remarkably reluctant to take up a new idea; nobody has ever been fired for using #ifdef.

2015-10-09 14:57 GMT+02:00 Mario Blažević
[...] You can do this with no extra tooling, the only extra work is to run seven commands instead of one at publishing time:
git checkout ghc-7.8 git merge master cabal upload git checkout ghc-7.10 git merge master cabal upload git checkout master
Let's pray that no merge conflicts will happen. Furthermore, you've omitted the "git push"es to GitHub to let Travis CI tell you what you've forgotten. You'd better do that before an upload. Oops, and you forgot to tag the release, too (I mean 7 tags)...
[...] I really don't see what could possibly cause all that complexity. Conflicts between different GHC branches and master development branch should be minimal, with any care.
For toy stuff you might be right, but in any larger project I've seen in the last decades, keeping branches well maintained *is* non-trivial. Normally you already have branches for different releases of your SW you still have to maintain, and with the proposed approach you would effectively multiply the number of those branches by the number of supported platforms/compilers. Much fun proposing that to your manager... This might work if you have only linear development and few platforms, but otherwise it won't scale. Furthermore, having #ifdef-free code is a non-goal in itself: The goal we talk about is ease of maintenance, and as it's proposed, it makes a maintainer's life harder, not easier (you need much more steps + bookkeeping). And the merges e.g. only work automatically when the change in the #ifdef'd code would be trivial, too (adding/removing lines, etc.), so git doesn't offer any advantage.
In a nutshell: IMHO having #ifdefs in the code is the lesser evil. If
somebody has a better, actually working idea, he can probably become a millionaire quickly by selling this to the industry...
The industry can be remarkably reluctant to take up a new idea; nobody has ever been fired for using #ifdef.
If you can provably speed up development time and/or release cycles, selling new ideas is easy: Nobody has been fired for reaching a deadline too early. OTOH making some work easier and at the same time other work more complicated (as is the case in the proposal) will meet some resistance...

On 10/09/2015 09:42 AM, Sven Panne wrote:
For toy stuff you might be right, but in any larger project I've seen in the last decades, keeping branches well maintained *is* non-trivial. Normally you already have branches for different releases of your SW you still have to maintain, and with the proposed approach you would effectively multiply the number of those branches by the number of supported platforms/compilers. Much fun proposing that to your manager... This might work if you have only linear development and few platforms, but otherwise it won't scale.
Furthermore, having #ifdef-free code is a non-goal in itself: The goal we talk about is ease of maintenance, and as it's proposed, it makes a maintainer's life harder, not easier (you need much more steps + bookkeeping). And the merges e.g. only work automatically when the change in the #ifdef'd code would be trivial, too (adding/removing lines, etc.), so git doesn't offer any advantage.
It works, everyone else already does it, the complexity is there whether you branch or not, you'll never have merge conflicts, etc. I'm not sure how much effort you want me to expend convincing you to improve your life. I don't have this problem. Go try it instead of arguing that it can't possibly work.

On 09.10 08:57, Mario Blažević wrote:
Via Hackage. Let's say my-package is at version 1.7.2 as far as features and bug fixes go; Hackage would then host my-package-1.7.2.708 and my-package-1.7.2.710. The former is uploaded by running cabal upload on branch ghc-7.8 of the source repository, the latter on branch ghc-7.10. The main difference between the branches would be in the .cabal file:
on ghc-7.8 branch
version: my-package-1.7.2.708 build-depends: ghc <= 7.8.*,
on ghc-7.10 branch
version: my-package-1.7.2.710 build-depends: ghc == 7.10.*,
You can do this with no extra tooling, the only extra work is to run seven commands instead of one at publishing time:
git checkout ghc-7.8 git merge master cabal upload git checkout ghc-7.10 git merge master cabal upload git checkout master
And then GHC 8.0 is released and your library is broken until you update the cabal file or add a new branch. Which means that all libraries depending on your library refuse to build... This would mean that all libraries would need a new release on each GHC major version? Oh and testing your library on HEAD before release? Not supported? Using any library depending on your library before the release? - Taru Karttunen

On 10/09/2015 10:32 AM, Taru Karttunen wrote:
You can do this with no extra tooling, the only extra work is to run seven commands instead of one at publishing time:
git checkout ghc-7.8 git merge master cabal upload git checkout ghc-7.10 git merge master cabal upload git checkout master
And then GHC 8.0 is released and your library is broken until you update the cabal file or add a new branch. Which means that all libraries depending on your library refuse to build...
This would mean that all libraries would need a new release on each GHC major version? Oh and testing your library on HEAD before release? Not supported? Using any library depending on your library before the release?
This has nothing to do with git. You have that problem anyway. Don't want a version bound on your latest branch? Don't put one there.

On 15-10-09 10:32 AM, Taru Karttunen wrote:
On 09.10 08:57, Mario Blažević wrote:
... on ghc-7.10 branch
version: my-package-1.7.2.710 build-depends: ghc == 7.10.*,
...
And then GHC 8.0 is released and your library is broken until you update the cabal file or add a new branch. Which means that all libraries depending on your library refuse to build...
This would mean that all libraries would need a new release on each GHC major version? Oh and testing your library on HEAD before release? Not supported? Using any library depending on your library before the release?
Relax. Take a deep breath. The above branching scheme extends the PVP guidance to GHC as well; which is to say, it assumes every major GHC release to have a potential to break your package. But if you trust that GHC 8.0 will be backward compatible with 7.10 as far as your code goes, you can change the same .cabal file to
on ghc-7.10 branch
version: my-package-1.7.2.710 build-depends: ghc >= 7.10 && < 8.1,
and if you're confident that GHC will *never* break your code again, you can even have
on ghc-forever branch
version: my-package-1.7.2.0 build-depends: ghc >= 7.10

We write software in teams. Most software development (or at least a large fraction of it) requires that more than one person work on the same codebase simultaneously. This means that in most software development there will *necessarily* be code merges.

Every time we merge our work, on branch A, with that of others, on branch B, using git or most other tools we are doing a simple line-based merge on text files. This kind of merge can produce an output that introduces bugs that were not present on either branch A or B even if there are no conflicts. This is terrible! This "merge tool" can create bugs by itself out of thin air!

Can we have a better merge tool? Ideally, I would like a merge tool to work like a compiler: If there are no conflicts, the merge tool guarantees the merge introduces no new bugs (other than those already found in A or B). In other words, the output of the merge tool is "merge bug free", if there are no conflicts. Unfortunately, this is a rather tall order. Previous research found that for many languages it is impossible to write such a tool [1].

However, this may be asking the wrong question. Let's take a different point of view. Let's assume there will be software merges (and therefore that merge tools will be used in software development). Here are better questions to ask:

1. How does the desire for a "compiler-like" merge tool constrain language design? Can we design a language for which such a merge tool is possible?
2. If it is not possible to get the guarantee that no bugs will be introduced during the merge process, what guarantees can we get? Can we at least get that at the module level? Does that constrain the available design space?

Specializing those to Haskell:

1. Can we write a "compiler-like" merge tool for Haskell?
2. If not, can we tweak Haskell so that the existence of such a tool becomes possible? Can we at least get that at the module level?

Cheers,

Dimitri

[1] D. Binkley, S. Horwitz, and T. Reps. 1995. Program integration for languages with procedure calls. ACM Trans. Softw. Eng. Methodol. 4, 1 (January 1995), 3-35.

On Sun, Oct 11, 2015 at 4:09 AM, Dimitri DeFigueiredo wrote:
Can we write a "compiler-like" merge tool for Haskell?
Merge could be considered to be a combination of diff and patch. And so you might want to merge ASTs, which are typically families of datatypes. Related: Type-Safe Diff for Families of Datatypes: http://www.andres-loeh.de/gdiff-wgp.pdf Regards, Sean
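Not gdiff itself, but as a toy illustration of merging structure rather than text: a three-way merge over a tiny expression type that takes whichever side changed a subtree and reports a conflict instead of silently interleaving lines the way a textual merge can. A sketch only, with invented names:

    module Merge3 where

    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr
      deriving (Eq, Show)

    -- merge3 base a b: prefer the side that changed relative to base,
    -- recurse where both sides kept the same constructor, conflict otherwise.
    merge3 :: Expr -> Expr -> Expr -> Either String Expr
    merge3 base a b
      | a == b    = Right a
      | a == base = Right b
      | b == base = Right a
      | otherwise = case (base, a, b) of
          (Add x y, Add x' y', Add x'' y'') ->
            Add <$> merge3 x x' x'' <*> merge3 y y' y''
          (Mul x y, Mul x' y', Mul x'' y'') ->
            Mul <$> merge3 x x' x'' <*> merge3 y y' y''
          _ -> Left ("conflict: " ++ show (base, a, b))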

+1 for AST merge. Writing code from text is reasonable, but storing code as text is lunacy.
On Sat, Oct 10, 2015 at 11:17 PM, Sean Leather
On Sun, Oct 11, 2015 at 4:09 AM, Dimitri DeFigueiredo wrote:
Can we write a "compiler-like" merge tool for Haskell?
Merge could be considered to be a combination of diff and patch. And so you might want to merge ASTs, which are typically families of datatypes. Related: Type-Safe Diff for Families of Datatypes: http://www.andres-loeh.de/gdiff-wgp.pdf
Regards, Sean
-- Jeffrey Benjamin Brown

1. If you have a merge take place in two lines right next to each other, say one variable is declared in one and a function processes that variable in another, if those two are responsible together for exiting a loop, then you're going to be solving the Halting Problem to show there are no bugs.

2. If you have one function "f(x) = a + bx + cx^2 +..." and another function that iterates through integers until a solution is found in integers, then if the internals of either is changed (either the constants in f, or the search strategy), then you're going to have to be solving "Hilbert's 10th Problem" to understand that there are no bugs introduced, though this was interesting as the changes are "further" apart.

At least for the general case, this should be a problem for all Turing Complete languages. It might be interesting to think about finding sections of code that are "Total", and perhaps put certain guarantees on that, but there still might be weird ways that two lines of code can play with each other beyond that.

I hope this isn't overly pedantic, or that I'm not completely off base.
On Sun, Oct 11, 2015 at 8:10 PM, Jeffrey Brown
+1 for AST merge. Writing code from text is reasonable, but storing code as text is lunacy.
On Sat, Oct 10, 2015 at 11:17 PM, Sean Leather
wrote: On Sun, Oct 11, 2015 at 4:09 AM, Dimitri DeFigueiredo wrote:
Can we write a "compiler-like" merge tool for Haskell?
Merge could be considered to be a combination of diff and patch. And so you might want to merge ASTs, which are typically families of datatypes. Related: Type-Safe Diff for Families of Datatypes: http://www.andres-loeh.de/gdiff-wgp.pdf
Regards, Sean
-- Jeffrey Benjamin Brown

I think both these examples strengthen my argument! :-) There is no problem at all in the second, and the first one precisely shows us a situation where we would like to know there is a conflict.

The guarantee I want is that when merging 2 branches A and B: *If there are no conflicts*, the merge tool guarantees the merge output introduces no new bugs *other than those already found in A or B*. Note: 1. we get to define what a conflict is and 2. we don't care if the bugs were already present in either branch.

On the first example, I assume both branches A and B will compile and run before the merge, so that we are really only changing the value of the variable on branch A and the definition of the function on branch B (otherwise the branches would not compile before the merge). Because the definition of the function depends on a *specific* value of the variable, there is no way to avoid declaring a conflict. But notice that in the more general case, where the function takes that value as a parameter, there is no conflict, because if there were a bug it would already be present in branch B. So, the merge process is not introducing any bugs by itself.

This is exactly what happens in the second example. I assume branch A provides the new definition of f(x) and that in branch B we change the search strategy in the search function g. However, I assume that g would take f as a parameter, g :: (Integer -> Integer) -> Integer. In this case, if there were a bug in g in the merged output, then it would already be present in branch B before the merge. Therefore, there is no conflict at all here.

In summary, there is no conflict on the second example and I would like to know about the conflict due to the global dependency on the first one.

Dimitri

On 10/11/15 6:58 PM, Charles Durham wrote:
[...]
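As mentioned above, the parameterised shape I have in mind is roughly the following (my own sketch, not code from either branch):

    -- the search strategy takes the function as a parameter, so it makes no
    -- assumption about any particular f
    g :: (Integer -> Integer) -> Integer
    g f = head [ n | n <- [0 ..], f n == 0 ]   -- diverges if f has no non-negative root

    -- branch A only touches f, branch B only touches g; the argument is that any
    -- bug in the merged (g f) would already have been a bug of branch B's g for
    -- *some* argument
    f :: Integer -> Integer
    f x = x ^ 2 - 5 * x + 6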

Hmm, I think for Case 2 there is a situation where there is no bug in A or B, but there is a bug in the merge. If the search strategy changes from s to s', and the function definition changes from f to f', then I can come up with a scenario where both s and s' work for f, and s works for both f and f', but s' doesn't work for f'. I believe that Case 1 is pretty much the same. (A concrete instance is sketched at the end of this message.)

On Mon, Oct 12, 2015 at 3:20 PM, Dimitri DeFigueiredo <defigueiredo@ucdavis.edu> wrote:
[...]
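Here is one concrete way that scenario can play out (all definitions below are invented for illustration):

    type Strategy = (Integer -> Integer) -> Maybe Integer

    s, s' :: Strategy
    s  h = lookup 0 [ (h n, n) | n <- [0 ..] ]      -- original: unbounded search
    s' h = lookup 0 [ (h n, n) | n <- [0 .. 50] ]   -- "optimised" bound, still fine for f

    f, f' :: Integer -> Integer
    f  x = x - 2    -- root at 2:  s f  == Just 2,  s' f  == Just 2
    f' x = x - 70   -- root at 70: s f' == Just 70, but s' f' == Nothing

Branch A's change (f to f') is fine under the old strategy, branch B's change (s to s') is fine for the old function, yet the merged combination of s' with f' fails.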

Hi,

I work on a team of ~15 developers and we follow a merge-free process using rebasing. Although in theory it sounds like you'll get the same problems with rebasing as with merging, over 6 years and 100,000+ commits I'm struggling to think of a single bug silently introduced due to a rebase that succeeded without conflicts. It's a cunning bit of social engineering: when merging the work of two developers, it's easy for both of them to disclaim responsibility for any bugs introduced by the merge, but when rebasing the work of developer A on top of that of developer B it's clear that any bugs are dev A's responsibility, as their commits come later in the published history than dev B's, and this gives dev A the incentive to check for semantic conflicts properly.

That said, it's theoretically possible to introduce bugs with a 'clean' rebase, just as with merging, and I'm interested in your suggestion because of that. The difficulty I see is defining what you mean by "no new bugs". The two branches satisfy two different specifications, and my reading of "no new bugs" is that the merge result satisfies a merged specification. It sounds difficult to automatically merge two specs together in general, although in practice specs are often quite informal, and where a rebase looks questionable a merged spec can be invented and verified with human intervention, remembering that the rebasing developer has a personal incentive to get this right.

There are two kinds of formal spec that I know of in common usage in the industry: types and tests. Using a type system gives you a bunch of theorems about what your program can and cannot do, and this is checked at compile time; an automated test suite is basically a bunch of properties that your program satisfies, which can be verified by running the test suite. Neither of these can capture the real spec exactly, but in practice they turn out to be good enough for many purposes. For those cases where that's not enough you can bring out the heavy artillery of a theorem prover, which does let you write down the real spec exactly and verify it either side of a merge or rebase. I wouldn't say that approach was in common usage, but it does get used where the investment is worth it.

Verifying that changes to the type system leave the real spec satisfied sounds quite hard in general. Merging additions to the test suite sounds easy, but other changes to the part of the spec embodied in the test suite also sound hard in general. Of the three, merging changes to an automatically-proved theorem actually feels easiest - my experience is that keeping a (well-written) automatic proof in sync with changes to code is often not that hard, and when it is hard it's normally been because the changes to the code meant the theorem was no longer true.

Perhaps enriching the type system to a point where you can write truly useful specs in it is the answer? Thus dependent types.

Cheers,

David

The 6 years and 100K+ commits data point is pretty impressive! It makes me question how much effort should be devoted to avoiding these "merge-introduced" bugs in the first place. I assume this data point does not capture bugs introduced by merges that were found by unit tests, though.

I hadn't looked at how to actually go about writing the "bug-free" merge tool (if it becomes possible to write one). However, I think my immediate approach would be different from your suggestion. I was thinking that we could just look at the dependency graph between the definitions.

Consider 2 possible (previously suggested) merge scenarios where we define both a variable and a function. In both scenarios:
- Branch A changes the "variable" (but does not change the function)
- Branch B changes the function

The scenarios are different because of the dependency between the function and the variable:
1. Function definition is independent of variable definition: https://gist.github.com/dimitri-xyz/f0819c947ecd386b84d6
2. Function definition depends on variable definition: https://gist.github.com/dimitri-xyz/81a56b0ce0929e8513e9

Intuitively, I would like to have a merge conflict in scenario 2, but none in scenario 1. That suggests the following rule: we have a conflict iff a definition *that was changed* in branch B depends (transitively) on a definition *that was changed* in branch A, or vice-versa. (A rough sketch of this rule is at the end of this message.)

In scenario 1, the definition of the function does not depend on the definition of the variable, so there is no conflict. Notice that 'main' depends on both, but it hasn't changed. In scenario 2, the definition of the function has changed and it depends on the definition of the variable, so we would have a conflict.

I understand this may not be sufficient to ensure no bugs are introduced, but this is my first approximation. The rule just captures the notion that programmers shouldn't have their assumptions pulled from under their feet as they make changes. I wonder if/how I could claim that 'main' (which depends on both the function and the variable) is "merge-bug free" in the merged version.

Cheers,

Dimitri

On 10/12/15 2:16 PM, David Turner wrote:
[...]
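As mentioned above, here is a rough sketch of that conflict rule over a dependency graph (entirely my own illustration; the Name/DepGraph representation and helper names are made up):

    import qualified Data.Map as Map
    import qualified Data.Set as Set

    type Name     = String
    type DepGraph = Map.Map Name [Name]   -- definition -> definitions it uses directly

    -- all definitions reachable from n, including n itself
    reachable :: DepGraph -> Name -> Set.Set Name
    reachable g = go Set.empty
      where
        go seen n
          | n `Set.member` seen = seen
          | otherwise = foldl go (Set.insert n seen) (Map.findWithDefault [] n g)

    -- conflict iff something changed in one branch (transitively) depends on
    -- something changed in the other branch, or vice-versa
    conflict :: DepGraph -> Set.Set Name -> Set.Set Name -> Bool
    conflict g changedA changedB =
        touches changedA changedB || touches changedB changedA
      where
        touches changedHere changedThere =
          any (\n -> not (Set.null (reachable g n `Set.intersection` changedThere)))
              (Set.toList changedHere)

In the two scenarios described above, this gives False for scenario 1 (the changed function does not reach the changed variable) and True for scenario 2.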

2015-10-13 22:08 GMT+02:00 Dimitri DeFigueiredo
[...] I hadn't looked at how to actually go about writing the "bug-free" merge tool (if it becomes possible to write one).
Isn't the whole approach limited by Rice's theorem? (https://en.wikipedia.org/wiki/Rice%27s_theorem) In general we have to deal with partial functions, and the property you're talking about is definitely not trivial in the theorem's sense. So this implies 2 choices:

* You don't detect all problems caused by merging (no point in putting effort into that approach, it's basically what we have today with plain 'git merge').
* You have false positives, i.e. the merge tool complains although the merge is OK.

So the best you can do AFAICT is to keep the number of false positives low, hoping that people will see a benefit for the added annoyances.

Yes, you're quite right, it's not uncommon for the result of a rebase
either not to compile or else to fail some set of tests. But then that's
also true of all code changes even by a single developer in isolation.
A dependency analysis is an interesting idea (indeed, it's one of the tools
we use to work out what the rough consequences of a change will be) but
risks both false positives and false negatives. On the false positive side,
as you observe, there'll often be something like 'main' which transitively
depends on two parallel changes. On the false negative side, you can get
conflicting changes that occur in separate programs - for instance the two
sides of a network protocol - and a naive dependency analysis will miss
this kind of problem. Heck, the changes could be in different languages
(think browser-side JS talking via AJAX to server-side Haskell). Or even in
separate repositories, possibly even developed by separate organisations,
so you can't even tell you've done a merge let alone tell that it contains
conflicts!
Because of this sort of reasoning, I'm sceptical that the problem of
'bugs-introduced-by-merges' is really any easier than the general problem
of 'bugs'. It's a continuum, not a clearly-defined subclass of bugs.
I realised that there's another way to justify the unreasonable
effectiveness of rebasing over merging. Assuming (conscientious) developers
A & B making parallel changes:
A1 -> A2 -> .... -> An
B1 -> B2 -> .... -> Bm
Each individual change can be seen to be correct, because that's how
conscientious developers work. But then when it comes to merging, from
their individual points of view it looks like:
A1 -> A2 -> .... -> An -> [all of B's changes in one go]
B1 -> B2 -> .... -> Bm -> [all of A's changes in one go]
It's very hard for either A or B to reason about the correctness of that
last step because it's such a big change. On the other hand, rebasing B's
changes on top of A's looks like this:
A1 -> A2 -> .... -> An -> B1' -> B2' -> .... -> Bm'
Then B can go through their rebased changes and use pretty much the same
reasoning as they were using before to check that they're still correct.
Cheers,
On 13 October 2015 at 21:08, Dimitri DeFigueiredo wrote: [...]

2015-10-14 10:14 GMT+02:00 David Turner
[...] It's very hard for either A or B to reason about the correctness of that last step because it's such a big change. On the other hand, rebasing B's changes on top of A's looks like this:
A1 -> A2 -> .... -> An -> B1' -> B2' -> .... -> Bm'
Then B can go through their rebased changes and use pretty much the same reasoning as they were using before to check that they're still correct.
And what's even better: If you have a sane set of tests, you can easily find the exact commit where things went wrong in an automated way via 'git bisect'. This is the reason why some companies with large code bases totally ban merge commits and rely on rebasing exclusively. You can even go a step further by e.g. trying to revert the guilty commit and see if things work again without it, again in a totally automated way.

On 14 October 2015 at 11:51, Sven Panne wrote: [...]
I'm not sure why you're saying this is impossible with merges; I've done it several times. Git will find the right branch where things went wrong, and then finds the commit on that branch without problems. Erik

2015-10-14 13:44 GMT+02:00 Erik Hesselink
I'm not sure why you're saying this is impossible with merges; I've done it several times. Git will find the right branch where things went wrong, and then finds the commit on that branch without problems.
Well, it might be the case that 'git bisect' alone works, but if you've got lots of tooling sitting on top of your version control (e.g. bots measuring and visualising performance, detecting regressions, etc.), you have a much easier time with a linear history than a DAG-shaped one.

On 14 October 2015 at 15:03, Sven Panne wrote: [...]
This is getting a bit off topic, but anyway: I'm not even sure about that. What if your tools find a regression and you want to revert it? If it's part of a big rebased branch, this is tricky because the whole feature might depend on it. If it's a merge, though, you can just revert the merge. Erik

2015-10-14 15:15 GMT+02:00 Erik Hesselink
This is getting a bit off topic, but anyway: I'm not even sure about that. What if your tools finds a regression, and you want to revert it. If it's part of a big rebased branch, this is tricky because the whole feature might depend on it. If it's a merge, though, you can just revert the merge.
OK, now we're really off-topic, but anyway: when you revert the merge (which brought in lots of commits at once), you still have no idea which individual commit caused the regression. Perhaps there isn't even a regression on your branch at all, only after the merge. Been there, done that... :-P

When you have a multi-million-line project with hundreds of people working on it, when your bots can't keep up with the constant stream of commits (so they typically batch things, doing more detailed runs later when there is time, if any), and when you really have to rely on some kind of continuous integration, things get really complicated. So I understand the policy of taking one problem out of the way, namely non-linear history. That's at least what I've experienced, but your mileage may vary, of course.

On Mon, 5 Oct 2015, Gregory Collins wrote:
Um, no, it usually isn't anything like that. Here's a sampling of some of the things I've used CPP for in the past few years: * After GHC 7.4, when using a newtype in FFI imports you need to import the constructor, i.e. "import Foreign.C.Types(CInt(..))" --- afaik CPP is the only way to shut up warnings everywhere
I import them qualified and then define type CInt = CTypes.CInt sometimes.
* defaultTimeLocale moved from System.Locale to Data.Time.Format in time-1.5 (no compat package for this, afaik)
I was advised to always import time-1.5.
* one of many various changes to Typeable in the GHC 7.* series (deriving works better now, mkTyCon vs mkTyCon3, etc)
I have not used that. Like I have not used the ever changing Template Haskell stuff.
* Do I have to hide "catch" from Prelude, or not? It got moved, and "hiding" gives an error if the symbol you're trying to hide is missing. Time to break out the CPP (and curse myself for not just using the qualified import in the first place)
I think System.IO.Error.catchIOError maintains the old behaviour.
* Do I get monoid functions from Prelude or from Data.Monoid? Same w/ Applicative, Foldable, Word. I don't know where anything is supposed to live anymore, or which sequence of imports will shut up spurious warnings on all four versions of GHC I support, so the lowest-friction fix is: break out the #ifdef spackle
I always import them from the most specific module. Where GHC-7.10 Prelude causes conflicts I even started to import more basic Prelude stuff from modules like Data.Bool, Data.Eq and friends. Horrible, but works without CPP.
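For reference, the CPP route that I am avoiding here typically looks something like this (a sketch of mine; the MIN_VERSION_base macro assumes the package is built with Cabal):

    {-# LANGUAGE CPP #-}
    module Example (twoLines) where

    #if !MIN_VERSION_base(4,8,0)
    -- before base-4.8 (GHC < 7.10) these operators are not exported by the Prelude
    import Control.Applicative ((<$>), (<*>))
    #endif

    twoLines :: IO (String, String)
    twoLines = (,) <$> getLine <*> getLine

With base >= 4.8 the guarded import disappears, so -Wall stays quiet on both old and new compilers.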
* ==# and friends return Int# instead of Bool after GHC 7.8.1
I did not use it. I have also warned that the change might not be a good idea since LLVM knows that its 'i1' type can only have the values 0 and 1 and it does not know that for 'i32' or 'i64'.
* To use functions like "tryReadMVar", "unsafeShiftR", and "atomicModifyIORef'" that are in recent base versions but not older ones (this is a place where CPP use is actually justified)
I have not used them so far. I have solved more complicated cases with conditional Hs-Source-Dirs: http://wiki.haskell.org/Cabal/Developer-FAQ#Adapt_to_different_systems_witho... It's cumbersome but so far I managed most transitions without CPP. (Nonetheless, MRP would require the complicated Hs-Source-Dirs solution for far too many packages.)

All I am saying is that these various *compat packages manage to encapsulate the ifdefs to some degree. Here's the record of my struggle [1]. Then came Edward K. and told me to use transformers-compat [2]. Then Adam B. chimed in suggesting mtl-compat [3]. That's it: all my IFDEFs related to this topic were gone. I looked at base-compat [4] and it's very interesting.

The big question is: **why can't normal base be written the same way as base-compat?** It would then automatically provide compatibility across all GHC versions. (A small usage sketch is at the end of this message.)

Michał

[1] https://www.reddit.com/r/haskell/comments/3gqqu8/depending_on_both_mtl2131_f...
[2] https://www.reddit.com/r/haskell/comments/3gqqu8/depending_on_both_mtl2131_f...
[3] https://www.reddit.com/r/haskell/comments/3gqqu8/depending_on_both_mtl2131_f...
[4] https://hackage.haskell.org/package/base-compat

On Mon, Oct 5, 2015 at 6:18 PM, Bryan Richter wrote:
On Mon, Oct 5, 2015 at 06:43-0700, mantkiew wrote:
Well, there are the *compat packages:
Base-compat Transformers-compat Mtl-compat
Etc. They do centralize the ifdefs and give you compatibility with GHC 7.*. I recently adopted the last two ones and they work like a charm. I am yet to adopt base-compat, so I don't know what the experience is with it.
Hang on a moment, are you saying that all the people writing to argue that these changes would require them to write dozens more #ifdef's actually don't have to write any at all? I never knew what the *-compat packages were all about. If that's what they're designed to do, I have a feeling they have not gotten *nearly* enough exposure.
[Apologies for possible cross-posting; this thread jumped into my inbox from I-know-not-where and already has half a dozen CCs attached.]
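As promised above, using base-compat in a module looks roughly like this (a sketch only; Prelude.Compat is the drop-in Prelude that the base-compat package provides):

    {-# LANGUAGE NoImplicitPrelude #-}
    import Prelude.Compat   -- from the base-compat package

    -- names that moved into the Prelude over the GHC 7.x series (Applicative,
    -- foldMap, traverse, Word, ...) are re-exported here with the same meaning
    -- on every supported base, so this module needs no version #ifdefs
    sequencePair :: Applicative f => (f a, f b) -> f (a, b)
    sequencePair (fa, fb) = (,) <$> fa <*> fb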

On 2015-10-05 at 15:27:53 +0200, Sven Panne wrote:
[...]
I think that part of the reason we have seen these changes occur in a "constant flux" rather than in bigger coordinated chunks is that faith in the Haskell Report process was (understandably) abandoned. And without the Haskell Report as some kind of "clock generator" with which to align/bundle related changes into logical units, changes occur whenever they're proposed and agreed upon (which may take several attempts, as we've seen with the AMP and others).

I hope that the current attempt to revive the Haskell Prime process will give us a chance to clean up the unfinished intermediate `base-4.8` situation we're left with now after AMP, FTP et al, as the next Haskell Report revision provides us with a milestone to work towards.

That being said, there's also the desire to have changes field-tested by a wide audience on a wide range of code before integrating them into a Haskell Report. Also, I'm not sure if there would be fewer complaints if AMP/FTP/MFP/MRP/etc as part of a new Haskell Report were switched on all at once in e.g. `base-5.0`, breaking almost *every* single package out there at once.

For language changes we have a great way to field-test new extensions before integrating them into the Report via `{-# LANGUAGE #-}` pragmas in a nicely modular and composable way (i.e. a package enabling a certain pragma doesn't require other packages to use it as well), which has proven to be quite popular.

However, for the library side we lack a comparable mechanism at this point. The closest we have, for instance, to support an isolated Haskell2010 legacy environment is to use RebindableSyntax, which IMO isn't good enough in its current form[1]. And then there's the question whether we want a Haskell2010 legacy environment that's isolated or rather shares the types & typeclasses w/ `base`. If we require sharing types and classes, then we may need some facility to implicitly instantiate new superclasses (e.g. implicitly define Functor and Applicative if only a Monad instance is defined); a sketch of the boilerplate this would remove is below. If we don't want to share types & classes, we run into the problem that we can't easily mix packages which depend on different revisions of the standard library (e.g. one using `base-4.8` and others which depend on a legacy `haskell2010` base-API). One way to solve this could be to mutually exclude depending on both `base-4.8` and `haskell2010` in the same install-plan (assuming `haskell2010` doesn't itself depend on `base-4.8`).

In any case, I think we will have to think hard about how to address language/library change management in the future, especially if the Haskell code-base continues to grow. Even just migrating the code base between Haskell Report revisions is a problem. An extreme example is the Python 2->3 transition which the Python ecosystem is still suffering from today (afaik). Ideas welcome!

[1]: IMO, we need something to be used at the definition site providing desugaring rules, rather than requiring the use-site to enable a generalised desugaring mechanism; I've been told that Agda has an interesting solution to this in its base libraries via {-# LANGUAGE BUILTIN ... #-} pragmas.

Regards, H.V.Riedel
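To illustrate the kind of boilerplate such a facility would make implicit (a made-up example, not part of any concrete proposal):

    import Control.Monad (ap, liftM)

    newtype Identity a = Identity { runIdentity :: a }

    -- what an "implicit superclass instance" facility could generate from the
    -- Monad instance alone; today every package writes (or CPP-guards) it by hand
    instance Functor Identity where
      fmap = liftM

    instance Applicative Identity where
      pure  = Identity
      (<*>) = ap

    instance Monad Identity where
      return = pure
      Identity x >>= k = k x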

Thanks for splitting this off, as it really deserves its own conversation.

% find cryptol -name '*.hs' | xargs egrep '#if.*(MIN_VERSION)|(GLASGOW_HASKELL)' | wc -l
49
% find saw-script -name '*.hs' | xargs egrep '#if.*(MIN_VERSION)|(GLASGOW_HASKELL)' | wc -l
242

I introduced most of these in order to accommodate AMP, and now I learn that there is another proposal that is considered to be part-and-parcel with AMP, where I will have to make yet more changes to the same code and presumably introduce another layer of #ifdefs. As proposed, I will spend 2*n hours implementing, testing, and releasing these changes. Had both changes been bundled, it would have been 2*(n+ε).

Also I'm not sure if there would be less complaints if AMP/FTP/MFP/MRP/etc as part of a new Haskell Report would be switched on all at once in e.g. `base-5.0`, breaking almost *every* single package out there at once.
I doubt the number of complaints-per-change would be fewer, but I'm
strongly in favor of moving away from what feels like a treadmill that
doesn't value the time of developers and that doesn't account for the
more-than-sum-of-parts cost of the "constant flux".
Thanks,
Adam
On Mon, Oct 5, 2015 at 7:32 AM, Herbert Valerio Riedel wrote: [...]

On Mon, Oct 5, 2015 at 5:23 PM, Adam Foltzer wrote:
Also I'm not sure if there would be less complaints if AMP/FTP/MFP/MRP/etc as part of a new Haskell Report would be switched on all at once in e.g. `base-5.0`, breaking almost *every* single package out there at once.
I doubt the number of complaints-per-change would be fewer, but I'm strongly in favor of moving away from what feels like a treadmill that doesn't value the time of developers and that doesn't account for the more-than-sum-of-parts cost of the "constant flux".
Broadly speaking, I'm a "fix it now rather than later" sort of person in Haskell because I've seen how long things can linger before finally getting fixed (even when everyone agrees on what the fix should be and agrees that it should be done). However, as I mentioned in the originating thread, I think that —at this point— when it comes to AMP/FTP/MFP/MRP/etc we should really aim for the haskell' committee to work out a comprehensive solution (as soon as possible), and then enact all the changes at once when switching to Haskell201X/base-5.0/whatevs. I understand the motivations for wanting things to be field-tested before making it into the report, but I don't think having a series of rapid incremental changes is the correct approach here. Because we're dealing with the Prelude and the core classes, the amount of breakage (and CPP used to paper over it) here is much higher than our usual treadmill of changes; so we should take that into account when planning how to roll the changes out. -- Live well, ~wren

Having so many #ifdefs isn't by itself a major problem. Yes, it does introduce a small increase in compilation time and the size of the codebase. The real cost is the developer time: every developer has to come up with these #ifdef clauses from scratch for every change that gets made, tailored to their specific code. As more and more get added, it becomes more and more of a confusing mess.

It makes me wonder if this can be automated somehow. It would be nice to have a mechanism to alleviate this cost so that most developers downstream (provided that the code was written in a reasonable manner) only need to make a minimal effort to keep up, while still being able to write code that works for a reasonably large range of GHC versions. The burden of breaking changes right now is on the downstream developers, but perhaps there could be a way to shift most of that upstream to avoid this large duplication of effort.

Haskell should be allowed to evolve, but there also needs to be a counterweight mechanism that provides stability in the face of constant changes. It would be something similar in spirit to base-compat, but I don't think a library package alone is powerful enough to solve the problem: a missing 'return', for example, is not something a library can just patch in.

I don't have any preference for "lots of small changes" vs "one big change": in the former, there is a lot of overhead needed to keep track of and fix these small changes; in the latter, there is a risk of introducing a rift that fragments the community (cf. Python 2 vs 3). Maybe something in-between would be best.

Another problem with #ifdefs (especially machine-generated ones) is that it makes code much harder to read. One of the things I love about Haskell is the ability to read code and literally see an author describe how they're thinking about the domain. #ifdefs make life less fun :) Tom
On 5 Oct 2015, at 21:00, Phil Ruffwind wrote: [...]

On October 5, 2015 at 6:00:00 AM, Simon Thompson (s.j.thompson@kent.ac.uk) wrote:
[...]
Hi Simon. I do in fact think this is provocative :-P

I want to object here to your characterization of what has been going on as “simply throwing in changes”. The proposal seems very well and carefully worked through to provide a good migration strategy, even planning to alter the source of GHC to ensure that adequate hints are given for the indefinite transition period.

I also want to object to the idea that these changes would have “a profound impact on systems”. As it stands, and I think this is an important criterion in any change, when “phase 2” goes into effect, code that has compiled before may cease to compile until a minor change is made. However, code that continues to compile will continue to compile with the same behavior.

Now as to the process itself, this is a change to core libraries. It has been proposed on the libraries list, which seems appropriate, and a vigorous discussion has ensued. This seems like a pretty decent process to me thus far. Do you have a better one in mind?

—Gershom

P.S. As a general point, I sympathize with concerns about breakage resulting from this, but I also think that the migration strategy proposed is good, and if people are concerned about breakage I think it would be useful if they could explain where they feel the migration strategy is insufficient to allay their concerns.

I would like to suggest that the bar for breaking all existing libraries, books, papers, and lecture notes should be very high; and that the benefit associated with such a breaking change should be correspondingly huge. This proposal falls far short of both bars, to the extent that I am astonished and disappointed it is being seriously discussed – and to general approval, no less – on a date other than April 1. Surely some design flaws have consequences so small that they are not worth fixing. I'll survive if it goes through, obviously, but it will commit me to a bunch of pointless make-work and compatibility ifdefs. I've previously expressed my sense that cross-version compatibility is a big tax on library maintainers. This proposal does not give me confidence that this cost is being taken seriously. Thanks, Bryan.
On Oct 5, 2015, at 7:32 AM, Gershom B wrote: [...]

On October 5, 2015 at 10:59:35 AM, Bryan O'Sullivan (bos@serpentine.com) wrote:
I would like to suggest that the bar for breaking all existing libraries, books, papers, and lecture notes should be very high; and that the benefit associated with such a breaking change should be correspondingly huge.
My understanding of the argument here, which seems to make sense to me, is that the AMP already introduced a significant breaking change with regards to monads. Books and lecture notes have largely not caught up to this yet. Hence, by introducing a further change which _completes_ the general AMP project, by the time books and lecture notes are all updated they will be able to tell a much nicer story than the current one.

As for libraries, it has been pointed out, I believe, that without CPP one can write instances compatible with AMP, and also with AMP + MRP (a sketch of such an instance is below). One can also write code, sans CPP, compatible with pre- and post-AMP. So the reason for choosing not to do MRP simultaneously with AMP was precisely to allow a gradual migration path where, sans CPP, people could write code compatible with the last three versions of GHC, as the general criterion has been.

So without arguing the necessity or not, I just want to weigh in with a technical opinion that if this goes through, my _estimation_ is that there will be a smooth and relatively painless migration period, the sky will not fall, good teaching material will remain good, those libraries that bitrot will tend to do so for a variety of reasons more significant than this, etc.

It is totally reasonable to have a discussion on whether this change is worth it at all. But let’s not overestimate the cost of it just to further tip the scales :-)

—gershom
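For concreteness, the CPP-free pattern referred to above looks roughly like this on a post-AMP compiler (my own sketch; pre-AMP GHCs would still need an explicit 'return'):

    import Control.Monad (ap, liftM)

    newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

    instance Functor Parser where
      fmap = liftM

    instance Applicative Parser where
      pure x = Parser (\s -> Just (x, s))
      (<*>)  = ap

    -- no 'return' (or '>>') defined here: post-AMP it defaults to 'pure', and
    -- the very same instance keeps compiling if MRP later removes 'return'
    -- from the Monad class altogether
    instance Monad Parser where
      Parser p >>= k = Parser (\s -> case p s of
                                       Nothing      -> Nothing
                                       Just (x, s') -> runParser (k x) s')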

Perhaps we should weigh the +1 and -1s in this thread with the number of
lines of Haskell written by the voter? ;)
On Mon, Oct 5, 2015 at 5:09 PM, Gershom B wrote: [...]

On Mon, 5 Oct 2015, Johan Tibell wrote:
Perhaps we should weigh the +1 and -1s in this thread with the number of lines of Haskell written by the voter? ;)
My preferred measure would be the number of Haskell packages hosted at hub.darcs.net. :-)

+1 I think this idea is good and should not be taken lightly. I'm a newcomer to the community and currently hold a grand total of *zero* open source contributions. Obviously, I would like to change this soon, but I think it is very *unfair* and makes absolutely no sense to have the standard one person one vote rule for decisions involving the libraries. Let the code produced vote. Maybe weight them by downloads? Dimitri On 10/5/15 9:12 AM, Johan Tibell wrote:
Perhaps we should weigh the +1 and -1s in this thread with the number of lines of Haskell written by the voter? ;)
On Mon, Oct 5, 2015 at 5:09 PM, Gershom B
wrote: [...]

I think code-based weighting should not be considered at all. There's a
non-trivial amount of proprietary code that wouldn't be counted, and likely
can't be accounted for accurately. I have some unknown amount of code at my
previous employer (which does include Monad instances) that now has to be
maintained by others. I may have said +1 (and I'm reconsidering), but the
current maintainers probably don't want the extra make-work this involves.
There's an argument that this proposal involves very little up-front
breakage. This is true but disingenuous, because the breakage will still
happen in the future unless changes are made and committed. The entire
maintenance load should be considered, not just the immediate load.
Bryan, nothing I've seen since I started using Haskell makes me think that
the libraries committee or GHC devs value backwards compatibility much at all.
My favorite example is RecursiveDo, which was deprecated in favor of DoRec,
only for that to be reversed in the next ghc release. Of course that was
more irritating than breaking, but it's indicative of the general value
placed on maintenance programming.
On 12:28, Mon, Oct 5, 2015 Dimitri DeFigueiredo
[...]

There's an argument that this proposal involves very little up-front breakage. This is true but disingenuous, because the breakage will still happen in the future unless changes are made and committed. The entire maintenance load should be considered, not just the immediate load.
I don't have much to say about breakage (and thus I've refrained from voting), as a maintainer of just 7 packages of which exactly zero are compatible with anything other than the latest GHC (well, no one complained, I'm rather lazy, and if it's any defense I can't even run GHC 7.10.1 on my Arch Linux box because of inappropriate versions of dynamically linked libraries), but an honest question: does this proposal entail anything other than replacing every "return" with "pure" and conditional compilation of "return = pure" in Monad instances?

Best regards,
Marcin Mrotek
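For a single library, my reading of the proposal (an assumption on my part, not a statement of the final plan) is that the mechanical change really is about that small; a sketch for a hypothetical `Box` type, assuming GHC 7.10+ where Applicative is already in the Prelude:

import Control.Monad (ap)

newtype Box a = Box a

instance Functor Box where
    fmap f (Box a) = Box (f a)

instance Applicative Box where
    pure  = Box
    (<*>) = ap

instance Monad Box where
    Box a >>= f = f a
    return = pure   -- the one line MRP would eventually ask instance authors to delete

-- Client code needs no edits, because the proposal keeps `return` as a
-- plain top-level function defined as `return = pure`:
example :: Box Int
example = do
    x <- return 20
    pure (x + 1)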

I'm a strong +1 on this (and on Edward's addendum to it), and I object to the characterization of this change as having only trivial benefits. Simplicity and correctness are never trivial. In my view, disagreement on *how* this change is made (migration strategy, etc.) is completely reasonable, but disagreement on *whether* to make it is not.

There have been a lot of objections based on the idea that learners will consult books that are out of date, but the number of learners affected by this is dwarfed by the number of learners who will use updated materials and be confused by this strange historical artifact. Permanently-enshrined historical artifacts accrete forever and cause linear confusion, whereas outdated materials are inevitably replaced such that the amount of confusion remains constant.

There was also an argument that Haskell has a “window of opportunity”, implying that breakages are more likely to cost us future valued community members than historical artifacts are. I couldn't disagree more. If it weren't for Haskell's past willingness to make changes when we learned better ways of doing things, I doubt I would presently be using the language. I would much rather add a marginal community member with a strong preference for cleanliness, simplicity, and correctness than one with a strong preference against making occasional small changes to their code.

On Mon, Oct 5, 2015 at 6:28 PM, Dimitri DeFigueiredo <defigueiredo@ucdavis.edu> wrote:
[...]

On 05/10/15 20:50, Nathan Bouscal wrote:
There have been a lot of objections based on the idea that learners will consult books that are out of date, but the number of learners affected by this is dwarfed by the number of learners who will use updated materials and be confused by this strange historical artifact. Permanently-enshrined historical artifacts accrete forever and cause linear confusion, whereas outdated materials are inevitably replaced such that the amount of confusion remains constant. Thank you for making this point
I would be very saddened if the appeal to history (i.e. technical debt) would halt our momentum. That's what happens to most things both in and out of computer science. And it's honestly depressing.

--
Alexander
alexander@plaimi.net
https://secure.plaimi.net/~alexander

+1

On 06/10/15 08:29, Alexander Berntsen wrote:
[...]

IMO, the "tech debt" you're talking about feels very small. We've already made the change that return = pure by default. The historical baggage that this proposal cleans up is just the fact that legacy code is able to define its own "return" without breaking (which must be the same as the definition of pure anyway). I am also moving from +0.5 to +0 on this. Tom
On 5 Oct 2015, at 18:29, Alexander Berntsen wrote:
[...]

-- I am going to do some logging, yay!
data Logs a = Logs a [a] deriving (Eq, Show)

-- one log message
singlelog :: a -> Logs a
singlelog a = Logs a []

-- two log messages
twologs :: a -> a -> Logs a
twologs a1 a2 = Logs a1 [a2]

class Semigroup a where
  (<>) :: a -> a -> a

-- I can append logs
instance Semigroup (Logs a) where
  Logs h1 t1 <> Logs h2 t2 = Logs h1 (t1 ++ h2 : t2)

-- I can map on Logs
instance Functor Logs where
  fmap f (Logs h t) = Logs (f h) (fmap f t)

-- I will collect logs with a value
data WriteLogs l a = WriteLogs (Logs l) a deriving (Eq, Show)

-- I can map the pair of logs and a value
instance Functor (WriteLogs l) where
  fmap f (WriteLogs l a) = WriteLogs l (f a)

singlewritelog :: l -> a -> WriteLogs l a
singlewritelog l a = WriteLogs (singlelog l) a

-- Monad without return
class Bind f where
  (-<<) :: (a -> f b) -> f a -> f b

-- Can I Applicativate WriteLogs? Let's see.
instance Applicative (WriteLogs l) where
  -- Well that was easy.
  WriteLogs l1 f <*> WriteLogs l2 a = WriteLogs (l1 <> l2) (f a)
  pure a = WriteLogs (error "wait, what goes here?") a
  -- Oh I guess I cannot Applicativate WriteLogs, but I can Apply them!

-- Well there goes that idea.
-- instance Monad (WriteLogs l) where

-- Wait a minute, can I bind WriteLogs?
instance Bind (WriteLogs l) where
  -- Of course I can!
  f -<< WriteLogs l1 a = let WriteLogs l2 b = f a in WriteLogs (l1 <> l2) b

-- OK here goes ...
myprogram :: WriteLogs String Int
myprogram =
  -- No instance for (Monad (WriteLogs String))
  -- RAR!, why does do-notation require extraneous constraints?!
  -- Oh that's right, Haskell is broken.
  -- Oh well, I guess I need to leave Prelude turned off and rewrite the base libraries.
  do a <- singlewritelog "message" 18
     b <- WriteLogs (twologs "hi" "bye") 73
     WriteLogs (singlelog "exit") (a * b)

-- One day, one day soon, I can move on.

On 06/10/15 11:20, amindfv@gmail.com wrote:
[...]

On Mon, Oct 5, 2015 at 8:09 AM, Gershom B
My understanding of the argument here, which seems to make sense to me, is that the AMP already introduced a significant breaking change with regards to monads. Books and lecture notes have already not caught up to this, by and large. Hence, by introducing a further change, which _completes_ the general AMP project, then by the time books and lecture notes are all updated, they will be able to tell a much nicer story than the current one?
This is a multi-year, "boil the ocean"-style project, affecting literally
every Haskell user, and the end result after all of this labor is going to
be... a slightly spiffier bike shed?
Strongly -1 from me also. My experience over the last couple of years is
that every GHC release breaks my libraries in annoying ways that require
CPP to fix:
~/personal/src/snap λ find . -name '*.hs' | xargs egrep
'#if.*(MIN_VERSION)|(GLASGOW_HASKELL)' | wc -l
64
As a user this is another bikeshedding change that is not going to benefit
me at all. Maintaining a Haskell library can be an exasperating exercise of
running on a treadmill to keep up with busywork caused by changes to the
core language and libraries. My feeling is starting to become that the
libraries committee is doing as much (if not more) to *cause* problems and
work for me than it is doing to improve the common infrastructure.
G
--
Gregory Collins

On Mon, Oct 5, 2015 at 8:34 PM, Gregory Collins
[...]
Strongly -1 from me also. My experience over the last couple of years is that every GHC release breaks my libraries in annoying ways that require CPP to fix:
~/personal/src/snap λ find . -name '*.hs' | xargs egrep '#if.*(MIN_VERSION)|(GLASGOW_HASKELL)' | wc -l
64
[...]
On the libraries I maintain and have a copy of on my computer right now: 329

I'm writing a book (http://haskellbook.com/) with my coauthor. It is up to
date with GHC 7.10. AMP made things better, not harder, with respect to
teaching Haskell. BBP required some explanation of "ignore this type, we're
asserting a different one", but the positives are still better than the
negatives.
Please don't use existing or forthcoming books as an excuse to do or not-do
things. Do what's right for the users of the language.
On Mon, Oct 5, 2015 at 2:01 PM, Johan Tibell
[...]
-- Chris Allen Currently working on http://haskellbook.com

On 2015-10-05 at 21:01:16 +0200, Johan Tibell wrote:
On Mon, Oct 5, 2015 at 8:34 PM, Gregory Collins
[...]
Strongly -1 from me also. My experience over the last couple of years is that every GHC release breaks my libraries in annoying ways that require CPP to fix:
~/personal/src/snap λ find . -name '*.hs' | xargs egrep '#if.*(MIN_VERSION)|(GLASGOW_HASKELL)' | wc -l 64
[...]
On the libraries I maintain and have a copy of on my computer right now: 329
Although this was already pointed out to you in a response to a Tweet of yours, I'd like to expand on this here to clarify:

You say that you stick to the 3-major-ghc-release support-window convention for your libraries. This is good, because then you don't need any CPP at all! Here's why:

So when GHC 8.2 is released, your support-window requires you to support GHC 7.10 and GHC 8.0 in addition to GHC 8.2. At this point you'll be happy that you can start dropping those #ifdefs you added for GHC 7.10 in your code in order to adapt to FTP & AMP. And when you do *that*, you can also drop all your `return = pure` method overrides. (Because we prepared for MRP already in GHC 7.10 by introducing the default implementation for `return`!) This way, you don't need to introduce any CPP whatsoever due to MRP!

Finally, we're not gonna remove `return` in GHC 8.2 anyway; GHC 8.2 was just the *earliest theoretically possible* GHC in which this *could* have happened. Realistically, this would happen at a much later point, say GHC 8.6 or even later! Therefore, the scheme above would actually work for 5-year time-windows! And there's even an idea on the table to have a lawful `return = pure` method override be tolerated by GHC even when `return` has already moved out of `Monad`!

PS: I'm a bit disappointed you seem to dismiss this proposal right away categorically without giving us a chance to address your concerns. The proposal is not a rigid all-or-nothing thing that can't be tweaked and revised. That's why we're having these proposal-discussions in the first place (rather than doing blind +1/-1 polls), so we can hear everyone out and try to maximise the agreement (even if we will never reach 100% consensus on any proposal). So please, keep on discussing!
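To make that concrete, here is a sketch (using a placeholder type `T`, not code from any of the libraries discussed) of the kind of instance Herbert has in mind; the CPP exists only for the pre-AMP GHC 7.8, so it disappears together with the `return` override once the support window moves on:

{-# LANGUAGE CPP #-}
import Control.Applicative (Applicative(..))  -- redundant (but harmless) on base >= 4.8

data T a = T a

instance Functor T where
    fmap f (T a) = T (f a)

instance Applicative T where
    pure = T
    T f <*> T a = T (f a)

instance Monad T where
    T a >>= f = f a
#if !MIN_VERSION_base(4,8,0)
    return = pure   -- only GHC 7.8 (base < 4.8) still needs this
#endif

-- Once the window is GHC 7.10 / 8.0 / 8.2, the #if block can be deleted
-- outright, and the remaining instance never defines `return` at all,
-- which is exactly the shape MRP eventually asks for.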

Herbert, unfortunately your logic would break if there were another invasive library change somewhere between 7.10 and 8.2, as that would require introducing a whole new set of #ifdefs.

I've been using GHC since 6.2. Since then, new versions of GHC have been breaking builds of foundational packages every 1-2 releases. I'm actually all for decisive and unapologetic language evolution, but there should be a safety net so there's less risk of breakage. And the main sentiment in the discussion (which, I admit, I have followed very loosely) seems to be that #ifdefs are a poor choice for such a net.

So, forgive me if that has been discussed, but what has happened to the `haskell2010` package that is supposed to provide a compatibility layer for the standard library? Are people using it? Why hasn't it been updated since March 2014? Is it really impossible to provide a legacy Haskell2010 base library compatibility layer with AMP in play?

Perhaps it's my ignorance speaking, but I really think that if packages like `haskell2010` and `haskell98` were actively maintained and used, we wouldn't have issues like that. Then you could say: "if you depend on the GHC `base` package directly, your portability troubles are well deserved".

On 10/06/2015 03:10 AM, Herbert Valerio Riedel wrote:
So when GHC 8.2 is released, your support-window requires you to support GHC 7.10 and GHC 8.0 in addition to GHC 8.2.
At this point you'll be happy that you can start dropping those #ifdefs you added for GHC 7.10 in your code in order to adapt to FTP & AMP.

Dear all,

Executive Summary: Please let us defer further discussion and ultimate decision on MRP to the resurrected HaskellPrime committee.

While we can discuss the extent of additional breakage MRP would cause, the fact remains it is a further breaking change. A survey of breakage to books as Herbert did is certainly valuable (thanks!), but much breakage will (effectively) remain unquantifiable.

It is also clear from the discussions over the last couple of weeks, on the Haskell libraries list as well as various other forums and social media, that MRP is highly contentious.

This begs two questions:

1. Is the Haskell Libraries list and informal voting process really an appropriate, or even acceptable, way to adopt such far-reaching changes to what effectively amounts to Haskell itself?

2. Why the hurry to push MRP through?

As to question 1, to Graham Hutton's and my knowledge, the libraries list and its voting process was originally set up for 3rd-party libraries in fptools. It seems to have experienced some form of "mission creep" since. Maybe that is understandable given that there was no obvious alternative as HaskellPrime has been defunct for a fair few years. But, as has been pointed out in a number of postings, a lot of people with very valuable perspectives are also very busy, and thus likely to miss a short discussion period (as has happened in the past in relation to the Burning the Bridges proposal) and also have very little time for engaging in long and complicated e-mail discussions that, from their perspective, happen at a completely random point in time and for which they thus have not had a chance to set aside time even if they wanted to participate.

Just as one data point, AMP etc. mostly passed Graham and me by simply because a) we were too busy to notice and b) we simply didn't think there was a mandate for such massive overhauls outside of a process like HaskellPrime. And we are demonstrably not alone.

This brings us to question 2. Now that HaskellPrime is being resurrected, why the hurry to push MRP through? Surely HaskellPrime is the forum where breaking changes like MRP should be discussed, allowing as much time as is necessary and allowing for as wide a range of perspectives as possible to properly be taken into account?

The need to "field test" MRP prior to discussing it in HaskellPrime has been mentioned. Graham and I are very sceptical. In the past, at least in the past leading up to Haskell 2010 or so, the community at large was not roped in as involuntary field testers.

If MRP is pushed through now, with a resurrection of HaskellPrime being imminent, Graham and I strongly believe that risks coming across to a very large part of the Haskell community as preempting proper process by facing the new HaskellPrime committee with (yet another) fait accompli.

Therefore, please let us defer further discussion and ultimate decision on MRP to the resurrected HaskellPrime committee, which is where it properly belongs. Otherwise, the Haskell community itself might be one of the things that MRP breaks.

Best regards,

/Henrik

--
Henrik Nilsson
School of Computer Science
The University of Nottingham
nhn@cs.nott.ac.uk

To question 1 my answer is NO! I think voting to decide these kinds of issues is a terrible idea; we might as well throw dice.

-----Original Message-----
From: Haskell-Cafe [mailto:haskell-cafe-bounces@haskell.org] On Behalf Of Henrik Nilsson
Sent: 06 October 2015 12:33
To: haskell-prime@haskell.org List; Haskell Libraries; haskell cafe
Subject: Re: [Haskell-cafe] Monad of no `return` Proposal (MRP): Moving `return` out of `Monad`

[...]

I was always under the impression that +1/-1 was just a quick
indicator of opinion, not a vote, and that it was the core libraries
committee that would make the final call if enough consensus was
reached to enact the change.
Erik
On 6 October 2015 at 13:32, Henrik Nilsson
[...]

On 2015-10-06 at 14:06:11 +0200, Erik Hesselink wrote:
I was always under the impression that +1/-1 was just a quick indicator of opinion, not a vote, and that it was the core libraries committee that would make the final call if enough consensus was reached to enact the change.
I'd like to point out that the core libraries committee ought to continue to do so (as hinted at in [1]) in its function as a Haskell Prime sub-committee (c.f. sub-teams in the Rust community [2]). While there will surely be overlap of interests, contributions, cross-reviewing and discussion, the principal task and responsibility of the newly sought members is to concentrate on the language part of the Haskell Report, where quite a bit of work is awaiting them.

Cheers,
hvr

[1]: https://mail.haskell.org/pipermail/haskell-prime/2015-September/003936.html
[2]: https://www.rust-lang.org/team.html

On Oct 6, 2015 7:32 AM, "Henrik Nilsson"
Executive Summary: Please let us defer further discussion and ultimate decision on MRP to the resurrected HaskellPrime committee
Many more people are on this mailing list than will be chosen for the committee. Those who are not chosen have useful perspectives as well.
1. Is the Haskell Libraries list and informal voting process really an appropriate, or even acceptable, way to adopt such far-reaching changes to what effectively amounts to Haskell itself?
As others have said, no one wants that.
But, as has been pointed out in a number of postings, a lot of people with very valuable perspectives are also very busy, and thus likely to miss a short discussion period (as has happened in the past in relation to the Burning the Bridges proposal)
The Foldable/Traversable BBP indeed was not as well discussed as it should have been. AMP, on the other hand, was discussed extensively and publicly for months. I understand that some people need months of notice to prepare to participate in a discussion. Unfortunately, I don't think those people can always be included. Life moves too quickly for that. I do think it might be valuable to set up a moderated, extremely low volume mailing list for discussion of only the most important changes, with its messages forwarded to the general list.
The need to "field test" MRP prior to discussing it in HaskellPrime has been mentioned. Graham and I are very sceptical. In the past, at least in the past leading up to Haskell 2010 or so, the community at large was not roped in as involuntary field testers.
No, and Haskell 2010 was, by most measures, a failure. It introduced no new language features (as far as I can recall) and only a few of the most conservative library changes imaginable. Standard Haskell has stagnated since 1998, 17 years ago. Haskell 2010 did not reflect the Haskell people used in their research or practical work then, and I think people are justified in their concern that the next standard may be similarly disappointing.

One of the major problems is the (understandable, and in many ways productive) concentration of development effort in a single compiler. When there is only one modern Haskell implementation that is commonly used, it's hard to know how changes to the standard will affect other important implementation techniques, and therefore hard to justify any substantial changes. That was true in 2010, and it is, if anything, more true now.
Therefore, please let us defer further discussion and ultimate decision on MRP to the resurrected HaskellPrime committee, which is where it properly belongs. Otherwise, the Haskell community itself might be one of the things that MRP breaks.
I hope not. Haskell has gained an awful lot from the researchers, teachers, and developers who create it and use it. I hope we can work out an appropriate balance of inclusion, caution, and speed.

The change to make Monad a special case of Applicative has been
a long time coming, and it has been a long time coming precisely
because it's going to break things.
I feel ambivalent about this. As soon as the question was
raised, it was clear that it *would have been* nicer had
Haskell been this way originally. I'm not looking forward to
the consequences of the change.
On 7/10/2015, at 12:32 am, Henrik Nilsson
While we can discuss the extent of additional breakage MRP would cause, the fact remains it is a further breaking change. A survey of breakage to books as Herbert did is certainly valuable (thanks!), but much breakage will (effectively) remain unquantifiable.
This suggests one way in which one of the costs of the change could be reduced. Not just to survey the breakage, but to actually put revised material up on the Haskell.org web site, with a prominent link from the home page.

2015-10-05 17:09 GMT+02:00 Gershom B
On October 5, 2015 at 10:59:35 AM, Bryan O'Sullivan (bos@serpentine.com) wrote: [...] As for libraries, it has been pointed out, I believe, that without CPP one can write instances compatible with AMP, and also with AMP + MRP. One can also write code, sans CPP, compatible with pre- and post- AMP. [...]
Nope, at least not if you care about -Wall: If you take e.g. (<$>), which is now part of the Prelude, you can't simply import some compatibility module, because GHC might tell you (rightfully) that that import is redundant, because (<$>) is already visible through the Prelude. So you'll have to use CPP to avoid that import on base >= 4.8, be it from Data.Functor, Control.Applicative or some compat-* module. And you'll have to use CPP in each and every module using (<$>) then, unless I miss something obvious. AFAICT all transitioning guides ignore -Wall and friends...
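Spelled out, the per-module dance Sven describes looks roughly like this (a sketch; `Example` and `doubleAll` are made-up names, and the exact bound depends on which symbol moved where):

{-# LANGUAGE CPP #-}
module Example where

#if !MIN_VERSION_base(4,8,0)
-- Needed on base < 4.8; on base >= 4.8 the very same import makes -Wall
-- complain that it is redundant, because (<$>) now comes from the Prelude.
-- Hence CPP in every module that uses (<$>).
import Control.Applicative ((<$>))
#endif

doubleAll :: [Int] -> [Int]
doubleAll xs = (* 2) <$> xs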

On 5 October 2015 at 20:58, Sven Panne
[...]
Nope, at least not if you care about -Wall: If you take e.g. (<$>) which is now part of the Prelude, you can't simply import some compatibility module, because GHC might tell you (rightfully) that that import is redundant, because (<$>) is already visible through the Prelude. So you'll have to use CPP to avoid that import on base >= 4.8, be it from it Data.Functor, Control.Applicative or some compat-* module. And you'll have to use CPP in each and every module using <$> then, unless I miss something obvious. AFAICT all transitioning guides ignore -Wall and friends...
Does the hack mentioned on the GHC trac [1] work for this? It seems a bit fragile but that page says it works and it avoids CPP. Erik [1] https://ghc.haskell.org/trac/ghc/wiki/Migration/7.10#GHCsaysTheimportof...is...
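For reference, my recollection of the trick on that wiki page (an assumption; please check the page itself) is to drop the import list and add an explicit `import Prelude` after the compatibility imports, which reportedly keeps GHC from flagging them as redundant on newer bases:

module Example where  -- made-up module name

import Control.Applicative  -- open import: intentionally no import list
import Prelude              -- explicit Prelude import, placed last

doubleAll :: [Int] -> [Int]
doubleAll xs = (* 2) <$> xs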

On Mon, Oct 5, 2015 at 9:02 PM, Erik Hesselink
[...]
Does the hack mentioned on the GHC trac [1] work for this? It seems a bit fragile but that page says it works and it avoids CPP.
No it doesn't, if you also don't want closed import lists (which you should).

Sven Panne
[...]
Nope, at least not if you care about -Wall: If you take e.g. (<$>) which is now part of the Prelude, you can't simply import some compatibility module, because GHC might tell you (rightfully) that that import is redundant, because (<$>) is already visible through the Prelude. So you'll have to use CPP to avoid that import on base >= 4.8, be it from it Data.Functor, Control.Applicative or some compat-* module. And you'll have to use CPP in each and every module using <$> then, unless I miss something obvious. AFAICT all transitioning guides ignore -Wall and friends...
This is a fair point that comes up fairly often. The fact that CPP is required to silence redundant import warnings is quite unfortunate. Other languages have better stories in this area. One example is Rust, which has a quite flexible `#[allow(...)]` pragma which can be used to acknowledge and silence a wide variety of warnings and lints [1].

I can think of a few ways (some better than others) how we might introduce a similar idea for import redundancy checks in Haskell,

1. Attach a `{-# ALLOW redundant_import #-}` pragma to a definition,

   -- in Control.Applicative
   {-# ALLOW redundant_import (<$>) #-}
   (<$>) :: (a -> b) -> f a -> f b
   (<$>) = fmap

   asking the compiler to pretend that any import of the symbol did not exist when looking for redundant imports. This would allow library authors to appropriately mark definitions when they are moved, saving downstream users from having to make any change whatsoever.

2. Or alternatively we could make this idea a bit more precise,

   -- in Control.Applicative
   {-# ALLOW redundant_import Prelude.(<$>) #-}
   (<$>) :: (a -> b) -> f a -> f b
   (<$>) = fmap

   which would ignore imports of `Control.Applicative.(<$>)` only if `Prelude.(<$>)` were also in scope.

3. Attach a `{-# ALLOW redundant_import #-}` pragma to an import,

   import {-# ALLOW redundant_import #-} Control.Applicative
   -- or perhaps
   import Control.Applicative {-# ALLOW redundant_import Control.Applicative #-}

   allowing the user to explicitly state that they are aware that this import may be redundant.

4. Attach a `{-# ALLOW redundant_import #-}` pragma to a name in an import list,

   import Control.Applicative ((<$>) {-# ALLOW redundant_import #-})

   allowing the user to explicitly state that they are aware that this imported function may be redundant.

In general I'd like to reiterate that many of the comments in this thread describe genuine sharp edges in our language which have presented a real cost in developer time during the AMP and FTP transitions. I think it is worth thinking of ways to soften these edges; we may be surprised how easy it is to fix some of them.

- Ben

[1] https://doc.rust-lang.org/stable/reference.html#lint-check-attributes

Ben Gamari
This is a fair point that comes up fairly often. The fact that CPP is required to silence redundant import warnings is quite unfortunate. Others languages have better stories in this area. One example is Rust, which has a quite flexible `#[allow(...)]` pragma which can be used to acknowledge and silence a wide variety of warnings and lints [1].
I can think of a few ways (some better than others) how we might introduce a similar idea for import redundancy checks in Haskell,
1. Attach a `{-# ALLOW redundant_import #-}` pragma to a definition, ... 2. Or alternatively we could make this a idea a bit more precise, {-# ALLOW redundant_import Prelude.(<$>) #-} ... 3. Attach a `{-# ALLOW redundancy_import #-}` pragma to an import, import {-# ALLOW redundant_import #-} Control.Applicative ... 4. Attach a `{-# ALLOW redundancy_import #-}` pragma to a name in an import list, import Control.Applicative ((<$>) {-# ALLOW redundant_import #-})
What I don't like about this solution is how specific it is -- the gut instinct that it can't be the last such extension, if we were to start replacing CPP piecemeal. And after a while, we'd accommodate a number of such extensions.. and they would keep coming.. until it converges to a trainwreck.

I think that what instead needs to be done is this:

1. a concerted effort to summarize *all* uses of CPP in Haskell code
2. a bit of forward thinking, to include desirables that would be easy to get with a more generic solution

Personally, I think that any such effort, approached with a generality that would be truly satisfying, should inevitably converge to an AST-level mechanism. ..but then I remember how CPP can be used to paper over incompatible syntax changes.. Hmm..

--
с уважениeм / respectfully,
Косырев Серёга

On Tue, Oct 6, 2015 at 4:44 AM, Ben Gamari
Sven Panne
writes: 2015-10-05 17:09 GMT+02:00 Gershom B
: On October 5, 2015 at 10:59:35 AM, Bryan O'Sullivan (bos@serpentine.com ) wrote: [...] As for libraries, it has been pointed out, I believe, that without CPP one can write instances compatible with AMP, and also with AMP + MRP. One can also write code, sans CPP, compatible with pre- and post- AMP. [...]
Nope, at least not if you care about -Wall: If you take e.g. (<$>), which is now part of the Prelude, you can't simply import some compatibility module, because GHC might tell you (rightfully) that that import is redundant, because (<$>) is already visible through the Prelude. So you'll have to use CPP to avoid that import on base >= 4.8, be it from Data.Functor, Control.Applicative or some compat-* module. And you'll have to use CPP in each and every module using <$> then, unless I miss something obvious. AFAICT all transitioning guides ignore -Wall and friends...
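[Editor's note: for readers who have not run into it, the per-module workaround Sven describes usually looks something like the sketch below. It assumes the MIN_VERSION_base macro that Cabal defines when building a package; the module name and bindings are made up for illustration.]

    {-# LANGUAGE CPP #-}
    module Example where

    -- On base < 4.8 these operators are not exported by the Prelude, so the
    -- import below is required; on base >= 4.8 the same import is redundant
    -- and -Wall warns about it, hence the CPP guard.
    #if !MIN_VERSION_base(4,8,0)
    import Control.Applicative ((<$>), (<*>))
    #endif

    greeting :: Maybe String
    greeting = (++) <$> Just "Hello, " <*> Just "world"

[Multiply this guard by every module that uses (<$>) and the maintenance cost Sven points out becomes clear.]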
This is a fair point that comes up fairly often. The fact that CPP is required to silence redundant import warnings is quite unfortunate. Other languages have better stories in this area. One example is Rust, which has a quite flexible `#[allow(...)]` pragma which can be used to acknowledge and silence a wide variety of warnings and lints [1].
I can think of a few ways (some better than others) in which we might introduce a similar idea for import redundancy checks in Haskell,
1. Attach a `{-# ALLOW redundant_import #-}` pragma to a definition,
    -- in Control.Applicative
    {-# ALLOW redundant_import (<$>) #-}
    (<$>) :: (a -> b) -> f a -> f b
    (<$>) = fmap
asking the compiler to pretend that any import of the symbol did not exist when looking for redundant imports. This would allow library authors to appropriately mark definitions when they are moved, saving downstream users from having to make any change whatsoever.
2. Or alternatively we could make this idea a bit more precise,
    -- in Control.Applicative
    {-# ALLOW redundant_import Prelude.(<$>) #-}
    (<$>) :: (a -> b) -> f a -> f b
    (<$>) = fmap
Which would ignore imports of `Control.Applicative.(<$>)` only if `Prelude.(<$>)` were also in scope.
One obvious solution I haven't seen mentioned is the ability to add a nonexistent identifier to a hiding clause (these identifiers might presumably exist in some other version of the imported module):

    import Prelude hiding ((<$>))

I can see the argument for marking such imports with a pragma, though it gets a bit ugly. -Jan-Willem Maessen
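[Editor's note: spelling out Jan-Willem's idea a bit, here is a sketch of how such a module might read, assuming the compiler accepted a hiding clause that names something the imported module does not export — which is precisely the new ability being asked for. The module and binding names are illustrative only.]

    module Example where

    -- Under the suggestion, this hiding clause would simply be inert on
    -- base < 4.8, where the Prelude does not export (<$>); on base >= 4.8 it
    -- hides the Prelude export, so the explicit import below is never redundant.
    import Prelude hiding ((<$>))
    import Data.Functor ((<$>))

    answer :: Maybe Int
    answer = (+ 1) <$> Just 41

[The appeal is that the same two import lines would then be accepted, warning-free under -Wall, on every base version without any CPP.]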

I am also a -1, and I share Bryan's concerns.

What worries me most is that we have started to see very valuable members of our community publicly state that they are reducing their community involvement. While technical disagreement has something to do with their decisions, I suspect that it is the /process/ by which these decisions have been made that is pushing them away. Whatever our individual stances on AMP/FTP/MRP, I hope we can all agree that any process that has this effect needs a hard look.

I consider myself a member of the Haskell community, but like Henrik and Graham, I have not actively followed the libraries list. I was taken by surprise by AMP. When FTP appeared, I didn't speak up because I felt the outcome was inevitable. I should have spoken up nonetheless; I am speaking up now.

In effect, only those who actively follow the libraries list have had a voice in these decisions. Maybe that is what the community wants. I hope not. How then can people like me (and Henrik and Graham) have a say without committing to actively following the libraries list?

We have a method to solve this: elected representatives. Right now the Core Libraries Committee elects its own members; perhaps it is time for that to change. Let me throw out a few straw-man proposals.

Proposal 1: Move to community election of the members of the Core Libraries Committee. Yes, I know this would have its own issues.

Proposal 2: After a suitable period of discussion on the libraries list, the Core Libraries Committee will summarize the arguments for and against a proposal and post it, along with a (justified) preliminary decision, to a low-traffic, announce-only email list. After another suitable period of discussion, they will issue a final decision. What is a suitable period of time? Perhaps that depends on the properties of the proposal, such as whether it breaks backwards compatibility.

Proposal 3: A decision regarding any proposal that significantly affects backwards compatibility is within the purview of the Haskell Prime Committee, not the Core Libraries Committee.

Now I am not saying I feel strongly that all (or any) of these proposals should be enacted (in fleshed-out form), but I do think they are all worth discussing.

Cheers, Geoff

On 10/05/2015 10:59 AM, Bryan O'Sullivan wrote:
I would like to suggest that the bar for breaking all existing libraries, books, papers, and lecture notes should be very high; and that the benefit associated with such a breaking change should be correspondingly huge.
This proposal falls far short of both bars, to the extent that I am astonished and disappointed it is being seriously discussed – and to general approval, no less – on a date other than April 1. Surely some design flaws have consequences so small that they are not worth fixing.
I'll survive if it goes through, obviously, but it will commit me to a bunch of pointless make-work and compatibility ifdefs. I've previously expressed my sense that cross-version compatibility is a big tax on library maintainers. This proposal does not give me confidence that this cost is being taken seriously.
Thanks, Bryan.
On Oct 5, 2015, at 7:32 AM, Gershom B
wrote: On October 5, 2015 at 6:00:00 AM, Simon Thompson (s.j.thompson@kent.ac.uk) wrote: Hello all. I write this to be a little provocative, but …
It’s really interesting to have this discussion, which pulls in all sorts of well-made points about orthogonality, teaching, the evolution of the language and so on, but it simply goes to show that the process of evolving Haskell is profoundly broken.
Other languages do evolve, but in a managed and reflective way. Simply throwing in changes that would have a profound impact on systems that are commercially and potentially safety critical in an à la carte, offhand, way seems like a breakdown of the collective responsibility of the Haskell community to its users and, indirectly, to its future. Hi Simon. I do in fact think this is provocative :-P
I want to object here to your characterization of what has been going on as “simply throwing in changes”. The proposal seems very well and carefully worked through to provide a good migration strategy, even planning to alter the source of GHC to ensure that adequate hints are given for the indefinite transition period.
I also want to object to the idea that these changes would have “a profound impact on systems”. As it stands, and I think this is an important criterion for any change, when “phase 2” goes into effect, code that has compiled before may cease to compile until a minor change is made. However, code that continues to compile will continue to compile with the same behavior.
Now as to process itself, this is a change to core libraries. It has been proposed on the libraries list, which seems appropriate, and a vigorous discussion has ensued. This seems like a pretty decent process to me thus far. Do you have a better one in mind?
—Gershom
P.S. as a general point, I sympathize with concerns about breakage resulting from this, but I also think that the migration strategy proposed is good, and if people are concerned about breakage I think it would be useful if they could explain where they feel the migration strategy is insufficient to allay their concerns.

Hi all, Geoffrey Mainland wrote:
What worries me most is that we have started to see very valuable members of our community publicly state that they are reducing their community involvement.
That worries me too. A lot. To quote myself from an earlier e-mail in this thread:
Therefore, please let us defer further discussion and ultimate decision on MRP to the resurrected HaskellPrime committee, which is where it properly belongs. Otherwise, the Haskell community itself might be one of the things that MRP breaks.
Geoffrey further wrote:
Proposal 3: A decision regarding any proposal that significantly affects backwards compatibility is within the purview of the Haskell Prime Committee, not the Core Libraries Committee.
I thus definitely support this, at least for anything related to the libraries covered by the Haskell report.

Indeed, I strongly suspect that many people who did not actively follow the libraries discussions did so because they simply did not think that changes to the central libraries as defined in the Haskell report, at least not breaking changes, were in the remit of the libraries committee, and were happy to leave discussions on any other libraries to the users of those libraries. And as a consequence they were taken by surprise by AMP etc.

So before breaking anything more, be that code, research papers, books, what people have learned, or even the community itself, it is time to think very carefully about what the appropriate processes should be going forward.

Best, /Henrik

-- Henrik Nilsson, School of Computer Science, The University of Nottingham, nhn@cs.nott.ac.uk

I'd say, of course, libraries covered by the Haskell report are not in the remit of the libraries committee.

-----Original Message-----
From: Libraries [mailto:libraries-bounces@haskell.org] On Behalf Of Henrik Nilsson
Sent: 21 October 2015 09:25
To: Geoffrey Mainland; Bryan O'Sullivan; Gershom B
Cc: Henrik.Nilsson@nottingham.ac.uk; haskell-prime@haskell.org List; Graham Hutton; Haskell Libraries; haskell cafe
Subject: Re: [Haskell-cafe] Monad of no `return` Proposal (MRP): Moving `return` out of `Monad`

Hello, On 2015-10-21 at 02:39:57 +0200, Geoffrey Mainland wrote: [...]
In effect, only those who actively follow the libraries list have had a voice in these decisions. Maybe that is what the community wants. I hope not. How then can people like me (and Henrik and Graham) have a say without committing to actively following the libraries list?
We have a method to solve this: elected representatives. Right now the Core Libraries Committee elects its own members; perhaps it is time for that to change.
[...]
Proposal 1: Move to community election of the members of the Core Libraries Committee. Yes, I know this would have its own issues.
How exactly do public elections of representatives address the problem that some people feel left out? Have you considered nominating yourself or somebody else you have confidence in for the core libraries committee? You'd still have to find somebody to represent your interests, regardless of whether the committee is self-elected or direct-elected.

Here's some food for thought regarding language design by voting or its indirect form via a directly elected language committee:

Back in February there was a large-scale survey which resulted (see [2] for more details) in a rather unequivocal 4:1 majority *for* going through with the otherwise controversial FTP implementation. If community elections were to turn out in a similar spirit, you could easily end up with a similarly 4:1 pro-change biased committee. Would you consider that a well-balanced committee formation?
Proposal 2: After a suitable period of discussion on the libraries list, the Core Libraries Committee will summarize the arguments for and against a proposal and post it, along with a (justified) preliminary decision, to a low-traffic, announce-only email list. After another suitable period of discussion, they will issue a final decision. What is a suitable period of time? Perhaps that depends on the properties of the proposal, such as whether it breaks backwards compatibility.
That generally sounds like a good compromise, if it actually helps reach the otherwise unreachable parts of the community and get their voices heard.
Proposal 3: A decision regarding any proposal that significantly affects backwards compatibility is within the purview of the Haskell Prime Committee, not the Core Libraries Committee.
I don't see how that would change much. The prior Haskell Prime Committee has traditionally been self-elected as well, so it's just the label of the committee you'd swap out.

In the recent call for nominations[1] for Haskell Prime, the stated area of work for the new nominees was to take care of the *language* part, because that's what we are lacking the workforce for.

Since its creation for the very purpose of watching over the core libraries, the core-libraries-committee has been almost exclusively busy with evaluating and deciding about changes to the `base` library and overseeing their implementation. Transferring this huge workload to the new Haskell Prime committee members, who already have their hands full with revising the language part, would IMO only reduce the effectiveness of the upcoming Haskell Prime committee, and therefore increase the risk of failure in producing an adequate new Haskell Report revision.

Regards, H.V.Riedel

[1]: https://mail.haskell.org/pipermail/haskell-prime/2015-September/003936.html
[2]: https://mail.haskell.org/pipermail/haskell-cafe/2015-February/118336.html

On Wed, Oct 21, 2015 at 6:56 AM Herbert Valerio Riedel
Proposal 1: Move to community election of the members of the Core Libraries Committee. Yes, I know this would have its own issues. How exactly do public elections of representatives address the problem that some people feel left out?
The issue of people feeling left out is addressed by the second part of his proposal: a low-volume (presumably announcements-only) list where changes that are being seriously considered can be announced, along with pointers to the discussion areas. That way, the overhead of getting notices is near zero, and everyone can then decide whether to invest time in the discussion.
Back in February there was a large-scale survey which resulted (see [2] for more details) in a rather unequivocal 4:1 majority *for* going through with the otherwise controversial FTP implementation. If community elections were to turn out in a similar spirit, you could easily end up with a similarly 4:1 pro-change biased committee. Would you consider that a well-balanced committee formation?
This shows two areas of confusion. The first is that the point of representation isn't to be well-balanced, or fair, or any such thing. It's to be representative of the community, or at least of some aspect of the community. Whether or not this is a problem, and how to fix it, are hard political problems that I doubt we're going to solve.

The second is that the composition of the committee matters beyond the aspect it is supposed to represent. For instance, if the process doesn't leave final decisions in the hands of the committee, but in a general vote (just a for-instance, not a proposal), then the balance or fairness of the committee is irrelevant, so long as the community trusts it to administer the process properly.

In other words, we need to figure out exactly what the job of the committee is going to be before we start worrying about what kind of composition it should have. As for the issue of libraries vs. language, I think the same process should apply to both, though it might be administered by different groups in order to spread the workload around.


The committee was formed from a pool of suggestions supplied to SPJ that represented a fairly wide cross-section of the community. Simon initially offered both myself and Johan Tibell the role of co-chairs. Johan ultimately declined.

In the end, putting perhaps too simple a spin on it, the initial committee was selected: Michael Snoyman for commercial interest, Mark Lentczner representing the needs of the Platform and implementation concerns, Brent Yorgey on the theory side, Doug Beardsley representing practitioners, Joachim Breitner had expressed interest in working on split base, which at the time was a much more active concern, Dan Doel represented a decent balance of theory and practice.

Since then we had two slots open up on the committee, and precisely two self-nominations to fill them, which rather simplified the selection process. Brent and Doug rotated out and Eric Mertens and Luite Stegeman rotated in.

Technically, yes, we are self-selected going forward, based on the precedent of the haskell.org committee and haskell-prime committees, but you'll note this hasn't actually been a factor yet as there hasn't been any decision point reached where that has affected a membership decision.

-Edward

Thanks for the background, Edward.

I don't mean to question the composition of the committee, only to start a discussion about how the community might handle the selection process going forward. I apologize if I was not clear about that. As I said earlier, if a direct vote resulted in the same committee we would have had under the current system, I would consider that a success! We may also see a larger nomination pool in the future :)

Cheers, Geoff

On 10/21/2015 03:54 PM, Edward Kmett wrote:
The committee was formed from a pool of suggestions supplied to SPJ that represented a fairly wide cross-section of the community.
Simon initially offered both myself and Johan Tibell the role of co-chairs. Johan ultimately declined.
In the end, putting perhaps too simple a spin on it, the initial committee was selected: Michael Snoyman for commercial interest, Mark Lentczner representing the needs of the Platform and implementation concerns, Brent Yorgey on the theory side, Doug Beardsley representing practitioners, Joachim Breitner had expressed interest in working on split base, which at the time was a much more active concern, Dan Doel represented a decent balance of theory and practice.
Since then we had two slots open up on the committee, and precisely two self-nominations to fill them, which rather simplified the selection process. Brent and Doug rotated out and Eric Mertens and Luite Stegeman rotated in.
Technically, yes, we are self-selected going forward, based on the precedent of the haskell.org committee and haskell-prime committees, but you'll note this hasn't actually been a factor yet as there hasn't been any decision point reached where that has affected a membership decision.
-Edward

Apologies for the previous mailer-mangled "draft"... On 10/21/2015 07:55 AM, Herbert Valerio Riedel wrote:
Hello,
On 2015-10-21 at 02:39:57 +0200, Geoffrey Mainland wrote:
[...]
In effect, only those who actively follow the libraries list have had a voice in these decisions. Maybe that is what the community wants. I hope not. How then can people like me (and Henrik and Graham) have a say without committing to actively following the libraries list?
We have a method to solve this: elected representatives. Right now the Core Libraries Committee elects its own members; perhaps it is time for that to change. [...]
Proposal 1: Move to community election of the members of the Core Libraries Committee. Yes, I know this would have its own issues. How exactly do public elections of representatives address the problem that some people feel left out? Have you considered nominating yourself or somebody else you have confidence in for the core libraries committee? You'd still have to find somebody to represent your interests, regardless of whether the committee is self-elected or direct-elected.
Here's some food for thought regarding language design by voting or its indirect form via a directly elected language committee:
Back in February there was a large-scale survey which resulted (see [2] for more details) in a rather unequivocal 4:1 majority *for* going through with the otherwise controversial FTP implementation. If community elections were to turn out in a similar spirit, you could easily end up with a similarly 4:1 pro-change biased committee. Would you consider that a well-balanced committee formation?
Thanks, all good points.

It is quite possible that direct elections would produce the exact same committee. I wouldn't see that as a negative outcome at all! At least that committee would have been put in place by direct election; I would see that as strengthening their mandate.

I am very much aware of the February survey. I wonder if Proposal 2, had it been in place at the time, would have increased participation in the survey.

The recent kerfuffle has caught the attention of many people who don't normally follow the libraries list. Proposal 1 is an attempt to give them a voice. Yes, they would still need to find a candidate to represent their interests. If we moved to direct elections, I would consider running. However, my preference is that Proposal 3 go through in some form, in which case my main concern would be the Haskell Prime committee, and unfortunately nominations for that committee have already closed.
Proposal 2: After a suitable period of discussion on the libraries list, the Core Libraries Committee will summarize the arguments for and against a proposal and post it, along with a (justified) preliminary decision, to a low-traffic, announce-only email list. After another suitable period of discussion, they will issue a final decision. What is a suitable period of time? Perhaps that depends on the properties of the proposal, such as whether it breaks backwards compatibility. That generally sounds like a good compromise, if this actually helps reaching the otherwise unreachable parts of the community and have their voices heard.
My hope is that a low-volume mailing list would indeed reach a wider audience.
Proposal 3: A decision regarding any proposal that significantly affects backwards compatibility is within the purview of the Haskell Prime Committee, not the Core Libraries Committee. I don't see how that would change much. The prior Haskell Prime Committee has been traditionally self-elected as well. So it's just the label of the committee you'd swap out.
In the recent call of nominations[1] for Haskell Prime, the stated area of work for the new nominations was to take care of the *language* part, because that's what we are lacking the workforce for.
Since its creation for the very purpose of watching over the core libraries, the core-libraries-committee has been almost exclusively busy with evaluating and deciding about changes to the `base` library and overseeing their implementation. Transferring this huge workload to the new Haskell Prime committee members who have already their hands full with revising the language part would IMO just achieve to reduce the effectiveness of the upcoming Haskell Prime committee, and therefore increase the risk of failure in producing an adequate new Haskell Report revision.
My understanding is that much of the work of the core libraries committee does not "significantly affect backwards compatibility," at least not to the extent that MRP does. If this is the case, the bulk of their workload would not be transferred to the new Haskell Prime committee. Is my understanding incorrect?

The intent of Proposal 3 was to transfer only a small fraction of the issues that come before the core libraries committee to the Haskell Prime committee. In any case, we would certainly need to clarify what "significantly affects backwards compatibility" means.

Perhaps we should consider direct elections for the Haskell Prime committee as well as changing their mandate to include some subset of the changes proposed to libraries covered by the Haskell Report. My understanding of the current state of affairs is that the Haskell Prime committee is charged with producing a new report, but the core libraries committee is in charge of the library part of that report. Is that correct?

Cheers, Geoff
Regards, H.V.Riedel
[1]: https://mail.haskell.org/pipermail/haskell-prime/2015-September/003936.html [2]: https://mail.haskell.org/pipermail/haskell-cafe/2015-February/118336.html
participants (53)
- Adam Foltzer
- Alejandro Serrano Mena
- Alexander Berntsen
- amindfv@gmail.com
- Andrey Chudnov
- Augustsson, Lennart
- Ben
- Ben Gamari
- Bryan O'Sullivan
- Bryan Richter
- Charles Durham
- Christopher Allen
- David Feuer
- David Thomas
- David Turner
- Dimitri DeFigueiredo
- Edward Kmett
- Erik Hesselink
- Francesco Ariis
- Geoffrey Mainland
- Gershom B
- Gregory Collins
- Henning Thielemann
- Henrik Nilsson
- Herbert Valerio Riedel
- Herbert Valerio Riedel
- Ivan Lazar Miljenovic
- Jan-Willem Maessen
- Jeffrey Brown
- Joachim Breitner
- Johan Tibell
- John Lato
- José Manuel Calderón Trilla
- Kosyrev Serge
- mantkiew@gsd.uwaterloo.ca
- Marcin Mrotek
- Mario Blažević
- Mario Blažević
- Mario Lang
- Michael Orlitzky
- Michał Antkiewicz
- Mike Meyer
- Nathan Bouscal
- Phil Ruffwind
- Richard A. O'Keefe
- Richard Eisenberg
- Sean Leather
- Simon Thompson
- Sven Panne
- Taru Karttunen
- Tom Ellis
- Tony Morris
- wren romano