Re: MRP, 3-year-support-window, and the non-requirement of CPP

On 2015-10-06 at 10:10:01 +0200, Johan Tibell wrote: [...]
You say that you stick to the 3-major-ghc-release support-window convention for your libraries. This is good, because then you don't need any CPP at all! Here's why:
[...]
So what do I have to write today to have my Monad instances be:
* Warning free - Warnings are useful. Turning them off or having spurious warnings both contribute to bugs.
Depends on the warnings. Some warnings are more of an advisory kind (hlint-ish). I wouldn't consider redundant imports a source of bugs, and leaving off a top-level signature shouldn't be a source of correctness bugs either. Warnings about upcoming changes (i.e. deprecation warnings) are not necessarily a sign of a bug either, but rather a way to raise awareness of API changes. At the other end of the spectrum are more serious warnings I'd almost consider errors, such as failing to define a method that has no default or violating a MINIMAL pragma specification, as that can lead to bottoms either in the form of runtime errors or, even worse, hanging computations (in the case of cyclic definitions). IMO, GHC should classify its warnings into severities/categories or introduce some compromise between -Wall and not -Wall.
* Use imports that either are qualified or have explicit import lists - Unqualified imports make code more likely to break when dependencies add exports.
* Don't use CPP.
That being said, as to how to write your Monad instances today with GHC 7.10 w/o CPP, while supporting at least GHC 7.4/7.6/7.8/7.10: This *does* work (admittedly for an easy example, but this can be generalised):

--8<---------------cut here---------------start------------->8---
module MyMaybe where

import Control.Applicative (Applicative(..))
import Prelude (Functor(..), Monad(..), (.))
-- or alternatively: `import qualified Prelude as P`

data Maybe' a = Nothing' | Just' a

instance Functor Maybe' where
  fmap f (Just' v) = Just' (f v)
  fmap _ Nothing'  = Nothing'

instance Applicative Maybe' where
  pure = Just'
  f1 <*> f2 = f1 >>= \v1 -> f2 >>= (pure . v1)

instance Monad Maybe' where
  Nothing' >>= _ = Nothing'
  Just' x  >>= f = f x

  return = pure -- "deprecated" since GHC 7.10
--8<---------------cut here---------------end--------------->8---

The example above compiles -Wall-clean and satisfies all three of your stated requirements afaics. I do admit this is probably not what you had in mind. But to be consistent: if you want to avoid unqualified imports at all costs in order to have full control over what gets imported into your namespace, you shouldn't tolerate an implicit unqualified wildcard `Prelude` import either, as `Prelude` can, even if seldom, gain new exports as well (or even lose them -- `interact` *could* be a candidate for removal at some point).
Neither AMP nor MRP includes a recipe for this in its proposal.
That's because -Wall-hygiene (w/o opting out of harmless warnings) across multiple GHC versions is not considered a show-stopper.

In the specific case of MRP, I can offer you a Wall-perfect transition scheme by either using `ghc-options: -fno-mrp-warnings` in your cabal-file, or if that doesn't satisfy you, we can delay phase1 (i.e. redundant return-def warnings) to GHC 8.2: Now you can continue to write `return = pure` w/o GHC warnings bothering you until GHC 8.2, at which point the 3-year window will reach back to GHC 7.10. Then starting with GHC 8.2 you can drop the `return` definition, and keep your

More generally though, we need more language features and/or modifications to the way GHC triggers warnings to make such refactorings/changes to the libraries -Wall-perfect as well. Beyond what Ben already suggested in another post, there was also the more general suggestion to implicitly suppress warnings when you explicitly name an import. E.g.

  import Control.Applicative (Applicative(..))

would suppress the redundant-import warning for Applicative via Prelude, because we specifically requested Applicative, so we don't mind that Prelude re-exports the same symbol.
AMP got one post-facto on the Wiki. It turns out that the workaround there didn't work (we tried it in Cabal and it conflicted with one of the above requirements).
Yes, that unqualified `import Prelude`-last trick mentioned on the Wiki breaks down for more complex imports with (redundant) explicit import lists. However, the Maybe-example above works at the cost of a wordy Prelude-import, but it's more robust, as you pin down exactly which symbols you expect to get from each module.
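For readers who haven't seen it, the Wiki recipe in question looks roughly like the following minimal sketch (module name and the particular compat imports are only illustrative); the idea is that the explicit, unqualified `import Prelude` placed *last* keeps GHC 7.10's redundant-import warning quiet for the compatibility imports above it:

--8<---------------cut here---------------start------------->8---
module MyModule where

-- imports that GHC < 7.10 still needs, but GHC 7.10 would
-- otherwise flag as redundant under -Wall
import Control.Applicative
import Data.Monoid

-- explicit Prelude import, deliberately placed last
import Prelude
--8<---------------cut here---------------end--------------->8---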
The problem with these discussions is that they take place between two groups with quite a difference in experience. On one hand you have people like Bryan, who have considerable contributions to the Haskell ecosystem and much experience in large scale software development (e.g. from Facebook). On the other hand you have people who don't. That's okay. We've all been in the latter group at some point in our careers. [...]
At the risk of stating the obvious: I don't think it matters which group a given argument comes from, as its validity doesn't depend on the messenger. Neither does it matter whether an argument is repeated several times or stated only once: every argument deserves to be considered regardless of its origin or frequency. -- hvr

I hit "send" too early, so here's the incomplete section completed: On 2015-10-06 at 18:47:08 +0200, Herbert Valerio Riedel wrote: [...]
In the specific case of MRP, I can offer you a Wall-perfect transition scheme by either using `ghc-options: -fno-mrp-warnings` in your cabal-file, or if that doesn't satisfy you, we can delay phase1 (i.e. redundant return-def warnings) to GHC 8.2:
Now you can continue to write `return = pure` w/o GHC warnings bothering you until GHC 8.2, at which point the 3-year window will reach back to GHC 7.10.
Then starting with GHC 8.2 you can drop the `return` definition, and
...keep supporting a 3-year window back till GHC 7.10 (which incorporates AMP and doesn't need `return` explicitly defined anymore) without CPP. And since you don't define `return` anymore, you don't get hit by the MRP warning either, which would start with GHC 8.2.

GHC can keep providing `return` as long as we want it to, and consider `return` being an extra method of `Monad` simply a GHC-ism. Future Haskell books and learning materials will hopefully be based on the next Haskell Report incorporating the AMP, and stop referring to the historical `return` accident (which I consider badly named anyway from a pedagogical perspective). Code written unaware of `return` being a method of Monad will work just fine anyway.

Do you see any problems with this scheme?

Hi,
On 6 October 2015 at 19:03, Herbert Valerio Riedel
In the specific case of MRP, I can offer you a Wall-perfect transition scheme by either using `ghc-options: -fno-mrp-warnings` in your cabal-file, [...]
Apropos, is there a similar option for AMP warnings? I would rather use that than CPP.

Hi, On 2015-10-06 at 21:32:19 +0200, Mikhail Glushenkov wrote:
On 6 October 2015 at 19:03, Herbert Valerio Riedel
wrote: In the specific case of MRP, I can offer you a Wall-perfect transition scheme by either using `ghc-options: -fno-mrp-warnings` in your cabal-file, [...]
Apropos, is there a similar option for AMP warnings? I would rather use that than CPP.
Sure, added in GHC 7.8:

| In GHC 7.10, Applicative will become a superclass of Monad,
| potentially breaking a lot of user code. To ease this transition, GHC
| now generates warnings when definitions conflict with the
| Applicative-Monad Proposal (AMP).
|
| A warning is emitted if a type is an instance of Monad but not of
| Applicative, MonadPlus but not Alternative, and when a local function
| named join, <*> or pure is defined.
|
| The warnings are enabled by default, and can be controlled using the
| new flag -f[no-]warn-amp.

However, if you use that now with GHC 7.10 you get a warning:

| $ ghc-7.10.2 -fno-warn-amp
|
| on the commandline: Warning:
|     -fno-warn-amp is deprecated: it has no effect, and will be removed in GHC 7.12

so you'll need to guard that with something like

  if impl(ghc == 7.8.*)
    ghc-options: -fno-warn-amp
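For completeness, a minimal sketch of how such a guard might sit in a .cabal file; the stanza contents and version bounds below are placeholders rather than taken from any real package:

--8<---------------cut here---------------start------------->8---
library
  exposed-modules: MyMaybe
  build-depends:   base >= 4.5 && < 4.9
  ghc-options:     -Wall
  -- only GHC 7.8 understands (and needs) this flag: GHC 7.10 merely
  -- warns that it is deprecated, and earlier GHCs don't know it at all
  if impl(ghc == 7.8.*)
    ghc-options: -fno-warn-amp
--8<---------------cut here---------------end--------------->8---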

I hit "send" too early, so here's the incomplete section completed: On 2015-10-06 at 18:47:08 +0200, Herbert Valerio Riedel wrote: [...]
In the specific case of MRP, I can offer you a Wall-perfect transition scheme by either using `ghc-options: -fno-mrp-warnings` in your cabal-file, or if that doesn't satisfy you, we can delay phase1 (i.e. redundant return-def warnings) to GHC 8.2:
Now you can continue to write `return = pure` w/o GHC warning bothering you until GHC 8.2, at which point the 3-year-window will reach to GHC 7.10.
Then starting with GHC 8.2 you can drop the `return` definition, and
...keep supporting a 3-year-window back till GHC 7.10 (which incorporates AMP and doesn't need `return` explicitly defined anymore) without CPP. And since you don't define `return` anymore, you don't get hit by the MRP warning either, which would start with GHC 8.2. GHC can keep providing as long as we want it to, and consider `return` being an extra method of `Monad` simply a GHC-ism. Future Haskell books and learning materials will hopefully be based on the next Haskell Report incorporating the AMP and stop referring to the historical `return` accident (which I consider badly named anyway from a pedagogically perspective). Code written unaware of `return` being a method of Monad will work anyway just fine. Do you see any problems with this scheme?

2015-10-06 18:47 GMT+02:00 Herbert Valerio Riedel
[...] That being said, as to how to write your Monad instances today with GHC 7.10 w/o CPP, while supporting at least GHC 7.4/7.6/7.8/7.10: This *does* work (admittedly for an easy example, but this can be generalised):

--8<---------------cut here---------------start------------->8---
module MyMaybe where

import Control.Applicative (Applicative(..))
import Prelude (Functor(..), Monad(..), (.))
-- or alternatively: `import qualified Prelude as P`
[...]
--8<---------------cut here---------------end--------------->8---

The example above compiles -Wall-clean and satisfies all three of your stated requirements afaics. I do admit this is probably not what you had in mind.
OK, so the trick is that you're effectively hiding Applicative from the Prelude (which might be a no-op). This "works" somehow, but is not satisfactory IMHO for several reasons:

* If you explicitly import all entities from Prelude, your import list will typically get *very* long and unreadable. Furthermore, if that's the suggested technique, what's the point of having a Prelude at all?

* Some people see qualified imports as the holy grail, but having to prefix tons of things with "P." is IMHO very ugly. Things are even worse for operators: the whole notion of operators in itself is totally useless and superfluous *except* for a single reason: readability. And exactly that gets destroyed when you have to qualify them (see the sketch right after this list), so I would (sadly) prefer some #ifdef hell if that gives me readable code elsewhere.

* With the current trend of moving things to the Prelude, I can envision a not-so-distant future where the whole Control.Applicative module will be deprecated. As it is now, it's mostly superfluous and/or contains only stuff which might better live somewhere else.
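For what it's worth, a minimal sketch (module name, alias and function are made up purely for illustration) of what the qualified-operator style looks like at a use site, which is the readability cost being described above:

--8<---------------cut here---------------start------------->8---
module QualifiedOps where

import qualified Control.Applicative as A

-- combine two Maybe values, Applicative-style, with the
-- operators spelled in their qualified form
pair :: Maybe a -> Maybe b -> Maybe (a, b)
pair ma mb = (,) A.<$> ma A.<*> mb
--8<---------------cut here---------------end--------------->8---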
[...] That's because -Wall-hygiene (w/o opting out of harmless warnings) across multiple GHC versions is not considered a show-stopper.
That's your personal POV; I'm leaning more towards "-Wall -Werror". I've seen too many projects where neglecting warnings over an extended period of time made fixing them basically impossible in the end. Anyway, I think that a sane ecosystem should allow *both* POVs, the sloppy one and the strict one.
[...] Beyond what Ben already suggested in another post, there was also the more general suggestion to implicitly suppress warnings when you explicitly name an import. E.g.
import Control.Applicative (Applicative(..))
would suppress the redundant-import warning for Applicative via Prelude, because we specifically requested Applicative, so we don't mind that Prelude re-exports the same symbol. [...]
Uh, oh... That would be bad, because one normally wants to be told about redundant imports. Without the compiler telling me, how should I find out which ones are redundant? Manually trying to remove them step by step? :-/ Cheers, S.

On 2015-10-06 at 19:41:51 +0200, Sven Panne wrote:
2015-10-06 18:47 GMT+02:00 Herbert Valerio Riedel
: [...] That being said, as to how to write your Monad instances today with GHC 7.10 w/o CPP, while supporting at least GHC 7.4/7.6/7.8/7.10: This *does* work (admittedly for an easy example, but this can be generalised):

--8<---------------cut here---------------start------------->8---
module MyMaybe where

import Control.Applicative (Applicative(..))
import Prelude (Functor(..), Monad(..), (.))
-- or alternatively: `import qualified Prelude as P`
[...]
--8<---------------cut here---------------end--------------->8---

The example above compiles -Wall-clean and satisfies all three of your stated requirements afaics. I do admit this is probably not what you had in mind.
OK, so the trick is that you're effectively hiding Applicative from the Prelude (which might be a no-op). This "works" somehow, but is not satisfactory IMHO for several reasons:
[...] Btw, I've also seen the trick below, in which you use the aliased `A.` prefix just once so GHC considers the import non-redundant, and you don't have to suffer from prefixed operators in the style of `A.<*>`. Is this any better?

--8<---------------cut here---------------start------------->8---
import Control.Applicative as A (Applicative(..))

data Maybe' a = Nothing' | Just' a

instance Functor Maybe' where
  fmap f (Just' v) = Just' (f v)
  fmap _ Nothing'  = Nothing'

instance A.Applicative Maybe' where
  pure = Just'
  f1 <*> f2 = f1 >>= \v1 -> f2 >>= (pure . v1)

instance Monad Maybe' where
  Nothing' >>= _ = Nothing'
  Just' x  >>= f = f x

  return = pure -- "deprecated" since GHC 7.10
--8<---------------cut here---------------end--------------->8---

-- hvr

2015-10-07 9:35 GMT+02:00 Herbert Valerio Riedel
Btw, I've also seen the trick below, in which you use the aliased `A.` prefix just once so GHC considers the import non-redundant, and you don't have to suffer from prefixed operators in the style of `A.<*>`.
Is this any better? [...]
While not perfect, it's much better than having to fiddle around with Prelude imports. Although there's the slight danger that somebody else (or the author, a year later) looks at the code and has a WTF moment... ;-) To be honest, while it's somewhat obvious how it works when you read it, I've never seen that trick before. Perhaps stuff like this belongs in some general "Porting Guide", along with its alternatives. It's general enough that it should not be buried in some AMP/FTP/return/... transitioning guide. Cheers, S.

On Wed, Oct 7, 2015 at 3:35 AM, Herbert Valerio Riedel
--8<---------------cut here---------------start------------->8---
import Control.Applicative as A (Applicative(..))

data Maybe' a = Nothing' | Just' a

instance Functor Maybe' where
  fmap f (Just' v) = Just' (f v)
  fmap _ Nothing'  = Nothing'

instance A.Applicative Maybe' where
  pure = Just'
  f1 <*> f2 = f1 >>= \v1 -> f2 >>= (pure . v1)

instance Monad Maybe' where
  Nothing' >>= _ = Nothing'
  Just' x  >>= f = f x

  return = pure -- "deprecated" since GHC 7.10
--8<---------------cut here---------------end--------------->8---
Alternately,

import Control.Applicative
import Prelude

data Maybe' a = Nothing' | Just' a

instance Functor Maybe' where
  fmap f (Just' v) = Just' (f v)
  fmap _ Nothing'  = Nothing'

instance Applicative Maybe' where
-- hvr

On Tue, Oct 6, 2015 at 1:41 PM, Sven Panne
2015-10-06 18:47 GMT+02:00 Herbert Valerio Riedel
: [...] That's because -Wall-hygiene (w/o opting out of harmless warnings) across multiple GHC versions is not considered a show-stopper.
That's your personal POV; I'm leaning more towards "-Wall -Werror". I've seen too many projects where neglecting warnings over an extended period of time made fixing them basically impossible in the end. Anyway, I think that a sane ecosystem should allow *both* POVs, the sloppy one and the strict one.
Note: You haven't been able to upload a package that has -Werror turned on in the cabal file for a couple of years now -- even if it is only turned on for the test suite -- so any -Werror discipline you choose to enforce is purely local. -Edward

On 6 Oct 2015, at 17:47, Herbert Valerio Riedel wrote:
The problem with these discussions is that they take place between two groups with quite a difference in experience. On one hand you have people like Bryan, who have considerable contributions to the Haskell ecosystem and much experience in large scale software development (e.g. from Facebook). On the other hand you have people who don't. That's okay. We've all been in the latter group at some point in our careers. [...]
At the risk of stating the obvious: I don't think it matters which group a given argument comes from, as its validity doesn't depend on the messenger.
In that case, I think you are misunderstanding the relevance of Johan's argument here. Let me try to phrase it differently. Some people who can reasonably claim to have experience with million-line plus codebases are warning that this change is too disruptive, and makes maintenance harder than it ought to be. On the other hand, of the people who say the change is not really disruptive, none of them have (yet?) made claims to have experience of the maintenance of extremely large-scale codebases. The authority of the speaker does matter in technical arguments of this nature: people without the relevant experience are simply unqualified to make guesses about the impact. Regards, Malcolm

Do those participating in this thread think sentiments like this are
constructive or inclusive? Is this how we encourage participation from
newer members of the community?
Framing this debate in terms of a programming pecking order is
unprofessional. Many times, those higher in the ranks will prefer a more
conservative approach, as experienced surgeons once resisted the
introduction of the autoclave.
The problem isn't the change; it's what the change costs you. Provide data
and make your case. Talk about what it _costs_ you, show evidence for that
cost, and describe what would make the change acceptable. Do it without
talking down to a constructed "other" of the people who've neglected to
make the same status display you've injected into this conversation. That
could be valuable input to the discussion, so we could weigh costs
and benefits as a community.
There _are_ costs associated with going ahead with MRP, especially for
those with large 1mm LOC industrial codebases. This is partly why I'm
lukewarm on the change, but I believe it needs to happen sooner or later
and waiting for more 1mm LOC codebases to be born isn't going to make it
any better. The suggestions that we consider the example of 2to3 I believe
have been more constructive, particularly since we have this lovely
language which lends itself so nicely to static analysis anyway.
On Tue, Oct 6, 2015 at 2:02 PM, Malcolm Wallace
On 6 Oct 2015, at 17:47, Herbert Valerio Riedel wrote:
The problem with these discussions is that they take place between two groups with quite a difference in experience. On one hand you have people like Bryan, who have considerable contributions to the Haskell ecosystem and much experience in large scale software development (e.g. from Facebook). On the other hand you have people who don't. That's okay. We've all been in the latter group at some point in our careers. [...]
At the risk of stating the obvious: I don't think it matters which group a given argument comes from, as its validity doesn't depend on the messenger.
In that case, I think you are misunderstanding the relevance of Johan's argument here. Let me try to phrase it differently. Some people who can reasonably claim to have experience with million-line plus codebases are warning that this change is too disruptive, and makes maintenance harder than it ought to be. On the other hand, of the people who say the change is not really disruptive, none of them have (yet?) made claims to have experience of the maintenance of extremely large-scale codebases. The authority of the speaker does matter in technical arguments of this nature: people without the relevant experience are simply unqualified to make guesses about the impact.
Regards, Malcolm
-- Chris Allen Currently working on http://haskellbook.com

On Tue, Oct 6, 2015 at 3:02 PM, Malcolm Wallace
On 6 Oct 2015, at 17:47, Herbert Valerio Riedel wrote:
At the risk of stating the obvious: I don't think it matters which group a given argument comes from, as its validity doesn't depend on the messenger.
In that case, I think you are misunderstanding the relevance of Johan's argument here. Let me try to phrase it differently. Some people who can reasonably claim to have experience with million-line plus codebases are warning that this change is too disruptive, and makes maintenance harder than it ought to be. On the other hand, of the people who say the change is not really disruptive, none of them have (yet?) made claims to have experience of the maintenance of extremely large-scale codebases.
Very well. Let me offer a view from the "other side of the fence." I personally maintain about 1.3 million lines of Haskell, and over 120 packages on hackage. It took me less than half a day to get everything running with 7.10, and about two days to build -Wall clean. In that first day I actually had to spend vastly more time fixing things related to changes in Typeable, template-haskell and a tweaked corner case in the typechecker than anything AMP/FTP related. In the end I had to add two type signatures.

Most of the patches to go -Wall clean looked like

+#if __GLASGOW_HASKELL__ < 710
 import Control.Applicative
 import Data.Monoid
+#endif

Maybe 10% were more complicated.

-Edward

Dear all,

I think this discussion has gotten quite heated for reasons not related to the concrete MRP proposal, which, to be honest, I considered quite modest in terms of both scope and impact. Instead, I think it is a proxy for lots of remaining frustration and anxiety over the poor handling of the Foldable Traversable Proposal. I would like to remind everyone that due to the broad discussions and concerns over that proposal, a very rare, careful poll of Haskell users was taken, announced broadly in many channels. [1] The poll, overwhelmingly, revealed a mandate for the FTP. The breakdown of that mandate was 87% in favor among hobbyists and 79% in favor among non-hobbyists (who constituted a majority of those polled).

I. Generalities

That said, even the _best_ poll was not a substitute for a better earlier discussion. The handling of the AMP and FTP, which I think was heroic in terms of minimizing breakage while accomplishing long-desired change, also still could have been better. As a whole, the work accomplished the mandate of allowing code to be written in a backwards-compatible way without requiring CPP. However, it did not also seek to prevent warnings. This in itself was an enormous step forward from changes in the past, which have _not_ even managed to prevent the need for CPP. At the time, I think it was not recognized how much desire there would be for things that were _both_ CPP-free and _also_ warning-free for 3 releases. I think one of the great elements of progress in the current discussion is that there is now a proposal on the table which recognizes this, and seeks to accomplish this change in accordance with this desire. It is not the world's most important change, but the recognition that change should seek to be both CPP- _and_ warning-free is a good one, and I'm sure it will be taken into account in future proposals as well.

I don't think it is useful to continue to have abstract discussions on the conflict between the desire for incremental improvement and the need to minimize pain on maintainers. We might as well continue to argue about the need for purely functional programming versus the need to print "hello world" to the console. Rather, we should put our collective minds together as collaborators and colleagues to accomplish _both_, and to come up with solutions that should work for everyone. To the extent this discussion has been about that, I think it has been useful and positive. However, to the extent this discussion insists, on either side, on the shallow idea that we must treat "improvement" versus "stability" as irreconcilable factions in necessary conflict, then I fear it will be a missed opportunity.

II. Particulars

With that in mind, I think the _concrete_ voices of concern have been the most useful. Gregory Collins' list of issues requiring CPP should be very sobering. Of note, I think they point to areas where the core libraries committee has not paid _enough_ attention (or perhaps has not been sufficiently empowered: recall that not all core libraries fall under its maintenance [2]). Things like the newtype FFI issue, the changes to prim functions, the splitup of old-time and the changes to exception code were _not_ vetted as closely as the AMP and FTP were, or as the MRP is currently being. I don't know all the reasons for this, but I suspect they just somewhat slipped under the radar. In any case, if all those changes had been as carefully engineered as the MRP proposal has been, then imho things would have been much smoother.
So, while this discussion may be frustrating, it nonetheless in some ways provides a model of how people have sought to do better and be more proactive with careful discussion of changes. This is much appreciated.

Personally, since the big switch to extensible exceptions back in 6.10, and since the split-base nonsense prior to that, very few changes to the core libraries have really caused too much disruption in my code. Since then, the old-time cleanup was the worst, and the big sin there was that time-locale-compat was only written some time after the fact by a helpful third-party contributor and not engineered from the start. (I will note that the time library is one of the core libraries that is _not_ maintained by the core libraries committee.) Outside of that, the most disruptive changes to my code that I can recall have been from changes to the aeson library over the years -- particularly but not only regarding its handling of doubles. I don't begrudge these changes -- they iteratively arrived at a _much_ better library than had they not been made. [3] After that, I made a few changes regarding Happstack and Snap API changes, if I recall. Additionally, the addition of "die" to System.Exit caused a few name clashes.

My point is simply that there are many packages outside of base that also move, and "real" users with "real" code will these days often have quite a chain of dependencies, and will encounter movement and change from across many of them. So if we say "base never changes" that does not mean "packages will never break" -- it just means that base will not have the same opportunity to improve that other packages do, which will eventually lead to frustration, just as it did in the past and in the leadup to the BBP.

III. Discussions

Further, since there has been much discussion of a window of opportunity, I would like to offer a counterpoint to the (sound) advice that we take into consideration voices with long experience in Haskell. The window of opportunity is, by definition, regarding uptake of Haskell by new users. And so if newer users favor certain changes, then it is good evidence that those changes will help with uptake among other new users. So, if they are good changes on their own, then the fact that they are appealing to newer users should be seen as a point in their favor, rather than a reason to dismiss those opinions. But if we are in a situation where we see generations of adopters pitted against one another, then we already have deeper problems that need to be sorted out.

Regarding where and how to have these discussions -- the decision was made some time ago (I believe at the start of the initial Haskell Prime process if not sooner, so circa 2009?) that the prime committee would focus on language extensions and not library changes, and that those changes would be delegated to the libraries@ list. The lack of structure to the libraries@ list is what prompted the creation of the libraries committee, whose ultimate responsibility it is to decide on and shepherd through these changes, in consultation with others and ideally driven by broad consensus. Prior to this structure, things broke even more, imho, and simultaneously the things that were widely desired were still not implemented. So I thank the libraries committee for their good work so far.

So, it may be that the process of community discussion on core libraries changes is not best suited for the libraries@ list. But if not there, where?
I worry that the proliferation of lists will not improve things here. Those involved with Haskell have multiplied (this is good). The voices to take into account have multiplied (this is good). Necessarily, this means that there will just be _more_ stuff, and making sure that everyone can filter to just the part they want is difficult. Here, perhaps, occasional libraries-related summary addenda to the ghc newsletter could be appropriate? Or is there another venue we should look towards? "Chair's reports" to the Haskell Weekly News maybe?

IV. Summing up

We should bear in mind after all that this is just about cleaning up a redundant typeclass method (albeit one in a very prominent place) and hardly the hill anyone would want to die on [4]. Nonetheless, I think it would be a good sign of progress and collaboration if we can find a way to implement a modest change like this in a way that everyone finds acceptable vis-a-vis a sufficiently slow pace, the lack of a need for CPP, and the lack of any induced warnings. On the other hand, other opportunities will doubtless present themselves in the future.

Best,
Gershom

[1] https://mail.haskell.org/pipermail/libraries/2015-February/025009.html
[2] https://wiki.haskell.org/Library_submissions#The_Core_Libraries
[3] and in any case I am sure Bryan would be the last to want us to treat him as some sort of "guru" on these matters.
[4] for those in search of better hills to die on, this is a list of some good ones: http://www.theawl.com/2015/07/hills-to-die-on-ranked

P.S. In case there is any question, this email, as all emails I write that do not state otherwise, is not being written in any particular capacity regarding the various infra-related hats I wear, but is just an expression of my own personal views.

There's been a lot of complaints about the way things have worked in the past, with things going too fast, or too slow, or the right people not being heard, or notices not going to the right places, etc. As far as I can tell, the current process is that someone makes a proposal on some mailing list, that gets debated by those who find out about it, maybe a wiki page gets set up and announced to those who already know about the proposal, and then it either happens or not.

There's actually quite a bit of experience with dealing with such things. I've dealt with the IETF RFC process and the Python PEP process, and both of them worked better than that. So I think we can do better. I'd like to suggest we adopt something a bit more formal that any change to a developer-visible API should have to go through. Note that bug fixes, most security fixes, and other things that don't change the API wouldn't be subject to this requirement. However, any change in an API, even one that's 100% backward compatible, not possibly breaking any code, would be. Initial thoughts on scope are anything in the Haskell Platform, as that seems to be a minimal definition of "Haskell ecosystem". Further, anything on Hackage should be able to avail itself of the process.

My concrete, though very broad, proposal, with lots of details to be filled in yet, is that we need:

1) A wiki page that lists all proposals being considered, along with their current status and other relevant information.

2) A set of requirements a proposal must meet in order to be listed on that page.

3) An announcements list that only has announcements of things being added to the list. Anybody who has time to vote on a proposal should have time to be on this list.

4) An editorial group responsible for maintaining the list and providing guidance on meeting the requirements to get on it.

The first three are easy. The fourth one is the killer. Somebody to do the work is the stumbling block for most proposals. This doesn't require deep technical knowledge of Haskell or the current ecosystem, but rather the ability to implement a process and judge things based on form and not content. Since it's my proposal, I'll volunteer as the first editor. Hopefully, others with better reputations will also be available.

If adopted, the first two things on the list need to be a description of the process, followed shortly by a description of the requirements to be met.

On Tue, Oct 6, 2015 at 7:24 PM, Mike Meyer
I've dealt with the IETF RFC process and the Python PEP process, and both of them worked better than that.
While both those are good examples of mostly working organizations shepherding foundational technical standard(s) along... there is one thing more important than their processes: Their stance. Both organizations have a very strong engineering discipline of keeping deployed things working without change. I don't think it is enough to simply model their process. Until about three years ago, the Haskell community also had such a discipline. It has been steadily eroding over the last few years. - Mark

On Wed, Oct 7, 2015 at 1:45 AM Mark Lentczner
On Tue, Oct 6, 2015 at 7:24 PM, Mike Meyer
wrote: I've dealt with the IETF RFC process and the Python PEP process, and both of them worked better than that.
While both those are good examples of mostly working organizations shepherding foundational technical standard(s) along... there is one thing more important than their processes: Their stance. Both organizations have a very strong engineering discipline of keeping deployed things working without change. I don't think it is enough to simply model their process.
Well, until Python 3, anyway. My goal wasn't to recreate the engineering discipline that keeps deployed things working without change as you upgrade the ecosystem; it's to provide a mechanism so the community can more easily engage with the evolution of the ecosystem. Hopefully this will make it easier for the community to move things forward in a desirable manner. But it's a process, and it leaves the question of whether the desire is for more stability or a less stagnant language up to the users of the process. I don't necessarily want to model the IETF or PEP processes. Those are a starting point. I tried to abstract the initial points out enough that the final result could be either one of them, or something totally unrelated that's a better fit for the Haskell community.

On Tue, Oct 6, 2015 at 6:18 PM, Gershom B
With that in mind, I think the _concrete_ voices of concern have been the most useful. Gregory Collins’ list of issues requiring CPP should be very sobering. Of note, I think they point to areas where the core libraries committee has not paid _enough_ attention (or perhaps has not been sufficiently empowered: recall that not all core libraries fall under its maintenance [2]). Things like the newtype FFI issue, the changes to prim functions, the splitup of old-time and the changes to exception code were _not_ vetted as closely as the AMP and FTP were, or as the MRP is currently being. I don’t know all the reasons for this, but I suspect they just somewhat slipped under the radar.
In fact, more often than I would like, I can recall arguing against a particular change (https://www.mail-archive.com/ghc-devs@haskell.org/msg02133.html) on the grounds that it would break user code, and in Haskell land this is a battle I usually lose. Usually the argument on the other side boils down to expediency or hygiene/aesthetics -- it's *difficult* to engineer a change to some core infra in a way that minimizes impact on people downstream, and it takes a long time. Often "this change is going to cause a small amount of work for all of my users" is something that seems to not be taken into consideration at all.

For this particular proposal, every user will have some small amount of work *w* to do (to read the change notes, understand why 'return' is going away, train yourself to use "pure" now instead of "return" like you've been using for 15 years, etc). It might feel like *w* is small and so the change isn't burdensome, but *n* is literally everyone who uses the language, so the total work *w* × *n* is going to amount to quite a few person-hours. I just want to make sure that everyone is keeping that in mind and weighing that effort against the benefits.

Outside of that, the most disruptive changes to my code that I can recall
have been from changes to the aeson library over the years — particularly but not only regarding its handling of doubles. I don’t begrudge these changes — they iteratively arrived at a _much_ better library than had they not been made. [3] After than, I made a few changes regarding Happstack and Snap API changes if I recall. Additionally, the addition of “die” to System.Exit caused a few name clashes. My point is simply that there are many packages outside of base that also move, and “real” users with “real” code will these days often have quite a chain of dependencies, and will encounter movement and change from across many of them. So if we say “base never changes” that does not mean “packages will never break” — it just means that base will not have the same opportunity to improve that other packages do, which will eventually lead to frustration, just as it did in the past and in the leadup to the BBP.
Culturally, we have a problem with library authors of all stripes being too cavalier about breaking user programs: we definitely lean towards "move fast and break things" vs "stay stable and don't make work for users". As you write more and more Haskell code, you depend on more and more of these libraries, and this means that once you go beyond a certain threshold you will be spending a significant amount of your time just running to keep up with the treadmill. Personally, I just don't have as much time for writing Haskell code as I used to (or would like), so I would say that for me the treadmill tax is now probably exceeding 50% of my total hours invested. Greg

On 7 October 2015 at 18:09, Gregory Collins
For this particular proposal, every user will have some small amount of work w to do (to read the change notes, understand why 'return' is going away, train yourself to use "pure" now instead of "return" like you've been using for 15 years, etc).
While I don't think it detracts from your argument, it seems you misread the original proposal. At no point will it remove `return` completely. It would be moved out of the `Monad` class and be made into a top-level definition instead, so you would still be able to use it. Erik

On Wed, Oct 7, 2015 at 9:38 AM, Erik Hesselink
While I don't think it detracts from your argument, it seems you misread the original proposal. At no point will it remove `return` completely. It would be moved out of the `Monad` class and be made into a top-level definition instead, so you would still be able to use it.
Then why bother? If you don't intend to regard code that uses "return" as old, out-dated, in need of updating, etc.... If you don't intend to correct people on #haskell to use pure instead of return... If you don't tsk tsk all mentions of it in books.... If you don't intend to actually deprecate it. Why bother? But seriously, why do you think that "you would still be able to use it"? That is true for only the simplest of code - and untrue for anyone who has a library that defines a Monad - or anyone who has a library that they want to keep "up to date". Do you really want to have a library where all your "how to use this" code has return in the examples? Shouldn't now be pure? Do I now need -XCPP just for Haddock? and my wiki page? And what gets shown in Hackage? This is just a nightmare for a huge number of libraries, and especially many commonly used ones. Why bother!

On Wed, Oct 7, 2015 at 4:43 PM, Mark Lentczner
If you don't intend to actually deprecate it. Why bother?
But seriously, why do you think that "you would still be able to use it"? That is true for only the simplest of code - and untrue for anyone who has a library that defines a Monad - or anyone who has a library that they want to keep "up to date". Do you really want to have a library where all your "how to use this" code has return in the examples? Shouldn't now be pure? Do I now need -XCPP just for Haddock? and my wiki page? And what gets shown in Hackage? This is just a nightmare for a huge number of libraries, and especially many commonly used ones.
Why bother!
This is explained in the original proposal. In particular, it eliminates opportunities for errors and simplifies ApplicativeDo. I don’t believe anyone has proposed removing return from base. The only proposed change is turning return into a stand-alone function instead of a method in Monad. There is no proposal for removing return.

The part of the MRP proposal that I actively care about because it fixes a
situation that *actually causes harm* is moving (>>) to the top level.
Why?
Right now (*>) and (>>) have different default definitions. This means that
code often runs with different asymptotics depending on which one you pick.
Folks often define one but not the other.
This means that the performance of mapM_ and traverse_ needlessly differ.
It means that we can't simply weaken the type constraint on mapM_ and
sequence_ to Applicative, and as a knock-on consequence it means we can't
migrate mapM and sequence out of Traversable to top level definitions and
thereby simply provide folks with more efficient parallelizable mapping
when they reach for the 'obvious tool'.
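(For context, the relevant defaults in base look roughly like the following -- paraphrased and trimmed from GHC's base, not meant to compile standalone. An instance that overrides only one of the two sequencing operators gets the other one routed through a different code path, which is where the asymptotic mismatch comes from.)

--8<---------------cut here---------------start------------->8---
class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
  (*>)  :: f a -> f b -> f b
  a1 *> a2 = (id <$ a1) <*> a2   -- default goes through (<*>)

class Applicative m => Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b
  (>>)   :: m a -> m b -> m b
  m >> k = m >>= \_ -> k         -- default goes through (>>=)
  return :: a -> m a
  return = pure                  -- the default the MRP is about
--8<---------------cut here---------------end--------------->8---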
return itself lurking in the class doesn't matter to me all that much as it
doesn't break anybody's asymptotics and it already has a sensible
definition in terms of pure as a default, so effectively you can write code
as if MRP was already in effect today. It is a wart, but one that could be
burned off on however long a time table we want if we choose to proceed.
-Edward
On Wed, Oct 7, 2015 at 5:13 PM, Mark Lentczner
On Wed, Oct 7, 2015 at 9:38 AM, Erik Hesselink
wrote: While I don't think it detracts from your argument, it seems you misread the original proposal. At no point will it remove `return` completely. It would be moved out of the `Monad` class and be made into a top-level definition instead, so you would still be able to use it.
Then why bother? If you don't intend to regard code that uses "return" as old, out-dated, in need of updating, etc.... If you don't intend to correct people on #haskell to use pure instead of return... If you don't tsk tsk all mentions of it in books.... If you don't intend to actually deprecate it. Why bother?
But seriously, why do you think that "you would still be able to use it"? That is true for only the simplest of code - and untrue for anyone who has a library that defines a Monad - or anyone who has a library that they want to keep "up to date". Do you really want to have a library where all your "how to use this" code has return in the examples? Shouldn't now be pure? Do I now need -XCPP just for Haddock? and my wiki page? And what gets shown in Hackage? This is just a nightmare for a huge number of libraries, and especially many commonly used ones.
Why bother!

On Sat, 2015-10-10 at 15:25 -0400, Edward Kmett wrote:
The part of the MRP proposal that I actively care about because it fixes a situation that *actually causes harm* is moving (>>) to the top level.
Sorry if I'm missing something, but moving (>>) is not part of the proposal. At least it is not mentioned on the wiki page: https://ghc.haskell.org/trac/ghc/wiki/Proposal/MonadOfNoReturn Is the wiki outdated?
return itself lurking in the class doesn't matter to me all that much as it doesn't break anybody's asymptotics and it already has a sensible definition in terms of pure as a default, so effectively you can write code as if MRP was already in effect today. It is a wart, but one that could be burned off on however long a time table we want if we choose to proceed.
So the cost of not moving `return` to the top level is zero?

For me the cost of moving it is pretty small, just an hour or two. Probably recompiling all the dependencies when switching to newer version of GHC will take longer. (Actually I'm still using 7.8 at work.) But the cost is definitely nonzero.

The proposal (as written on the wiki page) provides two arguments for the change: There is no reason to include `return` into the next standard. That is true. But we can leave `return` in GHC as a compiler-specific extension for backward compatibility, can't we? The second argument is `ApplicativeDo`, but I don't see the point. Breaking existing code "in order to benefit existing code" looks a bit strange.

Could someone please clarify what is the cost of not moving `return` out of `Monad`? Sorry if it is already answered somewhere else; it is hard to find anything in such a huge email thread.

Thanks, Yuras.

On Sat, Oct 10, 2015 at 4:12 PM, Yuras Shumovich
On Sat, 2015-10-10 at 15:25 -0400, Edward Kmett wrote:
The part of the MRP proposal that I actively care about because it fixes a situation that *actually causes harm* is moving (>>) to the top level.
Sorry if I'm missing something, but moving (>>) is not part of the proposal. At least it is not mentioned on the wiki page:
https://ghc.haskell.org/trac/ghc/wiki/Proposal/MonadOfNoReturn
Is the wiki outdated?
It arose during the original thread discussing the MRP but wasn't included in the 'proposal as written' that was sent out. https://mail.haskell.org/pipermail/libraries/2015-September/026129.html In many ways that proposal would do better 'on its own' than as part of the MRP.
return itself lurking in the class doesn't matter to me all that much
as it doesn't break anybody's asymptotics and it already has a sensible definition in terms of pure as a default, so effectively you can write code as if MRP was already in effect today. It is a wart, but one that could be burned off on however long a time table we want if we choose to proceed.
So the cost of not moving `return` to the top level is zero?
For me the cost of moving it is pretty small, just an hour or two. Probably recompiling all the dependencies when switching to newer version of GHC will take longer. (Actually I'm still using 7.8 at work.) But the cost is definitely nonzero.
The proposal (as written on the wiki page) provides two arguments for the change:
There is no reason to include `return` into the next standard. That is true.
Nobody is saying that we should remove return from the language. The proposal was to move it out of the class -- eventually. Potentially on a very very long time line. But we can leave `return` in GHC as a compiler-specific extension for
backward compatibility, can't we?
This is effectively the status quo. There is a default definition of return in terms of pure today. The longer we wait, the more tenable this proposal gets in many ways, as fewer and fewer people start trying to support compiler versions below 7.10. Today isn't that day. There are some niggling corner cases around viewing its continued existence as a compiler "extension" though, even just around the behavior where, when you import the class with Monad(..), you get more or less than you'd expect.

Could someone please clarify what is the cost of not moving `return` out of
`Monad`?
The cost of doing nothing is maintaining a completely redundant member inside the class for all time and ever-so-slightly more expensive dictionaries for Monad, so retaining return in the class does no real harm operationally.

While I'm personally somewhat in favor of its eventual migration on correctness grounds and believe it'd be nice to be able to justify the state of the world as more than a series of historical accidents when I put on my libraries committee hat I have concerns.

I'm inclined to say at the least that IF we do decide to proceed on this, at least the return component should be on a long time horizon, with a clock tied to the release of a standard, say a Haskell2020. I stress IF, because I haven't had a chance to go through and do any sort of detailed tally or poll to get a sense of if there is a sufficient mandate. There is enough of a ruckus being raised that it is worth proceeding cautiously if we proceed at all.

-Edward

On Sat, 2015-10-10 at 16:39 -0400, Edward Kmett wrote:
On Sat, Oct 10, 2015 at 4:12 PM, Yuras Shumovich < shumovichy@gmail.com> wrote:
There is no reason to include `return` into the next standard. That is true.
Nobody is saying that we should remove return from the language. The proposal was to move it out of the class -- eventually. Potentially on a very very long time line.
Yes, I meant there is no reason to include `return` in the `Monad` class in the next standard.
There are some niggling corner cases around viewing its continued existence as a compiler "extension" though, even just around the behavior when you import the class with Monad(..) you get more or less than you'd expect.
Indeed that is a good argument.
The cost of doing nothing is maintaining a completely redundant member inside the class for all time
Well, it is just a single line of code. Of course, like any other technical debt, it can bite you later, e.g. it can make some other modification harder. But in the worst case it will cost one more deprecation cycle.
and an ever-so-slightly more expensive dictionaries for Monad
Do you mean that moving `return` to the top level will give us a noticeable performance improvement?
so retaining return in the class does no real harm operationally.
IMO that is the reason for the "ruckus". Thank you for the detailed answer. Yuras
While I'm personally somewhat in favor of its eventual migration on correctness grounds and believe it'd be nice to be able to justify the state of the world as more than a series of historical accidents when I put on my libraries committee hat I have concerns.
I'm inclined to say at the least that IF we do decide to proceed on this, at least the return component should be on a long time horizon, with a clock tied to the release of a standard, say a Haskell2020. I stress IF, because I haven't had a chance to go through and do any sort of detailed tally or poll to get a sense of if there is a sufficient mandate. There is enough of a ruckus being raised that it is worth proceeding cautiously if we proceed at all.
-Edward
participants (13)
- Christopher Allen
- Edward Kmett
- Erik Hesselink
- Gershom B
- Gregory Collins
- Herbert Valerio Riedel
- Malcolm Wallace
- Manuel Gómez
- Mark Lentczner
- Mike Meyer
- Mikhail Glushenkov
- Sven Panne
- Yuras Shumovich