
Friends,

I'm taking a step back from day-to-day library work.

There are two main reasons I use Haskell: on one hand I find writing Haskell educational and fun. On the other I hope to make it a viable alternative to existing mainstream languages. With recent changes to our core libraries, and the general direction these are moving in, I believe we're moving away from becoming a viable alternative to those mainstream languages.

This has some practical implications for how I spend my Haskell hacking time. Much of what I do is maintaining and working on libraries that are needed for real world usage, but that aren't that interesting to work on. I've lost the motivation to work on these.

I've decided to take a step back from the core maintenance work on cabal, network, containers, and a few others* starting now. I've already found replacement maintainers for these.

I still plan to hack on random side projects, including GHC, and to continue coming to Haskell events and conferences, just with a shorter bug backlog to worry about. :)

-- Johan Tibell

* For now I will still hack on unordered-containers and ekg, as there are some things I'd like to experiment with there.

Johan,

Thank you for your hard work maintaining several much-used and very essential packages. I'm sad to see you leave as the maintainer, but I understand your sentiment.

-- Lennart

From: Libraries [mailto:libraries-bounces@haskell.org] On Behalf Of Johan Tibell
Sent: 20 October 2015 14:59
To: ghc-devs@haskell.org; Haskell Libraries; cabal-devel@haskell.org
Subject: Taking a step back

Thank you for all the work you've done to evolve the Haskell ecosystem.
Your steady hand will be missed.
On Tue, Oct 20, 2015 at 9:59 AM, Johan Tibell
Friends,
I'm taking a step back from day-to-day library work.
There are two main reasons I use Haskell: on one hand I find writing Haskell educational and fun. On the other I hope to make it a viable alternative to existing mainstream languages. With recent changes to our core libraries, and the general direction these are moving in, I believe we're moving away from becoming a viable alternative to those mainstream languages.
This has some practical implications for how I spend my Haskell hacking time. Much of what I do is maintaining and working on libraries that are needed for real world usage, but that aren't that interesting to work on. I've lost the motivation to work on these.
I've decided to take a step back from the core maintenance work on cabal, network, containers, and a few others* starting now. I've already found replacement maintainers for these.
I still plan to hack on random side projects, including GHC, and to continue coming to Haskell events and conferences, just with a shorter bug backlog to worry about. :)
-- Johan Tibell
* For now I will still hack on unordered-containers and ekg, as there are some things I'd like to experiment with there.

I look forward to seeing yah around at events, as always!
On Tuesday, October 20, 2015, Johan Tibell
Friends,
I'm taking a step back from day-to-day library work.
There are two main reasons I use Haskell: on one hand I find writing Haskell educational and fun. On the other I hope to make it a viable alternative to existing mainstream languages. With recent changes to our core libraries, and the general direction these are moving in, I believe we're moving away from becoming a viable alternative to those mainstream languages.
This has some practical implications for how I spend my Haskell hacking time. Much of what I do is maintaining and working on libraries that are needed for real world usage, but that aren't that interesting to work on. I've lost the motivation to work on these.
I've decided to take a step back from the core maintenance work on cabal, network, containers, and a few others* starting now. I've already found replacement maintainers for these.
I still plan to hack on random side projects, including GHC, and to continue coming to Haskell events and conferences, just with a shorter bug backlog to worry about. :)
-- Johan Tibell
* For now I will still hack on unordered-containers and ekg, as there are some things I'd like to experiment with there.

Sincere thanks for all the work that you've done for the Haskell ecosystem; it's much appreciated and will be sorely missed.

I'm interested in why you think recent changes are making Haskell a less viable alternative to mainstream languages. My experience is the opposite: beginners frequently ask "why is x not a superclass of y?" and "why does function a seem to be the same as b?", and are horrified to be told that it's for historical reasons (y existed before x, a existed before the more general b, etc.). This is a big anti-climax for someone coming from a "mainstream" language, where type classes are all in the expected logical hierarchy, and functions/types always have the most general constraints possible.
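For context, the superclass question raised above has a concrete recent instance: the Applicative-Monad Proposal, which landed with GHC 7.10 and finally made Applicative a superclass of Monad. Below is a minimal sketch of the resulting hierarchy, with the classes redefined locally so the snippet stands alone (the real classes in base carry more methods and laws):

  module AmpSketch where

  import Prelude ()  -- hide the real classes so these local sketches compile

  class Functor f where
    fmap :: (a -> b) -> f a -> f b

  class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

  class Applicative m => Monad m where
    (>>=)  :: m a -> (a -> m b) -> m b
    return :: a -> m a
    return = pure   -- the historical duplicate becomes a plain synonym

Before base 4.8 the Applicative superclass constraint was absent, which is exactly the "y existed before x" situation described above.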

On Tue, Oct 20, 2015 at 7:45 AM, Jeremy
I'm interested in why you think recent changes are making Haskell a less viable alternative to mainstream languages.
I don't want to put words in Johan's mouth, but we visited together last
week and discussed this very topic, and I think my feelings on the matter
are pretty similar.
For the sake of argument, let's assume I'm a potential commercial user who
needs to decide whether Haskell is a technology I can base my next product
on. I'm going to do a cost-benefit analysis before I make my decision. The
major "pro" arguments you hear for using Haskell is that you'll end up with
programs that are more likely to be correct, and that since the language is
more expressive, you'll work faster: in other words, your net productivity
will increase. Of course, these hypothetical productivity benefits are
extremely difficult to quantify (and Lord knows, we've tried), but that's
not at all true for the "con" arguments:
- how many Haskell programmers are there in industry? If I lose my local
expert who is trying to push us to use this thing, can I hire another?
- how many lines of code are written in Haskell globally vs other
languages?
- how much tooling will I have available to help me if I choose Haskell
vs. a "safer" technology like Java, Python, or Go?
- how many open source libraries will I have available to me to handle
common tasks, and what is their quality?
- how likely am I to encounter bugs in the compiler or base libraries?
The point Johan is trying to make is this: if I'm thinking of using
Haskell, then I'm taking on a lot of project risk to get a (hypothetical,
difficult to quantify) X% productivity benefit. If choosing it actually
*costs* me a (real, obvious, easy to quantify) Y% tax because I have to
invest K hours every other quarter fixing all my programs to cope with
random/spurious changes in the ecosystem and base libraries, then unless we
can clearly convince people that X >> Y, the rationale for choosing to use
it is degraded or even nullified altogether.
G
--
Gregory Collins

On Tue, Oct 20, 2015 at 1:35 PM Gregory Collins
The point Johan is trying to make is this: if I'm thinking of using Haskell, then I'm taking on a lot of project risk to get a (hypothetical, difficult to quantify) X% productivity benefit. If choosing it actually *costs* me a (real, obvious, easy to quantify) Y% tax because I have to invest K hours every other quarter fixing all my programs to cope with random/spurious changes in the ecosystem and base libraries, then unless we can clearly convince people that X >> Y, the rationale for choosing to use it is degraded or even nullified altogether.
So I'll rephrase a question I asked earlier that never got an answer: if I'm developing a commercial project based on ghc and some ecosystem, what would possibly cause me to change either the ghc version or any part of the ecosystem every other quarter? Or ever, for that matter? I've never worked on a commercial project that changed anything major mid-project, no matter what language it was using. As far as I'm concerned, one of the major features of stack is that it handles project-specific ecosystems cleanly and transparently.

On 20/10/15 19:47, Mike Meyer wrote:
On Tue, Oct 20, 2015 at 1:35 PM Gregory Collins
<greg@gregorycollins.net> wrote: The point Johan is trying to make is this: if I'm thinking of using Haskell, then I'm taking on a lot of project risk to get a (hypothetical, difficult to quantify) X% productivity benefit. If choosing it actually *costs* me a (real, obvious, easy to quantify) Y% tax because I have to invest K hours every other quarter fixing all my programs to cope with random/spurious changes in the ecosystem and base libraries, then unless we can clearly convince people that X >> Y, the rationale for choosing to use it is degraded or even nullified altogether.
So I'll rephrase a question I asked earlier that never got an answer: if I'm developing a commercial project based on ghc and some ecosystem, what would possibly cause me to change either the ghc version or any part of the ecosystem every other quarter? Or ever, for that matter?

I don't know about them, but I can tell you my personal experience.
If GHC and all libraries were perfect and free from bugs and ultimately optimized, then you'd be right: there would be no reason to change. But if you ever hit a bug in GHC or a library which was fixed in a future version, or if you want an improvement made to it, you may have to update the compiler.

Library creators/maintainers do not always keep their libraries compatible with very old/very new versions of the compiler. In an ecosystem like ours, with 3 versions of the compiler in use simultaneously, each with different language features and base APIs changed, compatibility requires a lot of work. This problem is transitive: if you depend on (a new version of a library that depends on)* a new version of base or a new language feature, you may have to update GHC. If you do not have the resources to backport those fixes and improvements, you'll be forced to update. In large projects you are likely to use hundreds of auxiliary libraries, so this is very likely to happen.

I recently had to do this for one library because I could only compile it with a newer version of GHC. This project had 30K lines of Haskell split in dozens of libraries and a few commercial projects in production. It meant fixing, recompiling, packaging and testing everything again, which takes days and it's not unattended work :( It could easily happen again if I depend on anything that stops compiling with this version of GHC because someone considers it "outdated" or does not have the resources to maintain two versions of his/her library.

Does that more or less answer your question?

Cheers
Ivan

PS. I do not use stack yet. So, I remain ignorant about that. I see how it could help in some cases, but not this one.
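To make the compatibility cost above concrete, here is a minimal, hypothetical sketch of the CPP scaffolding a library typically needs in order to build against both GHC 7.8 (base < 4.8) and GHC 7.10 (base >= 4.8), where Applicative and (<$>) moved into the Prelude. The module name and helper are invented for illustration; MIN_VERSION_base is a macro supplied by Cabal-based builds:

  {-# LANGUAGE CPP #-}
  module Compat (pairUp) where

  #if !MIN_VERSION_base(4,8,0)
  -- On older base these names are not exported from the Prelude yet.
  import Control.Applicative (Applicative, (<$>), (<*>))
  #endif

  -- A made-up helper; the interesting part is the CPP above, which
  -- multiplies across every module and every supported GHC version.
  pairUp :: Applicative f => f a -> f b -> f (a, b)
  pairUp fa fb = (,) <$> fa <*> fb

Multiply this by hundreds of modules and transitive dependencies and the upgrade tax described above adds up quickly.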

On Tue, Oct 20, 2015 at 2:24 PM Ivan Perez
On 20/10/15 19:47, Mike Meyer wrote:
On Tue, Oct 20, 2015 at 1:35 PM Gregory Collins
wrote: The point Johan is trying to make is this: if I'm thinking of using Haskell, then I'm taking on a lot of project risk to get a (hypothetical, difficult to quantify) X% productivity benefit. If choosing it actually *costs* me a (real, obvious, easy to quantify) Y% tax because I have to invest K hours every other quarter fixing all my programs to cope with random/spurious changes in the ecosystem and base libraries, then unless we can clearly convince people that X >> Y, the rationale for choosing to use it is degraded or even nullified altogether.
So I'll rephrase a question I asked earlier that never got an answer: if I'm developing a commercial project based on ghc and some ecosystem, what would possibly cause me to change either the ghc version or any part of the ecosystem every other quarter? Or ever, for that matter?
I don't know about them, I can tell you my personal experience.
If GHC and all libraries were perfect and free from bugs and ultimately optimized, then you'd be right: there would be no reason to change.
But if you ever hit a bug in GHC or a library which was fixed in a future version, or if you want an improvement made to it, you may have to update the compiler.
Library creators/maintainers do not always maintain their libraries compatible with very old/very new versions of the compiler. In an ecosystem like ours, with 3 versions of the compiler in use simultaneously, each with different language features and base APIs changed, compatibility requires a lot of work.
This problem is transitive: if you depend on (a new version of a library that depends on)* a new version of base or a new language feature, you may have to update GHC. If you do not have the resources to backport those fixes and improvements, you'll be forced to update. In large projects you are likely to use hundreds of auxiliary libraries, so this is very likely to happen.
I recently had to do this for one library because I could only compile it with a newer version of GHC. This project had 30K lines of Haskell split in dozens of libraries and a few commercial projects in production. It meant fixing, recompiling, packaging and testing everything again, which takes days and it's not unattended work :( It could easily happen again if I depend on anything that stops compiling with this version of GHC because someone considers it "outdated" or does not have the resources to maintain two versions of his/her library.
Does that more or less answer your question?
Not really. IIUC, your fundamental complaint is that the cost of tracking changes to the Haskell ecosystem outweighs any potential gains from using Haskell. But the choices that lead you to needing to track those changes don't make sense to me.

For instance, you talk about compatibility requiring a lot of work, which I presume means between projects. Yes, having to swap out ecosystems and tool sets when you change projects can be a PITA, but even maintaining the environment by hand is less work than trying to keep all your projects compatible across multiple environments. So why do that? Especially when you have tools like virtual environments and stack to take away the pain of multiple environments?

And yes, if some part of the ecosystem has a bug you have to get fixed and an update will get the fix, that's one option. But it also comes with a cost, in that you need to verify that it didn't introduce any new bugs while fixing the old one. Plus, dealing with possible changes in the API. And as you note, if that forces you to update some other part of the ecosystem, all that work is transitive to those other parts. It indeed adds up to a lot of work. Enough that I have to question that it's less work than backporting a fix, or even developing a new one from scratch.

Over a couple of decades of building commercial projects in the P languages, when faced with the alternatives you outlined here, updating anything major was never the choice if more than one person was actively writing code. Even with a language that put a priority on not breaking old code in order to minimize the cost of doing that update.

Maybe there's something I'm missing about Haskell that makes fixing somebody else's code take a lot more resources than it does in other languages. In which case that, not the changing ecosystem, is the argument against Haskell.

stack and Travis CI make it easy for library authors to make sure their
libraries work across multiple versions of GHC. For users, it makes it
easier to install the newest version of GHC.
I have to say that I think very few people are choosing not to use Haskell
because there are breaking changes to the language.
In fact, this problem didn't register as a major concern in FPComplete's
survey
https://www.fpcomplete.com/blog/2015/05/thousand-user-haskell-survey
If anything, new releases with improvements (breaking or not) get people
excited about using Haskell.
When people talk about stability, they are generally thinking about whether
the platform is buggy, and whether they can get a stack trace when there is
a bug, not about upgrade cycles. For new users, upgrade cycles tend to be
an afterthought.
The burden is placed on open-source library authors like Johan and myself.
In general we would like to spend our time on things that produce new value
and minimize spending time keeping things working how they are now. If you
don't even believe the change prompting this work tax is a useful one, I
can definitely see how it would be demoralizing. In my case I can say that
for the last GHC release, members of the community that started using the
pre-release contributed a lot of patches for the upgrades, so it actually
did not bother me too much. On the other hand, aeson's latest release with
unnecessary and undocumented breaking changes created hours of work for me
for absolutely no reason.
https://github.com/bos/aeson/pull/288
On Tue, Oct 20, 2015 at 12:24 PM, Ivan Perez
On 20/10/15 19:47, Mike Meyer wrote:
On Tue, Oct 20, 2015 at 1:35 PM Gregory Collins
wrote: The point Johan is trying to make is this: if I'm thinking of using Haskell, then I'm taking on a lot of project risk to get a (hypothetical, difficult to quantify) X% productivity benefit. If choosing it actually *costs* me a (real, obvious, easy to quantify) Y% tax because I have to invest K hours every other quarter fixing all my programs to cope with random/spurious changes in the ecosystem and base libraries, then unless we can clearly convince people that X >> Y, the rationale for choosing to use it is degraded or even nullified altogether.
So I'll rephrase a question I asked earlier that never got an answer: if I'm developing a commercial project based on ghc and some ecosystem, what would possibly cause me to change either the ghc version or any part of the ecosystem every other quarter? Or ever, for that matter?
I don't know about them, I can tell you my personal experience.
If GHC and all libraries were perfect and free from bugs and ultimately optimized, then you'd be right: there would be no reason to change.
But if you ever hit a bug in GHC or a library which was fixed in a future version, or if you want an improvement made to it, you may have to update the compiler.
Library creators/maintainers do not always maintain their libraries compatible with very old/very new versions of the compiler. In an ecosystem like ours, with 3 versions of the compiler in use simultaneously, each with different language features and base APIs changed, compatibility requires a lot of work.
This problem is transitive: if you depend on (a new version of a library that depends on)* a new version of base or a new language feature, you may have to update GHC. If you do not have the resources to backport those fixes and improvements, you'll be forced to update. In large projects you are likely to use hundreds of auxiliary libraries, so this is very likely to happen.
I recently had to do this for one library because I could only compile it with a newer version of GHC. This project had 30K lines of Haskell split in dozens of libraries and a few commercial projects in production. It meant fixing, recompiling, packaging and testing everything again, which takes days and it's not unattended work :( It could easily happen again if I depend on anything that stops compiling with this version of GHC because someone considers it "outdated" or does not have the resources to maintain two versions of his/her library.
Does that more or less answer your question?
Cheers
Ivan
PS. I do not use stack yet. So, I remain ignorant about that. I see how it could help in some cases, but not this one.

7.10 cost me an hour on a 60k loc project.
Aeson 0.10 (<3 you Bryan, but this was brutal) cost me a couple days on a
2k loc project and I'm still not totally satisfied with how things have
settled yet.
Going to concur with Greg here.
On Tue, Oct 20, 2015 at 4:01 PM, Greg Weber
stack and Travis CI make it easy for library authors to make sure their libraries work across multiple versions of GHC. For users, it makes it easier to install the newest version of GHC.
I have to say that I think very few people are choosing not to use Haskell because there are breaking changes to the language.
In fact, this problem didn't register as a major concern in FPComplete's survey https://www.fpcomplete.com/blog/2015/05/thousand-user-haskell-survey
If anything, new releases with improvements (breaking or not) get people excited about using Haskell. When people talk about stability, they are generally thinking about whether the platform is buggy, and whether they can get a stack trace when there is a bug, not about upgrade cycles. For new users, upgrade cycles tend to be an afterthought.
The burden is placed on open-source library authors like Johan and myself. In general we would like to spend our time on things that produce new value and minimize spending time keeping things working how they are now. If you don't even believe the change prompting this work tax is a useful one, I can definitely see how it would be demoralizing. In my case I can say that for the last GHC release, members of the community that started using the pre-release contributed a lot of patches for the upgrades, so it actually did not bother me too much. On the other hand, aeson's latest release with unnecessary and undocumented breaking changes created hours of work for me for absolutely no reason. https://github.com/bos/aeson/pull/288
On Tue, Oct 20, 2015 at 12:24 PM, Ivan Perez
wrote: On 20/10/15 19:47, Mike Meyer wrote:
On Tue, Oct 20, 2015 at 1:35 PM Gregory Collins
wrote: The point Johan is trying to make is this: if I'm thinking of using Haskell, then I'm taking on a lot of project risk to get a (hypothetical, difficult to quantify) X% productivity benefit. If choosing it actually *costs* me a (real, obvious, easy to quantify) Y% tax because I have to invest K hours every other quarter fixing all my programs to cope with random/spurious changes in the ecosystem and base libraries, then unless we can clearly convince people that X >> Y, the rationale for choosing to use it is degraded or even nullified altogether.
So I'll rephrase a question I asked earlier that never got an answer: if I'm developing a commercial project based on ghc and some ecosystem, what would possibly cause me to change either the ghc version or any part of the ecosystem every other quarter? Or ever, for that matter?
I don't know about them, I can tell you my personal experience.
If GHC and all libraries were perfect and free from bugs and ultimately optimized, then you'd be right: there would be no reason to change.
But if you ever hit a bug in GHC or a library which was fixed in a future version, or if you want an improvement made to it, you may have to update the compiler.
Library creators/maintainers do not always maintain their libraries compatible with very old/very new versions of the compiler. In an ecosystem like ours, with 3 versions of the compiler in use simultaneously, each with different language features and base APIs changed, compatibility requires a lot of work.
This problem is transitive: if you depend on (a new version of a library that depends on)* a new version of base or a new language feature, you may have to update GHC. If you do not have the resources to backport those fixes and improvements, you'll be forced to update. In large projects you are likely to use hundreds of auxiliary libraries, so this is very likely to happen.
I recently had to do this for one library because I could only compile it with a newer version of GHC. This project had 30K lines of Haskell split in dozens of libraries and a few commercial projects in production. It meant fixing, recompiling, packaging and testing everything again, which takes days and it's not unattended work :( It could easily happen again if I depend on anything that stops compiling with this version of GHC because someone considers it "outdated" or does not have the resources to maintain two versions of his/her library.
Does that more or less answer your question?
Cheers
Ivan
PS. I do not use stack yet. So, I remain ignorant about that. I see how it could help in some cases, but not this one.
-- Chris Allen Currently working on http://haskellbook.com

Since bug fixes are not backported to all older versions, you are sometimes forced to upgrade to avoid serious bugs.
From: Libraries [mailto:libraries-bounces@haskell.org] On Behalf Of Mike Meyer
Sent: 20 October 2015 19:47
To: Gregory Collins; Jeremy
Cc: Haskell Libraries
Subject: Re: Taking a step back
On Tue, Oct 20, 2015 at 1:35 PM Gregory Collins

On 20/10/15 19:35, Gregory Collins wrote:
On Tue, Oct 20, 2015 at 7:45 AM, Jeremy
<voldermort@hotmail.com> wrote: I'm interested in why you think recent changes are making Haskell a less viable alternative to mainstream languages.
[...] Of course, these hypothetical productivity benefits are extremely difficult to quantify (and Lord knows, we've tried), but that's not at all true for the "con" arguments:
* how many Haskell programmers are there in industry? If I lose my local expert who is trying to push us to use this thing, can I hire another?
* how many lines of code are written in Haskell globally vs other languages?
* how much tooling will I have available to help me if I choose Haskell vs. a "safer" technology like Java, Python, or Go?
* how many open source libraries will I have available to me to handle common tasks, and what is their quality?
* how likely am I to encounter bugs in the compiler or base libraries?
We actually get these questions from potential clients *all the time* (in particular, everyone asks 1 and 3). I don't always have a convincing answer.
The point Johan is trying to make is this: if I'm thinking of using Haskell, then I'm taking on a lot of project risk to get a (hypothetical, difficult to quantify) X% productivity benefit. If choosing it actually *costs* me a (real, obvious, easy to quantify) Y% tax because I have to invest K hours every other quarter fixing all my programs to cope with random/spurious changes in the ecosystem and base libraries, then unless we can clearly convince people that X >> Y, the rationale for choosing to use it is degraded or even nullified altogether.

Not even that. Learning and some tooling costs can be amortized over time, but a regular and frequent cost tied to upgrades in the ecosystem may be really hard to estimate in advance. This makes profit, viability and deadlines, mid-term and long-term, really hard to estimate and fulfill. (I've also tried, and often failed.)
If clients (supervisors, project managers, <your company>) have *any* doubts about whether using the language will be cost-effective, they won't go for it.

Cheers
Ivan

Gregory Collins
writes:
I have to invest K hours every other quarter fixing all my programs to cope with random/spurious changes in the ecosystem and base libraries
Doesn't this relate directly to which features you're using? I'd be interested to know: since 1998, how many programs using only Haskell 98 have broken due to recent changes? John

On Tue, 20 Oct 2015, Jeremy wrote:
I'm interested in why you think recent changes are making Haskell a less viable alternative to mainstream languages. My experience is the opposite - beginners frequently ask "why is x not a superclass of y?" and "why does function a seem to be the same as b?", and are horrified to be told that it's for historical reasons (y existed before x, a existed before the more general b, etc.).
I would not explain it this way, not even to myself.
This is a big anti-climax for someone coming from a "mainstream" language, where type classes are all in the expected logical hierarchy, and functions/types always have the most general constraints possible.
The most general type would be

  f :: Anything a => a

and programs would read like f (f (f a) (f b)). I don't think that the Traversable and Foldable functions in Prelude are the most general ones - what about using the Arrow class instead of functions?

If we always try to get the most general functions into Prelude we will get constant change but certainly not progress. The more general the types become, the more type annotations you will need and the more mental type inference the reader of a program must perform. Prelude would tend to depend on the newest type extensions.
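As a small illustration of that mental type inference cost, assuming a GHC with the generalized Prelude (7.10 or later), the results in the comments below are what GHCi reports:

  main :: IO ()
  main = do
    print (length "abc")          -- 3, as before
    print (length (Just 'x'))     -- 1: Maybe is Foldable
    print (length ('a', True))    -- 1: a pair folds over its second component
    print (sum ('x', 3 :: Int))   -- 3: the first component is ignored

Every line type-checks, but the reader now has to work out which Foldable instance is in play to know what each call does.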

Thank you for all the hard work you've put in over the years! I can certainly say (at least from a sample size of one) that you have met your goals, and your libraries have been paramount to letting me use Haskell as an alternative to mainstream languages. It's sad to see someone step away due to turbulence in the community and future visions for Haskell, but I hope that when things settle (I truly believe they will) you will feel like joining us again. All the best!

Thank you for your contributions to the Haskell community. Your work has
done a lot, especially for working Haskell programmers like myself and my
coworkers. I'm also grateful that you're willing to continue moving the
ball forward with unordered-containers and ekg.
You didn't mention it explicitly, but what about maintainership of Cassava?
I have a soft spot for that library, which I've explained in this GitHub issue[1], and I'd like the library to get the love & attention I think it deserves.
Hoping you'll reconsider your appraisal of Haskell in the future, but if
you're not having fun, no reason to suffer.
Cheers,
[1]: https://github.com/tibbe/cassava/issues/101
On Tue, Oct 20, 2015 at 8:59 AM, Johan Tibell
Friends,
I'm taking a step back from day-to-day library work.
There are two main reasons I use Haskell: on one hand I find writing Haskell educational and fun. On the other I hope to make it a viable alternative to existing mainstream languages. With recent changes to our core libraries, and the general direction these are moving in, I believe we're moving away from becoming a viable alternative to those mainstream languages.
This has some practical implications for how I spend my Haskell hacking time. Much of what I do is maintaining and working on libraries that are needed for real world usage, but that aren't that interesting to work on. I've lost the motivation to work on these.
I've decided to take a step back from the core maintenance work on cabal, network, containers, and a few others* starting now. I've already found replacement maintainers for these.
I still plan to hack on random side projects, including GHC, and to continue coming to Haskell events and conferences, just with a shorter bug backlog to worry about. :)
-- Johan Tibell
* For now I will still hack on unordered-containers and ekg, as there are some things I'd like to experiment with there.
-- Chris Allen Currently working on http://haskellbook.com

Johan,
Thank you so much for all of your contributions to the community.
I confess, there are days when I find myself so lost in maintenance hell that I feel a desire to throw in the towel as well. (If Eric Mertens and others hadn't picked up so much of the slack on my own projects, I'm afraid I likely would have reached the point of gravitational collapse long ago.)

I'm terribly sorry to hear that recent attempts to mitigate the impact of changes, such as the three-release policy that was inspired by comments you made, haven't been enough to assuage your fears and discontent about the current direction things are heading.
We are all poorer for the loss of your guidance.
-Edward
On Tue, Oct 20, 2015 at 9:59 AM, Johan Tibell
Friends,
I'm taking a step back from day-to-day library work.
There are two main reasons I use Haskell: on one hand I find writing Haskell educational and fun. On the other I hope to make it a viable alternative to existing mainstream languages. With recent changes to our core libraries, and the general direction these are moving in, I believe we're moving away from becoming a viable alternative to those mainstream languages.
This has some practical implications for how I spend my Haskell hacking time. Much of what I do is maintaining and working on libraries that are needed for real world usage, but that aren't that interesting to work on. I've lost the motivation to work on these.
I've decided to take a step back from the core maintenance work on cabal, network, containers, and a few others* starting now. I've already found replacement maintainers for these.
I still plan to hack on random side projects, including GHC, and to continue coming to Haskell events and conferences, just with a shorter bug backlog to worry about. :)
-- Johan Tibell
* For now I will still hack on unordered-containers and ekg, as there are some things I'd like to experiment with there.

On Tue, 20 Oct 2015, Johan Tibell wrote:
I'm taking a step back from day-to-day library work.
We have seen a lot of frustration with recent breaking changes to Prelude. This makes me wonder whether we should try a different way of performing votes. Although there seems to be some agreement that majority votes are not the ultimate tool to make decisions, controversial proposals such as the FTP were essentially decided by majority votes (strictly speaking, it was a majority that convinced the benevolent dictators).

I have read about different voting systems that do not try to maximize the number of happy people but try to minimize the number of frustrated people. Applied to libraries@haskell.org, we would no longer count +1, -1 and 0, but only -1 and 0, and we would also consider the status quo as one of the alternatives. E.g. if someone proposes something like FTP you would not answer +1 but instead:

  proposal: 0
  status quo: -1

or

  proposal: 0
  status quo: 0

I guess that this way we may find out that some proposals are nice to have for a majority of people, but the status quo is not bad either, and in summary there is no pressure to frustrate a (still big) minority.

How about that? Maybe other people have more experience with that voting system or have suggestions for alternatives. (I am curious whether someone replies "+1" to this suggestion. :-)
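To spell out the proposed counting rule, here is a toy Haskell sketch with invented alternatives and ballots: every voter scores each alternative, the status quo included, as either acceptable (0) or frustrating (-1), and the alternative with the fewest frustrated voters wins:

  import Data.List (minimumBy)
  import Data.Ord (comparing)

  data Score = Acceptable | Frustrating deriving (Eq, Show)

  -- One voter's score for each alternative (status quo included).
  type Ballot = [(String, Score)]

  -- How many voters marked a given alternative as frustrating.
  frustration :: String -> [Ballot] -> Int
  frustration alt ballots =
    length [ () | b <- ballots, lookup alt b == Just Frustrating ]

  -- Pick the alternative with the fewest frustrated voters.
  leastFrustrating :: [String] -> [Ballot] -> (String, Int)
  leastFrustrating alts ballots =
    minimumBy (comparing snd) [ (a, frustration a ballots) | a <- alts ]

  main :: IO ()
  main = print (leastFrustrating alternatives ballots)
    where
      alternatives = ["status quo", "proposal"]
      ballots =
        [ [("status quo", Acceptable),  ("proposal", Acceptable)]
        , [("status quo", Acceptable),  ("proposal", Frustrating)]
        , [("status quo", Frustrating), ("proposal", Acceptable)]
        ]

This is only an illustration of the tallying idea, not a description of how the libraries process currently works.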

Henning Thielemann
writes:
Applied to libraries@haskell.org we would no longer count +1, -1 and 0, but only -1 and 0 anymore, but we would also consider the status quo as one of the alternatives.
The only problem with not hearing positive votes is that we don't know the number of people in favor. Here's what ANSI/ISO does in the case of C++: we have a tiered voting system.

First, you take a "straw poll" (say, on this list) to determine the level of interest. To this, people vote: SA WA DC WF SF. That is: Strongly Against, Weakly Against, Don't Care, Weakly in Favor, Strongly in Favor. Only motions largely in favor move forward. Those that have Strongly Against votes lead to further discussion with those who are disappointed. We try not to force anything down anyone's throats if it can be avoided. SA votes really mean something, perhaps even more than SF.

If a motion is largely in favor, it moves to formal proposal before the committee (which in this case might be the Haskell Prime committee?). Each member of that committee has a vote, and simple majority decides whether it moves to the last round.

In the last round, delegates from each country make the final decision by majority. In the case of Haskell, our "countries" might be different: Academia, Industry, Hobbyists, etc. These delegates are supposed to represent the will of their constituents, but can vote however they wish.

In this way, no vote is a surprise, and there are several stages of both public and expert consideration as it moves toward a formal conclusion.

John

On Wed, Nov 4, 2015 at 9:37 PM, John Wiegley
In the last round, delegates from each country make the final decision by majority. In the case of Haskell, our "countries" might be different: Academia, Industry, Hobbyists, etc. These delegates are supposed to represent the will of their constituents, but can vote however they wish.
The trouble with this would be that the split in the community does not seem to be between e.g. the interests of academia and industry, but between conservative people in both academia and industry and those who are more open to language changes in both groups. I suspect the divide isn't even close to the same on most questions either, so a representative system might be tricky to implement.

Maybe if some form of grouping of people in the community is necessary, it should be made along the lines of the groups we saw mentioned in recent FTP discussions: authors of printed books, new Haskell users coming from other languages and new Haskell users learning it without prior experience, Haskell teachers, maintainers of libraries spanning many GHC versions, users writing new code without backwards compatibility concerns, those interested in using only one of the existing Haskell standards without language extensions and those interested in using the latest extensions available, ...

Perhaps in future discussions a focus should be on identifying and grouping the users affected by each change, and each user should be encouraged to vote +1, 0 or -1 for each of the groups they are a part of separately, explaining how they arrived at their final vote themselves.

Matthias Hörmann

Hi,

I'm not sure the problem is with the voting system. At least for me, the problem is mostly that I simply can't assess the consequences of a change. Often, I am easily convinced that the change would be for the better, but I can't precisely predict how much impact it would have on me and my work with Haskell, let alone on others. Even more so, I have a hard time assessing the cost of a change: Will it cause major annoyance for many people? Will it just be something that a few people will have to take care of once and it's done? Will it be smooth, or is there a high risk of unexpected knock-on effects?

In the end, I tend to be skeptical of most changes, but convincible by a carefully laid out transition plan, hoping that whoever created the plan thought it through. Ideally, the benefits and costs of a change could be quantified objectively, and we would not have to vote in the first place. But that's of course not possible.

(This mail does not offer solutions, or even ask for any, sorry. It was just a slight sigh, maybe with the hope of not getting people too excited about change either way, because it seems we cannot avoid suboptimal choices anyway.)

Greetings,
Joachim

--
Joachim “nomeata” Breitner
mail@joachim-breitner.de • http://www.joachim-breitner.de/
Jabber: nomeata@joachim-breitner.de • GPG-Key: 0xF0FBF51F
Debian Developer: nomeata@debian.org

Hi all,

I agree with Joachim Breitner. I've worked with a number of groups with various approaches to collaborative decision making (not just those called "voting"), and imo the recent upset has nothing to do with the "voting" procedure. I'm not saying we shouldn't consider changes to how we make decisions, just that if/when we do so we should do so for its own sake.

The problem isn't with "voting" because, as mentioned repeatedly, we've taken a wide sample of the community and found overwhelming support for the changes. Given the margin of support, choosing other approaches to counting up the size of that margin is unlikely to alter the ordering of "for > against". Given the margin of support, the only thing which could invert that ordering is if we took specific individuals (e.g., those who've publicly "resigned") and considered them to be dictators. Thus, unless the actual proposal is to make those individuals dictators, their opposition is insufficient to counter the support from the rest of the community. No matter how much we dislike the outcomes we got, changing the process of decision making wouldn't've allowed us to avoid those outcomes (again, unless we decided to make certain specific individuals into dictators).

The real problem is the growing divide in the community between the "liberals" vs the "conservatives". We could define these groups as those who're willing to break things vs those who want more stability, or as those who embrace polymorphism vs those who want to minimize mental type inference, or a few other ways I'm sure. How exactly we define the groups doesn't much matter imo; the point is: there are two groups which are growing ever more divergent from one another. Changing how we make decisions isn't going to reconcile these two groups; so long as the groups are widely divergent, any decisions made will upset one or the other.

So the real issue at hand is to address the following two questions:

(1) how can we reconcile the two groups, reducing the distance between them so as to reduce conflict?
(2) supposing the groups cannot be (sufficiently) reconciled, how do we proceed?

--
Live well,
~wren

On Fri, Nov 6, 2015 at 11:19 PM, wren romano
The real problem is the growing divide in the community between the "liberals" vs the "conservatives". We could define these groups as those who're willing to break things vs want more stability, or as those who embrace polymorphism vs those who want to minimize mental type inference, or a few other ways I'm sure. How exactly we define the groups doesn't much matter imo; the point is: there are two groups which are growing ever more divergent from one another. Changing how we make decisions isn't going to reconcile these two groups; so long as the groups are widely divergent, any decisions made will upset one or the other. So the real issue at hand is to address the following two questions:
(1) how can we reconcile the two groups, reducing the distance between them so as to reduce conflict? (2) supposing the groups cannot be (sufficiently) reconciled, how do we proceed?
It may help to have some data to better understand the community's postures. Perhaps we could design a survey with questions on each of these design orientations? I’m not sure at all that it’d be helpful to resolve conflicts, but it’d certainly be interesting to see what clusters of language design ideas come up — the division you mention certainly seems to be there in some intuitive sense, but our intuition might be flawed in many ways (is polymorphism “conservative” or “liberal”?). We could include questions relating to the number of public libraries published and maintained, time spent blogging, subscription to various mailing lists, and such, to see how design ideas relate to various forms of community involvement. Of course, results are likely to be awfully biased in many ways if such a survey is not designed, promoted and analyzed very carefully, and it may turn out that arguments based on the resulting data end up being harmful to the community’s discussions. I don’t know; I’m just curious!

On November 6, 2015 at 10:49:14 PM, wren romano (wren@community.haskell.org) wrote:
Hi all,
The real problem is the growing divide in the community between the "liberals" vs the "conservatives". We could define these groups as those who're willing to break things vs want more stability, or as those who embrace polymorphism vs those who want to minimize mental type inference, or a few other ways I'm sure. How exactly we define the groups doesn't much matter imo; the point is: there are two groups which are growing ever more divergent from one another.
I think that a “two groups” model of the disputes we’ve had lately simplifies too much, and in fact runs the risk of tending to force a bunch of varying motives and concerns into only two buckets, which I worry will increase the contentiousness of discussions rather than help to moderate things.

Valuing stability for teaching purposes is very different than valuing stability from the standpoint of a library maintainer, for example. And valuing monomorphic definitions of some things is not the same as valuing monomorphic definitions of other things — or opposing design compromises made in the name of performance, etc.

The FTP, while a step forward in many ways, made compromises that opened it up to criticism from many standpoints — from the correctness standpoint it did strange things to the List module — from the backwards compatibility standpoint it did not provide perfect backwards compat (especially with warnings taken into account) — from the performance standpoint it turns out to have introduced at least one unanticipated regression. So no matter your concern, there is somewhere where it did not hit all the marks perfectly. However, this is for the most part because it sought _mainly_ to cover all those marks — so there is a _decent_ correctness story, and a _decent_ backwards compat story, and a _pretty good_ performance story, etc. But by that token, I hesitate to lump those who had concerns or opposition around the FTP into any particular camp.

What I think we can try to ask from people is that they evaluate each proposal, to the extent possible, on _its own merits_ and not as part of any camp they might imagine they are in regarding a long term alignment over “fast” or “slow” or “no” library evolution, or the like. Even if we can’t get 100% consensus (and of course we won’t be able to), a continuing dialog where everyone tries to hear everyone else’s concerns and we try to make sure everyone is comfortable with the direction forward should still be the _goal_. Ideally, we will have less voting and not more, because ideally proposals that we move forward with will have a wide base of support across the board. In my mind, the way to do that is to treat each new library proposal case-by-case and go in willing to hear all sides, rather than prejudging what people’s take may be, why they might have such a take, etc.

Cheers,
Gershom

On Sat, Nov 7, 2015 at 4:02 PM, Gershom B
On November 6, 2015 at 10:49:14 PM, wren romano (wren@community.haskell.org) wrote:
The real problem is the growing divide in the community between the "liberals" vs the "conservatives". We could define these groups as those who're willing to break things vs want more stability, or as those who embrace polymorphism vs those who want to minimize mental type inference, or a few other ways I'm sure. How exactly we define the groups doesn't much matter imo; the point is: there are two groups which are growing ever more divergent from one another.
I think that a “two groups” model of the disputes we’ve had lately simplifies too much, and in fact runs the risk of tending to force a bunch of varying motives and concerns into only two buckets, which I worry will increase the contentiousness of discussions rather than help to moderate things.
Oh, I totally agree. My point wasn't about the number of groups, nor about their organizing concerns, which I'd hoped to make clear by the last sentence quoted above. No, my point was that there is (at least one) rift in the community, that this rift is growing, that the real issue at hand is how we —as a community— should respond to that rift, and —perhaps most importantly to the current thread— that changing the procedures of collective decision making will not have an effective impact on that rift, since the organizing concerns of the rift (whatever they may be) are not about issues of democratic control and representation but rather about issues of communal identity and mores.

--
Live well,
~wren

Gershom B
writes:
Even if we can’t get 100% consensus (and of course we won’t be able to), a continuing dialog where everyone tries to hear everyone else’s concerns and we try to make sure everyone is comfortable with the direction forward should still be the _goal_. Ideally, we will have less voting and not more, because ideally proposals that we move forward with will have a wide base of support across the board. In my mind, the way to do that is to treat each new library proposal case-by-case and go in willing to hear all sides, rather than prejudging what people’s take may be, why they might have such a take, etc.
I very much agree, Gershom. John

On 7 Nov 2015, at 03:49, wren romano wrote:
The real problem is the growing divide in the community between the "liberals" vs the "conservatives". We could define these groups as those who're willing to break things vs want more stability, or as those who embrace polymorphism vs those who want to minimize mental type inference, or a few other ways I'm sure. How exactly we define the groups doesn't much matter imo;
I agree with Gershom (and probably Wren too!) that characterising the community discord as a split between two camps is neither accurate nor helpful. But I particularly want us to get away from the idea that those who oppose any particular proposed change tend to be against language changes in general. It is simply not the case, for any of the recent disputed proposals.

Many of the recent objectors are (or have been) language designers and compiler implementers, and have themselves made radical language proposals. I do not see any "conservative" party here at all. People who disagree with recent proposals do so with specific technical and sociological arguments. That's it.

Some supporters of those proposals might prefer it if they could dismiss the opponents as simply disliking all change, and therefore conclude that their arguments are not worth engaging with. That is a category mistake.

Regards,
Malcolm

On 2015-11-11 at 11:41:56 +0100, Malcolm Wallace wrote:
But I particularly want us to get away from the idea that those who oppose any particular proposed change, tend to be against language changes in general. It is simply not the case, for any of the recent disputed proposals. [...] People who disagree with recent proposals do so with specific technical and sociological arguments.
Could you enumerate the "recent disputed proposals"? It's a bit unclear to me right now which ones are considered in that set. Are all proposals derived from the (real) Burning Bridges (Meta-)Proposal https://mail.haskell.org/pipermail/libraries/2013-May/020046.html contained in that "recent disputed proposals" set? Were there any non-disputed proposals recently?
participants (21)
- Augustsson, Lennart
- Carter Schonwald
- Christopher Allen
- Edward Kmett
- evan@evan-borden.com
- Gershom B
- Greg Weber
- Gregory Collins
- Henning Thielemann
- Herbert Valerio Riedel
- Ivan Perez
- Jeremy
- Joachim Breitner
- Johan Tibell
- John Wiegley
- Malcolm Wallace
- Manuel Gómez
- Matthias Hörmann
- Mike Meyer
- Oliver Charles
- wren romano