
Hello everyone,

As you hopefully know, a few weeks ago we proposed a new process [1] for collecting, discussing, and deciding upon changes to GHC and its Haskell superset. While we have been happy to see a small contingent of contributors join the discussion, the number is significantly smaller than the set who took part in the earlier Reddit discussions.

In light of this, we are left a bit uncertain of how to proceed. So, we would like to ask you to let us know your feelings regarding the proposed process:

* Do you feel the proposed process is an improvement over the status quo?
* Why? (this needn't be long, just a sentence hitting the major points)
* What would you like to see changed in the proposed process, if anything?

That's all. Again, feel free to reply either on the GitHub pull request [1] or this thread if you would prefer. Your response needn't be long; we just want to get a sense of how much of the community feels that 1) this effort is worth undertaking, and 2) that the proposal before us is in fact an improvement over the current state of affairs.

Thanks for your help!

Cheers,

- Ben

[1] https://github.com/ghc-proposals/ghc-proposals/pull/1

* What would you like to see changed in the proposed process, if anything?
*Simon Peyton Jones as Benevolent Dictator For Life (BDFL)* [0]

If the BDFL had made a simple YES/NO decision on ShortImports [1] and ArgumentDo [2], we wouldn't be here talking about process proposals, Anthony wouldn't be mad, and everything would be fine. We don't need another Haskell committee.

* Keep using Trac for proposals, but use the description field of a ticket for the specification instead of a separate wiki page.
* Add better filtering possibilities to Trac (say someone wants to subscribe only to tickets where syntax extensions are discussed). Better filtering will also benefit bug fixers (say someone wants to subscribe only to bugs on Windows or with keyword=PatternSynonyms).
* Don't let hotly debated feature requests go without a resolution.

[0] https://en.wikipedia.org/wiki/Benevolent_dictator_for_life
[1] https://ghc.haskell.org/trac/ghc/ticket/10478
[2] https://ghc.haskell.org/trac/ghc/ticket/10843

Looks like Reddit is the wrong place, so I'm replicating my comment here:
On Wed, 2016-07-20 at 11:36 +0200, Ben Gamari wrote:
Hello everyone,
As you hopefully know, a few weeks ago we proposed a new process [1] for collecting, discussing, and deciding upon changes to GHC and its Haskell superset. While we have been happy to see a small contingent of contributors join the discussion, the number is significantly smaller than the set who took part in the earlier Reddit discussions.
In light of this, we are left a bit uncertain of how to proceed. So, we would like to ask you to let us know your feelings regarding the proposed process:
* Do you feel the proposed process is an improvement over the status quo?
Yes, definitely. The existing process is too vague, so formalizing it is a win in any case.
* Why? (this needn't be long, just a sentence hitting the major points)
* What would you like to see changed in the proposed process, if anything?
The proposed process overlaps with the Language Committee's powers. In theory the Committee works on the language standard, but de facto Haskell is GHC/Haskell and GHC/Haskell is Haskell. Adding a new extension to GHC adds a new extension to Haskell. So I'd like the process to enforce a separation between experimental extensions (not recommended in production code) and language improvements. I'd like the process to specify how the GHC Committee is going to communicate and share powers with the Language Committee.

Thanks,
Yuras.
That's all. Again, feel free to reply either on the GitHub pull request [1] or this thread if you would prefer. Your response needn't be long; we just want to get a sense of how much of the community feels that 1) this effort is worth undertaking, and 2) that the proposal before us is in fact an improvement over the current state of affairs.
Thanks for your help!
Cheers,
- Ben
[1] https://github.com/ghc-proposals/ghc-proposals/pull/1

Yuras Shumovich writes:
Looks like Reddit is the wrong place, so I'm replicating my comment here:
Thanks for your comments Yuras!
* Do you feel the proposed process is an improvement over the status quo?
Yes, definitely. The existing process is too vague, so formalizing it is a win in any case.
Good to hear.
* What would you like to see changed in the proposed process, if anything?
The proposed process overlaps with the Language Committee's powers. In theory the Committee works on the language standard, but de facto Haskell is GHC/Haskell and GHC/Haskell is Haskell. Adding a new extension to GHC adds a new extension to Haskell. So I'd like the process to enforce a separation between experimental extensions (not recommended in production code) and language improvements. I'd like the process to specify how the GHC Committee is going to communicate and share powers with the Language Committee.
To clarify I think Language Committee here refers to the Haskell Prime committee, right?

I think these two bodies really do serve different purposes. Historically the Haskell Prime committee has been quite conservative in the sorts of changes that they standardized; as far as I know almost all of them come from a compiler. I would imagine that the GHC Committee would be a gate-keeper for proposals entering GHC and only some time later, when the semantics and utility of the extension are well-understood, would the Haskell Prime committee consider introducing it to the Report. As far as I understand it, this is historically how things have worked in the past, and I don't think this new process would change that.

Of course, let me know if I'm off-base here.

Cheers,

- Ben

On Wednesday, July 20, 2016, Ben Gamari wrote:
Yuras Shumovich writes:
Looks like Reddit is the wrong place, so I'm replicating my comment here:
Thanks for your comments Yuras!
* Do you feel the proposed process is an improvement over the status quo?
Yes, definitely. The existing process is too vague, so formalizing it is a win in any case.
Good to hear.
* What would you like to see changed in the proposed process, if anything?
The proposed process overlaps with the Language Committee's powers. In theory the Committee works on the language standard, but de facto Haskell is GHC/Haskell and GHC/Haskell is Haskell. Adding a new extension to GHC adds a new extension to Haskell. So I'd like the process to enforce a separation between experimental extensions (not recommended in production code) and language improvements. I'd like the process to specify how the GHC Committee is going to communicate and share powers with the Language Committee.
To clarify I think Language Committee here refers to the Haskell Prime committee, right?
I think these two bodies really do serve different purposes. Historically the Haskell Prime committee has been quite conservative in the sorts of changes that they standardized; as far as I know almost all of them come from a compiler. I would imagine that the GHC Committee would be a gate-keeper for proposals entering GHC and only some time later, when the semantics and utility of the extension are well-understood, would the Haskell Prime committee consider introducing it to the Report. As far as I understand it, this is historically how things have worked in the past, and I don't think this new process would change that.
Of course, let me know if I'm off-base here.
As one of the 20 members of the Haskell (Prime) 2020 committee, I'd like to interject on this front: the preliminary discussions the committee has had thus far showed clear agreement that we shall aim to be a bit more progressive about what shall be included in the standard. The main bar will be the extent to which features or capabilities can be articulated without over-specifying implementation details and can tractably be supported by compatible but different compilers for the standard. I think some of the other prime committee members can articulate this a bit better than I, so don't hold me to this precise phrasing ;)
Cheers,
- Ben

On Wed, 2016-07-20 at 18:37 +0200, Ben Gamari wrote:
Yuras Shumovich writes:
Looks like Reddit is the wrong place, so I'm replicating my comment here:
Thanks for your comments Yuras!
* Do you feel the proposed process is an improvement over the status quo?
Yes, definitely. The existing process is too vague, so formalizing it is a win in any case.
Good to hear.
* What would you like to see changed in the proposed process, if anything?
The proposed process overlaps with the Language Committee's powers. In theory the Committee works on the language standard, but de facto Haskell is GHC/Haskell and GHC/Haskell is Haskell. Adding a new extension to GHC adds a new extension to Haskell. So I'd like the process to enforce a separation between experimental extensions (not recommended in production code) and language improvements. I'd like the process to specify how the GHC Committee is going to communicate and share powers with the Language Committee.
To clarify I think Language Committee here refers to the Haskell Prime committee, right?
Yes, Herbert used "Haskell Prime 2020 committee" and "Haskell Language committee" interchangeably in the original announcement: https://mail.haskell.org/pipermail/haskell-prime/2016-April/004050.html
I think these two bodies really do serve different purposes. Historically the Haskell Prime committee has been quite conservative in the sorts of changes that they standardized; as far as I know almost all of them come from a compiler. I would imagine that the GHC Committee would be a gate-keeper for proposals entering GHC and only some time later, when the semantics and utility of the extension are well-understood, would the Haskell Prime committee consider introducing it to the Report. As far as I understand it, this is historically how things have worked in the past, and I don't think this new process would change that.
I think that is what the process should change. It makes sense to have two committees only if we have multiple language implementations, but it is not the case. The Prime committee may accept or reject e.g. GADTs, but it will change nothing, because people will continue using GADTs regardless, and any feature accepted by the Prime committee will necessarily be compatible with the GADTs extension.

The difference between standard and GHC-specific extensions is just a question of formal specification, interesting mostly for language lawyers. (But it is good to have such a formal specification even for GHC-specific extensions, right?)

Probably it is time to bring -fglasgow-exts back to separate standard features from experimental GHC-specific ones.
Of course, let me know if I'm off-base here.
Cheers,
- Ben

On July 21, 2016 at 8:51:15 AM, Yuras Shumovich (shumovichy@gmail.com) wrote:
I think that is what the process should change. It makes sense to have two committees only if we have multiple language implementations, but it is not the case. The Prime committee may accept or reject e.g. GADTs, but it will change nothing, because people will continue using GADTs regardless, and any feature accepted by the Prime committee will necessarily be compatible with the GADTs extension.
I disagree. By the stated goals of the H2020 Committee, if it is successful, then by 2020 it will still for the most part have standardized only a _portion_ of the extensions that exist today. There’s always been a barrier between implementation and standard in the Haskell language; that’s precisely one of the things that _keeps_ it from having become entirely implementation-defined despite the prevalence of extensions. Having two entirely different processes here (though obviously not without communication between the individuals involved) helps maintain that.

—Gershom

On Jul 21, 2016, at 10:32 AM, Gershom B wrote:
On July 21, 2016 at 8:51:15 AM, Yuras Shumovich (shumovichy@gmail.com) wrote:
It makes sense to have two committees only if we have multiple language implementations, but it is not the case.
I disagree. By the stated goals of the H2020 Committee, if it is successful, then by 2020 it will still for the most part have standardized only a _portion_ of the extensions that exist today.
+1 to Gershom's comment.

On Thu, 2016-07-21 at 10:32 -0400, Gershom B wrote:
On July 21, 2016 at 8:51:15 AM, Yuras Shumovich (shumovichy@gmail.com) wrote:
I think that is what the process should change. It makes sense to have two committees only if we have multiple language implementations, but it is not the case. The Prime committee may accept or reject e.g. GADTs, but it will change nothing, because people will continue using GADTs regardless, and any feature accepted by the Prime committee will necessarily be compatible with the GADTs extension.
I disagree. By the stated goals of the H2020 Committee, if it is successful, then by 2020 it will still for the most part have standardized only a _portion_ of the extensions that exist today.
Yes, I know. But don't you see how narrow the responsibility of the H2020 Committee is? The GHC Committee makes all the important decisions, and H2020 just collects some of GHC's extensions into a set of "standard" ones. That is useful only when "nonstandard" extensions are not widely used (e.g. are marked as experimental and are not recommended for day-to-day use).
There’s always been a barrier between implementation and standard in the Haskell language; that’s precisely one of the things that _keeps_ it from having become entirely implementation-defined despite the prevalence of extensions.
Unfortunately Haskell *is* an implementation-defined language. You can't compile any nontrivial package from Hackage using Haskell2010 GHC. And the same will be true for Haskell2020. We rely on GHC-specific extensions everywhere, directly or indirectly. If the goal of Haskell Prime is to change that, then the GHC-specific extensions should not be first-class citizens in the ecosystem. Otherwise there is no sense in having two committees. We can continue pretending that Haskell is a standard-defined language, but it will not help to change the situation.
Having two entirely different processes here (though obviously not without communication between the individuals involved) helps maintain that.
—Gershom

On Jul 21, 2016, at 11:29 AM, Yuras Shumovich wrote:
Unfortunately Haskell *is* an implementation-defined language. You can't compile any nontrivial package from Hackage using Haskell2010 GHC.
Sadly, I agree with this statement. And I think this is what we're trying to change.
And the same will be true for Haskell2020. We rely on GHC-specific extensions everywhere, directly or indirectly. If the goal of Haskell Prime is to change that, then the GHC-specific extensions should not be first-class citizens in the ecosystem.
My hope is that Haskell2020 will allow us to differentiate between standardized extensions and implementation-defined ones. A key part of this hope is that we'll get enough extensions in the first set to allow a sizeable portion of our ecosystem to use only standardized extensions.
We can continue pretending that Haskell is a standard-defined language, but it will not help to change the situation.
But writing a new standard that encompasses prevalent usage will help to change the situation. And that's the process I'm hoping to contribute to. Richard

On Thu, 2016-07-21 at 13:25 -0400, Richard Eisenberg wrote:
On Jul 21, 2016, at 11:29 AM, Yuras Shumovich wrote:
Unfortunately Haskell *is* an implementation-defined language. You can't compile any nontrivial package from Hackage using Haskell2010 GHC.
Sadly, I agree with this statement. And I think this is what we're trying to change.
And I'd like it to be changed too. I'm paid for writing software in Haskell, and I want to have a standard. At the same time I'm a (probably unusual) Haskell fan, so I want new cool features. Don't you see a conflict of interests? https://www.reddit.com/r/haskell/comments/4oyxo2/blog_contributing_to_ghc/d4...
And the same will be true for Haskell2020. We rely on GHC-specific extensions everywhere, directly or indirectly. If the goal of Haskell Prime is to change that, then the GHC-specific extensions should not be first-class citizens in the ecosystem.
My hope is that Haskell2020 will allow us to differentiate between standardized extensions and implementation-defined ones. A key part of this hope is that we'll get enough extensions in the first set to allow a sizeable portion of our ecosystem to use only standardized extensions.
It is hopeless. Haskell2020 will not include TemplateHaskell, GADTs, etc. The Haskell Prime committee will never catch up if GHC continues adding new extensions. In 2020 everybody will use pattern synonyms, overloaded record fields and TypeInType, so the standard will be as far from practice as it is now. The whole idea of language extensions, as it is right now, works against Haskell Prime. https://www.reddit.com/r/haskell/comments/46jq4i/what_is_the_eventual_fate_o...

I abandoned my CStructures proposal because of that. I don't want to increase entropy. https://phabricator.haskell.org/D252
We can continue pretending that Haskell is a standard-defined language, but it will not help to change the situation.
But writing a new standard that encompasses prevalent usage will help to change the situation. And that's the process I'm hoping to contribute to.
I see only one real way to change the situation -- standardize all widely used extensions and declare anything new as experimental unless accepted by the Haskell Prime Committee. Probably there are other ways, but we need to clean up the mess ASAP. New extensions only contribute to the mess -- that is my point.
Richard

On Jul 21, 2016, at 2:25 PM, Yuras Shumovich wrote:
It is hopeless. Haskell2020 will not include TemplateHaskell, GADTs, etc.
Why do you say this? I don't think this is a foregone conclusion. I'd love to see these standardized. My own 2¢ on these is that we can standardize some subset of TemplateHaskell quite easily. GADTs are harder because (to my knowledge) no one has ever written a specification of type inference for GADTs. (Note that the OutsideIn paper admits to failing at this.) Perhaps we can nail it, but perhaps not. Even so, we can perhaps standardize much of the behavior around GADTs (but with pattern matches requiring lots of type annotations) and say that an implementation is free to do better. Maybe we can do even better than this, but I doubt we'll totally ignore this issue.
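For illustration, a minimal GADT sketch (an invented example, not anything proposed in this thread) of the kind of local type refinement at issue; the signature on eval is the sort of annotation a conservative standard could require rather than infer:

    {-# LANGUAGE GADTs #-}

    -- A tiny expression language whose type index tracks the type of
    -- the value an expression evaluates to.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    -- Each pattern match refines 'a' locally (e.g. to Int in the IntLit
    -- case); inferring this refinement without the signature is the part
    -- that has never been fully specified.
    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e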
The Haskell Prime committee will never catch up if GHC continues adding new extensions.
Of course not. But I believe some libraries also refrain from using new extensions for precisely the same reason -- that the new extensions have yet to fully gel.
In 2020 everybody will use pattern synonyms, overloaded record fields and TypeInType, so the standard will be as far from practice as it is now.
Pattern synonyms, now with a published paper behind them, may actually be in good enough shape to standardize by 2020. I don't know anything about overloaded record fields. I'd be shocked if TypeInType is ready to standardize by 2020. But hopefully we'll get to it.
The whole idea of language extensions, as it is right now, works against Haskell Prime.
I heartily disagree here. Ideas that are now standard had to have started somewhere, and I really like (in theory) the way GHC/Haskell does this. The (in theory) parenthetical is because the standardization process has been too, well, dead to be useful. Is that changing? Perhaps. I'd love to see more action on that front. I'm hoping to take on a more active role in the committee after my dissertation is out the door (2 more weeks!).
I see only one real way to change the situation -- standardize all widely used extensions and declare anything new as experimental unless accepted by the Haskell Prime Committee.
Agreed here. I think that's what we're trying to do. If you have a good specification for GADT type inference, that would help us. :) Richard

On Thu, 2016-07-21 at 14:38 -0400, Richard Eisenberg wrote:
On Jul 21, 2016, at 2:25 PM, Yuras Shumovich wrote:
It is hopeless. Haskell2020 will not include TemplateHaskell, GADTs, etc.
Why do you say this? I don't think this is a foregone conclusion. I'd love to see these standardized.
Because I'm a pessimist :) We can't even agree to add `text` to the standard library.
My own 2¢ on these is that we can standardize some subset of TemplateHaskell quite easily. GADTs are harder because (to my knowledge) no one has ever written a specification of type inference for GADTs. (Note that the OutsideIn paper admits to failing at this.) Perhaps we can nail it, but perhaps not. Even so, we can perhaps standardize much of the behavior around GADTs (but with pattern matches requiring lots of type annotations) and say that an implementation is free to do better. Maybe we can do even better than this, but I doubt we'll totally ignore this issue.
The Haskell Prime committee will never catch up if GHC continues adding new extensions.
Of course not. But I believe some libraries also refrain from using new extensions for precisely the same reason -- that the new extensions have yet to fully gel.
And you are an optimist. We are lazy, so we'll use whatever is convenient. There are three ways to force people to refrain from using new extensions:
- a mature alternative compiler exists, so nobody will use your library unless it uses only the common subset of features;
- the standard covers all usual needs (I don't think this will be possible in the near future, and the existence of this email thread proves that);
- new features are not first-class citizens; e.g. `cabal check` issues an error (or warning) when you upload a package that uses an immature extension (see the sketch below).
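A rough sketch of the kind of check meant here: this is not an existing `cabal check` feature, the list of "experimental" extensions is hypothetical, and the textual parsing of the .cabal file is deliberately naive.

    import Data.Char (isSpace)
    import Data.List (stripPrefix)
    import Data.Maybe (mapMaybe)
    import System.Environment (getArgs)

    -- Hypothetical list; the real set would presumably be maintained
    -- by the committee.
    experimental :: [String]
    experimental = ["TypeInType", "TemplateHaskell", "GADTs", "PatternSynonyms"]

    -- Very rough parse: pull comma-separated names out of any
    -- "default-extensions:" or "other-extensions:" field of a .cabal file.
    declaredExtensions :: String -> [String]
    declaredExtensions src = concatMap extsOf (lines src)
      where
        extsOf line =
          let l      = dropWhile isSpace line
              fields = mapMaybe (`stripPrefix` l)
                         ["default-extensions:", "other-extensions:"]
          in concatMap (words . map unComma) fields
        unComma c = if c == ',' then ' ' else c

    main :: IO ()
    main = do
      [cabalFile] <- getArgs
      src <- readFile cabalFile
      let bad = filter (`elem` experimental) (declaredExtensions src)
      mapM_ (\e -> putStrLn ("Warning: experimental extension used: " ++ e)) bad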
In 2020 everybody will use pattern synonyms, overloaded record fields and TypeInType, so the standard will be as far from practice as it is now.
Pattern synonyms, now with a published paper behind them, may actually be in good enough shape to standardize by 2020. I don't know anything about overloaded record fields. I'd be shocked if TypeInType is ready to standardize by 2020. But hopefully we'll get to it.
The whole idea of language extensions, as it is right now, works against Haskell Prime.
I heartily disagree here. Ideas that are now standard had to have started somewhere, and I really like (in theory) the way GHC/Haskell does this.
I'm not against language extensions completely. But using them should be a real pain, to prevent people from using them everywhere. Ideally you should have to compile GHC manually to get a particular extension enabled :)
The (in theory) parenthetical is because the standardization process has been too, well, dead to be useful. Is that changing? Perhaps. I'd love to see more action on that front. I'm hoping to take on a more active role in the committee after my dissertation is out the door (2 more weeks!).
I see only one real way to change the situation -- standardize all widely used extensions and declare anything new as experimental unless accepted by the Haskell Prime Committee.
Agreed here.
Great. So I propose to split section "9. GHC Language Features" of the user manual into "Stable language extensions" and "Experimental language extensions", move all the recently added extensions into the latter one, explicitly state in the proposed process that all new extensions go to the "Experimental" subsection initially and specify when they go to the "Stable" subsection.
I think that's what we're trying to do. If you have a good specification for GADT type inference, that would help us. :)
I'd personally prefer to mark GADTs and TH as experimental. The difficulty of standardizing them is a sign of immaturity. I regret each time I used them in production code.
Richard

Hello Ben,
I posted this when you originally asked for feed-back, but perhaps it
got buried among the rest of the e-mails.
I think the proposal sounds fairly reasonable, but it is hard to say how
well it will work in practice until we try it, and we should be ready to
change it if needs be.
Some clarifying questions on the intended process:
1. After submitting the initial merge request, is the person making the
proposal to wait for any kind of acknowledgment, or just move on to step 2?
2. Is the discussion going to happen on one of the mailing lists, if so
which? Is it the job of the proposing person to involve/notify the
committee about the discussion? If so, how are they to find out who is on
the committee?
3. How does one actually perform step 3, another pull request or simply
an e-mail to someone?
Typo: two separate bullets in the proposal are labelled as 4.
Cheers,
-Iavor
On Wed, Jul 20, 2016 at 2:36 AM, Ben Gamari wrote:
Hello everyone,
As you hopefully know, a few weeks ago we proposed a new process [1] for collecting, discussing, and deciding upon changes to GHC and its Haskell superset. While we have been happy to see a small contingent of contributors join the discussion, the number is significantly smaller than the set who took part in the earlier Reddit discussions.
In light of this, we are left a bit uncertain of how to proceed. So, we would like to ask you to let us know your feelings regarding the proposed process:
* Do you feel the proposed process is an improvement over the status quo?
* Why? (this needn't be long, just a sentence hitting the major points)
* What would you like to see changed in the proposed process, if anything?
That's all. Again, feel free to reply either on the GitHub pull request [1] or this thread if you would prefer. Your response needn't be long; we just want to get a sense of how much of the community feels that 1) this effort is worth undertaking, and 2) that the proposal before us is in fact an improvement over the current state of affairs.
Thanks for your help!
Cheers,
- Ben
[1] https://github.com/ghc-proposals/ghc-proposals/pull/1

Iavor Diatchki writes:
Hello Ben,
I posted this when you originally asked for feed-back, but perhaps it got buried among the rest of the e-mails.
Indeed it seems that way. Sorry about that!
I think the proposal sounds fairly reasonable, but it is hard to say how well it will work in practice until we try it, and we should be ready to change it if needs be.
Right. I fully expect that we will have to iterate on it.
Some clarifying questions on the intended process: 1. After submitting the initial merge request, is the person making the proposal to wait for any kind of acknowledgment, or just move on to step 2?
The discussion phase can happen asynchronously from any action by the Committee. Of course, the Committee should engage in discussion early, but I don't think any sort of acknowledgement is needed. An open pull request should be taken to mean "let's discuss this idea."
2. Is the discussion going to happen on one of the mailing lists, if so which? Is it the job of the proposing person to involve/notify the committee about the discussion? If so, how are they to find out who is on the committee?
The proposed process places the discussion in a pull request. The idea here is to use well-understood and widely-used code review tools to facilitate the conversation. The Committee members will be notified of the open pull request by the usual event notification mechanism (e.g. in GitHub one can subscribe to a repository).
3. How does one actually perform step 3, another pull request or simply an e-mail to someone?
The opening of the pull request would mark the beginning of the discussion period. When the author feels that the discussion has come to something of a conclusion, they will request that the GHC Committee consider the proposal for acceptance by leaving a comment on the pull request.
Typo: two separate bullets in the proposal are labelled as 4.
I believe this should be fixed now. Thanks! Cheers, - Ben

On 20 Jul 2016, at 12:45, Ben Gamari wrote:
Iavor Diatchki writes:
Hello Ben,
I posted this when you originally asked for feed-back, but perhaps it got buried among the rest of the e-mails. Indeed it seems that way. Sorry about that!
I think the proposal sounds fairly reasonable, but it is hard to say how well it will work in practice until we try it, and we should be ready to change it if needs be. Right. I fully expect that we will have to iterate on it.
Some clarifying questions on the intended process: 1. After submitting the initial merge request, is the person making the proposal to wait for any kind of acknowledgment, or just move on to step 2? The discussion phase can happen asynchronously from any action by the Committee. Of course, the Committee should engage in discussion early, but I don't think any sort of acknowledgement is needed. An open pull request should be taken to mean "let's discuss this idea."
2. Is the discussion going to happen on one of the mailing lists, if so which? Is it the job of the proposing person to involve/notify the committee about the discussion? If so, how are they to find out who is on the committee?
The proposed process places the discussion in a pull request. The idea here is to use well-understood and widely-used code review tools to facilitate the conversation.
This part runs strongly against the grain of what I'd prefer: email is lightweight, decentralized, standard, and has many clients. We can read discussion of Haskell proposals any way we like. Github on the other hand only allows us to read issues by going to Github, and using whatever interface Github has given us (which personally I find very annoying, esp. on mobile). In addition, reading proposals offline becomes very difficult. Many of us read discussion when commuting, where, e.g. in NYC, there isn't cell service.

For reviewing code that implements a proposal, I'm a lot more flexible (although again I'm not a fan of Github).

For the people who like having history tracked with git: gitit is a possibility, and is written in Haskell.

Tom
The Committee members will be notified of the open pull request by the usual event notification mechanism (e.g. in GitHub one can subscribe to a repository).
3. How does one actually perform step 3, another pull request or simply an e-mail to someone? The opening of the pull request would mark the beginning of the discussion period. When the author feels that the discussion has come to something of a conclusion, they will request that the GHC Committee consider the proposal for acceptance by leaving a comment on the pull request.
Typo: two separate bullets in the proposal are labelled as 4. I believe this should be fixed now. Thanks!
Cheers,
- Ben

On 20 July 2016 at 19:38, amindfv@gmail.com wrote:
On 20 Jul 2016, at 12:45, Ben Gamari wrote:
Iavor Diatchki writes:
Hello Ben,
I posted this when you originally asked for feed-back, but perhaps it got buried among the rest of the e-mails. Indeed it seems that way. Sorry about that!
I think the proposal sounds fairly reasonable, but it is hard to say how well it will work in practice until we try it, and we should be ready to change it if needs be. Right. I fully expect that we will have to iterate on it.
Some clarifying questions on the intended process: 1. After submitting the initial merge request, is the person making the proposal to wait for any kind of acknowledgment, or just move on to step 2? The discussion phase can happen asynchronously from any action by the Committee. Of course, the Committee should engage in discussion early, but I don't think any sort of acknowledgement is needed. An open pull request should be taken to mean "let's discuss this idea."
2. Is the discussion going to happen on one of the mailing lists, if so which? Is it the job of the proposing person to involve/notify the committee about the discussion? If so, how are they to find out who is on the committee?
The proposed process places the discussion in a pull request. The idea here is to use well-understood and widely-used code review tools to facilitate the conversation.
This part runs strongly against the grain of what I'd prefer: email is lightweight, decentralized, standard, and has many clients. We can read discussion of Haskell proposals any way we like. Github on the other hand only allows us to read issues by going to Github, and using whatever interface Github has given us (which personally I find very annoying, esp. on mobile). In addition, reading proposals offline becomes very difficult. Many of us read discussion when commuting, where, e.g. in NYC, there isn't cell service.
For reviewing code that implements a proposal, I'm a lot more flexible (although again I'm not a fan of Github)
For the people who like having history tracked with git: gitit is a possibility, and is written in Haskell.
Tom
It's possible to both follow and contribute to issues in a GitHub repo via email. I do it all the time for Idris. // Niklas
The Committee members will be notified of the open pull request by the usual event notification mechanism (e.g. in GitHub one can subscribe to a repository).
3. How does one actually perform step 3, another pull request or simply an e-mail to someone? The opening of the pull request would mark the beginning of the discussion period. When the author feels that the discussion has come to something of a conclusion, they will request that the GHC Committee consider the proposal for acceptance by leaving a comment on the pull request.
Typo: two separate bullets in the proposal are labelled as 4. I believe this should be fixed now. Thanks!
Cheers,
- Ben

On 20/07/16 11:36, Ben Gamari wrote:
* What would you like to see changed in the proposed process, if anything?
No GitHub. In order to fully utilise GitHub, one needs to run proprietary programs. Additionally, GitHub is proprietary software server-side. While I don't feel too strongly about which of the proposed alternatives are chosen, since they are both free software, augmenting Phabricator would probably be the best choice, since this avoids adding another piece of infrastructure to use and administer.

-- Alexander

On Jul 20, 2016, at 12:47 PM, Alexander Berntsen wrote:
On 20/07/16 11:36, Ben Gamari wrote:
* What would you like to see changed in the proposed process, if anything?
No GitHub. In order to fully utilise GitHub, one needs to run proprietary programs. Additionally, GitHub is proprietary software server-side.
While I indeed sympathize with your desire to avoid proprietary, closed software, I'd like to point out that avoiding GitHub because it's closed has a real cost:

* It requires more work (a very limited resource) to maintain our own instance of whatever alternate solution we come up with.
* GitHub is very current in our community. Moving away from GitHub may increase barriers to contributions. Of course, our use of GitHub would appear to increase barriers to those who avoid closed software. This fact means that we need to balance the desires of some potential contributors (who prefer to avoid closed software) with other potential contributors (who prefer to use GitHub).

To be clear, I'm not trying to shoot down Alexander's point -- just trying to point out that there are shades of gray here.

Also, what proprietary programs are needed to fully utilize GitHub? I just use git and ssh, both pieces of free software.

Richard

On 20/07/16 19:00, Richard Eisenberg wrote:
While I indeed sympathize with your desire to avoid proprietary, closed software, I'd like to point out that avoiding GitHub because it's closed has a real cost
I don't value those points over my freedom. But those who don't value their freedom might value them, so thanks for listing them.

Also, what proprietary programs are needed to fully utilize GitHub? I just use git and ssh, both pieces of free software.
For proposals, we'll be doing lots of discussions and review. Those parts of GitHub rely heavily on proprietary client-side JavaScript.

-- Alexander

Alexander Berntsen writes:
On 20/07/16 11:36, Ben Gamari wrote:
* What would you like to see changed in the proposed process, if anything?
No GitHub. In order to fully utilise GitHub, one needs to run proprietary programs. Additionally, GitHub is proprietary software server-side.
I know, it's rather frustrating. I also have fairly strong feelings about open-source purity, but in this case I just don't see any way to improve the current situation under this constraint. I agree that Phabricator is the logical choice for self-hosting in our situation, but sadly it just doesn't have the features at the moment to make the process convenient and accessible (which is the motivation for the change in the first place).
While I don't feel too strongly about which of the proposed alternatives are chosen, since they are both free software, augmenting Phabricator would probably be the best choice, since this avoids adding another piece of infrastructure to use and administer.
The Phabricator developers already do a fair amount for us without charging for their time. We can ask them to add the features that we need but this will take time, if it happens at all, unless we put money on the table. Unless someone is willing to put money down I'm not sure Phabricator will be an option in the foreseeable future. It does look like Gitlab is an impressive option but really then we are back to the problem of fragmented development tools. Using Trac, Phabricator, Gitlab, and mailing lists all in one project seems a bit silly. Cheers, - Ben

On 20/07/16 19:04, Ben Gamari wrote:
I know, it's rather frustrating. I also have fairly strong feelings about open-source purity, but in this case I just don't see any way to improve the current situation under this constraint.
I don't think that starting to rely on proprietary software *is* an improvement, but the opposite.

It does look like Gitlab is an impressive option but really then we are back to the problem of fragmented development tools. Using Trac, Phabricator, Gitlab, and mailing lists all in one project seems a bit silly.
I don't understand why using GitLab is more silly than using GitHub, when considering fragmentation.

-- Alexander

Alexander Berntsen writes:
On 20/07/16 19:04, Ben Gamari wrote:
I know, it's rather frustrating. I also have fairly strong feelings about open-source purity, but in this case I just don't see any way to improve the current situation under this constraint.
I don't think that starting to rely on proprietary software *is* an improvement, but the opposite.
This is a bit of a judgement call. I know this is a long-contested issue, but personally for me it puts me at ease if,

* the proprietary code is running on someone else's machine
* I can use the application with open tools (a web browser of your choice, git, and an email client)
* I can get my data out if needed
It does look like Gitlab is an impressive option but really then we are back to the problem of fragmented development tools. Using Trac, Phabricator, Gitlab, and mailing lists all in one project seems a bit silly.
I don't understand why using GitLab is more silly than using GitHub, when considering fragmentation.
When put this way my argument does indeed sound a bit silly. :-) Perhaps it's not. I think the difference is that we would be consolidating on a platform which much of the Haskell community already uses in their non-GHC development. Cheers, - Ben

I really appreciate you putting so much work into this. It is very
important, and I believe could do much to increase awareness of and
participation in these processes.
I've left most of my thoughts as line comments on the proposal document,
but since discussion of platform choice is taking place here, I'll quote
the Motivations section:
1. Higher than necessary barrier-to-entry.
For the purposes of this proposal, whether we would prefer a competing
alternative is secondary to the fact that a Github account has become a
very low common denominator for people wishing to participate in the
development of open source projects. If we decide to proceed with a
non-Github platform, we need to make a compelling case that the alternate
choice does not raise the barrier to entry, or else we need to decide that
we have different priorities for this effort.
Thanks,
Adam
On Wed, Jul 20, 2016 at 12:56 PM, Ben Gamari wrote:
Alexander Berntsen writes:
On 20/07/16 19:04, Ben Gamari wrote:
I know, it's rather frustrating. I also have fairly strong feelings about open-source purity, but in this case I just don't see any way to improve the current situation under this constraint.
I don't think that starting to rely on proprietary software *is* an improvement, but the opposite.
This is a bit of a judgement call. I know this is a long-contested issue, but personally for me it puts me at ease if,
* the proprietary code is running on someone else's machine
* I can use the application with open tools (a web browser of your choice, git, and an email client)
* I can get my data out if needed
It does look like Gitlab is an impressive option but really then we are back to the problem of fragmented development tools. Using Trac, Phabricator, Gitlab, and mailing lists all in one project seems a bit silly.
I don't understand why using GitLab is more silly than using GitHub, when considering fragmentation.
When put this way my argument does indeed sound a bit silly. :-)
Perhaps it's not. I think the difference is that we would be consolidating on a platform which much of the Haskell community already uses in their non-GHC development.
Cheers,
- Ben

On Wed, 20 Jul 2016, Adam Foltzer wrote:
1. Higher than necessary barrier-to-entry. For the purposes of this proposal, whether we would prefer a competing alternative is secondary to the fact that a Github account has become a very low common denominator for people wishing to participate in the development of open source projects. If we decide to proceed with a non-Github platform, we need to make a compelling case that the alternate choice does not raise the barrier to entry, or else we need to decide that we have different priorities for this effort.
Hi all,

I'm a bit of an outsider here as I'm not involved in GHC development (but I am interested in how it goes). I've struggled with my own desire to avoid using proprietary software like GitHub, and the desire to work with those who favor it, so I am interested in how these competing desires can be addressed.

Would the barrier to entry to a non-GitHub system be reduced by using GitHub for user authentication/accounts (like http://exercism.io/ ), or is knowing how to use other software too much of a barrier (I guess that would depend on the software…)?

Thanks,
Jack

On 20/07/16 23:41, Jack Hill wrote:
Would the barrier to entry to a non-GitHub system be reduced by using GitHub for user authentication/accounts
For what it's worth, GitLab supports this [0]. You can also use Twitter, or whatever.

[0] http://docs.gitlab.com/ee/integration/omniauth.html

-- Alexander

Jack Hill writes:
Hi all,
I'm a bit of an outsider here as I'm not involved in GHC development (but I am interested in how it goes). I've struggled with my own desire to avoid using proprietary software like GitHub, and the desire to work with those who favor it, so I am interested in how these competing desires can be addressed.
Would the barrier to entry to a non-GitHub system be reduced by using GitHub for user authentication/accounts (like http://exercism.io/ ), or is knowing how to use other software too much of a barrier (I guess that would depend on the software…)?
To some extent. The size of the barrier posed by an alternate system isn't a discrete quantity and is highly dependent upon one's frame of reference. Many people won't wander from GitHub at all; other won't even register for a GitHub account. People vary widely in their preferences, which is what makes this problem so difficult. Cheers, - Ben

2016-07-20 23:16 GMT+02:00 Adam Foltzer:
[...] I'll quote the Motivations section:
1. Higher than necessary barrier-to-entry.
For the purposes of this proposal, whether we would prefer a competing alternative is secondary to the fact that a Github account has become a very low common denominator for people wishing to participate in the development of open source projects. If we decide to proceed with a non-Github platform, we need to make a compelling case that the alternate choice does not raise the barrier to entry, or else we need to decide that we have different priorities for this effort.
+1 for that. Just to give a few numbers, gathered from Hackage by some grep/sed/wc "technology": 6799 of the 9946 packages (i.e. 68%) use GitHub. The numbers are even higher when one considers the top 100 downloaded packages only: 92% of them use GitHub. So like it or not, the Haskell community already relies *heavily* on GitHub, and it seems that most people don't have a problem with that or consider the alternatives inferior.

As Ben already said, using some proprietary SW is no real problem as long as you can get all your data out of it (in a non-proprietary format). And I don't understand the point about "proprietary client-side JavaScript" at all: Should we stop using 99% of the Internet because some server sends us some JavaScript we have no license for? And what about all those routers/switches/etc. in between which connect you to the rest of the world: They definitely run proprietary SW, and nobody cares (for a good reason).

Don't get me wrong: I'm very much for Open Source, but let's not go over the top here. Let's use a tool basically everybody knows and focus on the content, not on the technology.
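A rough sketch of the kind of counting described above (an approximation; the directory of unpacked .cabal files from the Hackage index is hypothetical, and the actual commands used aren't shown here):

    import Data.Char (toLower)
    import Data.List (isInfixOf)
    import System.Directory (listDirectory)
    import System.FilePath ((</>), takeExtension)

    -- Does a .cabal file mention github.com anywhere
    -- (homepage, source-repository, bug-reports, ...)?
    mentionsGitHub :: String -> Bool
    mentionsGitHub = ("github.com" `isInfixOf`) . map toLower

    main :: IO ()
    main = do
      let dir = "hackage-cabal-files"  -- hypothetical local dump of the index
      files <- filter ((== ".cabal") . takeExtension) <$> listDirectory dir
      hits  <- mapM (\f -> do
                        s <- readFile (dir </> f)
                        -- force the whole file so the handle is closed promptly
                        length s `seq` return (mentionsGitHub s))
                    files
      putStrLn (show (length (filter id hits)) ++ " of "
                ++ show (length files) ++ " packages mention github.com")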

I'm replying to this to clarify that I object to GitHub in order to have a clear conscience, not because I think GitHub is a "bad tool". Attempting to shoot down my arguments against GitHub with arguments for convenience, or it being "a good tool", or popular, or whatever, does not work. To me, GitHub is not even an alternative to begin with. Of course, I don't expect most (or even any) devs here to agree with my position, but I wanted to elucidate it nonetheless, so that you may understand it better.

On 21/07/16 10:59, Sven Panne wrote:
like it or not, the Haskell community already relies *heavily* on GitHub, and it seems that most people don't have a problem with that or consider the alternatives inferior.
Just because other people are doing The Wrong Thing, it does not mean that you need to do it too, nor does it excuse your doing it.

And I don't understand the point about "proprietary client-side JavaScript" at all: Should we stop using 99% of the Internet because some server sends us some JavaScript we have no license for?
If you value your freedom: yes. It's proprietary software executed on your computer, just like any other proprietary software executed on your computer[0].

And what about all those routers/switches/etc. in between which connect you to the rest of the world: They definitely run proprietary SW, and nobody cares (for a good reason).
Those do not execute proprietary software on your computer, so that's not comparable.

Don't get me wrong: I'm very much for Open Source, but let's not go over the top here. Let's use a tool basically everybody knows and focus on the content, not on the technology.
I am not in favour of open source at all, I am in favour of free software. The issue is ethical, not technological, and saying "let's not go over the top" does not make sense to me, as it is an ethical position.

[0] https://www.gnu.org/philosophy/javascript-trap.en.html

-- Alexander
participants (14)
- Adam Foltzer
- Alexander Berntsen
- amindfv@gmail.com
- Ben Gamari
- Ben Gamari
- Carter Schonwald
- Gershom B
- Iavor Diatchki
- Jack Hill
- Niklas Larsson
- Richard Eisenberg
- Sven Panne
- Thomas Miedema
- Yuras Shumovich