
I've put a wiki page with a summary of this discussion here: http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/MonomorphismRestriction

Hopefully I've captured most of the important points; please let me know if there's anything I missed, or misrepresented. I'll add a ticket shortly.

Given the new evidence that it's actually rather hard to demonstrate any performance loss in the absence of the M-R with GHC, I'm attracted to the option of removing it in favour of a warning.

Cheers, Simon

On 1/30/06, Simon Marlow
I've put a wiki page with a summary of this discussion here:
http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/MonomorphismRestriction
Hopefully I've captured most of the important points, please let me know if there's anything I missed, or misrepresented. I'll add a ticket shortly.
Given the new evidence that it's actually rather hard to demonstrate any performance loss in the absence of the M-R with GHC, I'm attracted to the option of removing it in favour of a warning.
Given that the discussion has focused a lot on how beginners would feel about this, I'll chime in with my two cents. I may not be a beginner in the strictest sense of the word, but I'm probably a lot closer to it than the rest of the participants in this discussion :-)

I'm against it. People will want to *understand* the difference between ":=" and "=", and I don't think beginners will really grok something like that without significant difficulties. And it does add significant extra clutter to the language, IMO. That symbol feels "heavy", somehow, even if its meaning is subtle (at least from a beginner's POV).

Also, since it's only a problem very rarely, it could be noted in an "optimization FAQ" somewhere. Plus, don't we already tell people to add type signatures when something is too slow? Isn't that the first thing you would try when something is surprisingly slow?

/S

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
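[Editorial note: as a concrete illustration of the situation this thread keeps circling, here is a minimal sketch of the classic example. The code and names below are mine, not taken from anyone's message, and the comments assume a dictionary-passing implementation such as GHC's.]

    import Data.List (genericLength)

    -- Without the M-R, `len' is generalised to (Num a => a).  Under a
    -- dictionary-passing implementation it then becomes a function of a
    -- Num dictionary, so each of the two uses below may recompute the
    -- length of the list.  With the M-R, or with the commented-out
    -- signature (the "just add a type signature" fix), it is a plain
    -- value, computed once and shared.
    twoLengths :: [Int] -> (Integer, Integer)
    twoLengths xs = (len, len)
      where
        -- len :: Integer
        len = genericLength xs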

On Mon, Jan 30, 2006 at 04:45:56PM -0000, Simon Marlow wrote:
Given the new evidence that it's actually rather hard to demonstrate any performance loss in the absence of the M-R with GHC, I'm attracted to the option of removing it in favour of a warning.
I caution against the hope that warnings will contribute to the solution, whichever side you're on.

This is a general argument: either the warning is on by default or off. If off, it does no harm, but doesn't help much either. If on, it either triggers only on code that is almost certainly wrong (or easily disambiguated), or it sometimes triggers on perfectly good code. In the former case, it would be better to make it illegal (or require the disambiguation). In the latter, nobody likes disabling warnings, so they'll grumble and change the code instead.

In the present case, people aren't (only) opposing the M-R out of principle, but because they actually use overloaded variable definitions and (at least sometimes) want to leave off the signature. So I don't see how one could claim, as on the wiki, that the warning "wouldn't happen much". I suspect it would happen, and annoy people, and defeat the reason that people want to remove the M-R.

Andrew

Quoting Andrew Pimlott
On Mon, Jan 30, 2006 at 04:45:56PM -0000, Simon Marlow wrote:
Given the new evidence that it's actually rather hard to demonstrate any performance loss in the absence of the M-R with GHC, I'm attracted to the option of removing it in favour of a warning.
I caution against the hope that warnings will contribute to the solution, whichever side you're on. This is a general argument: Either the warning is on by default or off. If off, it does no harm, but doesn't help much either. If on, it either triggers only on code that is almost certainly wrong (or easily disambiguated), or it sometimes triggers on perfectly good code. In the former case, it would be better to make it illegal (or require the disambiguation). In the latter, nobody likes disabling warnings, so they'll grumble and change the code instead.
In the present case, people aren't (only) opposing the M-R out of principle, but because they actually use overloaded variable definitions and (at least sometimes) want to leave off the signature. So I don't see how one could claim, as on the wiki, the warning "wouldn't happen much". I suspect it would happen, and annoy people, and defeat the reason that people want to remove the M-R.
So I envisage that you'd turn off the warning in the same way as you turn off the M-R today: by a type signature. If you write the type you pretty much show that you know that you're doing something special. -- Lennart
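[Editorial note: a small sketch of Lennart's point; the binding below is made up rather than quoted from his message. An explicit polymorphic signature both silences any such warning and documents that the overloading, and hence the possible recomputation at each use, is intended.]

    -- Deliberately overloaded: every use re-elaborates the list at whatever
    -- numeric type is needed, and the signature says so up front.
    smallPrimes :: Num a => [a]
    smallPrimes = [2, 3, 5, 7, 11]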

On 1/30/06, lennart@augustsson.net
So I envisage that you'd turn off the warning in the same way as you turn off the M-R today: by a type signature. If you write the type you pretty much show that you know that you're doing something special.
This requires scoped type variables.
--
Taral

On Mon, Jan 30, 2006 at 11:06:29PM +0100, lennart@augustsson.net wrote:
Quoting Andrew Pimlott
: In the present case, people aren't (only) opposing the M-R out of principle, but because they actually use overloaded variable definitions and (at least sometimes) want to leave off the signature. So I don't see how one could claim, as on the wiki, the warning "wouldn't happen much". I suspect it would happen, and annoy people, and defeat the reason that people want to remove the M-R.
So I envisage that you'd turn off the warning in the same way as you turn off the M-R today: by a type signature.
But if people were happy adding type signatures for every polymorphic variable definition, they wouldn't be moving to eliminate the M-R, would they? Or do I misunderstand? Andrew

Quoting Andrew Pimlott
On Mon, Jan 30, 2006 at 11:06:29PM +0100, lennart@augustsson.net wrote:
Quoting Andrew Pimlott
: In the present case, people aren't (only) opposing the M-R out of principle, but because they actually use overloaded variable definitions and (at least sometimes) want to leave off the signature. So I don't see how one could claim, as on the wiki, the warning "wouldn't happen much". I suspect it would happen, and annoy people, and defeat the reason that people want to remove the M-R.
So I envisage that you'd turn off the warning in the same way as you turn off the M-R today: by a type signature.
But if people were happy adding type signatures for every polymorphic variable definition, they wouldn't be moving to eliminate the M-R, would they? Or do I misunderstand?
Well, my feeling is that the M-R is an ugly wart, and I want it gone. But I'm still happy to put a type signature when I want something to be polymorphic. -- Lennart

On Tue, Jan 31, 2006 at 12:57:18AM +0100, lennart@augustsson.net wrote:
Quoting Andrew Pimlott
: On Mon, Jan 30, 2006 at 11:06:29PM +0100, lennart@augustsson.net wrote:
So I envisage that you'd turn off the warning in the same way as you turn off the M-R today: by a type signature.
But if people were happy adding type signatures for every polymorphic variable definition, they wouldn't be moving to eliminate the M-R, would they? Or do I misunderstand?
Well, my feeling is that the M-R is an ugly wart, and I want it gone. But I'm still happy to put a type signature when I want something to be polymorphic.
Ok, I understand your position now. But even given this view, I think the warning will be problematic.

First, when will the warning be emitted? For all variable assignments without signatures, or only for those that the implementation fails to monomorphize (as an optimization)? The first is heavy-handed; the second, obviously implementation-dependent (someone using another implementation, or another optimization level, will get the warning and will lose sharing).

Second, a warning about "loss of sharing" may befuddle beginners (who are usually not taught to write type signatures at the start).

Well, maybe when someone implements this warning, we will find out I'm wrong and it doesn't cause trouble. And I agree with removing the M-R, with or without the warning.

Andrew

Second, a warning about "loss of sharing" may befuddle beginners (who are usually not taught to write type signatures at the start).
Are standards documents the place for prescribing which warnings should be raised, and under what circumstances? If someone is using GHC and has specified -O2, then clearly something that makes the program take vastly more time is a problem. If someone is learning Haskell and is using Hugs, they probably couldn't care less. Perhaps some warnings should be left up to the implementation to decide...

Thanks, Neil

On Tue, 31 Jan 2006, Neil Mitchell wrote:
Are standards documents the place for prescribing which warnings should be raised, and under what circumstances?
If someone is using GHC, and has specified -O2 then clearly something that causes vastly more time is a problem. If someone is learning Haskell and is using Hugs then they probably couldn't care less. Perhaps some warnings should be left up to the implementation to decide...
That's why my preferred phrasing is "warnings available to the user" - that is, they don't have to be on by default but there should be an option to turn them on. -- flippa@flippac.org 'In Ankh-Morpork even the shit have a street to itself... Truly this is a land of opportunity.' - Detritus, Men at Arms

On Tue, Jan 31, 2006 at 12:52:51AM +0000, Neil Mitchell wrote:
Second, a warning about "loss of sharing" may befuddle beginners (who are usually not taught to write type signatures at the start).
Are standards documents the place for prescribing which warnings should be raised, and under what circumstances?
If someone is using GHC, and has specified -O2 then clearly something that causes vastly more time is a problem. If someone is learning Haskell and is using Hugs then they probably couldn't care less. Perhaps some warnings should be left up to the implementation to decide...
My ultimate point was that the possibility of a warning should carry very little weight (if any) when analyzing the pros and cons of a language change. If you want to argue that a warning would mitigate a disadvantage of a change, you need to think about when the warning would be emitted, which I agree should be outside the scope of a standards discussion. So I am just suggesting that we simplify the discussion by not talking about warnings (which suggestion I will follow as soon as I hit send!). Andrew

Andrew Pimlott wrote:
On Tue, Jan 31, 2006 at 12:52:51AM +0000, Neil Mitchell wrote:
Second, a warning about "loss of sharing" may befuddle beginners (who are usually not taught to write type signatures at the start).
Are standards documents the place for prescribing which warnings should be raised, and under what circumstances?
If someone is using GHC, and has specified -O2 then clearly something that causes vastly more time is a problem. If someone is learning Haskell and is using Hugs then they probably couldn't care less. Perhaps some warnings should be left up to the implementation to decide...
My ultimate point was that the possibility of a warning should carry very little weight (if any) when analyzing the pros and cons of a language change. If you want to argue that a warning would mitigate a disadvantage of a change, you need to think about when the warning would be emitted, which I agree should be outside the scope of a standards discussion. So I am just suggesting that we simplify the discussion by not talking about warnings (which suggestion I will follow as soon as I hit send!).
I agree that requiring a warning in the language standard is a rather dodgy thing. So let's say we don't have a warning. Is this a tried solution? Yes, nhc does exactly that. It has neither the M-R nor a warning, and I have never heard outcries about how bad nhc is because of this.

-- Lennart

On Mon, 30 Jan 2006, Andrew Pimlott wrote:
Ok, I understand your position now. But even given this view, I think the warning will be problematic. First, when will the warning be emitted? For all variable assignments without signatures, or only for those that the implementation fails to monomorphize (as an optimization)?
How about for those a minimal standards-compliant implementation would fail to retain sharing in, coupled with some requirements about sharing equivalent to a specified set of transforms on a dictionary-passing implementation?

-- flippa@flippac.org

There have been many good arguments for and against the M-R in this thread. But the most important argument against the M-R hasn't been put forward yet. Well, I think it is the most important argument anyway.

Haskell is, and has always been, a non-strict language. *Not* a lazy language. Although many people on this list know this, it is so easy to forget that it is worth repeating: Haskell is *not* a lazy language.

There are many evaluation strategies one can use to implement Haskell. The problem with the M-R is that it is a concern only in *some* of these evaluation strategies, most notably lazy evaluation. That makes the M-R such a strange thing to have in the specification of Haskell. If you read the motivation in the section which defines the M-R (4.4.5 in the Haskell 98 report), the report suddenly starts to talk about how many times a certain expression is evaluated. But nowhere in the report is it defined how expressions should be evaluated. This makes the M-R truly butt-ugly!

Some might say: well then, let's make Haskell lazy, all implementations are lazy anyway! No, not all implementations are lazy. And even if that were the case, I think it would be a big mistake to explicitly define Haskell's evaluation order. There is a reason why Haskell was not specified as lazy in the first place.

But that doesn't mean that having a standardized lazy semantics for Haskell would be a bad thing. It would be a very nice thing to have an addendum to Haskell' which defined a standard lazy semantics. If it turns out that people want the M-R after all, it should be placed in such an addendum, which actually talks about the evaluation order. However, I do realize that such an addendum would be quite a bit of work to put together. Perhaps after Haskell' is fixed we can start thinking about whether we have the need and the stamina to put together such a thing.

Cheers,
/Josef

Josef Svenningsson wrote:
There are many evaluation strategies one can use to implement Haskell. The problem with the M-R is that it is a concern only in *some* of these evaluation strategies, most notably lazy evaluation.
True, but it's a concern in any evaluation strategy that tries to avoid multiple evaluation of let-bound expressions, which includes lazy, optimistic, and eager evaluation. A strict dialect of ML with type classes would face the same problems.
If you read the motivation section which defines the M-R [...] the report suddenly starts to talk about how many times a certain a certain expression is evaluated. But nowhere in the report is it defined how expressions should be evaluated. This makes the M-R truly butt-ugly!
I agree, but you don't have to specify lazy evaluation in order to justify the M-R. Some sort of nondeterministic graph reduction semantics would be good enough. -- Ben
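[Editorial note: a hand-written sketch of the dictionary-passing reading that Ben and Philippa appeal to. It is illustrative only, not GHC's actual desugaring, and all names are invented.]

    -- A class constraint becomes an ordinary record ("dictionary") argument.
    data NumDict a = NumDict { plus :: a -> a -> a, fromInt :: Integer -> a }

    numDictInteger :: NumDict Integer
    numDictInteger = NumDict (+) id

    -- An overloaded binding such as   n = 2 + 2   with type (Num a => a)
    -- desugars to a *function* of the dictionary, so every use redoes the
    -- work:
    nOverloaded :: NumDict a -> a
    nOverloaded d = plus d (fromInt d 2) (fromInt d 2)

    -- The monomorphic binding that the M-R forces is a plain value, and
    -- under call-by-need it is evaluated at most once:
    nMono :: Integer
    nMono = nOverloaded numDictInteger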

Following the helpful call to attend to priorities, I reluctantly return to the M-R discussion. I believe a point has been missed that should be a part of this thread. On 2006 January 30, Josef Svenningsson wrote:
But the most important argument against M-R hasn't been put forward yet.
Haskell is, and has always been, a non-strict language. *Not* a lazy language.
That is correct, but it is not a welcome argument. Haskell's unspecified evaluation order is elegant, and gives implementers a happy flexibility. But Haskell has no need to allow innovative experiments within the report.

On the contrary, practical Haskell programs and libraries rely on sharing. Without sharing, the humble Fibonacci example takes exponential time. If the report were to clearly mandate the sharing that nearly everyone counts on, it would be a benefit. The := syntax suggested by John Hughes is an obvious point at which sharing could be mandated.

The wiki page http://hackage.haskell.org/trac/haskell-prime/wiki/MonomorphismRestriction counts "introducing a concept of sharing into the report" as a negative. In the larger context of bolstering Haskell's support for mainstream applications, sharing is worthwhile.
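[Editorial note: for concreteness, here is the memoised-Fibonacci idiom Scott alludes to; the code is an illustration, not quoted from his message.]

    -- `fibs' is a plain value binding: with the sharing that practical
    -- programs rely on, the list is built once and every call to `fib'
    -- walks that same list, so the self-reference is cheap.  If an
    -- implementation did not retain that sharing (for instance, if the
    -- binding were generalised to (Num a => [a]) and re-elaborated at
    -- each use), earlier elements would be recomputed and the familiar
    -- blow-up returns.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    fib :: Int -> Integer
    fib n = fibs !! n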

On Tuesday 31 January 2006 01:37, Andrew Pimlott wrote:
On Tue, Jan 31, 2006 at 12:57:18AM +0100, lennart@augustsson.net wrote:
Quoting Andrew Pimlott
: On Mon, Jan 30, 2006 at 11:06:29PM +0100, lennart@augustsson.net wrote:
So I envisage that you'd turn off the warning in the same way as you turn off the M-R today: by a type signature.
But if people were happy adding type signatures for every polymorphic variable definition, they wouldn't be moving to eliminate the M-R, would they? Or do I misunderstand?
Well, my feeling is that the M-R is an ugly wart, and I want it gone. But I'm still happy to put a type signature when I want something to be polymorphic.
Ok, I understand your position now. But even given this view, I think the warning will be problematic. First, when will the warning be emitted? For all variable assignments without signatures, or only for those that the implementation fails to monomorphize (as an optimization)? The first is heavy-handed;
Agreed.
the second, obviously implementation-dependent (someone using another implementation, or another optimization level, will get the warning and will lose sharing).
And what, if I may ask, is the problem with that? I mean, I am used to different implementations having, sometimes drastic, differences in speed or memory use. Getting a warning about possible loss of efficiency due to loss of sharing might be /very/ helpful; and it can safely be ignored by a beginner, as long as it is unmistakably flagged as such (i.e. an efficiency warning, nothing /wrong/ with the code). Furthermore, you would turn on such a warning only if you are expressly interested in your program's efficiency. And in this case I would be thankful to get as many (substantial) warnings as possible.
Second, a warning about "loss of sharing" may befuddle beginners (who are usually not taught to write type signatures at the start).
Well, maybe when someone implements this warning, we will find out I'm wrong and it doesn't cause trouble. And I agree with removing the M-R, with or without the warning.
I would make it a strong recommendation for implementations to provide a warning about loss of sharing due to 'accidental' (=inferred) polymorphism/overloading, whenever it /actually happens/. But I would demand that it is not turned on by default, so as not to unsettle the unsuspecting beginner. Ben

"Simon Marlow"
Given the new evidence that it's actually rather hard to demonstrate any performance loss in the absence of the M-R with GHC, I'm attracted to the option of removing it in favour of a warning.
As another data point, today for the first time I received an error (not a warning) from ghc about the M-R:

    Ambiguous type variable `a' in the constraint:
      `Ord a' arising from use of `Data.Set.insert' at Pretty.hs:28:11-20
    Possible cause: the monomorphism restriction applied to the following:
      addToSet :: a -> Data.Set.Set a -> Data.Set.Set a
        (bound at Pretty.hs:28:0)
    Probable fix: give these definition(s) an explicit type signature
                  or use -fno-monomorphism-restriction

So, without the M-R or a type signature, my code is OK. The proposal to accept this code but produce an optional warning is (I think) better than the current error.

Regards,
Malcolm
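[Editorial note: a plausible reconstruction of the kind of definition that produces Malcolm's error; the real Pretty.hs code is not shown in the thread, so the binding below is only a guess.]

    import qualified Data.Set as Set

    -- An argument-free pattern binding, so the M-R refuses to generalise
    -- it.  The `Ord a' constraint then has nothing to attach to and is
    -- reported as ambiguous.  Without the M-R (or with the signature GHC
    -- suggests) the binding gets the obvious type
    --   addToSet :: Ord a => a -> Set.Set a -> Set.Set a
    -- and the module is accepted.
    addToSet = Set.insert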

On Mon, 30 Jan 2006, "Simon Marlow"
Given the new evidence that it's actually rather hard to demonstrate any performance loss in the absence of the M-R with GHC, I'm attracted to the option of removing it in favour of a warning.
I also want to remove the M-R, because of various reasons that have been mentioned before. However, to stand on more solid ground I suggest that someone runs some performance tests, with and without -fno-monomorphism-restriction, to see whether the M-R has any great impact in practice. There are some performance test suites based on real code out there, right? Nofib? -- /NAD
participants (14):
- Andrew Pimlott
- Ben Rudiak-Gould
- Benjamin Franksen
- Josef Svenningsson
- Lennart Augustsson
- lennart@augustsson.net
- Malcolm Wallace
- Neil Mitchell
- Nils Anders Danielsson
- Philippa Cowderoy
- Scott Turner
- Sebastian Sylvan
- Simon Marlow
- Taral