Strange context reduction with partial application and casting

Hi cafe, I found a strange phenomenon while playing around with GHCi. Please see the following interaction:

    GHCi, version 7.0.3: http://www.haskell.org/ghc/  :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer-gmp ... linking ... done.
    Loading package base ... linking ... done.
    Loading package ffi-1.0 ... linking ... done.
    Prelude> let f a b c d = a == b && c == d
    Prelude> :t f
    f :: (Eq a, Eq a1) => a -> a -> a1 -> a1 -> Bool
    Prelude> let g = f 1
    Prelude> :t g
    g :: Integer -> () -> () -> Bool

Here I expected the type of g to be (Eq a) => Integer -> a -> a -> Bool. But suddenly all the context in the type is gone, and there is a strange unit type in it instead. I tried another example to get what I originally expected:

    Prelude> let g = f :: (Eq a) => Int -> Int -> a -> a -> Bool
    Prelude> :t g
    g :: Int -> Int -> () -> () -> Bool

The resulting type is different from the one I tried to cast to. I know that Haskell tries to reduce the context in a type when there is sufficient information to do so, but this goes further than seems normal. Is this intended behavior or just a bug?

On Saturday 02 July 2011, 20:23:48, Wonchan Lee wrote:
Hi cafe,
I found a strange phenomenon while playing around with GHCi. Please see the following interaction:

    GHCi, version 7.0.3: http://www.haskell.org/ghc/  :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer-gmp ... linking ... done.
    Loading package base ... linking ... done.
    Loading package ffi-1.0 ... linking ... done.
    Prelude> let f a b c d = a == b && c == d
    Prelude> :t f
    f :: (Eq a, Eq a1) => a -> a -> a1 -> a1 -> Bool
    Prelude> let g = f 1
    Prelude> :t g
    g :: Integer -> () -> () -> Bool

Here I expected the type of g to be (Eq a) => Integer -> a -> a -> Bool. But suddenly all the context in the type is gone, and there is a strange unit type in it instead. I tried another example to get what I originally expected:

    Prelude> let g = f :: (Eq a) => Int -> Int -> a -> a -> Bool
    Prelude> :t g
    g :: Int -> Int -> () -> () -> Bool

The resulting type is different from the one I tried to cast to. I know that Haskell tries to reduce the context in a type when there is sufficient information to do so, but this goes further than seems normal.
Is this intended behavior or just a bug?
It's intended, caused by the monomorphism restriction. You bind g by a binding with neither a type signature nor function arguments, therefore the monomorphism restriction (cf. language report, section 4.5) says g must have a monomorphic type.

Per the report, "let g = f 1" should not compile at all, because one of the requirements for type defaulting is that a numeric class constraint be present, but the second type variable only has an Eq constraint. However, ghci uses extended default rules (too many ambiguous type errors would arise otherwise) and thus defaults that type variable to ().

You can get the expected behaviour by

- giving g a type signature (in your last example, the type signature was for f):

    let g :: Eq a => Int -> Int -> a -> a -> Bool; g = f

- binding g with a function binding, i.e. with arguments:

    let g a b c d = f a b c d

  (let g a = f a is already enough; all that is required to get a polymorphic type for g is one function argument in the binding)

- disabling the monomorphism restriction:

    :set -XNoMonomorphismRestriction
    let g = f
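The three workarounds can be sketched in a compiled module as well; this is a hypothetical file (names g1/g2/g3 are mine, not from the thread), with the pragma standing in for the third workaround:

```haskell
{-# LANGUAGE NoMonomorphismRestriction #-}

f :: (Eq a, Eq b) => a -> a -> b -> b -> Bool
f a b c d = a == b && c == d

-- Workaround 1: give the binding an explicit polymorphic signature.
g1 :: Eq a => Int -> Int -> a -> a -> Bool
g1 = f

-- Workaround 2: use a function binding (arguments on the left);
-- function bindings are never subject to the monomorphism restriction.
g2 a b c d = f a b c d

-- Workaround 3: with NoMonomorphismRestriction (enabled above),
-- even a simple pattern binding stays polymorphic.
g3 = f

main :: IO ()
main = do
  print (g1 1 1 'x' 'x')
  print (g2 (1 :: Int) 2 "a" "a")
  print (g3 (1 :: Int) 1 True True)
```

Without the pragma, g1 and g2 would still compile unchanged; only the bare pattern binding g3 would fall under the restriction.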

On Sat, Jul 02, 2011 at 09:02:13PM +0200, Daniel Fischer wrote:
- disabling the monomorphism restriction :set -XNoMonomorphismRestriction let g = f
This is the recommended solution. The confusion caused by the MR far outweighs its occasional benefits. -Brent

At Sat, 2 Jul 2011 17:23:50 -0400, Brent Yorgey wrote:
On Sat, Jul 02, 2011 at 09:02:13PM +0200, Daniel Fischer wrote:
- disabling the monomorphism restriction :set -XNoMonomorphismRestriction let g = f
This is the recommended solution. The confusion caused by the MR far outweighs its occasional benefits.
Recommended by some people, but by no means everyone. For instance, Vytiniotis, Peyton Jones, and Schrijvers make a good argument that the monomorphism restriction should effectively be expanded to include both pattern and function bindings in let and where clauses:

    http://research.microsoft.com/pubs/102483/tldi10-vytiniotis.pdf

The above paper is currently implemented in GHC, and on by default if you enable GADTs. (So you would additionally need -XNoMonoLocalBindings if you wanted to use GADTs.) Moreover, even with -XNoMonoLocalBindings, you still run into the fact that bindings are not generalized within a declaration group, which could lead to confusion. In particular, consider the following program:

    {-# LANGUAGE NoMonomorphismRestriction #-}
    x = 2    -- The type of x is: Num a => a
    y = (x + y) :: Int

The type of x is what you would expect without the monomorphism restriction. Now say x has a gratuitous use of y:

    {-# LANGUAGE NoMonomorphismRestriction #-}
    x = 2 where _ = y    -- The type of x is: Int
    y = (x + y) :: Int

If you want x to be polymorphic in this case, you have to give it a type signature anyway:

    {-# LANGUAGE NoMonomorphismRestriction #-}
    x :: Num a => a
    x = 2 where _ = y
    y = (x + y) :: Int

Thus, what I would recommend, instead of -XNoMonoLocalBindings, is to give type signatures to your polymorphic bindings. This makes the code more readable. It has the disadvantage that Haskell doesn't allow you to name monomorphic type variables, which, for local bindings, can require either the use of -XScopedTypeVariables or giving extra function arguments whose only purpose is to bring a type variable into scope. But both of those are probably more future-proof than -XNoMonomorphismRestriction.

David
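The -XScopedTypeVariables route mentioned above can be sketched as follows; this is a hypothetical example (the function allEqual and its helper eq are mine, not from the thread). An explicit forall in the outer signature brings the type variable into scope, so a local binding can be given a signature mentioning that same variable:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

allEqual :: forall a. Eq a => a -> a -> a -> Bool
allEqual x y z = eq x y && eq y z
  where
    -- Without ScopedTypeVariables, this `a` would be a fresh
    -- variable and the signature would be rejected; with it,
    -- `a` refers to the outer, monomorphic-in-this-scope `a`.
    eq :: a -> a -> Bool
    eq = (==)

main :: IO ()
main = print (allEqual 'q' 'q' 'q')
```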

On Sunday 03 July 2011, 09:19:11, dm-list-haskell-cafe@scs.stanford.edu wrote:
At Sat, 2 Jul 2011 17:23:50 -0400,
Brent Yorgey wrote:
On Sat, Jul 02, 2011 at 09:02:13PM +0200, Daniel Fischer wrote:
- disabling the monomorphism restriction
:set -XNoMonomorphismRestriction
let g = f
This is the recommended solution. The confusion caused by the MR far outweighs its occasional benefits.
Recommended by some people, but by no means everyone.
For instance, Vytiniotis, Peyton Jones, and Schrijvers make a good argument that the monomorphism restriction should effectively be expanded to include both pattern and function bindings in let and where clauses:
http://research.microsoft.com/pubs/102483/tldi10-vytiniotis.pdf
The above paper is currently implemented in GHC, and on by default if you enable GADTs. (So you would additionally need -XNoMonoLocalBindings if you wanted to use GADTs.) Moreover, even
It's [No]MonoLocalBinds, which is shorter - but apparently the participle is more intuitive. [However, I'm not advocating a change]
with -XNoMonoLocalBindings, you still run into the fact that bindings are not generalized within a declaration group, which could lead to confusion. In particular, consider the following program:
    {-# LANGUAGE NoMonomorphismRestriction #-}
    x = 2    -- The type of x is: Num a => a
    y = (x + y) :: Int
The type of x is what you would expect without the monomorphism restriction. Now say x has a gratuitous use of y:
    {-# LANGUAGE NoMonomorphismRestriction #-}
    x = 2 where _ = y    -- The type of x is: Int
    y = (x + y) :: Int
If you want x to be polymorphic in this case, you have to give it a type signature anyway:
    {-# LANGUAGE NoMonomorphismRestriction #-}
    x :: Num a => a
    x = 2 where _ = y
    y = (x + y) :: Int
Thus, what I would recommend, instead of -XNoMonoLocalBindings, is to
You meant NoMonomorphismRestriction here, I think.
give type signatures to your polymorphic bindings. This makes the
+1
code more readable. It has the disadvantage that Haskell doesn't allow you to name monomorphic type variables, which, for local bindings, can require either the use of -XScopedTypeVariables or giving extra function arguments whose only purpose is to bring a type variable into scope. But both of those are probably more future-proof than -XNoMonomorphismRestriction.
But as I understand it, the concern is ghci, where truly local bindings are probably rare and type signatures are commonly omitted. So putting ":s -XNoMonomorphismRestriction" in the .ghci file probably prevents more confusion and inconvenience than it causes.
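Concretely, the suggestion amounts to a one-line .ghci file (":s" above is just the abbreviated form of ":set"):

```
:set -XNoMonomorphismRestriction
```

On Unix-like systems this file usually lives at ~/.ghci; the exact location is platform-dependent.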

On Sun, Jul 3, 2011 at 2:05 AM, Daniel Fischer < daniel.is.fischer@googlemail.com> wrote:
But as I understand it, the concern is ghci, where truly local bindings are probably rare and type signatures are commonly omitted. So putting ":s -XNoMonomorphismRestriction" in the .ghci file probably prevents more confusion and inconvenience than it causes.
Especially given GHCi's extended defaulting rules, which make g = f 1 unambiguous. [It sets a1 to ()] -- ryan
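The defaulting Ryan describes can be reproduced in a compiled module via -XExtendedDefaultRules, the same rules GHCi uses; this is a hypothetical file, not from the thread:

```haskell
{-# LANGUAGE ExtendedDefaultRules #-}

f :: (Eq a, Eq b) => a -> a -> b -> b -> Bool
f a b c d = a == b && c == d

-- With the monomorphism restriction on, g must get a monomorphic
-- type: the (Eq a, Num a) constraint defaults a to Integer as usual,
-- and extended defaulting resolves the leftover Eq b to b ~ (),
-- so g :: Integer -> () -> () -> Bool.
g = f 1

main :: IO ()
main = do
  print (g 1 () ())
  print (g 2 () ())
```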
participants (5)
- Brent Yorgey
- Daniel Fischer
- dm-list-haskell-cafe@scs.stanford.edu
- Ryan Ingram
- Wonchan Lee