GLfloat cast... have I got it right?

As I chunder on through OpenGL-land with Haskell I find things that sometimes confuse me! Is there some kind of assumption being made here about the remaining two zeros?

    color $ Color3 (0::GLfloat) 0 0

I can see that it is typing the first zero as GLfloat, but why don't I need to do it to the remaining two zeros? Color3 is "Color3 !a !a !a". The !a is, IIUIC, a strictness instruction that ensures that whatever expression I put here is evaluated immediately i.e. no thunk is generated and presumably no space leaks either: something rendering at 60 fps with 'interesting' calculations for RGB values for example could cripple the application!

But back to the syntax: I am guessing (and hoping I've got it right for the *right* reasons) that it works because the definition says "a" for all three and that explicitly typing the first one automatically tells (infers!) the type inference system that the other two are to be treated as GLfloat types too. :)
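To make it concrete, here's the smallest complete example I can distill (vertexColour is just a name I've made up for illustration):

    import Graphics.Rendering.OpenGL (Color3(..), GLfloat)

    -- annotating the first zero fixes `a' to GLfloat; the other two
    -- zeros are then inferred to be GLfloat as well
    vertexColour :: Color3 GLfloat
    vertexColour = Color3 (0 :: GLfloat) 0 0

    -- without the top-level signature, the inner annotation alone
    -- would pin the whole thing down too:
    --   Color3 (0 :: GLfloat) 0 0  is inferred :: Color3 GLfloat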

On Thu, Jul 7, 2011 at 1:19 PM, Sean Charles wrote:
But back to the syntax: I am guessing (and hoping I've got it right for the *right* reasons) that it works because the definition says "a" for all three and that explicitly typing the first one automatically tells (infers!) the type inference system that the other two are to be treated as GLfloat types too.
Yes, that's right =). Cheers, -- Felipe.

The !a is, IIUIC, a strictness instruction that ensures that whatever expression I put here is evaluated immediately i.e. no thunk is generated and presumably no space leaks either: something rendering at 60 fps with 'interesting' calculations for RGB values for example could cripple the application!
Correct. (I believe it forces the expression to WHNF, which basically means 'forces the value' in the case of things like numbers and floats, AFAIK, but I've only just learned about that syntax myself)
But back to the syntax: I am guessing (and hoping I've got it right for the *right* reasons) that it works because the definition says "a" for all three and that explicitly typing the first one automatically tells (infers!) the type inference system that the other two are to be treated as GLfloat types too.
Exactly. As you've specified one of them to be a given type (note that I think it would be slightly off the mark to call it a "cast" here [as per the subject], unless that terminology is the norm for numbers in Haskell -- it reads more [to me] like you're actually informing the typing engine that it *is* a GLfloat, not to make it into one!), the rest must follow by virtue of them sharing that type. Cheers, A

On Friday 08 July 2011, 00:37:56, Arlen Cuss wrote:
The !a is, IIUIC, a strictness instruction that ensures that whatever expression I put here is evaluated immediately i.e. no thunk is generated and presumably no space leaks either: something rendering at 60 fps with 'interesting' calculations for RGB values for example could cripple the application!
Correct. (I believe it forces the expression to WHNF, which basically means 'forces the value' in the case of things like numbers and floats, AFAIK, but I've only just learned about that syntax myself)
Right. Precisely, it forces the fields to WHNF *when the entire value is forced to WHNF*. With

    data Foo a = F a a

(x :: Foo Int) `seq` () forces the constructor F but leaves the fields alone, so

    (F undefined undefined) `seq` ()

evaluates to (). With

    data Bar a = B !a !a

(y :: Bar Int) `seq` () forces the constructor B and also the fields [to WHNF] (since the fields have type Int, that means full evaluation), so

    (B 3 undefined) `seq` ()

yields _|_. But

    (B (3:undefined) (4:undefined)) `seq` ()

evaluates to ().

So putting a `!' on a component of, say, type String only forces it enough to see whether it's empty or not. `!' is most useful for types where WHNF implies sufficient evaluation, which means the constructors of that type need to have strict fields too (if any). Types like Integer, Int, Word, Double, Float have (in GHC) strict fields in the form of "raw bytes", Data.Map has `!'-ed fields (but the values are lazy), so with

    data Quux a b = Q ... !(Map a b) ...

forcing a value of such a type to WHNF also forces the entire spine of the Map, which often is sufficient, but not always. If you also need the values in the Map to be evaluated, you have to use other methods (normally it's best to make sure the values are evaluated when they are inserted into the Map, doing that later tends to be expensive).
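Here's the above as a small complete program, if you want to experiment (the print wrapping is mine, just for demonstration):

    data Foo a = F a a      -- lazy fields
    data Bar a = B !a !a    -- strict fields

    main :: IO ()
    main = do
        -- lazy fields: only the constructor is forced
        print (F (undefined :: Int) undefined `seq` ())              -- ()
        -- strict [Int] fields: only the outermost cons cells are forced
        print (B (3 : undefined :: [Int]) (4 : undefined) `seq` ())  -- ()
        -- strict Int fields: WHNF means full evaluation, so this dies
        print (B (3 :: Int) undefined `seq` ())  -- throws Prelude.undefined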
But back to the syntax: I am guessing (and hoping I've got it right for the *right* reasons) that it works because the definition says "a" for all three and that explicitly typing the first one automatically tells (infers!) the type inference system that the other two are to be treated as GLfloat types too.
Exactly. As you've specified one of them to be a given type (note that I think it would be slightly off the mark to call it a "cast" here [as per the subject], unless that terminology is the norm for numbers in Haskell
It's not. The term "cast" is rarely used in Haskell. For (0 :: Float) one would rather say that one specifies the type. But calling it a cast isn't wrong, since it means "apply this conversion function to that value", which is what a cast in other languages means too. However, there's a difference. In C (Java, C#, ...), if I have

    int x, y;
    // stuff setting the values

I can do

    double d = (double)x / y;

so I can tell the compiler explicitly to convert the one monomorphic value to another type, and the compiler automatically converts the other to the same type to perform the calculation. In Haskell, a) I cannot invoke a conversion function on a monomorphic value by just giving a type signature, b) I have to explicitly convert all involved values.

Re a): Number literals come with an implicit conversion function, 1234 and 5.678 stand for "fromInteger 1234" resp. "fromRational 5.678" and a type signature tells the compiler which fromInteger/fromRational to invoke. Number literals are polymorphic expressions and polymorphic expressions can be "cast" to a specific type by a type signature [which tells the compiler which fromInteger, return, ... to use]. To convert a monomorphic expression, however, the conversion function has to be explicitly invoked.
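The Haskell counterpart of that C snippet has to spell out both conversions (same variable names, chosen for the parallel):

    x, y :: Int
    x = 7
    y = 2

    -- a bare (x :: Double) would be a type error, not a conversion;
    -- each monomorphic Int must be converted explicitly
    d :: Double
    d = fromIntegral x / fromIntegral y   -- 3.5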
-- it reads more [to me] like you're actually informing the typing engine that it *is* a GLfloat, not to make it into one!), the rest must follow by virtue of them sharing that type.
Yup. Per the data definition, they all have the same type, so when you know one, you know them all.

Right. Precisely, it forces the fields to WHNF *when the entire value is forced to WHNF*.
A-ha! That gives a good definition for me to work from. Thanks for the seq examples; I'm only just starting to get to the point where (I think) I can consider these issues with any insight.
So putting a `!' on a component of, say, type String only forces it enough to see whether it's empty or not. `!' is most useful for types where WHNF implies sufficient evaluation, which means the constructors of that type need to have strict fields too (if any). Types like Integer, Int, Word, Double, Float have (in GHC) strict fields in the form of "raw bytes", Data.Map has `!'-ed fields (but the values are lazy), so with
data Quux a b = Q ... !(Map a b) ...
forcing a value of such a type to WHNF also forces the entire spine of the Map, which often is sufficient, but not always. If you also need the values in the Map to be evaluated, you have to use other methods (normally it's best to make sure the values are evaluated when they are inserted into the Map, doing that later tends to be expensive).
That makes a lot of sense.
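To check I've understood the "evaluate them on insert" point, a sketch (Data.Map.Strict is the value-strict API from the containers package, in versions that provide it; the counting example is made up):

    import qualified Data.Map as Lazy
    import qualified Data.Map.Strict as Strict

    -- the lazy API leaves each (+) as a thunk in the map; the strict
    -- API evaluates each value to WHNF as it is inserted
    lazyCount, strictCount :: Lazy.Map Char Int
    lazyCount   = foldr (\c m -> Lazy.insertWith   (+) c 1 m) Lazy.empty   "abracadabra"
    strictCount = foldr (\c m -> Strict.insertWith (+) c 1 m) Strict.empty "abracadabra"

    main :: IO ()
    main = print (lazyCount == strictCount)  -- True; they differ only in thunks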
Re a): Number literals come with an implicit conversion function, 1234 and 5.678 stand for "fromInteger 1234" resp. "fromRational 5.678" and a type signature tells the compiler which fromInteger/fromRational to invoke. Number literals are polymorphic expressions and polymorphic expressions can be "cast" to a specific type by a type signature [which tells the compiler which fromInteger, return, ... to use]. To convert a monomorphic expression, however, the conversion function has to be explicitly invoked.
Polymorphic expressions are handy :-) One of the things that surprised me the most -- particularly coming to Haskell by way of ML -- was terms like `minBound', `maxBound', `read x' and so on. Seeing how any term could be defined per-instance in a typeclass -- not just functions -- was a key moment, as it's easy to get stuck (very hard) in a given mindset. From ML, polymorphic functions made sense, but *plain values*? (Or not so plain.) Of course, where typeclasses are concerned, there's not much difference, but I didn't even consider that at first. For instance:
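(HasDefault and defaultValue are names I've invented, not a real class in base -- Bounded's minBound is the real-world analogue:)

    -- a class member that is a plain value, chosen per instance,
    -- just like minBound/maxBound in Bounded
    class HasDefault a where
        defaultValue :: a

    instance HasDefault Int where
        defaultValue = 0

    instance HasDefault Bool where
        defaultValue = False

    -- the instance is selected by the expected type:
    --   (defaultValue :: Int)  ==> 0
    --   (defaultValue :: Bool) ==> False

A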

We need a new Pokemon character: Haskellsaur --- gotta type 'em all! Arlen, Daniel... awesome replies as usual. I was being lazy using "cast"; I couldn't think of a better word to use, but now I do! Thanks again, Sean
participants (4):
- Arlen Cuss
- Daniel Fischer
- Felipe Almeida Lessa
- Sean Charles