Newbie: Is 'type' synonym hiding too much?

I am learning Haskell, working through Simon Thompson's book "Haskell: The Craft of Functional Programming" (second edition). Solving the problems in the book with more or less success, I have arrived almost at the end, at section 17.5, "Case study: parsing expressions". Most probably the question I am going to ask here is very stupid, so please bear with me. In any case, after getting this far in the book I felt certain that I knew what a function declaration means. So I thought that the declaration:

F :: a -> b -> c

is the same as:

F :: a -> (b -> c)

and means either:

- a function 'f' of one argument of type 'a' that returns a function of type (b -> c), or it can also be interpreted as:
- a function 'f' of two arguments of type 'a' and 'b' returning a value of type 'c'

Now, in section 17.5 of the book one may see the following declaration:

succeed :: b -> Parse a b

*Before looking at the definition of 'succeed'* one may think that 'succeed' is a function of *one* argument of type 'b' that returns an object of type 'Parse a b'. Yet the definition given in the book is:

succeed val inp = [(val, inp)]

Looking at this definition, I am inclined to think that 'succeed' is a function of *two* arguments, named 'val' and 'inp' in the definition! How can such a definition be correct? OK, let's see what this 'Parse a b' type is:

type Parse a b = [a] -> [(b, [a])]

So how does this help to make the definition 'succeed val inp = [(val, inp)]' sound right? Well, the only way for me to make sense of it is as follows. In this case Haskell's 'type' makes a type synonym for the type:

[a] -> [(b, [a])]

In order not to go out of my mind, I start thinking about 'type' as a macro substitution mechanism. Then I do this substitution *myself, as if I were the Haskell runtime*, and get as a result the following declaration of the *real* function that the Haskell runtime works with:

succeed :: b -> [a] -> [(b, [a])]

Great! This last declaration matches perfectly with the function definition:

succeed val inp = [(val, inp)]

So I start feeling better; after all, it looks like my understanding of Haskell function declarations is not too badly flawed. Well, but here come my main questions:

1. Must I work as a macro translator every time I see *any* function declaration?
2. Should I search through main and imported modules for treacherous 'type' constructs?
3. Where, in this case, does the principle of implementation abstraction go? Why must I know *all* the details of a function argument's type structure in order to understand how this function works?

Another example of a similar function from the book is:

alt :: Parse a b -> Parse a b -> Parse a b
alt p1 p2 inp = p1 inp ++ p2 inp

In the function definition I see three parameters:

p1 - matches the function declaration perfectly
p2 - matches the function declaration perfectly
inp - how do I match this parameter with the function declaration?

I can match the 'inp' parameter with the 'alt' declaration *only* after working as a macro processor that expands the type synonym 'Parse a b' into '[a] -> [(b, [a])]' and getting the *real* declaration:

alt :: ([a] -> [(b, [a])]) -> ([a] -> [(b, [a])]) -> [a] -> [(b, [a])]

which matches

alt p1 p2 inp = p1 inp ++ p2 inp

where 'inp' matches the *last* '[a]' argument. It seems that the life of a "human macro processor" becomes really hard when non-trivial functions with 'type' synonym arguments come into play!

Where am I wrong? Please enlighten me; I am really at a loss! And thanks for reading all this! Below I give a code example of these functions.
Thanks,
Dima

module Parser where

import Data.Char

type Parse a b = [a] -> [(b, [a])]

none :: Parse a b
none inp = []

succeed :: b -> Parse a b
succeed val inp = [(val, inp)]

suc :: b -> [a] -> [(b, [a])]
suc val inp = [(val, inp)]

spot :: (a -> Bool) -> Parse a a
spot p [] = []
spot p (x:xs)
  | p x       = [(x, xs)]
  | otherwise = []

alt :: Parse a b -> Parse a b -> Parse a b
alt p1 p2 inp = p1 inp ++ p2 inp

bracket = spot (=='(')
dig     = spot isDigit

t1 = (bracket `alt` dig) "234"
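As a quick sanity check, loading this module in GHCi and evaluating t1 should give, modulo prompt formatting, something like the following (the result is worked out by hand from the definitions above):

$ ghci Parser.hs
*Parser> t1
[('2',"34")]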

On Thu, Mar 22, 2007 at 06:13:00PM +0300, Dmitri O.Kondratiev wrote:
F :: a -> b -> c
Is the same as:
F :: a -> (b -> c)
Correcting the typo (use f, not F), these mean the same thing.
And means either:
- a function 'f' of one argument of type 'a' that returns a function of type (b -> c), or it can also be interpreted as:
- a function 'f' of two arguments of type 'a' and 'b' returning a value of type 'c'
Yes. The essential point to understand is that these two interpretations *are the same*.
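A small example makes the equivalence concrete (add and addOne are just illustrative names, not from the book):

add :: Int -> Int -> Int
add x y = x + y

-- Reading the type as  Int -> (Int -> Int):  supplying only the first
-- argument returns a function of the remaining one.
addOne :: Int -> Int
addOne = add 1

-- Reading it as a two-argument function:  add 1 41 == 42 == addOne 41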
Now, in section 17.5 of the book one may see the following declaration:
succeed :: b -> Parse a b
*Before looking at the definition of 'succeed'* one may think that 'succeed' is a function of *one* argument of type 'b' that returns an object of type 'Parse a b'.
That's what it is. However, without looking at the definition of Parse a b, you can't tell whether that result is itself a function, and therefore all you can say about succeed is that it takes *at least* one argument.
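For contrast, a synonym does not have to stand for a function type at all; here is a small illustrative example (Pair and dup are made-up names, not from the book):

type Pair a = (a, a)

dup :: b -> Pair b      -- genuinely a one-argument function
dup x = (x, x)

-- With  type Parse a b = [a] -> [(b, [a])]  the same shape of signature,
-- succeed :: b -> Parse a b, hides a further [a] argument.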
Then I do this substitution *myself, as if I were the Haskell runtime*, and get as a result the following declaration of the *real* function that the Haskell runtime works with:
I'm not sure why you feel the need to talk about runtime. This all happens at compile time.
2. Should I search through main and imported modules for treacherous 'type' constructs?
They are not treacherous. But yes, if you want to know what a type stands for, you need to look it up. The "treacherous" thing here is that in Haskell, returning a function is the same as taking one more parameter. This may feel strange at first, but it is a very important idiom and you do need to learn to live with it if you want to use Haskell.
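The 'succeed' from the book shows the idiom directly. Assuming the Parser module posted above is loaded, both of the following are well typed:

-- succeed applied to one argument is itself a parser:
p :: Parse Char Int
p = succeed 7

-- succeed applied to "one more parameter" runs that parser:
r :: [(Int, String)]
r = succeed 7 "abc"     -- r == [(7, "abc")]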
3. Where, in this case, does the principle of implementation abstraction go? Why must I know *all* the details of a function argument's type structure in order to understand how this function works?
"type" is just notational convenience. If you want abstraction, use "newtype" or "data".

On Thu, Mar 22, 2007 at 06:13:00PM +0300, Dmitri O.Kondratiev wrote:
succeed :: b -> Parse a b
*Before looking at the definition of 'succeed'* one may think that 'succeed' is a function of *one* argument of type 'b' that returns an object of type 'Parse a b'.
Yet the definition given in the book is:
succeed val inp = [(val, inp)]
It's common to instead write this as

succeed :: b -> Parse a b
succeed val = \inp -> [(val, inp)]

so the definition fits the type signature better.
1. Must I work as a macro translator every time I see *any* function declaration?
If you are going to be dealing with the actual definitions of something like Parse then you do need to know what the synonym is, yes. Your implementation should be able to help you, e.g. in ghci:

Prelude> :i ReadS
type ReadS a = String -> [(a, String)]
 	-- Defined in Text.ParserCombinators.ReadP

The main advantage of the synonym is when you are /using/ the parser library, so you can put parsers in sequence etc. without needing to know that internally they're implemented as functions.
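The same query works for the synonym from the book once the module posted above is loaded; the source-location comment is abbreviated here and depends on the file, but the answer looks roughly like:

*Parser> :i Parse
type Parse a b = [a] -> [(b, [a])]
 	-- Defined at Parser.hs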
2. Should I search through main and imported modules for treacherous 'type' constructs?

3. Where, in this case, does the principle of implementation abstraction go? Why must I know *all* the details of a function argument's type structure in order to understand how this function works?
If you want abstraction then you need to use newtype or data to declare the type, e.g. if you had

newtype Parser a b = Parser ([a] -> [(b, [a])])

then

succeed :: b -> Parser a b
succeed val inp = ...

would be rejected by the compiler. Instead you would have to write

succeed :: b -> Parser a b
succeed val = Parser (\inp -> ...)

Thanks
Ian
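Carrying that through to the rest of the small library gives a feel for what the wrapper costs and buys; the names Parser and runParser below are illustrative, not from the book:

newtype Parser a b = Parser { runParser :: [a] -> [(b, [a])] }

succeed :: b -> Parser a b
succeed val = Parser (\inp -> [(val, inp)])

alt :: Parser a b -> Parser a b -> Parser a b
alt p1 p2 = Parser (\inp -> runParser p1 inp ++ runParser p2 inp)

-- Every definition now says explicitly where the hidden function starts
-- (Parser ...) and how to get at it (runParser ...), which is precisely
-- the abstraction boundary a plain 'type' synonym never draws.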

Dmitri writes:
Now, in section 17.5 of the book one may see the following declaration:

succeed :: b -> Parse a b
*Before looking at the definition of 'succeed'* one may think that 'succeed' is a function of *one* argument of type 'b' that returns an object of type 'Parse a b'.
Yet the definition given in the book is:

succeed val inp = [(val, inp)]
I may not answer all your questions, but I will make a few observations.

I think the main issue here is that from the user's perspective the Parse type should be (or is) abstract. Users of Parse should not really have to know how it is internally implemented. But implementors of Parse will (of course) be concerned with those details. How you look at the Parse type depends on which camp you belong to: implementor or user.

The definition of succeed is the concern of the implementor, since it is a primitive of the Parse library. So an implementor will know that Parse is internally a function type. It might be nicer if they wrote:

succeed val = \inp -> [(val, inp)]

but that is a matter of style.

If the library is written correctly, the user should not have to care about what the Parse synonym unfolds to. They will mostly care about how to build "atomic" parsers, how to combine parsers together to form new ones, and how to run parsers. It is very liberating for the user if they don't have to know what goes on inside the Parse type. Perhaps the textbook focuses mostly on the implementor's point of view, where you are forced to be aware of the gory details.

Haskell uses abstract types in other places, most notably the IO type. There is quite a bit of complexity hidden in that type, but its user interface is relatively simple. If Haskell has taught me anything, it's that abstraction rules!

Cheers,
Bernie.
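As a concrete picture of that user's view, the parsers from the module posted earlier can be combined without ever mentioning what Parse expands to; only actually running the parser, i.e. applying it to input, touches the representation (openOrDigit is an illustrative name):

openOrDigit :: Parse Char Char
openOrDigit = bracket `alt` dig

-- openOrDigit "(1)"  ==  [('(',"1)")]
-- openOrDigit "1)"   ==  [('1',")")]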
participants (4):
- Antti-Juhani Kaijanaho
- Bernie Pope
- Dmitri O.Kondratiev
- Ian Lynagh