RE: [Haskell] Nested guards?

[redirecting to Haskell Cafe]

| > It is clear that this situation must not stay this way. Bit by bit,
| > disciples of Perl and Python discover Haskell and demand that Haskell will
| > be plastered with syntactic sugar until the simplicity of the functional
| > approach isn't visible anymore. Sadly, they seem to be successful with this
| > as syntax extensions like parallel list comprehensions show.

I think it is helpful to distinguish "superficial" complexity from "deep" complexity. All of Haskell's syntactic sugar is just that: it can be translated into a small purely-functional language using straightforward translation schemes. Even something relatively complicated, like the "order by/group by" extension that Phil and I proposed at this year's Haskell workshop, has an easy translation that takes a dozen or two lines to give in full detail.

In contrast, something like higher-order functions, or mutable state, is deep complexity. Both have a pervasive effect on the language semantics and on its implementation. (The effect of mutable state is much, much worse, but they are both deep.)

Concerning Haskell, I'm quite relaxed about superficial complexity, as you'll have seen from what happens in GHC. Section 3.6 of the History of Haskell paper addresses this point specifically [1]. You may disagree with the choice made by the Haskell committee at the time, and with subsequent developments, but it was quite a conscious choice, and one not without its advantages.

Simon

[1] http://research.microsoft.com/%7Esimonpj/papers/history-of-haskell/history.p...

On Thursday, 6 December 2007 at 10:03, Simon Peyton Jones wrote:
[redirecting to Haskell Cafe]
| > It is clear that this situation must not stay this way. Bit by bit,
| > disciples of Perl and Python discover Haskell and demand that Haskell
| > will be plastered with syntactic sugar until the simplicity of the
| > functional approach isn't visible anymore. Sadly, they seem to be
| > successful with this as syntax extensions like parallel list
| > comprehensions show.
I think it is helpful to distinguish "superficial" complexity from "deep" complexity. All of Haskell's syntactic sugar is just that: it can be translated into a small purely-functional language using straightforward translation schemes. Even something relatively complicated like the "order by/group by" extension that Phil and I proposed at this year's Haskell workshop, has an easy translation that takes a dozen or two lines to give in full detail.
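As an illustration of such a translation scheme (the example is mine, not part of the original message), here is a list comprehension next to one straightforward desugaring into plain functions:

```haskell
-- A sugared list comprehension:
sugared :: [Int]
sugared = [ x * x | x <- [1 .. 10], even x ]

-- The same expression after a simple, mechanical translation into
-- the core language. (Illustrative; GHC's actual scheme also goes
-- via concatMap.)
desugared :: [Int]
desugared = concatMap (\x -> if even x then [x * x] else []) [1 .. 10]
```

Both expressions evaluate to the same list; the sugar adds no semantic depth.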
In contrast, something like higher order functions, or mutable state, is deep complexity. Both have a pervasive effect on the language semantics and on its implementation. (The effect of mutable state is much, much worse, but they are both deep.)
The point is that higher-order functions, type classes, etc. enable you to "extend the language yourself" to a large degree by just creating libraries. Such powerful concepts give you the ability to create domain-specific languages by just writing Haskell code. So they serve the approach of having few concepts in the language which allow you to do many things.

On the other hand, syntactic sugar often deals with very special cases. Guards are sugar for case distinction on just a single type (Bool); list comprehensions deal with specific operations (map, filter, etc.) of a specific type ([]). Parallel list comprehensions even sugar a function which is somehow broken in a language without dependent types (zip, because of the ad-hoc solution for zipping lists of different length).

In my opinion, syntactic sugar is good if it is about more general things. do and proc expressions are really useful, but they don't deal with specific types; they deal with whole classes of types which are rather large. The ability to define infix operators is really helpful, especially for creating DSLs. (Johannes Waldmann has a different opinion here.)
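The ad-hoc zip behaviour Wolfgang alludes to shows up in two lines (my illustration, not part of the post):

```haskell
-- zip silently truncates to the length of the shorter argument;
-- nothing in the types guarantees that both lists match up.
pairs :: [(Int, Char)]
pairs = zip [1, 2, 3, 4] "ab"
```

Here the trailing 3 and 4 are dropped without any warning; a dependently typed language could instead require both lengths to agree.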
Concerning Haskell, I'm quite relaxed about superficial complexity, as you'll have seen from what happens in GHC.
Yes, I have seen what happens in GHC, and it makes me very sad. I think, since you are a GHC developer, you have a different perspective: you can modify the compiler to provide language extensions; people like me cannot do this. And I think that the solution is not to make the language larger and larger every time someone wants a feature but to give people the tools to provide features without language changes.
Section 3.6 of the History of Haskell paper addresses this point specifically […].
I want to cite the first paragraph:
A major source of tension both within and between members of the committee was the competition between beauty and utility. On the one hand we passionately wanted to design a simple, elegant language […] On the other hand, we also really wanted Haskell to be a useful language, for both teaching and real applications.
This reasoning is really flawed, in my opinion. The claim is that a language without all kinds of syntactic sugar wouldn't be useful, especially not for real applications. I claim that I'm writing really useful code in Haskell, and I find myself not using many kinds of syntactic sugar. Not because I just have an opinion that syntactic sugar is bad, but because I feel that my style of coding is sensible. I never consciously decided against syntactic sugar; my low-sugar style just emerged automatically over the years. As I already said, I definitely want do expressions, proc expressions and definable infix operators. However, I seldom use where clauses. List comprehensions are more or less absent from my code, and I'm not even interested in getting to know how pattern guards and parallel list comprehensions work in detail. And I don't want to write SQL-like code in Haskell using the order-by/group-by extension.
You may disagree with the choice made by the Haskell committee at the time, and with subsequent developments, but it was quite a conscious choice, and one not without its advantages.
The choice of the Haskell committee might be okay; the subsequent developments in GHC, however, are problematic, in my opinion.
Simon
Side note: I hope you can cope with my direct style of writing. After all, I'm just an unfriendly German. ;-)

Best wishes,
Wolfgang

Aaron Denney wrote:
On 2007-12-06, Wolfgang Jeltsch wrote:

> list comprehensions deal with specific operations (map, filter, etc.) of a specific type ([]).
Ah, so we should bring back monad comprehensions?
I don't miss monad comprehensions much, but I'd like to have a way to use comprehensions for other sequence types, notably ByteString, array types and Data.Sequence (other than converting to lists and back).

Cheers
Ben

On Thu, Dec 06, 2007 at 11:40:58PM +0100, Ben Franksen wrote:
Aaron Denney wrote:
On 2007-12-06, Wolfgang Jeltsch wrote:

> list comprehensions deal with specific operations (map, filter, etc.) of a specific type ([]).
Ah, so we should bring back monad comprehensions?
I don't miss monad comprehensions much, but I'd like to have a way to use comprehensions for other sequence types, notably ByteString, array types and Data.Sequence (other than converting to lists and back).
If we had monad comprehensions and some kind of restricted monads (see my post 'Constructor classes considered harmful', but note that I didn't then and still don't have a tolerable replacement), we'd get ByteString / sequence comprehensions for free.

Stefan
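For comparison, here is what a comprehension over Data.Sequence has to look like today, written out by hand with the container's own combinators (a sketch of mine, using only the documented Data.Sequence API):

```haskell
import qualified Data.Sequence as Seq
import Data.Foldable (toList)  -- for converting back to a list when needed

-- The hand-desugared analogue of a would-be comprehension
-- [ x * x | x <- s, even x ] over Seq:
squaresOfEvens :: Seq.Seq Int -> Seq.Seq Int
squaresOfEvens = fmap (^ 2) . Seq.filter even
```

With restricted-monad comprehensions, the bracketed form itself could be written directly for Seq, ByteString and friends.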

On Thursday, 6 December 2007 at 22:47, Aaron Denney wrote:
On 2007-12-06, Wolfgang Jeltsch wrote:

> list comprehensions deal with specific operations (map, filter, etc.) of a specific type ([]).
Ah, so we should bring back monad comprehensions?
No, we already have do expressions.

Best wishes,
Wolfgang

| And I think that the solution is not to make the language larger and larger
| every time someone wants a feature but to give people the tools to provide
| features without language changes.

Of course that would be even better! (Provided, of course, the resulting programs were comprehensible.) Haskell is already pretty good in this respect, thanks to type classes, higher-order functions, and laziness; that's why it's so good at embedded domain-specific languages. Template Haskell is another attempt to go further. Geoff Mainland's quasi-quoting idea is another. If you have other ideas for such general tools, then it'd be great to hear about them. They are much easier to wish for than to design.

But where such general tools are inadequate, well-designed syntactic sugar can have a powerfully beneficial effect, I think. But it's a topic on which reasonable people can differ, and your judgement may well differ from mine.

Simon

On Fri, 7 Dec 2007, Simon Peyton-Jones wrote:
| And I think that the solution is not to make the language larger and larger
| every time someone wants a feature but to give people the tools to provide
| features without language changes.
Of course that would be even better! (Provided of course the resulting programs were comprehensible.) Haskell is already pretty good in this respect, thanks to type classes, higher order functions, and laziness; that's why it's so good at embedded domain-specific languages.
When I learned about GROUP BY and HAVING in SQL, with its rules about what is allowed in GROUP BY and SELECT, I considered GROUP BY a terrible hack that was just introduced because the SQL people didn't want to allow types different from TABLE, namely lists of tables. I try to convince my database colleagues that GROUP BY can nicely be handled in relational algebra by allowing sets of sets, and that this is a fine combinatorial approach. I think we simply need a function like

  buckets :: (a -> key) -> [a] -> [(key, [a])]

(where (a -> key) is a simple selector) or

  buckets :: (a -> (key, rest)) -> [a] -> [(key, [rest])]

(where (a -> (key, rest)) is a bijection) or

  buckets :: Ord key => ...

ah no :-)

  buckets :: Indexable key => (a -> (key, rest)) -> [a] -> Map key [rest]
  buckets f = Map.fromListWith (++) . map (\ a -> let (k,r) = f a in (k, [r]))

Btw. since I often need fromListWith with Maps of list types, I wonder whether there should be variants of fromListWith and insertWith which can use (:) instead of (++):

  fromListCons :: Indexable k
    => (b -> a -> a)  -- ^ add a new sub-element to the dictionary element, for example (:)
    -> a              -- ^ empty dictionary element, for example []
    -> [(k, b)]
    -> Data.Map.Map k a

  insertCons :: Indexable k
    => (b -> a -> a) -> a -> k -> b
    -> Data.Map.Map k a -> Data.Map.Map k a
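Henning's buckets relies on a hypothetical Indexable class; with the standard Ord constraint substituted, it compiles as-is against Data.Map:

```haskell
import qualified Data.Map as Map

-- Henning's buckets, with Ord in place of his hypothetical Indexable class:
-- group the (key, rest) projections of a list into a map of buckets.
buckets :: Ord key => (a -> (key, rest)) -> [a] -> Map.Map key [rest]
buckets f = Map.fromListWith (++) . map (\a -> let (k, r) = f a in (k, [r]))
```

For example, buckets (\x -> (even x, x)) [1 .. 5] groups the odd and even elements under the keys False and True.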

Henning Thielemann
On Fri, 7 Dec 2007, Simon Peyton-Jones wrote:
| And I think that the solution is not to make the language larger and larger
| every time someone wants a feature but to give people the tools to provide
| features without language changes.
Of course that would be even better! (Provided of course the resulting programs were comprehensible.) Haskell is already pretty good in this respect, thanks to type classes, higher order functions, and laziness; that's why it's so good at embedded domain-specific languages.
When I learned about GROUP BY and HAVING in SQL with its rules about what is allowed in GROUP BY and SELECT I considered GROUP BY a terrible hack, that was just introduced because the SQL people didn't want to allow types different from TABLE, namely lists of tables. I try to convince my data base colleagues that GROUP BY can nicely be handled in relational algebra by allowing sets of sets and that this is a fine combinatorial approach. I [snip]
I agree with Henning that HAVING is a 'terrible hack', but then SQL altogether is a terrible hack. I would expect the Haskell approach to be based on the much sounder theoretical principles of Relational Algebra, and I applaud that Wadler+SPJ's 'Comprehensive Comprehensions' restricts itself to a subset of SQL that corresponds to Relational Algebra. In that context, GROUP BY is reasonably well-defined as a mapping from a table to a table. (The hack in SQL vintage 1975 is in trying to squeeze GROUP BY into the structure of SELECT ... FROM ... WHERE ...; the mess now can be blamed on trying to preserve backwards compatibility.)

As that paper points out, HAVING is unnecessary - it's just a filter on the result set of group-by. And relational theorists agree that HAVING is unnecessary (see for example 'The Importance of Column Names', Hugh Darwen 2003, from www.thethirdmanifesto.com).

It's crucial that in Relational Algebra everything is a table. (See Codd's 12 rules.) The result of GROUP BY we might want to pass to another GROUP BY, or JOIN to another table, etc. -- or does Henning propose a hierarchy of sets of sets ... of tables, presumably with a hierarchy of HAVINGs of HAVINGs?
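The point that HAVING is just a filter on the grouped result can be spelled out in a few lines of Haskell (my sketch, not taken from the paper):

```haskell
import qualified Data.Map as Map

-- Roughly: SELECT k, SUM(v) FROM t GROUP BY k HAVING SUM(v) > 100.
-- The HAVING clause is nothing but an ordinary filter applied
-- after grouping and aggregation.
groupHaving :: [(String, Int)] -> Map.Map String Int
groupHaving = Map.filter (> 100) . Map.fromListWith (+)
```

No dedicated HAVING construct is needed; it falls out of ordinary function composition.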

On Tue, 11 Dec 2007, Anthony Clayden wrote:
I agree with Henning that HAVING is a 'terrible hack', but then SQL altogether is a terrible hack.
Somehow, yes.
As that paper points out, HAVING is unnecessary - it's just a filter on the result set of group-by.
Yep.
It's crucial that in Relational Algebra everything is a table. (See Codd's 12 rules). The result of GROUP BY we might want to pass to another GROUP BY, or JOIN to another table, etc -- or does Henning propose a hierarchy of sets of sets ...
Yes, why not? Works fine in Haskell. Ok, Haskell programs do not construct different query processing strategies and compare them at run-time, so the comparison between Haskell compilers and databases is not quite fair.
of tables, presumably with a hierarchy of HAVINGHAVING's?
map (map (map (filter p))) and so on :-)
participants (7)
- Aaron Denney
- Anthony Clayden
- Ben Franksen
- Henning Thielemann
- Simon Peyton-Jones
- Stefan O'Rear
- Wolfgang Jeltsch