about Haskell code written to be "too smart"

Hi. These days I'm discussing with some friends who mainly use Python as their programming language, but who also know other languages well (Scheme, Prolog, C, and so on). These friends are very interested in Haskell, but it seems that the main reason why they don't start seriously learning it is that when they start reading some code, they feel the "Perl syndrome". That is, code written to be "too smart", which ends up being totally illegible to a Haskell novice. I have this feeling too, from time to time. Since someone is starting to write up a Haskell coding style, I really suggest that they take this "problem" into strong consideration.

Manlio

These friends are very interested in Haskell, but it seems that the main reason why they don't start seriously learning it is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", which ends up being totally illegible to a Haskell novice.
I have this feeling too, from time to time.
Since someone is starting to write up a Haskell coding style, I really suggest that they take this "problem" into strong consideration.
When you think about it, what you are saying is that Haskell programmers shouldn't take advantage of the extra tools that Haskell provides. Haskell provides the ability to abstract code beyond what many other programming systems allow. This abstraction gives you the ability to express things much more tersely. This makes the code a lot harder to read for people who are not familiar with the abstractions being used. This can be overcome with practice and experience. I'm not trying to say that code can never get too complex. Humans have some complexity budget and it's not too hard to push the limits and blow your complexity budget. But that is true in any language. The ability to abstract lets you factor out common patterns that are easy to reuse and remember (with practice) and lets you spend your complexity budget elsewhere. As a programmer you still need to use your judgement to balance complexity against understandability. [Obviously if you are writing code that you want to be readable by people who aren't well versed in common Haskell idioms, you'd limit your use of abstractions.]
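As a minimal illustration of the trade-off being described here (the example itself is invented for illustration, not taken from the thread), compare the same function written with explicit recursion and with the standard abstractions:

-- explicit recursion: every step spelled out
sumSquaresOfEvens :: [Int] -> Int
sumSquaresOfEvens [] = 0
sumSquaresOfEvens (x:xs)
  | even x    = x*x + sumSquaresOfEvens xs
  | otherwise = sumSquaresOfEvens xs

-- standard abstractions: terser, but it assumes the reader already
-- knows sum, map, filter and (.)
sumSquaresOfEvens' :: [Int] -> Int
sumSquaresOfEvens' = sum . map (^2) . filter even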
Tim Newsham http://www.thenewsh.com/~newsham/

Tim Newsham wrote:
These friends are very interested in Haskell, but it seems that the main reason why they don't start seriously learning it is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", which ends up being totally illegible to a Haskell novice.
I have this feeling too, from time to time.
Since someone is starting to write up a Haskell coding style, I really suggest that they take this "problem" into strong consideration.
When you think about it, what you are saying is that Haskell programmers shouldn't take advantage of the extra tools that Haskell provides.
No, I'm not saying this. But, as an example, when you read a function like:

buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns

that can be rewritten (argument reversed) as:

takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs

I think that there is a problem. The buildPartitions function contains too many "blocks", and I have read code with even more "blocks" in one line. It may not be a problem for a "seasoned" Haskell programmer, but when you write some code, you should never forget that your code will be read by programmers who may not be at your level. I think that many Haskell programmers forget this detail, and IMHO this is wrong.
Haskell provides the ability to abstract code beyond what many other programming systems allow. This abstraction gives you the ability to express things much more tersely. This makes the code a lot harder to read for people who are not familiar with the abstractions being used.
The problem is that I still have problems reading and understanding code that is too terse... Because I have to assemble each block in my mind, and if there are too many blocks I have problems.
[...]
Manlio

On Tue, 2009-03-24 at 19:42 +0100, Manlio Perillo wrote:
Tim Newsham wrote:
These friends are very interested in Haskell, but it seems that the main reason why they don't start seriously learning it is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", which ends up being totally illegible to a Haskell novice.
I have this feeling too, from time to time.
Since someone is starting to write up a Haskell coding style, I really suggest that they take this "problem" into strong consideration.
When you think about it, what you are saying is that Haskell programmers shouldn't take advantage of the extra tools that Haskell provides.
No, I'm not saying this.
But, as an example, when you read a function like:
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
that can be rewritten (argument reversed) as:
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
Huh? This is ugly and unreadable. Seriously.
I think that there is a problem.
Damn straight. It should be:
buildPartitions xs ns = zipWith take ns $ init $ scanl (flip drop) xs ns
Or, if you're really worried about blocks/line, you can increase the line count a bit (I do this regularly):
buildPartitions xs ns =
    zipWith take ns $        -- Select just the indicated prefix of each element
    init $                   -- Skip the last (empty) element
    scanl (flip drop) xs $   -- Cumulatively remove prefixes of indicated length
    ns
The buildPartitions function contains too many "blocks", and I have read code with even more "blocks" in one line.
It may not be a problem for a "seasoned" Haskell programmer, but when you write some code, you should never forget that your code will be read by programmers who may not be at your level.
Not if I can help it. More seriously, beginner code belongs in the first two or three chapters of Haskell programming textbooks, not anywhere else. It's like putting Fun with Dick & Jane-speak in an adult novel.[1]
I think that many Haskell programmers forget this detail, and IMHO this is wrong.
Haskell provides the ability to abstract code beyond what many other programming systems allow. This abstraction gives you the ability to express things much more tersely. This makes the code a lot harder to read for people who are not familiar with the abstractions being used.
The problem is that I still have problems reading and understanding code that is too terse... Because I have to assemble each block in my mind, and if there are too many blocks I have problems.
jcc

[1] Well, not that bad. Beginner-level code is useful for teaching the basics of the language; Fun with Dick & Jane is child abuse.

On Tue, Mar 24, 2009 at 7:48 PM, Jonathan Cast wrote:
On Tue, 2009-03-24 at 19:42 +0100, Manlio Perillo wrote:
But, as an example, when you read a function like:
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
that can be rewritten (argument reversed) as:
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
Huh? This is ugly and un-readable. Seriously.
I think this is subjective. Personally I can understand the second definition immediately, but the first one requires some puzzling. That might be because I'm relatively new to Haskell. Of course, the use of head and tail in the example is unfortunate; one should not use such shadowing names. But aren't these two definitions different algorithms? At first sight I think the second one is more efficient than the first one.

2009/3/24 Peter Verswyvelen
But aren't these two definitions different algorithms? At first sight I think the second one is more efficient than the first one.
Some performance numbers:

----------------------------------------------------------------------
module Main where

import System.Environment (getArgs)
import Control.Monad.State (State(..), evalState)

takeList1, takeList2, takeList3 :: [Int] -> [a] -> [[a]]

takeList1 [] _ = []
takeList1 _ [] = []
takeList1 (n : ns) xs = head : takeList1 ns tail
    where (head, tail) = splitAt n xs

takeList2 ns xs = zipWith take ns . init . scanl (flip drop) xs $ ns

takeList3 = evalState . mapM (State . splitAt)

test :: Int -> [[Int]]
test n = takeList1 (take n [1..]) [1..]

main :: IO ()
main = print . sum . map sum . test . read . head =<< getArgs
----------------------------------------------------------------------

compile with: ghc --make TakeList.hs -o takeList1 -O2

$ time ./takeList1 5000
739490938

real    0m6.229s
user    0m5.787s
sys     0m0.342s

$ time ./takeList2 5000
739490938

real    0m5.089s
user    0m4.455s
sys     0m0.348s

$ time ./takeList3 5000
739490938

real    0m6.224s
user    0m5.750s
sys     0m0.347s
----------------------------------------------------------------------

regards,
Bas
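Presumably 'test' was edited to call takeList1, takeList2 and takeList3 in turn before each compile. A small variant (ours, not from the thread, and untested against the original timings) selects the implementation at run time instead, so one binary serves all three measurements:

-- a replacement for the module's main, reusing the definitions above:
main :: IO ()
main = do
  v : n : _ <- getArgs
  let f = case v of
            "1" -> takeList1
            "2" -> takeList2
            _   -> takeList3
  print . sum . map sum $ f (take (read n) [1..]) [1..]

-- e.g.: ./takeList 2 5000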

When you think about it, what you are saying is that Haskell programmers shouldn't take advantage of the extra tools that Haskell provides.
No, I'm not saying this.
But, as an example, when you read a function like:
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
that can be rewritten (argument reversed) as:
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
I think this is a perfect example. Haskell allows you to abstract out the concepts of recursion, zipping and iteration. Your alternative reproduces these explicitly and intermixes them. You are saying that programmers should avoid using these higher-level abstractions and instead fall back to more explicit constructs that are, for you, easier to read.
The problem is that I have still problems at reading and understanding code that is too much terse... Because I have to assemble in my mind each block, and if there are too many blocks I have problems.
It takes practice to read and to write. The benefit is more expressiveness and more code reuse.
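For instance (a sketch of ours, not from the thread), even the explicit recursion in takeList is itself an instance of a standard abstraction, Data.List.unfoldr, which captures exactly the "produce one chunk, recurse on the rest" pattern:

import Data.List (unfoldr)

-- takeList as an unfoldr: the state is the pair of remaining lengths
-- and remaining input; each step peels off one chunk.
takeListU :: [Int] -> [a] -> [[a]]
takeListU ns0 xs0 = unfoldr step (ns0, xs0)
  where
    step ([],   _ ) = Nothing      -- no more lengths: stop
    step (_,    []) = Nothing      -- no more input: stop
    step (n:ns, xs) = let (h, t) = splitAt n xs
                      in Just (h, (ns, t))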
Tim Newsham http://www.thenewsh.com/~newsham/

The beauty of functional programming is that there doesn't have to be a conflict between those who prefer explicit and those who prefer implicit recursion. Think of them as different views on the same functions - just as with graphical visualizations, pick the view best suited to your purpose and use equational reasoning to transform one view into another, as needed. Improving your experience in reasoning about code is going to help at every level of abstraction, and since you've already paid the price (using a pure language, to make reasoning easier) you might as well avail yourself of the facilities;-)

While developing, I might prefer abstraction, as fewer details mean that I can hold more of the problem in my head at any point, increasing my chances of seeing all the way to a solution; if optimizing, or when I haven't found the right abstractions yet, I might have to resort to less abstract code until I've sorted out those details or until GHC deals with the more abstract forms as well as with the less abstract ones.

Fine, you say, but I never would have thought of abstract views like splitAt as a state transformer. Okay, before this thread, I might not have thought of using that, either. But after this thread, I'd hope for it to become part of my thinking about Haskell code. And the way I do that is by taking the abstract code and unfolding it (replacing instances of left-hand sides with instances of right-hand sides of definitions - the source links in the Haddock documentation are very useful for that) until I get to some less abstract code that I might have come up with. That doesn't mean that I'd have had the insights to play the derivation backwards, but by breaking the transformation from less abstract to more abstract view into smaller steps, starting from the abstract form that incorporates the additional insights I was missing, I can increase my understanding of what is going on, and my chances of noticing the opportunities next time. It also confirms whether or not the two solutions really are the same (as has been pointed out, that wasn't the case here).

Paraphrasing and tweaking Sjur Gjøstein Karevoll's remark a little: clever Perl code is what you hope you understood in the past, when you wrote it; clever Haskell code is what you hope you'll understand in the future, when you write it yourself!-)

The derivation below is best followed by replaying it yourself in your editor, but I hope you'll find it helpful anyway.

Claus

-- view transformation: reducing the level of abstraction
-- by turning implicit to explicit recursion

takeList = evalState . mapM (State . splitAt)

-- unfold 'mapM'

takeList = evalState . sequence . map (State . splitAt)

-- unfold 'sequence'

takeList = evalState . foldr k (return []) . map (State . splitAt)
    where k m m' = do x<-m; xs<-m'; return (x:xs)

          foldr op n []    = n
          foldr op n (h:t) = h `op` foldr op n t

-- specialize 'foldr' for the call parameters 'k' and 'return []'

takeList = evalState . foldrkn . map (State . splitAt)
    where k m m' = do x<-m; xs<-m'; return (x:xs)

          foldrkn []    = return []
          foldrkn (h:t) = h `k` foldrkn t

-- unfold 'k'

takeList = evalState . foldrkn . map (State . splitAt)
    where foldrkn []    = return []
          foldrkn (h:t) = do x<-h; xs<-foldrkn t; return (x:xs)

-- foldr op n . map f = foldr (op.f) n

takeList = evalState . foldrkn
    where foldrkn []    = return []
          foldrkn (h:t) = do x<-State (splitAt h); xs<-foldrkn t; return (x:xs)

-- unfold 'return' for 'State', eta-expand 'splitAt h'

takeList = evalState . foldrkn
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = do x<-State (\s->splitAt h s); xs<-foldrkn t; State (\s->(x:xs,s))

-- eta-expand body of 'takeList'

takeList ns xs = evalState (foldrkn ns) xs
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = do x<-State (\s->splitAt h s); xs<-foldrkn t; State (\s->(x:xs,s))

-- unfold the second '>>=' for 'State'

takeList ns xs = evalState (foldrkn ns) xs
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = do x<-State (\s->splitAt h s)
                             State (\s->let (xs,s') = runState (foldrkn t) s
                                        in runState (State (\s->(x:xs,s))) s')

-- runState . State = id

takeList ns xs = evalState (foldrkn ns) xs
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = do x<-State (\s->splitAt h s)
                             State (\s->let (xs,s') = runState (foldrkn t) s
                                        in (\s->(x:xs,s)) s')

-- beta-reduce

takeList ns xs = evalState (foldrkn ns) xs
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = do x<-State (\s->splitAt h s)
                             State (\s->let (xs,s') = runState (foldrkn t) s
                                        in (x:xs,s'))

-- unfold the remaining '>>=' for 'State'

takeList ns xs = evalState (foldrkn ns) xs
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = State (\s->let (x,s') = runState (State (\s->splitAt h s)) s
                                     in runState (State (\s->let (xs,s') = runState (foldrkn t) s
                                                             in (x:xs,s'))) s')

-- runState . State = id (2x)

takeList ns xs = evalState (foldrkn ns) xs
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = State (\s->let (x,s') = (\s->splitAt h s) s
                                     in (\s->let (xs,s') = runState (foldrkn t) s
                                             in (x:xs,s')) s')

-- beta-reduce (2x)

takeList ns xs = evalState (foldrkn ns) xs
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = State (\s->let (x,s') = splitAt h s
                                     in let (xs,s'') = runState (foldrkn t) s'
                                        in (x:xs,s''))

-- unfold 'evalState'

takeList ns xs = fst $ runState (foldrkn ns) xs
    where foldrkn []    = State (\s->([],s))
          foldrkn (h:t) = State (\s->let (x,s') = splitAt h s
                                     in let (xs,s'') = runState (foldrkn t) s'
                                        in (x:xs,s''))

-- all calls to 'foldrkn' are wrapped in 'runState', bring it into the definition

takeList ns xs = fst $ (foldrkn ns) xs
    where foldrkn []    = runState $ State (\s->([],s))
          foldrkn (h:t) = runState $ State (\s->let (x,s') = splitAt h s
                                                in let (xs,s'') = (foldrkn t) s'
                                                   in (x:xs,s''))

-- runState . State = id (2x)

takeList ns xs = fst $ (foldrkn ns) xs
    where foldrkn []    = \s->([],s)
          foldrkn (h:t) = \s->let (x,s') = splitAt h s
                              in let (xs,s'') = (foldrkn t) s'
                                 in (x:xs,s'')

-- clean up

takeList ns xs = fst (foldrkn ns xs)
    where foldrkn []    s = ([],s)
          foldrkn (h:t) s = let (x,s')   = splitAt h s
                                (xs,s'') = foldrkn t s'
                            in (x:xs,s'')

-- 'snd (foldrkn _ _)' is never used, remove it

takeList ns xs = foldrkn ns xs
    where foldrkn []    s = []
          foldrkn (h:t) s = let (x,s') = splitAt h s
                                xs     = foldrkn t s'
                            in x:xs

-- remove indirection

takeList []    s = []
takeList (h:t) s = x : takeList t s'
    where (x,s') = splitAt h s
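To gain extra confidence in a derivation like this, one can also compare its two endpoints on random finite inputs (a sketch, ours; note it checks the State version against the derived one-base-case recursion, not against Manlio's original two-base-case takeList, which is a different function):

import Control.Monad.State (evalState, state)
import Test.QuickCheck (quickCheck)

-- the derivation's starting point ('state' is the modern spelling of
-- the thread's 'State' data constructor):
takeListAbs :: [Int] -> [a] -> [[a]]
takeListAbs = evalState . mapM (state . splitAt)

-- the derivation's end point:
takeListRec :: [Int] -> [a] -> [[a]]
takeListRec []    _ = []
takeListRec (h:t) s = x : takeListRec t s'
  where (x, s') = splitAt h s

-- the two endpoints should agree on all finite inputs:
prop_endpoints :: [Int] -> [Int] -> Bool
prop_endpoints ns xs = takeListAbs ns xs == takeListRec ns xs

main :: IO ()
main = quickCheck prop_endpoints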

Continuing our adventures into stylistic and semantic differences:-)

Comparing the 'State' and explicit recursion versions

takeListSt = evalState . mapM (State . splitAt)

-- ..with a derivation leading to..

takeListSt []    s = []
takeListSt (h:t) s = x : takeListSt t s'
    where (x,s') = splitAt h s

instead of

takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs

we can see some differences, leading to different functions:

*Main> null $ takeListSt [1] undefined
False
*Main> null $ takeList [1] undefined
*** Exception: Prelude.undefined
*Main> takeList [0] []
[]
*Main> takeListSt [0] []
[[]]

and similarly for the 'scanl' version

takeListSc ns xs = zipWith take ns $ init $ scanl (flip drop) xs ns

Depending on usage, these differences might not matter, but what if we want these different styles to lead to the same function, with only stylistic and no semantic differences, taking the explicit recursion as our spec?

In the 'State' version, the issue is that 'mapM' does not terminate early, while the specification requires an empty list whenever 'xs' (the state) is empty. Following the derivation at

http://www.haskell.org/pipermail/haskell-cafe/2009-March/058603.html

the first step where we have a handle on that is after unfolding 'sequence':

takeListSt = evalState . foldr k (return []) . map (State . splitAt)
    where k m m' = do x<-m; xs<-m'; return (x:xs)

If we change that to

takeListSt' = evalState . foldr k (return []) . map (State . splitAt)
    where k m m'    = cutNull $ do x<-m; xs<-m'; return (x:xs)
          cutNull m = do s<-get; if null s then return [] else m

and continue with the modified derivation, we should end up with the right spec (I haven't done this, so you should check!-). This isn't all that elegant any more, but support for 'mapM' with early exit isn't all that uncommon a need, either, so one might expect a 'mapM' variant that takes a 'cut' parameter to make it into the libraries.

For the 'scanl' version, we have a more direct handle on the issue: we can simply drop the offending extras from the 'scanl' result, replacing 'init' with 'takeWhile (not.null)':

takeListSc' ns xs = zipWith take ns $ takeWhile (not.null) $ scanl (flip drop) xs ns

A somewhat abbreviated derivation at the end of this message seems to confirm that this matches the spec (as usual with proofs, writing them down doesn't mean that they are correct, but that readers can check whether they are).

(btw, both 'takeListSt'' and 'takeListSc'' pass Thomas' 'testP', as does his 'partitions', but 'partitions' is not the same function as 'takeList': consider 'null $ takeList [1] undefined' and 'takeList [0] []' ;-)

Someone suggested using 'mapAccumL' instead of 'State', and that does indeed work, only that everything is the wrong way round:

takeListMAL = (snd.) . flip (mapAccumL (((snd&&&fst).).(flip splitAt)))

This is an example where all the "cleverness" is spent on the irrelevant details, giving them way too much importance. So one might prefer a version that more clearly says that this is mostly 'mapAccumL splitAt', with some administrative complications that might be ignored on cursory inspection:

takeListMAL' = mapAccumL' splitAt'
    where splitAt' l n       = swap $ splitAt n l
          mapAccumL' f l acc = snd $ mapAccumL f acc l
          swap (x,y)         = (y,x)

Of course, this suffers from the "does not terminate early" issue, but as this thread encourages us to look at functions we might not otherwise consider, I thought I'd follow the suggestion, and perhaps someone might want to modify it with a 'mapAccumL' with cutoff, and demonstrate whether it matches the spec;-)

Claus

-- view transformation: reducing the level of abstraction

takeList ns xs = zipWith take ns $ takeWhile (not.null) $ scanl (flip drop) xs ns

-- fetch definitions of 'zipWith', 'takeWhile', and 'scanl'

takeList ns xs = zipWith take ns $ takeWhile (not.null) $ scanl (flip drop) xs ns
    where scanl f q ls = q : case ls of
                               []   -> []
                               x:xs -> scanl f (f q x) xs

          takeWhile _ []     = []
          takeWhile p (x:xs) | p x       = x : takeWhile p xs
                             | otherwise = []

          zipWith f (a:as) (b:bs) = f a b : zipWith f as bs
          zipWith _ _      _      = []

-- specialize for 'take', 'not.null', and 'flip drop'

takeList ns xs = zipWith ns $ takeWhile $ scanl xs ns
    where scanl q ls = q : case ls of
                             []   -> []
                             x:xs -> scanl (drop x q) xs

          takeWhile []     = []
          takeWhile (x:xs) | not (null x) = x : takeWhile xs
                           | otherwise    = []

          zipWith (a:as) (b:bs) = take a b : zipWith as bs
          zipWith _      _      = []

-- fuse 'takeWhile' and 'scanl' into 'tws'

takeList ns xs = zipWith ns $ tws xs ns
    where tws q ls | not (null q) = q : case ls of
                                          []   -> []
                                          x:xs -> tws (drop x q) xs
                   | otherwise    = []

          zipWith (a:as) (b:bs) = take a b : zipWith as bs
          zipWith _      _      = []

-- fuse 'zipWith' and 'tws' into 'ztws'

takeList ns xs = ztws ns xs ns
    where ztws (a:as) q ls | not (null q) = take a q : case ls of
                                                         []   -> []
                                                         x:xs -> ztws as (drop x q) xs
                           | otherwise    = []
          ztws _ _ _ = []

-- 'ls' is 'as'

takeList ns xs = ztws ns xs
    where ztws (a:as) q | not (null q) = take a q : ztws as (drop a q)
                        | otherwise    = []
          ztws _ _ = []

-- remove indirection

takeList (a:as) q | not (null q) = take a q : takeList as (drop a q)
                  | otherwise    = []
takeList _ _ = []

-- replace guard by clause

takeList (a:as) [] = []
takeList (a:as) q  = take a q : takeList as (drop a q)
takeList _      _  = []

-- '_' in last clause has to be '[]'

takeList (a:as) [] = []
takeList (a:as) q  = take a q : takeList as (drop a q)
takeList []     _  = []

-- switch non-overlapping clauses

takeList []     _  = []
takeList (a:as) [] = []
takeList (a:as) q  = take a q : takeList as (drop a q)

-- for second parameter '[]', both ':' and '[]' in first parameter result in '[]'

takeList [] _  = []
takeList _  [] = []
takeList (a:as) q = take a q : takeList as (drop a q)

-- (take a q, drop a q) = splitAt a q

takeList [] _  = []
takeList _  [] = []
takeList (a:as) q = t : takeList as d
    where (t,d) = splitAt a q
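The 'mapM' variant with a cut parameter that Claus speculates about might look like this (a sketch; the name, signature, and the modern 'state' spelling are ours, not an existing library function):

import Control.Monad.State (State, evalState, get, state)

-- 'cut' gets to inspect the monadic context before each step and may
-- short-circuit the rest of the traversal:
mapMWithCut :: Monad m => (m [b] -> m [b]) -> (a -> m b) -> [a] -> m [b]
mapMWithCut cut f = go
  where
    go []     = return []
    go (x:xs) = cut $ do y  <- f x
                         ys <- go xs
                         return (y:ys)

-- Claus's takeListSt', rewritten with it:
takeListCut :: [Int] -> [a] -> [[a]]
takeListCut = evalState . mapMWithCut cutNull (state . splitAt)
  where cutNull m = do s <- get
                       if null s then return [] else m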

* Claus Reinke wrote:
Continuing our adventures into stylistic and semantic differences:-)
It's good practice to keep a simple-minded version of the code and use QuickCheck to try to find differences between the optimized and trivial versions. It's good practice to even check that the optimized version really is faster/smaller than the simple one.
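In this thread's terms, such a property is one line (a sketch; takeList1 is the explicit-recursion spec and takeList2 the scanl one-liner from Bas's module above):

import Test.QuickCheck (quickCheck)

prop_optimizedMatchesSpec :: [Int] -> [Int] -> Bool
prop_optimizedMatchesSpec ns xs = takeList1 ns xs == takeList2 ns xs

-- quickCheck prop_optimizedMatchesSpec promptly finds counterexamples
-- such as ns = [0], xs = [] (takeList1 gives [], takeList2 gives [[]]),
-- exactly the kind of semantic difference discussed in this thread.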

Continuing our adventures into stylistic and semantic differences:-)
Can you write this analysis on the wiki?
Hmm, we tried that in the past, and I haven't seen any indication that people search for those things, let alone find them (one particular example I recalled I still haven't been able to find on the wiki..). So I'll try a different approach this time: instead of copying emails to the wiki, I've created a page for collecting and documenting examples of equational reasoning in practice. Please add your favourites!-) Hopefully, the description of the examples will provide sufficient search keywords to make this page findable, and the linked examples from there; there are category links as well. Over to you, wiki style!-)

Claus

takeListSt' = evalState . foldr k (return []) . map (State . splitAt)
    where k m m'    = cutNull $ do x<-m; xs<-m'; return (x:xs)
          cutNull m = do s<-get; if null s then return [] else m
Not only is this not that elegant anymore, I think it *still* has a bug: stack overflow against
testP pf = mapM_ putStrLn [
show $ take 5 $ pf (repeat 0) [1,2,3]
, show $ pf ( take 1000 [3,7..] ) [1..100]
, show . pf [3,7,11,15] $ ( take (10^6) [1..])
, show . head . last $ pf (take 1000 $ [3,3..]) [1..10^6]
]
where the first test (with take 5) is new.
whereas the version with explicit recursion and pattern matching
doesn't suffer from this problem
partitions [] xs = []
partitions (n:parts) xs =
let (beg,end) = splitAt n xs
in beg : ( case end of
[] -> []
xs -> partitions parts xs)
I am starting to think that the tricky part in all these functions is
that by using higher order functions from the prelude, you sweep the
failure case under the rug. Specifically, what happens when splitAt n
doesn't have a list of length n? The answer isn't in fact obvious at
all. I can think of three things that could happen.
You could return (list,[]) where list is however many elements there
are left. (Which is what all the partitions functions do so far, and
the default behavior of splitAt.)
Or, you could print an error message.
Or, you could return ([],[])
My tentative conclusion is that good Haskell style makes error
modalities explicit when error behavior isn't obvious, or when there
is arguably more than one right way to fail. So:
partitionsE  = partitionsE' error
partitionsE2 = partitionsE' (\e n xs -> [])
partitionsE3 = partitionsE' (\e n xs -> [take n xs]) -- corresponds to the behavior of partitions
partitionsE' err [] xs = []
partitionsE' err (n:parts) xs =
case splitAtE n xs of
Left e -> err e n xs
Right (beg,end) ->
beg : ( case end of
[] -> []
xs -> partitionsE' err parts xs )
  where splitAtE n as@(x:xs) | n <= length as = Right $ splitAt n as
        splitAtE n ys = Left $ "can't split at " ++ show n ++ ": " ++ show ys
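For instance, the three variants differ like this on a split that fails halfway (worked by hand; worth re-checking in GHCi):

-- partitionsE2 [3,7] [1..5]  ==>  [[1,2,3]]          -- failed split yields nothing
-- partitionsE3 [3,7] [1..5]  ==>  [[1,2,3],[4,5]]    -- failed split takes what is left
-- partitionsE  [3,7] [1..5]  ==>  error "can't split at 7: [4,5]"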

takeListSt' = evalState . foldr k (return []) . map (State . splitAt)
    where k m m'    = cutNull $ do x<-m; xs<-m'; return (x:xs)
          cutNull m = do s<-get; if null s then return [] else m
|Not only is this not that elegant anymore,

As I was saying, sequence/mapM with early cutout is common enough that one might want it in the libraries, which would return this variant into readability again.

|I think it *still* has a bug: stack overflow against

That would really surprise me. Not that it is impossible - as I was also saying, I haven't replayed the derivation for the modified code. However, the modification was arrived at by taking the original derivation, seeing where its result deviated from the explicitly recursive specification, and spotting the aspect of the implicitly recursive version that was responsible for the deviation. Of course, the derivation itself could have an error, but equating the functions themselves gives me rather more confidence/coverage than any finite number of tests could. If I were to enter the derivation into a proof-checking tool and be successful, that would further raise the level of confidence/coverage (leaving bugs in the proof checker).

Note that I'm not asking whether the original spec did the "right" thing, only whether or not the variations "correctly" do the same thing as the original spec.

|testP pf = mapM_ putStrLn [
|  show $ take 5 $ pf (repeat 0) [1,2,3]
|  , show $ pf ( take 1000 [3,7..] ) [1..100]
|  , show . pf [3,7,11,15] $ ( take (10^6) [1..])
|  , show . head . last $ pf (take 1000 $ [3,3..]) [1..10^6]
|  ]
|
|where the first test (with take 5) is new.
|whereas the version with explicit recursion and pattern matching
|doesn't suffer from this problem

I get identical results for 'takeListSt'' and the original 'takeList1' (code repeated below). It took me a couple of moments to remember that you had been playing with Control.Monad.State.Strict instead of the default Control.Monad.State(.Lazy). That would invalidate the original derivation (different definition of '>>=', therefore a different end result after unfolding '>>='), let alone the modified code based on it. If you replay the derivation, taking the strictness variations into account, you should arrive at an explicitly recursive version that differs from the spec. Which might make it easier to see what the difference is.

|partitions [] xs = []
|partitions (n:parts) xs =
|  let (beg,end) = splitAt n xs
|  in beg : ( case end of
|               [] -> []
|               xs -> partitions parts xs)

That version cannot be transformed into the original spec, because it doesn't define the same function. As I mentioned:
(btw, both 'takeListSt'' and 'takeListSc'' pass Thomas' 'testP', as does his 'partitions', but 'partitions' is not the same function as 'takeList': consider 'null $ takeList [1] undefined' and 'takeList [0] []' ;-)
With the original spec

takeList1 [] _ = []
takeList1 _ [] = []
takeList1 (n : ns) xs = h : takeList1 ns t
    where (h, t) = splitAt n xs

and 'takeList4' being your 'partitions', we get:

*Main> null $ takeList1 [1] undefined
*** Exception: Prelude.undefined
*Main> null $ takeList4 [1] undefined
False
*Main> takeList1 [0] []
[]
*Main> takeList4 [0] []
[[]]
I am starting to think that the tricky part in all these functions is that by using higher order functions from the prelude, you sweep the failure case under the rug.
Yes, the reason that more abstract functions are useful is that they hide irrelevant details, allowing us to spend our limited capacity on relevant details. If all abstract functions happen to hide details that matter, more concrete functions that expose those details can be more helpful.

But even that has to be qualified: for instance, at first I found it easier to see the issues with the original 'State' variant in its transformed, explicitly recursive version, but after the derivation had convinced me that there was no magic going on, I realized that it was just the old 'mapM' doesn't stop early issue. So I could have seen the issue in the abstract form, but my mind (and apparently other minds, too;-) refused to think about the corner cases there until prompted. If not for this tendency to ignore details that might be relevant, the abstract code would provide an abstract treatment of the failure case as well: instead of working out the details by trying to find useful tests for the explicit pattern matches, we can just look at whether the definition uses 'mapM' or 'mapMWithCut', or whether it uses 'Control.Monad.State' or 'Control.Monad.State.Strict'.

Just exposing all the details all the time isn't going to help, as the 'partition' example demonstrates: we might still ignore the relevant details, this time not because they are hidden in abstractions, but because they are hidden in other irrelevant details. There really isn't a single view of software that will serve all purposes; one has to find appropriate views for every task, including using multiple views of the same piece of software. Which is where program transformation comes in handy!-)
Specifically, what happens when splitAt n doesn't have a list of length n? The answer isn't in fact obvious at all. I can think of three things that could happen.
I agree that the "right" thing to do can be a matter of dispute, and so I based my definition of "correct" on equivalence to a specific version of the code, the first explicitly recursive version I could find ('takeList1' above). Claus

Thanks Claus,
Indeed, the problem was that I was using the Strict state monad; with
the lazy state it does the right thing when run through testP. I will try
to get back to this thread if I manage the derivation which "proves"
(or at least supports) that the two versions are equivalent.
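So the whole semantic difference hides in one import (a sketch of the resolution; with the mtl of that era, 'State' is a data constructor):

import Control.Monad.State          -- lazy State: testP's 'take 5' test terminates
-- import Control.Monad.State.Strict   -- strict State: it overflows instead

takeListSt' :: [Int] -> [a] -> [[a]]
takeListSt' = evalState . foldr k (return []) . map (State . splitAt)
  where k m m'    = cutNull $ do x <- m; xs <- m'; return (x:xs)
        cutNull m = do s <- get; if null s then return [] else m

-- take 5 $ takeListSt' (repeat 0) [1,2,3]
--   lazy:   [[],[],[],[],[]]  (output is produced incrementally)
--   strict: the strict '>>=' forces the whole infinite traversal first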

2009/3/24 Manlio Perillo
Tim Newsham wrote:
These friends are very interested in Haskell, but it seems that the main reason why they don't start seriously learning it is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", which ends up being totally illegible to a Haskell novice.
I have this feeling too, from time to time.
Since someone is starting to write up a Haskell coding style, I really suggest that they take this "problem" into strong consideration.
When you think about it, what you are saying is that Haskell programmers shouldn't take advantage of the extra tools that Haskell provides.
No, I'm not saying this.
But, as an example, when you read a function like:
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
Wow, very cool! And very readable; I actually got the idea of what the function is going to do after reading the scanl (flip drop), and the rest of the function only convinced me that I was right. The second version is far worse, because it forces me to think about what to do if the lists are empty, how to decompose them if they aren't - all this stuff is 'imperative' and irrelevant to the problem, and is elegantly omitted in the one-liner.
that can be rewritten (argument reversed) as:
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
I think that there is a problem.
--
Eugene Kirpichov
Web IR developer, market.yandex.ru

On Tue, Mar 24, 2009 at 1:42 PM, Manlio Perillo wrote:
But, as an example, when you read a function like:
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
that can be rewritten (argument reversed) as:
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
I think that there is a problem.
This crops up all the time even in simple mathematics. One way to provide assistance to newcomers is to provide a quasi-English reading of the notation. Take as an example a simple set comprehension expression (using Z email notation, http://csci.csusb.edu/dick/samples/z.lexis.html):

{ x : Int | 0 < x < 10 /\ x %e Odd @ 2*x }

That's pretty opaque for beginners until they learn to read | as "such that", %e as "member of" and @ as "generate", so that they can express the idea in quasi-English: "form a set by taking all integers x such that ... and ..., then generate the result by doubling them" or the like. Or take | as "filter" and @ as "map"; the point is it helps to be able to express it in something like natural language. Do something similar for your buildPartitions definition and I'll bet you'll end up with something much more user friendly than takeList.

-gregg
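Applied to the function at hand, one possible quasi-English reading might be (the glosses are ours, one of many plausible phrasings):

buildPartitions xs ns =
    zipWith take ns              -- "take the indicated prefix of each suffix"
  . init                         -- "ignoring the final leftover"
  $ scanl (flip drop) xs ns      -- "the running suffixes of xs, dropping n elements per step"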

On Tue, Mar 24, 2009 at 2:42 PM, Manlio Perillo wrote:
Tim Newsham wrote:
These friends are very interested in Haskell, but it seems that the main reason why they don't start seriously learning it is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", which ends up being totally illegible to a Haskell novice.
I have this feeling too, from time to time.
Since someone is starting to write up a Haskell coding style, I really suggest that they take this "problem" into strong consideration.
When you think about it, what you are saying is that Haskell programmers shouldn't take advantage of the extra tools that Haskell provides.
No, I'm not saying this.
But, as an example, when you read a function like:
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
that can be rewritten (argument reversed) as:
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
...
[...]
Manlio
Correct me if I'm wrong, but isn't this an example against your thesis? Your two definitions apparently define different things.

{-# LANGUAGE NoMonomorphismRestriction #-}
import Test.QuickCheck

test = (\x y -> buildPartitions x y == takeList y x)

buildPartitions :: [a] -> [Int] -> [[a]]
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns

takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs

{-
*Main Control.Monad Data.Char Data.List> quickCheck test

<interactive>:1:11:
    Warning: Defaulting the following constraint(s) to type `()'
             `Eq a' arising from a use of `test' at <interactive>:1:11-14
             `Arbitrary a' arising from a use of `quickCheck' at <interactive>:1:0-14
             `Show a' arising from a use of `quickCheck' at <interactive>:1:0-14
    In the first argument of `quickCheck', namely `test'
    In a stmt of a 'do' expression: it <- quickCheck test

*** Failed! Falsifiable (after 2 tests):
[]
[0]
-}

-- gwern

What about
import Data.List
partAt n xs =
let (beg,end) = splitAt n xs
in beg : ( case end of
[] -> []
xs -> partAt n xs)
t = partAt 3 [1..10]
It's tail recursive (I think!) and should be pretty easy to understand
even for a beginner, no?

sorry, wrong function.
should be
partitions [] xs = []
partitions (n:parts) xs =
let (beg,end) = splitAt n xs
in beg : ( case end of
[] -> []
xs -> partitions parts xs)
t = partitions [1,2,3] [1..10]
which is not quite as nice, I admit.

Thomas Hartman wrote:
sorry, wrong function.
should be
partitions [] xs = []
partitions (n:parts) xs =
    let (beg,end) = splitAt n xs
    in beg : ( case end of
                 [] -> []
                 xs -> partitions parts xs)
It's not tail recursive, FWIW. The recursive expression has (:) as its head before it hits `partitions`. It is however nicely coinductive, which has other good properties. We could make it tail-recursive easily,

partitions = go id
    where
    go k []     xs = k []
    go k (n:ns) xs =
        let (beg,end) = splitAt n xs
            k'        = k . (beg:)
        in case end of
             []  -> k' []
             xs' -> go k' ns xs'

(Note how this version has `go` as the head of the recursive expression.)

...however this version has different strictness properties. In particular, let both input lists be infinite (and take a finite portion of the result). The original version works fine because it gives a little bit of output (beg:) at each step of the recursion ---which is all "coinductive" means. The tail-recursive version hits _|_ however, because we've delayed giving any output (k []) until one of the two lists hits [] ---we've tried doing induction on co-data and so we hit an infinite loop.

This dichotomy between coinduction and tail-recursion is quite common. It's another example of the recently discussed problem of defining foldr in terms of foldl. Whether the termination differences matter depends on how the function is to be used.

Another nice property of coinduction is that it means we can do build/fold fusion easily:

partitions = \ns xs -> build (\cons nil -> go cons nil ns xs)
    where
    go cons nil = go'
        where
        go' []     xs = nil
        go' (n:ns) xs =
            let (beg,end) = splitAt n xs
            in beg `cons` case end of
                            []  -> nil
                            xs' -> go' ns xs'

By using the GHC.Exts.build wrapper the fusion rules will automatically apply. The second wrapper, go, is just so that the worker, go', doesn't need to pass cons and nil down through the recursion.

-- Live well, ~wren
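The strictness claim is easy to check in GHCi (a sketch; partitionsCPS names the tail-recursive version above, since both versions were called partitions):

-- take 2 (partitions    (repeat 3) [1..])  ==>  [[1,2,3],[4,5,6]]
-- take 2 (partitionsCPS (repeat 3) [1..])  ==>  diverges: 'k []' is never reached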

So to be clear with the terminology:

inductive = good consumer?
coinductive = good producer?

So fusion should be possible (automatically? or do I need a GHC rule?) with

inductive . coinductive

Or have I bungled it?

Dan
Thomas Hartman wrote:
sorry, wrong function.
should be
partitions [] xs = [] partitions (n:parts) xs = let (beg,end) = splitAt n xs in beg : ( case end of [] -> [] xs -> partitions parts xs)
It's not tail recursive, FWIW. The recursive expression has (:) as it's head before it hits `partitions`. It is however nicely coinductive, which has other good properties.
We could make it tail-recursive easily,
partitions = go id where go k [] xs = k [] go k (n:ns) xs = let (beg,end) = splitAt n xs k' = k . (beg:) in case end of [] -> k' [] xs' -> go k' ns xs'
(Note how this version has `go` as the head of the recursive expression.)
...however this version has different strictness properties. In particular, let both input lists be infinite (and take a finite portion of the result). The original version works fine because it gives a little bit of output (beg:) at each step of the recursion ---which is all "coinductive" means. The tail-recursive version hits _|_ however, because we've delayed giving any input (k []) until one of the two lists hits [] ---we've tried doing induction on co-data and so we hit an infinite loop.
This dichotomy between coinduction and tail-recursion is quite common. It's another example of the recently discussed problem of defining foldr in terms of foldl. Whether the termination differences matter depends on how the function is to be used.
Another nice property of coinduction is that it means we can do build/fold fusion easily:
partitions = \ns xs -> build (\cons nil -> go cons nil ns xs)
    where
    go cons nil = go'
        where
        go' []     xs = nil
        go' (n:ns) xs =
            let (beg,end) = splitAt n xs
            in beg `cons` case end of
                            []  -> nil
                            xs' -> go' ns xs'
By using the GHC.Exts.build wrapper the fusion rules will automatically apply. The second wrapper, go, is just so that the worker, go', doesn't need to pass cons and nil down through the recursion.

Dan Weston wrote:
So to be clear with the terminology:
inductive = good consumer? coinductive = good producer?
So fusion should be possible (automatically? or do I need a GHC rule?) with inductive . coinductive
Or have I bungled it?
Not quite. Induction means starting from base cases and building things upwards from those. Coinduction is the dual and can be thought of as starting from the ceiling and building your way downwards (until you hit the base cases, or possibly forever).

So, if you have potentially infinite data (aka co-data) coming in, then you need to use coinduction because you may never see the basis but you want to make progress anyways. In formal terms, coinduction on co-data gives the same progress guarantees as induction on data, though termination is no longer a conclusion of progress (since coinduction may produce an infinite output from infinite input). Haskell doesn't distinguish data and co-data, but you can imagine data as if all the data constructors are strict, and co-data as if all the constructors are lazy. Another way to think of it is that finite lists (a la OCaml and SML) are data, but streams are co-data.

For fusion there's the build/fold type and its dual unfold/destroy, where build/unfold are producers and fold/destroy are consumers. To understand how fusion works, let's look at the types of build and fold:

    GHC.Exts.build      :: (forall b. (a -> b -> b) -> b -> b) -> [a]
    flip (flip . foldr) :: [a] -> ((a -> b -> b) -> b -> b)

Together they give an isomorphism between lists as an ADT, [a], and as a catamorphism, (forall b. (a -> b -> b) -> b -> b), aka the Church encoding. When we have build followed by foldr, we can remove the intermediate list and pass the F-algebra down directly:

    foldr cons nil (build k) = k cons nil

For unfold/destroy fusion the idea is the same, except that we use unfold (an anamorphism on the greatest fixed point) instead of fold (a catamorphism on the least fixed point). The two fixed points coincide in Haskell.

Since Haskell does build/fold fusion, "good producer" requires that the function was written using build, and "good consumer" requires that it's written using foldr. Using these functions allows us to apply the rule, though it's not sufficient for "good fusion". Why the functions have the particular types they do and why this is safe has to do with induction and coinduction, but the relationship isn't direct.

The reason a coinductive function is easy to make into a good producer has to do with that relationship. Take a canonically coinductive function like:

    f []     = []
    f (x:xs) = x : f xs

Once we've made one step of recursion, we've generated (x:) and then have a thunk for recursing. Most importantly, no matter how we evaluate the rest of the list, the head of the return value is already known to be (:), thus we can get to WHNF after one step. Whatever function is consuming this output can then take x and do whatever with it, and then pull on f xs, which then takes a single step and returns (x':) along with a thunk f xs'. Because all of those (:) are being produced immediately, it's easy to abstract them out as a functional argument --- thus we can use build.

Coinduction doesn't need to do a 1-to-1 mapping of input to output; there just needs to be the guarantee that we only need to read a finite amount of input before producing a non-zero finite amount of output. These functions are also coinductive:

    p []       = []
    p [x]      = [x]
    p (x:y:ys) = y : x : p ys

    q []       = []
    q [x]      = []
    q (x:y:ys) = y : q ys

    r []     = []
    r (x:xs) = x : x : r xs

They can also be written using build, though they're chunkier about reading input or producing output.
These functions are not coinductive because there's no finite bound on how long it takes to reach WHNF:

    bad []     = []
    bad (x:xs) = bad xs

    reverse []     = []
    reverse (x:xs) = reverse xs ++ [x]

Because build/fold is an isomorphism, we can technically use build for writing *any* function that produces a list. However, there's more to fusion than just using the build/fold isomorphism. The big idea behind it all is that when producers and consumers are in 1-to-1 correlation, then we can avoid allocating that 1 (the cons cell) and can just pass the arguments of the constructor directly to the consumer. For example:

    let buildF []     = []
        buildF (x:xs) = x : buildF xs

        consumeF []     = 0
        consumeF (x:xs) = 1 + consumeF xs
    in consumeF . buildF

    ==

    let buildF = \xs -> build (f xs)
            where
            f []     cons nil = nil
            f (x:xs) cons nil = x `cons` f xs cons nil

        consumeF = foldr consumeCons consumeNil
            where
            consumeNil       = 0
            consumeCons x rs = 1 + rs
    in consumeF . buildF

    ==

    let f []     cons nil = nil
        f (x:xs) cons nil = x `cons` f xs cons nil

        consumeNil       = 0
        consumeCons x rs = 1 + rs
    in foldr consumeCons consumeNil . \xs -> build (f xs)

    ==

    let ... in \xs -> foldr consumeCons consumeNil (build (f xs))

    ==

    let ... in \xs -> (f xs) consumeCons consumeNil

And now f never allocates any (:) or [], it just calls the two consumers directly. The first step of choosing to use build and foldr instead of primitive recursion is what enables the compiler to automatically do all the other steps.

Leaving it at that is cute since we can avoid allocating the list; however, due to laziness we may still end up allocating a spine of calls to consumeCons, which isn't much better than a spine of calls to (:). This is why "good producers" are ones which are capable of producing a single cons at a time: they never construct a spine before it is needed by the consumer. And this is why "good consumers" are ones which are capable of consuming a single cons at a time: they never force the production of a spine without immediately consuming it. We can relax this goodness from 1-to-1 to chunkier things, but that also reduces the benefits of fusion.

All of this can be generalized to other types besides lists, of course.

-- Live well, ~wren
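To see the build/foldr interaction in runnable form, here is a hedged sketch (not code from the thread; the names `enumFT` and `sumL` are invented): a producer written with GHC.Exts.build and a consumer written with foldr, which GHC's rule foldr k z (build g) = g k z can then fuse so no intermediate list is allocated.

    import GHC.Exts (build)

    -- A producer abstracted over cons/nil, wrapped in build.
    enumFT :: Int -> Int -> [Int]
    enumFT n m = build go
        where
        go :: (Int -> b -> b) -> b -> b
        go cons nil = loop n
            where
            loop i | i > m     = nil
                   | otherwise = i `cons` loop (i + 1)

    -- A consumer written with foldr.
    sumL :: [Int] -> Int
    sumL = foldr (+) 0

    main :: IO ()
    main = print (sumL (enumFT 1 100))  -- 5050

Compiling with optimizations (ghc -O) should let the fusion rule fire; the result is the same either way, only the allocation behavior differs.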

Not only is your "simpler" function easier to read, it is also more correct.
partitionsHubris xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
partitionsBeginner :: [Int] -> [a] -> [[a]]
partitionsBeginner [] _ = []
partitionsBeginner _ [] = []
partitionsBeginner (n : ns) xs = head : partitionsBeginner ns tail
where (head, tail) = splitAt n xs
Run both through testP to see why.
testP pf = mapM_ putStrLn [
show . pf [3,7..] $ [1..10]
, show . pf [3,7,11,15] $ [1..]
, show . head . last $ pf [3,3..] [1..10^6]
]
Of course, I favor
partitions [] xs = []
partitions (n:parts) xs =
let (beg,end) = splitAt n xs
in beg : ( case end of
[] -> []
xs -> partitions parts xs)
which to my eyes is even easier to read (and also correct).
Pattern matching is awesome language feature. use it!
2009/3/24 Manlio Perillo
Tim Newsham ha scritto:
These friends are very interested in Haskell, but it seems that the main reason why they don't start to seriously learning it, is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", and that end up being totally illegible by Haskell novice.
I too have this feeling, from time to time.
Since someone is starting to write the Haskell coding style, I really suggest him to take this "problem" into strong consideration.
When you think about it, what you are saying is that Haskell programmers shouldn't take advantage of the extra tools that Haskell provides.
No, I'm not saying this.
But, as an example, when you read a function like:
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
that can be rewritten (argument reversed) as:
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
I think that there is a problem.
The buildPartition contains too many "blocks". And I have read code with even more "blocks" in one line.
It may not be a problem for a "seasoned" Haskell programmer, but when you write some code, you should never forget that your code will be read by programmers that can not be at your same level.
I think that many Haskell programmers forget this detail, and IMHO this is wrong.
Haskell provides the ability to abstract code beyond what many other programming systems allow. This abstraction gives you the ability to express things much more tersely. This makes the code a lot harder to read for people who are not familiar with the abstractions being used.
The problem is that I still have problems reading and understanding code that is too terse... because I have to assemble each block in my mind, and if there are too many blocks I have problems.
[...]
Manlio

s/Pattern matching is awesome language feature. use it!
/Pattern matching is awesome language feature. Don't be ashamed to use it! /
:)
2009/3/25 Thomas Hartman
Not only is your "simpler" function easier to read, it is also more correct.
partitionsHubris xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
partitionsBeginner :: [Int] -> [a] -> [[a]]
partitionsBeginner [] _ = []
partitionsBeginner _ [] = []
partitionsBeginner (n : ns) xs = head : partitionsBeginner ns tail
    where (head, tail) = splitAt n xs
Run both through testP to see why.
testP pf = mapM_ putStrLn [
      show . pf [3,7..] $ [1..10]
    , show . pf [3,7,11,15] $ [1..]
    , show . head . last $ pf [3,3..] [1..10^6]
    ]
Of course, I favor
partitions [] xs = []
partitions (n:parts) xs =
    let (beg,end) = splitAt n xs
    in beg : ( case end of
                [] -> []
                xs -> partitions parts xs)
which to my eyes is even easier to read (and also correct).
Pattern matching is awesome language feature. use it!
2009/3/24 Manlio Perillo
Tim Newsham ha scritto:
These friends are very interested in Haskell, but it seems that the main reason why they don't start to seriously learning it, is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", and that end up being totally illegible by Haskell novice.
I too have this feeling, from time to time.
Since someone is starting to write the Haskell coding style, I really suggest him to take this "problem" into strong consideration.
When you think about it, what you are saying is that Haskell programmers shouldn't take advantage of the extra tools that Haskell provides.
No, I'm not saying this.
But, as an example, when you read a function like:
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
that can be rewritten (argument reversed) as:
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
I think that there is a problem.
The buildPartition contains too many "blocks". And I have read code with even more "blocks" in one line.
It may not be a problem for a "seasoned" Haskell programmer, but when you write some code, you should never forget that your code will be read by programmers that can not be at your same level.
I think that many Haskell programmers forget this detail, and IMHO this is wrong.
Haskell provides the ability to abstract code beyond what many other programming systems allow. This abstraction gives you the ability to express things much more tersely. This makes the code a lot harder to read for people who are not familiar with the abstractions being used.
The problem is that I still have problems reading and understanding code that is too terse... because I have to assemble each block in my mind, and if there are too many blocks I have problems.
[...]
Manlio

I know what you're saying, in a way. There is much haskell code that's completely illegible to me. I would say there is a difference between Haskell and Perl though, in that Perl code is "too smart" aka. "clever", while Haskell code is usually simply, well, too smart. This means code written using aspects of covariant generalized applicative combinators in a closed Hilbert-space-like continuous field ring, and other similar nonsense. There was a time when "monadic parser combinator" sounded equally nonsensical to me. It doesn't anymore, and I'm a better programmer for it, being able to reduce one of my earliest Haskell programs from 200 to 20 lines using that knowledge alone while making it more comprehensible and maintainable at the same time. The difference between Haskell and Perl is that Haskell programmers use clever ideas while Perl programmers use clever abuse of syntax. Ideas, at least, you have a hope of understanding sometime in the future. ty. den 24.03.2009 klokka 18:41 (+0100) skreiv Manlio Perillo:
Hi.
In these days I'm discussing with some friends, that mainly use Python as programming language, but know well other languages like Scheme, Prolog, C, and so.
These friends are very interested in Haskell, but it seems that the main reason why they don't start to seriously learning it, is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", and that end up being totally illegible by Haskell novice.
I too have this feeling, from time to time.
Since someone is starting to write the Haskell coding style, I really suggest him to take this "problem" into strong consideration.
Manlio _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

Manlio Perillo wrote:
| These friends are very interested in Haskell, but it seems that the main
| reason why they don't start to seriously learning it, is that when they
| start reading some code, they feel the "Perl syndrome".
|
| That is, code written to be "too smart", and that end up being totally
| illegible by Haskell novice.
|
| I too have this feeling, from time to time.

I used to think this as well, but have since changed my mind about most cases. It is simply the case that Haskell code is extremely dense. The more powerful your abstractions, the more functionality you can cram into one line of code. This can give the appearance of being overly clever, since we are accustomed to clever code being unnervingly short and using lots of short variable names and operators. It is generally encouraged to use single-letter variable names in Haskell because there are many cases that you haven't a clue what the type of that variable might be, again due to Haskell's amazing ability to abstract such things away. All these factors combined just means that you have to concentrate just as hard to understand one line of Haskell as you might 10 or 20 lines of other languages. There is 10 or 20 times the amount of information.

- Jake

Jake McArthur ha scritto:
Manlio Perillo wrote:
| These friends are very interested in Haskell, but it seems that the main
| reason why they don't start to seriously learning it, is that when they
| start reading some code, they feel the "Perl syndrome".
|
| That is, code written to be "too smart", and that end up being totally
| illegible by Haskell novice.
|
| I too have this feeling, from time to time.
I used to think this as well, but have since changed my mind about most cases.
The same for me.
[...] All these factors combined just means that you have to concentrate just as hard to understand one line of Haskell as you might 10 or 20 lines of other languages. There is 10 or 20 times the amount of information.
This is right. The problem is that often (IMHO) a function definition can be rewritten so that it is much more readable. Take, as an example, the takeList function I posted. In other cases, you can just break long lines, introducing intermediate functions that have a descriptive name *and* a type signature. Doing this is an art, but a coding style for Haskell should try to document it.
[...]
Manlio
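As a concrete illustration of that advice (a hypothetical refactoring; the helper names `suffixesAfterDrops` and `partitionStarts` are invented), the dense one-liner can be split into named, typed intermediate functions:

    -- All successive leftovers of xs after dropping n elements for
    -- each n in ns (scanl also yields a final, unused leftover).
    suffixesAfterDrops :: [Int] -> [a] -> [[a]]
    suffixesAfterDrops ns xs = scanl (flip drop) xs ns

    -- The suffix of xs at which each partition starts; init discards
    -- the final leftover, which starts no partition.
    partitionStarts :: [Int] -> [a] -> [[a]]
    partitionStarts ns xs = init (suffixesAfterDrops ns xs)

    -- Take the requested number of elements from each starting suffix.
    buildPartitions :: [a] -> [Int] -> [[a]]
    buildPartitions xs ns = zipWith take ns (partitionStarts ns xs)

Whether this is clearer than the one-liner is exactly the judgment call the thread is about; the point is only that the decomposition is mechanical.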

Manlio Perillo wrote:
| This is right.
| The problem is that often (IMHO) a function definition can be rewritten
| so that it is much more readable.
|
| As an example, with the takeList function I posted.

I looked at it, found nothing wrong with the original, and absolutely hated your "fixed" version. I might have written it like this, instead:

~   buildPartitions xs ns = zipWith take ns . init . scanl (flip drop) xs $ ns

I think this way separates the different "stages" of the function somewhat better, but it's barely a change. The original was fine.

| In other cases, you can just break long lines, introducing intermediate
| functions that have a descriptive name *and* a type definition.
|
| Doing this is an art, but a coding style for Haskell should try to
| document this.

Agreed.

| As an example, with the takeList function I posted.
I looked at it, found nothing wrong with the original, and absolutely hated your "fixed" version. I might have written it like this, instead:
~ buildPartitions xs ns = zipWith take ns . init . scanl (flip drop) xs $ ns
Maybe it's just me, but I think that

    takeList ns xs = evalState (mapM (State . splitAt) ns) xs

or even

    takeList = evalState . mapM (State . splitAt)

would be much clearer than both versions.
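For anyone typing this into a modern GHC: in current mtl, `State` is a type synonym and its data constructor is gone, so the one-liner needs the `state` function instead. A minimal runnable sketch (the example values are mine):

    import Control.Monad.State (evalState, state)

    takeList :: [Int] -> [a] -> [[a]]
    takeList = evalState . mapM (state . splitAt)

    main :: IO ()
    main = do
        print (takeList [3, 4] [1 .. 10 :: Int])  -- [[1,2,3],[4,5,6,7]]
        print (takeList [2, 5] "hello world")     -- ["he","llo w"]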

Pretty cool once you know what the function does, but I must admit I
wouldn't immediately guess the purpose of the function when written in
this way.
2009/3/24 Miguel Mitrofanov
| As an example, with the takeList function I posted.
I looked at it, found nothing wrong with the original, and absolutely hated your "fixed" version. I might have written it like this, instead:
~ buildPartitions xs ns = zipWith take ns . init . scanl (flip drop) xs $ ns
Maybe it's just me, but I think that
takeList ns xs = evalState (mapM (State . splitAt) ns) xs
or even
takeList = evalState . mapM (State . splitAt)
would be much clearer than both versions.
-- Eugene Kirpichov Web IR developer, market.yandex.ru

On Tue, 2009-03-24 at 22:33 +0300, Eugene Kirpichov wrote:
Pretty cool once you know what the function does, but I must admit I wouldn't immediately guess the purpose of the function when written in this way.
I wouldn't immediately guess the purpose of the function written in any way. I think, in general, the best way to document the purpose of the function is

    -- | Split a list into a sequence of partitions of specified length
    takeList :: [Int] -> [a] -> [[a]]

jcc

Jonathan Cast ha scritto:
[...]
I think, in general, the best way to document the purpose of the function is
-- | Split a list into a sequence of partitions of specified length
takeList :: [Int] -> [a] -> [[a]]
Note that I was not speaking about the best way to document a function. I was speaking about the best way to write a function, so that it may help someone who is learning Haskell.
[...]
Manlio

On Tue, 2009-03-24 at 22:43 +0100, Manlio Perillo wrote:
Jonathan Cast ha scritto:
[...]
I think, in general, the best way to document the purpose of the function is
-- | Split a list into a sequence of partitions of specified length
takeList :: [Int] -> [a] -> [[a]]
Note that I was not speaking about the best way to document a function.
I was speaking about the best way to write a function, so that it may help someone who is learning Haskell.
I've already explicitly rejected the claim that professional Haskell code should be written to aid beginning users. Again, that's what textbooks are for. And I was explicitly commenting on the claim that it was obvious, from any version posted thus far, what the function was supposed to do. Your suggested code hardly helps make the function's purpose clear; comments (or, better yet, tests) do a much better job of explaining purpose, such as:

    prop_length = \ ns xn ->
        sum ns <= length xn ==>
            map length (takeList ns xn) == ns

jcc
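Fleshing jcc's property out into something runnable (a sketch; the takeList definition, the Positive wrapper, and main are my additions; Positive sidesteps zero-length chunks, on which the pattern-matching version disagrees with the spec once the input is exhausted):

    import Test.QuickCheck

    takeList :: [Int] -> [a] -> [[a]]
    takeList [] _  = []
    takeList _  [] = []
    takeList (n:ns) xs = beg : takeList ns end
        where (beg, end) = splitAt n xs

    -- If the requested lengths fit inside the input, each partition
    -- has exactly the requested length.
    prop_length :: [Positive Int] -> [Int] -> Property
    prop_length ns' xn =
        sum ns <= length xn ==>
            map length (takeList ns xn) == ns
        where ns = map getPositive ns'

    main :: IO ()
    main = quickCheck prop_length

Note that `==>` discards generated cases that fail the precondition, which can be wasteful; a custom generator (see Achim's point later in the thread) avoids that.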

Manlio Perillo wrote:
I was speaking about the best way to write a function, so that it may help someone who is learning Haskell.
I've been learning Haskell for about 3 months. I think it's a mistake to write code so that it's easy for someone learning Haskell to read it. Code should be written to be easily read by other experienced users of the language.

Erik
--
Erik de Castro Lopo
http://www.mega-nerd.com/

Erik de Castro Lopo ha scritto:
Manlio Perillo wrote:
I was speaking about the best way to write a function, so that it may help someone who is learning Haskell.
I've been learning Haskell for about 3 months.
I think it's a mistake to write code so that it's easy for someone learning Haskell to read it. Code should be written to be easily read by other experienced users of the language.
Note that writing code so that it's easy to read does not only mean rewriting the code as I did in the example. It also means adding good comments, in the right places.
Erik
Manlio

As (yet another?) Haskell newbie, with a day job using Java (where "keep it simple, stupid" is not a principle, it's a language-enforced requirement), I would much prefer the function be implemented in the most concise and idiomatic style that the writer is capable of. That is, either the zipWith...scanl solution (or its variants) or the State solution. I've found that I learn considerably more from functions written this way that also have a good documentation comment than from munching on the standard pattern-matching recursion again and again. If the function is well described, and short in purpose and text, I can rely on the fact that functional programming (with some exceptions) ensures that all I need to understand the behavior is right in front of me, and I can spend my time learning the patterns. Just my 2 cents, -Ross

On Mar 24, 2009, at 5:43 PM, Manlio Perillo wrote:
Jonathan Cast ha scritto:
[...] I think, in general, the best way to document the purpose of the function is

    -- | Split a list into a sequence of partitions of specified length
    takeList :: [Int] -> [a] -> [[a]]
Note that I was not speaking about the best way to document a function.
I was speaking about the best way to write a function, so that it may help someone who is learning Haskell.
[...]
Manlio

I'd love to help newbies get the hang of Haskell without having to jump in
the deep (and smart-infested) end first. And I'd love for people to keep
writing smart code for non-newbies to enjoy.
Perhaps a practical suggestion would be some wiki pages devoted to pointing
out code with various learning qualities, to help haskellers of all levels
of experience learn effectively.
- Conal
On Tue, Mar 24, 2009 at 2:43 PM, Manlio Perillo
Jonathan Cast ha scritto:
[...]
I think, in general, the best way to document the purpose of the function is
-- | Split a list into a sequence of partitions of specified length
takeList :: [Int] -> [a] -> [[a]]
Note that I was not speaking about the best way to document a function.
I was speaking about the best way to write a function, so that it may help someone who is learning Haskell.
[...]
Manlio

Conal Elliott ha scritto:
I'd love to help newbies get the hang of Haskell without having to jump in the deep (and smart-infested) end first. And I'd love for people to keep writing smart code for non-newbies to enjoy.
Perhaps a practical suggestion would be some wiki pages devoted to pointing out code with various learning qualities, to help haskellers of all levels of experience learn effectively.
Yes, this is a good start. Advice to people learning Haskell about how to learn to read code. And advice to experienced Haskell programmers about how to document their code so that it may help less experienced programmers. IMHO, this should also go in the future Haskell coding style.
[...]
Manlio

And advice to experienced Haskell programmers about how to document their code so that it may help less experienced programmers.
Manlio -- You may be missing the point of my suggestion, which is to help
people *find* code that suits them, rather than changing anyone's coding
style. Optimizing code for one segment of readers is pessimizing it for
another. Instead of dumbing down the smart code, I'd like to help your
friends to help each other find dumber code, *and* to help others of us find
smarter code.
- Conal
On Tue, Mar 24, 2009 at 3:03 PM, Manlio Perillo
Conal Elliott ha scritto:
I'd love to help newbies get the hang of Haskell without having to jump in the deep (and smart-infested) end first. And I'd love for people to keep writing smart code for non-newbies to enjoy.
Perhaps a practical suggestion would be some wiki pages devoted to pointing out code with various learning qualities, to help haskellers of all levels of experience learn effectively.
Yes, this is a good start.
Advice to people learning Haskell about how to learn to read code. And advice to experienced Haskell programmers about how to document their code so that it may help less experienced programmers.
IMHO, this should also go in the future Haskell coding style.
[...]
Manlio

Conal Elliott ha scritto:
And advice to experienced Haskell programmers about how to document their code so that it may help less experienced programmers.
Manlio -- You may be missing the point of my suggestion,
Ah, sorry.
which is to help people *find* code that suits them, rather than changing anyone's coding style. Optimizing code for one segment of readers is pessimizing it for another. Instead of dumbing down the smart code, I'd like to help your friends to help each other find dumber code, *and* to help others of us find smarter code.
This may be hard to do. However, I have already suggested starting by reading the Prelude code, from the Haskell Report. Manlio

Manlio -- You may be missing the point of my suggestion, which is to help people *find* code that suits them, rather than changing anyone's coding style. Optimizing code for one segment of readers is pessimizing it for another. Instead of dumbing down the smart code, I'd like to help your friends to help each other find dumber code, *and* to help others of us find smarter code.
If he really intended to promote some dumb code as a better alternative to some otherwise equivalent smart code, then I must have missed his point. For me, when people defend a practice with notions like "programmer needs to be smarter/more responsible/better educated", that's like the institutional equivalent of a "code smell". You see it everywhere, too. C/C++ programmers will tell you its storage model is fine, just "programmer needs to be more ..." C's storage model does have its advantages, and smart code is presumably a good thing too. But for example, exercises like just stripping a function of extraneous parameter identifiers don't make it smart, while they may make it harder for someone to understand it at a glance. I do it myself, even though I claim to detest it, which may tell us something about the appeal of exercises like that. Go ahead and write smart, clearly the benefits outweigh the cost, but to tell us that there's no cost, no problem here if a reader who knows Haskell has a hard time following? That is an "institution smell." Donn

On Tue, 2009-03-24 at 16:43 -0700, Donn Cave wrote:
If he really intended to promote some dumb code as a better alternative to some otherwise equivalent smart code,
`Smart' is Manlio's term --- or, rather, his characterization of his friends' reaction upon seeing some inscrutable piece of (apparent) Haskell golf or (seemingly) pointless code. The code seems excessively clever to them; when Manlio's example is merely clear, well-written, concise, and declarative, rather than operational, in intention.
...
Go ahead and write smart, clearly the benefits outweigh the cost, but tell us that there's no cost, no problem here if a reader who knows Haskell has a hard time following?
What reader who knows Haskell? We have a programmer who is, self-confessedly, just learning Haskell, not really proficient; we have his friends, who, by his statement of the problem, do not know Haskell at all; and we have some un-specified group of other developers who, by selection, barely know Haskell or do not know it at all --- that is, developers who are still in the process of learning. I think your ``reader who knows Haskell'' has figured nowhere in the discussion so far. jcc

2009/3/24 Manlio Perillo
Jonathan Cast ha scritto:
[...]
I think, in general, the best way to document the purpose of the function is
-- | Split a list into a sequence of partitions of specified length
takeList :: [Int] -> [a] -> [[a]]
*That* was what I craved. With the type and a name like "partitions", I would hardly have to look at the code at all. The comment is almost superfluous.
Note that I was not speaking about the best way to document a function.
I was speaking about the best way to write a function, so that it may help someone who is learning Haskell.
Then, the first version plus the documentation above would be perfect. Instant understanding about the purpose of the function, and insight about how to write it. Loup

2009/3/24 Jonathan Cast
On Tue, 2009-03-24 at 22:33 +0300, Eugene Kirpichov wrote:
Pretty cool once you know what the function does, but I must admit I wouldn't immediately guess the purpose of the function when written in this way.
I wouldn't immediately guess the purpose of the function written in any way.
I think, in general, the best way to document the purpose of the function is
-- | Split a list into a sequence of partitions of specified length
takeList :: [Int] -> [a] -> [[a]]
Thank you, Jonathan. That's the first message in this thread I've managed to understand.

Miguel Mitrofanov wrote:
| Maybe it's just me, but I think that
|
| takeList ns xs = evalState (mapM (State . splitAt) ns) xs
|
| or even
|
| takeList = evalState . mapM (State . splitAt)
|
| would be much clearer than both versions.

Definitely. I stuck with only the functions that were already being used because I figured the point was to make things readable with a limited set of building blocks. Thanks for sharing though. That was definitely not a solution that I was thinking of.

- Jake

Manlio Perillo complained about:
buildPartitions xs ns = zipWith take ns . init . scanl (flip drop) xs $ ns
Miguel Mitrofanov wrote:
takeList = evalState . mapM (State . splitAt)
Ha! Bravo! As the author of the offending zipWith/scanl version, I can say that I love those State monad one-liners. However, ironically, I stopped using them for pretty much the same reason that Manlio is saying. The difference is that zipWith and scanl are classic Haskell idioms that any Haskell programmer will learn fairly early on. Whereas State monad one-liners used to be thought of as new and fancy and esoteric. But now they are becoming more mainstream, so perhaps I should go back to them. So the bottom line is that Manlio is right, really. It's just that Haskell is still very different than what most programmers are used to. So it does take a while to get a feeling for what is "too smart". Yitz

Yitzchak Gale ha scritto:
[...] So the bottom line is that Manlio is right, really. It's just that Haskell is still very different than what most programmers are used to. So it does take a while to get a feeling for what is "too smart".
Right, you centered the problem! The problem is where to place the separation line between "normal" and "too smart". Your function is readable, once I mentally separate each step. For someone with more experience, this operation may be automatic, and the function may appear totally natural. When writing these "dense" functions, it is important, IMHO, to help the reader with comments, or by introducing intermediate functions. Manlio

Another helpful strategy for the reader is to get smarter, i.e. to invest
effort in rising to the level of the writer. Or just choose a different
book if s/he prefers. - Conal
On Tue, Mar 24, 2009 at 1:44 PM, Manlio Perillo
Yitzchak Gale ha scritto:
[...] So the bottom line is that Manlio is right, really. It's just that Haskell is still very different than what most programmers are used to. So it does take a while to get a feeling for what is "too smart".
Right, you centered the problem!
The problem is where to place the separation line between "normal" and "too smart".
Your function is readable, once I mentally separate each step. For someone with more experience, this operation may be automatic, and the function may appear totally natural.
When writing these "dense" functions, it is important, IMHO, to help the reader with comments, or by introducing intermediate functions.
Manlio

Conal Elliott ha scritto:
Another helpful strategy for the reader is to get smarter, i.e. to invest effort in rising to the level of the writer. Or just choose a different book if s/he prefers. - Conal
This strategy is doomed to failure, unfortunately. We live in the real world, compromises are necessary.
[...]
Manlio

"The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man." - George Bernard Shaw
On Tue, Mar 24, 2009 at 2:11 PM, Manlio Perillo
Conal Elliott ha scritto:
Another helpful strategy for the reader is to get smarter, i.e. to invest effort in rising to the level of the writer. Or just choose a different book if s/he prefers. - Conal
This strategy is doomed to failure, unfortunately. We live in the real world, compromises are necessary.
[...]
Manlio

On Tue, Mar 24, 2009 at 10:11 PM, Manlio Perillo
This strategy is doomed to failure, unfortunately.
So it is the good strategy, because Haskell's slogan is "avoid success at all cost" :-)
We live in the real world, compromises are necessary.
I don't think so. It's just that we have different kinds of people with different skills. If you try to please the whole world, you please nobody. As a beginner Haskeller, I just know I need more practice. Folding is now natural to me, but monad transformers and applicative stuff are not yet; that's a matter of time. I just need to practice, practice, practice.

2009/3/24 Peter Verswyvelen
This strategy is doomed to failure, unfortunately.
So it is the good strategy, because Haskell's slogan is "avoid success at all cost" :-)
IN THE YEAR 1987, WAR WAS BEGINNING
BIG, IMPERATIVE SOFTWARE BEHEMOTHS CLASHED IN A STATE OF IMPURITY
UNDER THE SHADOW OF FEAR AND DOUBT, COLONY BY COLONY FELL INTO TYPELESS ANARCHY
WHOLE PLANETS WERE SCROUNGED BY TERRIBLE SEGFAULTS
THE HUNGER FOR A NEW PARADIGM WAS GNAWING AT THE ROOTS OF THE CIVILIZED UNIVERSE
MEANWHILE, IN A GALAXY FAR, FAR AWAY, A SMALL GROUP OF LAZY FUNCTIONAL PROGRAMMERS CREATED A LANGUAGE
IT WAS OUR LAST, BEST HOPE TO AVOID SUCCESS AT ALL COST...
IT FAILED
IT EVOLVED
THERE ARE 8,581 IMPLEMENTATIONS SUPPORTING 935,842,712 EXTENSIONS
THEY LOOK AND FEEL ... FUNCTIONAL
SOME ARE PROGRAMMED TO THINK THAT THEY AREN'T IMPERATIVE AT ALL
AT LEAST ONE IS ACTUALLY USED
ONCE, IT HAD BEEN OUR LAST, BEST HOPE TO AVOID SUCCESS
IN THE YEAR 2009, IT BECAME SOMETHING GREATER:
OUR LAST, BEST HOPE FOR BLASTING THE INFERIOR LANGUAGES OUT OF THE SKY (WITH LAZY CLASS)
YOU HAVE NO CHANCE TO SURVIVE MAP YOUR BIND

On Tue, Mar 24, 2009 at 4:11 PM, Manlio Perillo
Conal Elliott ha scritto:
Another helpful strategy for the reader is to get smarter, i.e. to invest effort in rising to the level of the writer. Or just choose a different book if s/he prefers. - Conal
This strategy is doomed to failure, unfortunately. We live in the real world, compromises are necessary.
It depends, IMO. Making changes to the programming style one uses, in particular ones such as you propose, would ultimately lead to programs in Haskell being less flexible and/or powerful than they otherwise would be. I'm a bit new to Haskell myself, but I do understand that one of the primary use cases and/or motivating factors for using Haskell is when you really just NEED that extra abstraction and power you get from being able to do these types of things. Someone once said that "simple problems should be simple and difficult problems should be possible". That doesn't mean the difficult problems become EASY. One of the best uses for Haskell is solving difficult problems. It's obviously still going to be difficult to solve, and as such the writer (and hence by extension the reader) is going to have to be smart as well. C++ is actually beginning to suffer the complexity problem as well, especially with C++0x, but I fundamentally disagree with the added complexity in C++, specifically because it is a language which is supposed to excel at solving all kinds of problems. Haskell excels at solving difficult problems, so I don't think the target audience for Haskell necessarily needs to include people who can't figure out difficult code. C++, on the other hand, needs to agree on a target audience or set of problems that it's geared toward, and then either s**t or get off the pot. It's fine if they keep adding complexity until the cows come home, but just agree up front that that's what it is, and programmers who aren't cut out for it can use a different language. With Haskell I think you have that up-front agreement, so there's no problem.

Zachary Turner ha scritto:
[...] but I do understand that one of the primary use cases and/or motivating factors for using Haskell is when you really just NEED that extra abstraction and power you get from being able to do these types of things. Someone once said that "simple problems should be simple and difficult problems should be possible". That doesn't mean the difficult problems become EASY. One of the best uses for Haskell is solving difficult problems. It's obviously still going to be difficult to solve, and as such the writer (and hence by extension the reader) is going to have to be smart as well.
I agree with you, and in fact I'm still learning Haskell. The reason I'm still learning Haskell is because I like its syntax. And yes, I also like the ability to write efficient functions by composing other functions. But there is a limit. In C you have the ability to write assembler code, but one usually thinks twice before doing so, since it will become unreadable to most people. If you think that writing low level assembler code is the best solution, you should at least document it well, instead of assuming that the reader is as smart as you. As I have written at the beginning of the thread, there are people I know (*much* smarter than me) who keep themselves away from Haskell because they start to read some code, and they feel something is wrong. They *think* "ah, the author wrote code in this way just to show how smart he is; how can I learn a language if most of the available code is written in this way"? Note the use of the verb "think". This is only a sensation, and it is wrong; but sensations are important.
[...]
Manlio

Perhaps it is much easier to create one-line compositions of functions in Haskell rather than in C because the type system helps a lot in the process. However, reusability of source code and maintainability have never been taken seriously by Haskell programmers, simply because there are no industrial projects in Haskell with dozens of people with different skills who come and go. Because of that, the early C programmers were probably far more exuberant than the current C++ and Java programmers are now. To have a broad base of users and/or to assure cheap programmers for your industrial application carries the servitude of "the rule of least power". That is another reason for the lemma: "Avoid success at all costs".

The rule of least power (http://www.w3.org/2001/tag/doc/leastPower.html) was originally written by Tim Berners-Lee. For publishing (and, arguably, for code reusability) "the best language is the least powerful".

This depressing conclusion can be overcome if we consider that the rule of least power favours Turing-incomplete DSLs, so every industrial development can be decomposed into two groups which demand two different skills: 1) DSL creation 2) DSL programming
2009/3/24 Manlio Perillo
Zachary Turner ha scritto:
[...]
but I do understand that one of the primary use cases and/or motivating factors for using Haskell is when you really just NEED that extra abstraction and power you get from being able to do these types of things. Someone once said that "simple problems should be simple and difficult problems should be possible". That doesn't mean the difficult problems become EASY. One of the best uses for Haskell is solving difficult problems. It's obviously still going to be difficult to solve, and as such the writer (and hence by extension the reader) is going to have to be smart as well.
I agree with you, and in fact I'm still learning Haskell. The reason I'm still learning Haskell is because I like its syntax. And yes, I also like the ability to write efficient functions by composing other functions.
But there is a limit. In C you have the ability to write assembler code, but one usually thinks twice before doing so, since it will become unreadable to most people.
If you think that writing low level assembler code is the best solution, you should at least document it well, instead of assuming that the reader is as smart as you.
As I have written at the beginning of the thread, there are people I know (*much* smarter than me) who keep themselves away from Haskell because they start to read some code, and they feel something is wrong.
They *think* "ah, the author wrote code in this way just to show how smart he is; how can I learn a language if most of the available code is written in this way"?
Note the use of the verb "think". This is only a sensation, and it is wrong; but sensations are important.
[...]
Manlio

"Alberto G. Corona "
However, reusability of source code and maintainability have never been taken seriously by Haskell programmers, simply because there are no industrial projects in Haskell with dozens of people with different skills who come and go.
Now that's a claim. In the sense that I don't do commercial Haskell coding, but know what maintainability is, anyway: I've maintained everything from utterly atrocious to mind-bogglingly elegant Java code for a living. I can tell you with 110% confidence that maintainability is about composability, _on_every_level_: not just on statement level. Otherwise, I wouldn't have cussed that much. Curiously enough, as soon as the code didn't make you wince, it was easily maintainable. This is closely related to Linus' observation that good [imperative] code is data-structure centred, and Greenspun's Tenth Rule.

With Haskell, there's finally a language that makes large-scale changes as easy as small-scale changes without having to resort to implementing an interpreter for a functional language. As the position of changes tends to travel upwards in a bottom-up approach, and small-scale changes are easy to pull off (you already understand what you need to do, since otherwise you wouldn't have identified the need for a small-scale change and continued to add onion layers), caring about editability at function level just doesn't pay off. That's why I don't care whether or not I have to re-write a whole function to change it: if it's going to change, which isn't all that likely, I can cope with renaming it and writing another, say, 160 characters directly below it. Adding a proper QuickCheck property (if it didn't exist yet, or the semantics changed) is usually more work: you don't only need to get the proposition right, but also the test case generator.

--
(c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.
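Achim's point that the generator is the hard part, made concrete (a hedged sketch; the generator and all names are mine). Instead of discarding cases with `==>`, build chunk lists that satisfy the precondition by construction:

    import Test.QuickCheck

    takeList :: [Int] -> [a] -> [[a]]
    takeList [] _  = []
    takeList _  [] = []
    takeList (n:ns) xs = beg : takeList ns end
        where (beg, end) = splitAt n xs

    -- Generate positive chunk sizes that sum to exactly the given
    -- total, so every generated case satisfies the precondition.
    chunksFitting :: Int -> Gen [Int]
    chunksFitting remaining
        | remaining <= 0 = return []
        | otherwise      = do
            n  <- choose (1, remaining)
            ns <- chunksFitting (remaining - n)
            return (n : ns)

    prop_length :: [Int] -> Property
    prop_length xn =
        forAll (chunksFitting (length xn)) $ \ns ->
            map length (takeList ns xn) == ns

    main :: IO ()
    main = quickCheck prop_length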

Sometimes that is very hard when the writer is way smarter than the reader
:-)
2009/3/24 Conal Elliott
Another helpful strategy for the reader is to get smarter, i.e. to invest effort in rising to the level of the writer. Or just choose a different book if s/he prefers. - Conal
On Tue, Mar 24, 2009 at 1:44 PM, Manlio Perillo
wrote: Yitzchak Gale ha scritto:
[...] So the bottom line is that Manlio is right, really. It's just that Haskell is still very different than what most programmers are used to. So it does take a while to get a feeling for what is "too smart".
Right, you centered the problem!
The problem is where to place the separation line between "normal" and "too smart".
Your function is readable, once I mentally separate each step. For someone with more experience, this operation may be automatic, and the function may appear totally natural.
When writing these "dense" functions, it is important, IMHO, to help the reader with comments, or by introducing intermediate functions.
Manlio

Hah! It sure is. :)
On Tue, Mar 24, 2009 at 2:17 PM, Peter Verswyvelen
Sometimes that is very hard when the writer is way smarter than the reader :-) 2009/3/24 Conal Elliott
Another helpful strategy for the reader is to get smarter, i.e. to invest
effort in rising to the level of the writer. Or just choose a different book if s/he prefers. - Conal
On Tue, Mar 24, 2009 at 1:44 PM, Manlio Perillo
wrote:
Yitzchak Gale ha scritto:
[...] So the bottom line is that Manlio is right, really. It's just that Haskell is still very different than what most programmers are used to. So it does take a while to get a feeling for what is "too smart".
Right, you centered the problem!
The problem is where to place the separation line between "normal" and "too smart".
Your function is readable, once I mentally separate each step. For someone with more experience, this operation may be automatic, and the function may appear totally natural.
When writing these "dense" functions, it is important, IMHO, to help the reader with comments, or by introducing intermediate functions.
Manlio

Miguel Mitrofanov wrote:
takeList = evalState . mapM (State . splitAt)
However, ironically, I stopped using them for pretty much the same reason that Manlio is saying.
Are you saying there's a problem with this implementation? It's the only one I could just read immediately. The trick is to see that evalState and State are just noise for the type inferencer so we just need to think about mapM splitAt. This turns a sequence of integers into a sequence of splitAts, each one chewing on the leftovers of the previous one. *Way* easier than both the zipWith one-liner and the explicit version. It says exactly what it means, almost in English. -- Dan
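Dan's "each one chewing on the leftovers of the previous one" can be watched step by step; a hypothetical GHCi session (using `state` in place of the `State` constructor, as modern mtl requires):

    ghci> import Control.Monad.State
    ghci> runState (state (splitAt 3)) [1..10]
    ([1,2,3],[4,5,6,7,8,9,10])
    ghci> runState (mapM (state . splitAt) [3,4]) [1..10]
    ([[1,2,3],[4,5,6,7]],[8,9,10])
    ghci> evalState (mapM (state . splitAt) [3,4]) [1..10]
    [[1,2,3],[4,5,6,7]]

Each splitAt returns (chunk, leftovers), and mapM threads the leftovers along as the state; evalState just discards the final leftovers.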

Miguel Mitrofanov wrote:
takeList = evalState . mapM (State . splitAt)
I wrote:
However, ironically, I stopped using them for pretty much the same reason that Manlio is saying.
Dan Piponi wrote:
Are you saying there's a problem with this implementation? It's the only one I could just read immediately... It says exactly what it means, almost in English.
Yes, I agree. But at a time when the majority of experienced Haskellers couldn't easily see that because they weren't comfortable enough with the State monad, using it would have cost more on average (for debugging, refactoring, etc.). Whereas now I don't think that's a problem anymore. Yitz

Dan Piponi ha scritto:
Miguel Mitrofanov wrote:
takeList = evalState . mapM (State . splitAt)
However, ironically, I stopped using them for pretty much the same reason that Manlio is saying.
Are you saying there's a problem with this implementation? It's the only one I could just read immediately.
Yes, you understand it immediately once you know what a state monad is. But how well is the state monad introduced, explained and emphasized in current textbooks?
[...]
Manlio

On Tue, 2009-03-24 at 23:15 +0100, Manlio Perillo wrote:
Dan Piponi ha scritto:
Miguel Mitrofanov wrote:
takeList = evalState . mapM (State . splitAt)
However, ironically, I stopped using them for pretty much the same reason that Manlio is saying.
Are you saying there's a problem with this implementation? It's the only one I could just read immediately.
Yes, you understand it immediately once you know what a state monad is. But how well is the state monad introduced, explained and emphasized in current textbooks?
When I started learning Haskell, the first thing I learned was recursion and pattern matching.
You know, this might actually need to be looked into. You need to know recursion and pattern-matching to *write* re-usable higher-order functions, but how appropriate is that as the first thing taught? jcc

Jonathan Cast wrote:
| You know, this might actually need to be looked into.
|
| You need to know recursion and pattern-matching to *write* re-usable
| higher-order functions, but how appropriate is that as the first thing
| taught?

An excellent question!

Coincidentally, I was just having a conversation with my girlfriend about programming with "building blocks." She described her main hurdle with programming at the moment, which is getting over the fact that she is used to working with tangible objects that you just put together in the appropriate way; her mind expects programming to work the same way, but it doesn't, at least in the languages she has looked at so far. I hypothesized that a language emphasizing combinators might be more intuitive to her than a language emphasizing loops and imperative steps, for precisely this reason. I'm not entirely sure that she bought it, but she seemed to agree that it at least sounds nice in theory.

Now I just have to convince her to become a willing subject in this experiment. ;)

This question makes me wonder... why is explicit recursion taught first? I can't help but think now that it may be because those coming from imperative languages are used to writing loops, and recursion is the closest thing to loops that we have.

- Jake

This question makes me wonder... why is explicit recursion taught first? [...]
Perhaps also because teachers, being older than their students, are often
mired in outdated thinking.
On Tue, Mar 24, 2009 at 3:35 PM, Jake McArthur
Jonathan Cast wrote:
| You know, this might actually need to be looked into.
|
| You need to know recursion and pattern-matching to *write* re-usable
| higher-order functions, but how appropriate is that as the first thing
| taught?
An excellent question!
Coincidentally, I was just having a conversation with my girlfriend about programming with "building blocks." She described her main hurdle with programming at the moment, which is getting over the fact that she is used to working with tangible objects that you just put together in the appropriate way and her mind expects programming to work the same way, but it doesn't, at least in the languages she has looked at so far. I hypothesized that a language emphasizing combinators might be more intuitive to her than a language emphasizing loops and imperative steps for precisely this reason. I'm not entirely sure that she bought it, but she seemed to agree that it at least sounds nice in theory.
Now I just have to convince her to become a willing subject in this experiment. ;)
This question makes me wonder... why is explicit recursion taught first? I can't help but think now that it may be because those coming from imperative languages are used to writing loops, and recursion is the closest to loops that we have.
- Jake

Manlio,
We live in the age of participation -- of co-education. Don't worry about
text-books. Contribute to some wiki pages & blogs today that share these
smart techniques with others.
<twocents>Learning/progress is mainly results when people respond to their
own incomprehension by moving into new & challenging ideas, not by banishing
them. Puzzlement can be met by resistance or by embracing &
learning.</twocents>
On Tue, Mar 24, 2009 at 3:15 PM, Manlio Perillo
Dan Piponi ha scritto:
Miguel Mitrofanov wrote:
takeList = evalState . mapM (State . splitAt)
However, ironically, I stopped using them for pretty
much the same reason that Manlio is saying.
Are you saying there's a problem with this implementation? It's the only one I could just read immediately.
Yes, you understand it immediately once you know what a state monad is. But how well is the state monad introduced, explained and emphasized in current textbooks?
When I started learning Haskell, the first thing I learned was recursion and pattern matching.
So, this may be the reason why I find more readable my takeList solution.
[...]
Manlio

Conal Elliott ha scritto:
Manlio,
We live in the age of participation -- of co-education. Don't worry about text-books. Contribute to some wiki pages & blogs today that share these smart techniques with others.
When I started learning Haskell (on my own initiative), what I did was:

1) A quick reading of the first tutorial I found on the wiki: http://darcs.haskell.org/yaht/yaht.pdf, if I remember correctly.

2) A quick reading of the Haskell Report.

3) Reading another tutorial: http://www.haskell.org/tutorial/

4) Reading the Haskell Report again.

5) A lot of time spent finding good tutorials. Yet I did not know what monads were; I just felt that monads were some strange and advanced feature.

... A period where I stopped looking at Haskell.

6) Found some good tutorials about what monads are, but I still did not know anything about state monads, monad transformers, and so on.

... Another period where I stopped looking at Haskell.

7) The Real World Haskell book. Finally, all the "advanced" concepts in one book. I read the book online. I found the book good, but I think it is too dispersive in some chapters. I have already forgotten some of the concepts I read, mostly because in some chapters I got annoyed and started skipping things, or reading them quickly. I will buy a copy in May, at PyCon Italy (where there will be a stand by O'Reilly), so that I can read it again.

8) New impetus at learning Haskell. I read the Haskell Report again, and "A Gentle Introduction to Haskell". I finally started to understand how things work.

9) Started to write some "real" code.

Now I'm able to understand much of the code I read. But for some kinds of code I still have problems.

Manlio

Manlio Perillo wrote:
Conal Elliott ha scritto:
Manlio,
We live in the age of participation -- of co-education. Don't worry about text-books. Contribute to some wiki pages & blogs today that share these smart techniques with others.
When I started learning Haskell (on my own initiative), what I did was:
[steps 1) - 9), mostly internet tutorials ]
I think you'd have had a much easier time by starting with a proper book right away, like Richard Bird's "Introduction to Functional Programming in Haskell", accompanied by Real World Haskell. You see, the reason that books cost money is (should be) high quality content. :) Regards, apfelmus -- http://apfelmus.nfshost.com

Heinrich Apfelmus ha scritto:
Manlio Perillo wrote:
Conal Elliott ha scritto:
Manlio,
We live in the age of participation -- of co-education. Don't worry about text-books. Contribute to some wiki pages & blogs today that share these smart techniques with others.
When I started learning Haskell (on my own initiative), what I did was:
[steps 1) - 9), mostly internet tutorials ]
I think you'd have had a much easier time by starting with a proper book right away, like Richard Bird's "Introduction to Functional Programming in Haskell", accompanied by Real World Haskell.
Unfortunately, one year ago Real World Haskell was not yet available. And note that I have no problems with basic functional programming concepts. My problems are specific to Haskell.
You see, the reason that books cost money is (should be) high quality content. :)
Manlio

Manlio Perillo wrote:
Heinrich Apfelmus ha scritto:
I think you'd have had a much easier time by starting with a proper book right away, like Richard Bird's "Introduction to Functional Programming in Haskell", accompanied by Real World Haskell.
Unfortunately, one year ago Real World Haskell was not yet available. And note that I have no problems with basic functional programming concepts. My problems are specific to Haskell.
Despite the title, Bird's book is quite specific to Haskell, in particular concerning the philosophy of composing solutions from building blocks as opposed to primitive recursion. I'd say that every serious Haskell programmer should have it on his bookshelf (even if only for show ;) ). Regards, apfelmus -- http://apfelmus.nfshost.com

After reading the chapter about parsers in Bird's book, I tried to implement a simple parser myself, and this was a great experience, a real eye-opener on how declarative and composable Haskell can be. Haskell is... well, magic :-) It gave me the same kind of joy I had when I made my first moving sprite on the Commodore 64 in 1985.

Dan Piponi wrote:
Miguel Mitrofanov wrote:
takeList = evalState . mapM (State . splitAt)
However, ironically, I stopped using them for pretty much the same reason that Manlio is saying.
Are you saying there's a problem with this implementation? It's the only one I could just read immediately. The trick is to see that evalState and State are just noise for the type inferencer so we just need to think about mapM splitAt. This turns a sequence of integers into a sequence of splitAts, each one chewing on the leftovers of the previous one. *Way* easier than both the zipWith one-liner and the explicit version. It says exactly what it means, almost in English.
I couldn't agree more. In other words, splitAt is really to be thought of as a function that lives in the state monad. Regards, apfelmus -- http://apfelmus.nfshost.com
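(To spell that thought out: splitAt n has type [a] -> ([a], [a]), which is exactly the shape s -> (a, s) of a state transition with s = [a]. A minimal sketch, using the state wrapper from later mtl versions in place of the old State constructor:

import Control.Monad.State

-- taking a chunk is a state action whose state is "the rest of the list"
takeChunk :: Int -> State [a] [a]
takeChunk n = state (splitAt n)

takeList' :: [Int] -> [a] -> [[a]]
takeList' ns = evalState (mapM takeChunk ns)

The names takeChunk and takeList' are made up here for illustration.)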

Dan Piponi wrote:
Miguel Mitrofanov wrote:
takeList = evalState . mapM (State . splitAt)
However, ironically, I stopped using them for pretty much the same reason that Manlio is saying.
Are you saying there's a problem with this implementation? It's the only one I could just read immediately. The trick is to see that evalState and State are just noise for the type inferencer so we just need to think about mapM splitAt. This turns a sequence of integers into a sequence of splitAts, each one chewing on the leftovers of the previous one. *Way* easier than both the zipWith one-liner and the explicit version. It says exactly what it means, almost in English.
But it only works out nicely because the ordering of the components of the pair returned by splitAt matches the ordering that the state monad expects (and I can never remember which way around they are in Control.Monad.State). Try doing it with mapAccumL, which is arguably the right abstraction, but has the components the other way around. Cheers, Simon
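(A sketch of the mapAccumL version, with made-up names; the leftovers play the role of the accumulator, and since mapAccumL wants the accumulator first while splitAt returns (chunk, rest), a swap is needed:

import Data.List (mapAccumL)

swap :: (a, b) -> (b, a)
swap (a, b) = (b, a)

takeListA :: [Int] -> [a] -> [[a]]
takeListA ns xs = snd (mapAccumL (\rest n -> swap (splitAt n rest)) xs ns)

Treat this as illustrative of the point about component order, not as a recommendation.)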

On Wed, 2009-03-25 at 15:09 +0000, Simon Marlow wrote:
the ordering that the state monad expects (and I can never remember which way around they are in Control.Monad.State).
Really? I found it obvious once I figured out how simple it made (>>=). With the order from Control.Monad.State (with constructors ignored):

a >>= f = \ s -> case s a of (x, s') -> f x s'

Reversing the order of the components of the result gives you

a >>= f = \ s -> case s a of (s', x) -> f x s'

which just looks weird.
Try doing it with mapAccumL, which is arguably the right abstraction, but has the components the other way around.
Define

swap (a, b) = (b, a)

You'll never worry about the order of components of a pair again. This function is as indispensable as flip. jcc

On Wed, 2009-03-25 at 03:01 +0000, Robin Green wrote:
On Wed, 25 Mar 2009 08:25:40 -0700 Jonathan Cast wrote:
Define
swap (a, b) = (b, a)
By the way, if you want to be "too smart", there's a generalised version of swap in Control.Category.Braided in the category-extras package.
Thanks, I'll check it out.
That might be a bit overkill though.
What is this word `overkill' you use? jcc

Jonathan Cast wrote:
On Wed, 2009-03-25 at 15:09 +0000, Simon Marlow wrote:
the ordering that the state monad expects (and I can never remember which way around they are in Control.Monad.State).
Really? I found it obvious once I figured out how simple it made (>>=). With the order from Control.Monad.State (with constructors ignored):
a >>= f = \ s -> case s a of (x, s') -> f x s'
Reversing the order of the components of the result gives you
a >>= f = \ s -> case s a of (s', x) -> f x s'
which just looks weird.
It might look weird to you, but that's the way that GHC's IO and ST monads do it. It looks perfectly natural to me! (and you have the a and s the wrong way around in 'case s a', BTW).
Try doing it with mapAccumL, which is arguably the right abstraction, but has the components the other way around.
Define
swap (a, b) = (b, a)
ew, that's far too crude. I think you mean swap = uncurry $ flip (,) Cheers, Simon

On Wed, 2009-03-25 at 15:32 +0000, Simon Marlow wrote:
Jonathan Cast wrote:
On Wed, 2009-03-25 at 15:09 +0000, Simon Marlow wrote:
the ordering that the state monad expects (and I can never remember which way around they are in Control.Monad.State).
Really? I found it obvious once I figured out how simple it made (>>=). With the order from Control.Monad.State (with constructors ignored):
a >>= f = \ s -> case s a of (x, s') -> f x s'
Reversing the order of the components of the result gives you
a >>= f = \ s -> case s a of (s', x) -> f x s'
which just looks weird.
It might look weird to you, but that's the way that GHC's IO and ST monads do it. It looks perfectly natural to me!
Right. Consider this an argument for fixing IO/ST(/STM?) to conform to the self-evidently correct ordering of Control.Monad.State :)
(and you have the a and s the wrong way around in 'case s a', BTW).
Um, yes. /Mea culpa/.
Try doing it with mapAccumL, which is arguably the right abstraction, but has the components the other way around.
Define
swap (a, b) = (b, a)
ew, that's far too crude. I think you mean
swap = uncurry $ flip (,)
Ah, yes. jcc

On Wed, Mar 25, 2009 at 11:32 AM, Simon Marlow
Jonathan Cast wrote:
Define
swap (a, b) = (b, a)
ew, that's far too crude. I think you mean
swap = uncurry $ flip (,)
On the theme of using monads where you might not expect,
swap = liftA2 (,) snd fst
--
Dave Menendez

or via Arrow:
swap = snd &&& fst
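(Putting the three one-liners from this subthread side by side; the names are made up, and all three should behave identically:

import Control.Applicative (liftA2)
import Control.Arrow ((&&&))

swapPat, swapApp, swapArr :: (a, b) -> (b, a)
swapPat (a, b) = (b, a)          -- plain pattern matching
swapApp = liftA2 (,) snd fst     -- the ((->) r) applicative: feed the pair to both snd and fst
swapArr = snd &&& fst            -- fanout in the function arrow)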

On Wed, Mar 25, 2009 at 8:25 AM, Jonathan Cast
On Wed, 2009-03-25 at 15:09 +0000, Simon Marlow wrote:
the ordering that the state monad expects (and I can never remember which way around they are in Control.Monad.State).
Really? I found it obvious once I figured out how simple it made (>>=). With the order from Control.Monad.State (with constructors ignored):
a >>= f = \ s -> case s a of (x, s') -> f x s'
Reversing the order of the components of the result gives you
a >>= f = \ s -> case s a of (s', x) -> f x s'
which just looks weird.
However, if you are used to thinking in terms of type composition, s -> (s, a) makes more sense, because it is effectively (s ->) . (s,) . Identity, whose "functor-ness" is automatic via composition of functors:

newtype Identity a = Identity a

inIdentity f (Identity a) = Identity (f a)

instance Functor Identity where
    fmap f = inIdentity f

instance Functor ((,) a) where
    fmap f (a, x) = (a, f x)

instance Functor ((->) a) where
    fmap f k a = f (k a)

newtype O f g x = O (f (g x))

inO f (O a) = O (f a)

instance (Functor f, Functor g) => Functor (O f g) where
    fmap f = inO (fmap (fmap f))
    -- or fmap = inO . fmap . fmap

-- not valid Haskell, but if there were sections at the type level it would be:
type State s = (s ->) `O` (s,) `O` Identity

-- ryan

On Wed, Mar 25, 2009 at 4:09 PM, Simon Marlow
But it only works out nicely because the ordering of the components of the pair returned by splitAt matches the ordering that the state monad expects (and I can never remember which way around they are in Control.Monad.State).
Now you mention this, I often had to write a little function

swap :: (a,b) -> (b,a)

It seems many other authors have done the same in their own modules. Maybe this should be part of the Prelude?
Try doing it with mapAccumL, which is arguably the right abstraction, but has the components the other way around.
Cheers, Simon

Are you saying there's a problem with this implementation? It's the only one I could just read immediately.
Yes, there is actually a problem with this implementation.
import Data.List
import Control.Monad.State
import Debug.Trace.Helpers
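-- (Debug.Trace.Helpers comes from a separate package and appears unused in the code below.)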
partitions [] xs = []
partitions (n:parts) xs =
let (beg,end) = splitAt n xs
in beg : ( case end of
[] -> []
xs -> partitions parts xs)
partitionsSimpleStupidGood = partitions
partitionsTooFrickinClever = evalState . mapM (State . splitAt)
testP pf = mapM_ putStrLn [
show . pf [3,7..] $ [1..10]
, show . pf [3,7,11,15] $ [1..]
, show . head . last $ pf [3,3..] [1..10^6]
]
*Main> testP partitionsSimpleStupidGood
[[1,2,3],[4,5,6,7,8,9,10]]
[[1,2,3],[4,5,6,7,8,9,10],[11,12,13,14,15,16,17,18,19,20,21],[22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]]
1000000
Now try testP partitionsTooFrickinClever
Now, I am sure there is a fix for whatever is ailing the State monad
version, and we would all learn a lesson from it about strictness,
laziness, and the State monad.
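(One such fix, as a sketch rather than a definitive answer: sidestep State entirely and use unfoldr from Data.List, which stops as soon as the input is exhausted and so also copes with an infinite list of partition sizes. The name partitionsU is made up here:

partitionsU :: [Int] -> [a] -> [[a]]
partitionsU ns xs = unfoldr step (ns, xs)
  where
    step (n : ns', ys@(_ : _)) =
        let (chunk, rest) = splitAt n ys in Just (chunk, (ns', rest))
    step _ = Nothing

This should give the same answers as partitionsSimpleStupidGood on the testP cases above.)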
However, there is something to be said for code that just looks like a
duck and quacks like a duck. It's less likely to surprise you.
So... I insist... Easy for a beginner to read == better!

However, there is something to be said for code that just looks like a duck and quacks like a duck. It's less likely to surprise you.
So... I insist... Easy for a beginner to read == better!
All you have said is that someone building a skyscraper will need scaffolding, blueprints, and a good building inspector. The intended inhabitants of that skyscraper will not want to stare out at scaffolding for the rest of their lives.

Put the simple version in a QuickCheck predicate. That is the ideal place for it. All the better if through this process we all learn a lesson about strictness, laziness, and the State monad.

Dan

On Wed, 2009-03-25 at 12:48 -0700, Dan Weston wrote:
However, there is something to be said for code that just looks like a duck and quacks like a duck. It's less likely to surprise you.
So... I insist... Easy for a beginner to read == better!
All you have said is that someone building a skyscraper will need scaffolding, blueprints, and a good building inspector. The intended inhabitants of that skyscraper will not want to stare out at scaffolding for the rest of their lives.
+1 jcc

Since this thread is ostensibly about haskell style, it should also be
about haskell style *today*.
As I think Yitz noted earlier, this is a moving target.
Adoption of haskell by the masses -- moving.
Skill of haskell hordes -- moving.
Abstractions available as part of "idiomatic haskell", and correctness
of these abstractions, as the state monad for partitions cockup shows
-- also moving.

Oh, and incidentally, if you change to Control.Monad.State.Strict
*Main> testP partitionsTooFrickinClever
*** Exception: stack overflow
Don't get me wrong -- I have learned a lot from this thread, and I
think it would be really cool if there was a way to do this that is
clever, that is *right*.
But since the original point was about style, I think this underscores
the point that good style should be newbie friendly *if possible*.
Especially since being a newbie in haskell isn't like in other languages -- it might mean you have been using it for years as a hobby, but just don't have comfort in certain monads and idioms.

On Wed, Mar 25, 2009 at 12:44 PM, Thomas Hartman
Are you saying there's a problem with this implementation? It's the only one I could just read immediately.
Yes, there is actually a problem with this implementation.
However, there is something to be said for code that just looks like a duck and quacks like a duck. It's less likely to surprise you.
Well, the problem here isn't that the code does something surprising. Its author was making assumptions about the kind of input it's going to get. I think that's an orthogonal issue.
So... I insist... Easy for a beginner to read == better!
Not at all. Beginner list processing code can and often does go awry when presented with infinite lists. The moral here has nothing to do with readability by beginners. It's: if the function you're defining could be extended naturally to infinite lists, and it would be useful to do so, then make it do so. -- Dan

Beginner list processing code can and often does go awry when presented with infinite lists.
I didn't mean code that a beginner would write, I mean code that would be easy for a beginner to read -- that is, explicit pattern matching, explicit recursion, no gratuitous use of the state monad.
I don't think I necessarily would have written my favored version when
learning haskell, probably something a lot uglier.
What I like about the pattern matching is the totality -- you see all
the possible inputs, and you see what happens.
With the state version, there's a lot of behind-the-scenes magic, and
as we've seen, things can go wrong.
Also, the issue isn't infinite lists, but lists that are longer than the sum of the partitions provided. The state monad partition version goes just as badly awry if the test is restructured as
testP pf = mapM_ putStrLn [
show . pf ( take 1000 [3,7..] ) $ [1..10]
, show . pf [3,7,11,15] $ ( take (10^6) [1..])
, show . head . last $ pf (take 1000 $ [3,3..]) [1..10^6]
]
(no infinite lists, just long lists)

2009/3/26 Thomas Hartman
Beginner list processing code can and often does go awry when presented with infinite lists.
I didn't mean code that a beginner would write, I mean code that would be easy to understand for a beginner to read
For that, in this particular example, a type signature would make the function more than readable.
-- that is, explicit pattern matching, explicit recursion, no gratuitous use of state monad.
[…]
What I like about the pattern matching is the totality -- you see all the possible inputs, and you see what happens.
What I read here is "make the operational semantics more explicit". Is that really what you mean? Personally, I see explicit operational semantics as a last resort, not as a facilitator. Most of the time, I care about what *is*, not what happens.

Pattern matching (compared to function composition) is not easier for beginners. It is easier for seasoned imperative programmers, because otherwise they have to reformat their brains. In my opinion.

Now imagine you have to write your function as a solution to a math assignment. Imagine that fold, map, zip, and the like are usual functions (usual like sine, exp, ln…). Of course, you have to prove your solution correct. Do you prefer a mere composition of four usual functions (possibly in pointed notation), or do you prefer a recursive definition? Back in high school, composition of functions (or nested formulas) was easy. Recursive definitions were the advanced stuff. If you want the utmost confidence in the correctness of your function, I think you want to reason about the function composition.
With the state version, there's a lot of behind-the-scenes magic, and as we've seen, things can go wrong.
Well, about that, I cannot talk (I'm still a beginner). Loup

On Wed, 25 Mar 2009, Thomas Hartman wrote:
With the state version, there's a lot of behind-the-scenes magic, and as we've seen, things can go wrong.
Also, the issue isn't infinite lists, but lists that are longer than the sum of the partitions provided. The state monad partition version goes equally as badly awry if the test is restructured as
testP pf = mapM_ putStrLn [ show . pf ( take 1000 [3,7..] ) $ [1..10] , show . pf [3,7,11,15] $ ( take (10^6) [1..]) , show . head . last $ pf (take 1000 $ [3,3..]) [1..10^6] ]
This is interesting. It seems to be the familiar issue that sequence does not play as nicely with the GC as one might imagine: http://www.reddit.com/r/haskell/comments/7itbi/mapm_mapm_and_monadic_stateme... I suspect this may be a general problem that we will keep encountering when using higher-order functions, at least with this compiler. I wonder if JHC or some other compiler might work better with these examples? -- Russell O'Connor http://r6.ca/ ``All talk about `theft,''' the general counsel of the American Graphophone Company wrote, ``is the merest claptrap, for there exists no property in ideas musical, literary or artistic, except as defined by statute.''
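(Part of what's going on is semantic rather than GC-related, for what it's worth. A sketch, assuming the usual definition of sequence, on which mapM is built:

sequence (m : ms) = do x <- m
                       xs <- sequence ms
                       return (x : xs)
sequence []       = return []

mapM (State . splitAt) produces one chunk per element of the list of sizes, and once the input is exhausted, splitAt n [] keeps returning ([], []); with an infinite list of sizes the result is therefore an infinite list of (eventually empty) chunks, which show or last can never finish consuming. The strict State monad instead forces the whole chain of binds up front, hence the stack overflow reported earlier.)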

I wonder if JHC or some other compiler might work better with these examples?
Are you saying that different compilers might give different answers?
Yikes!
Too clever indeed!

On Thu, 2009-03-26 at 12:29 -0700, Thomas Hartman wrote:
I wonder if JHC or some other compiler might work better with these examples?
Are you saying that different compilers might give different answers?
Yikes!
Too clever indeed!
No, they might produce code with different performance characteristics. Which is very much what you want; there is no way to compile Haskell such that reasonable-looking code is a) fast and b) predictably performant. The idea of Haskell is to abstract away from the predictable performance of the code by a) using a good compiler, and b) putting absolute, unquestioning faith in your profiler. jcc

Well, that's reassuring.
The reason I asked is that the testp function didn't just show poor
performance. The state monad implementation actually gave a different
answer -- nonterminating, where the pattern matching solution
terminated.

Thomas Hartman
Well, that's reassuring.
The reason I asked is that the testp function didn't just show poor performance. The state monad implementation actually gave a different answer -- nonterminating, where the pattern matching solution terminated.
Indeed, DIFFERENT Haskell programs can give different answers, even on the SAME Haskell implementation. That should not be surprising at all. -- Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig 100 Days to close Guantanamo and end torture http://100dayscampaign.org/ http://www.avaaz.org/en/end_the_war_on_terror/

On Tue, Mar 24, 2009 at 8:29 PM, Miguel Mitrofanov
takeList ns xs = evalState (mapM (State . splitAt) ns) xs
or even
takeList = evalState . mapM (State . splitAt)
would be much clearer than both versions.
Brilliant. As a newbie, I knew all these functions; I have used them all. When I saw both initial implementations, I tried to write what you did, but failed: I didn't see the pattern, failed to pick the correct functions in my head, failed to put the puzzle together.

I guess that is the real power of Haskell. In imperative languages, the more you practice, the better you get at avoiding the imperative pitfalls. In functional languages, more practice really results in more and more productivity, because you recognize the patterns; the design patterns are not just thoughts but real functions you can reuse.

Code like that is why I love Haskell: while I haven't written a Haskell program in years, it is still a joy to read (much more so than the pretty good zipWith version). In reference to later comments: if you don't know monads, you don't know Haskell; that goes double for higher-order functions. So really the only place where this code may be inappropriate is in a beginner tutorial (unless you are trying to show why they need to learn more!). C

Miguel Mitrofanov wrote:
takeList ns xs = evalState (mapM (State . splitAt) ns) xs
or even
takeList = evalState . mapM (State . splitAt)

On Tue, Mar 24, 2009 at 10:29:55PM +0300, Miguel Mitrofanov wrote:
Maybe it's just me, but I think that
takeList ns xs = evalState (mapM (State . splitAt) ns) xs
or even
takeList = evalState . mapM (State . splitAt)
would be much clearer than both versions.
I love it! It wouldn't occur to me to utilize State like this (too used to thinking of it as a black box rather than what's inside of it). quite a lot of useful information can be expressed in a line of haskell. sort of like a zen koan. :) John -- John Meacham - ⑆repetae.net⑆john⑈

Jake McArthur ha scritto:
Manlio Perillo wrote: | This is right. | The problem is that often (IMHO) a function definition can be rewritten | so that it is much more readable. | | As an example, with the takeList function I posted.
I looked at it, found nothing wrong with the original, and absolutely hated your "fixed" version.
With the original version, you have to "follow" 3 separate operations:

Prelude> let xs = [1, 2, 3, 4] :: [Int]
Prelude> let ns = [3, 1] :: [Int]
Prelude> let _1 = scanl (flip drop) xs ns
Prelude> let _2 = init _1
Prelude> let _3 = zipWith take ns _2

With my function, instead, you only have to "follow" 1 operation:

Prelude> let (head, tail) = splitAt n xs
[...]
Manlio

Manlio Perillo wrote:
| With the original version, you have to "follow" 3 separate operations:
|
| Prelude> let xs = [1, 2, 3, 4] :: [Int]
| Prelude> let ns = [3, 1] :: [Int]
| Prelude> let _1 = scanl (flip drop) xs ns
| Prelude> let _2 = init _1
| Prelude> let _3 = zipWith take ns _2
|
| With my function, instead, you only have to "follow" 1 operation:
|
| Prelude> let (head, tail) = splitAt n xs

I think you are way oversimplifying your own code.

takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs

In order to understand this, I have to look at three different cases, an uncons, a splitAt, a cons, *and* a recursive call. This is *seven* different things I have to absorb.

- Jake

Jake McArthur ha scritto:
[...]
| With my function, instead, you only have to "follow" 1 operation:
|
| Prelude> let (head, tail) = splitAt n xs
I think you are way oversimplifying your own code.
takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
    where (head, tail) = splitAt n xs
In order to understand this, I have to look at three different cases, an uncons, a splitAt, a cons, *and* a recursive call. This is *seven* different things I have to absorb.
These cases are, IMHO, more "natural". We have a set of equations, pattern matching, and recursion. These are among the basic building blocks of Haskell. The only "foreign" building block is the splitAt function.

But this may really be a question of personal taste or experience. What is more "natural"?

1) pattern matching
2) recursion

or

1) function composition
2) high level functions

?
[...]
Manlio

"Recursion is the goto of functional programming". Also, "Do not confuse
what is natural with what is habitual." - Conal

* Manlio Perillo wrote:
But this may be really a question of personal taste or experience. What is more "natural"?
1) pattern matching 2) recursion or 1) function composition 2) high level functions
Composition of library functions is usually much more readable than hand-written recursion, simply because the typical idiom is highlighted, instead of your having to check for yourself that there is no strange matching against the obvious case. Composition of library functions is also usually much more efficient, and preferable to hand-written recursion, simply because of the fine-tuned fusion capabilities.
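(To make "fusion" concrete: GHC's base library ships rewrite rules of roughly this shape --

{-# RULES "map/map" forall f g xs. map f (map g xs) = map (f . g) xs #-}

-- so a pipeline like map f . map g never builds the intermediate list, whereas a hand-written recursive loop gets no such automatic help. The exact rule set is a compiler and library detail, so treat this as illustrative.)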

On Mar 24, 2009, at 1:51 PM, Manlio Perillo wrote:
But this may be really a question of personal taste or experience. What is more "natural"?
1) pattern matching 2) recursion or 1) function composition 2) high level functions
I think, actually, that one of the fundamental intuitions of (modern) Haskell programming is that recursion should *rarely* be explicit, because the majority of places you'd use recursion all fall into a few different patterns (hence the proliferation of maps and folds). Once you get those recursive operations firmly embedded in your mind, combining them becomes much simpler, and you can reason about more complex transformations much more easily. -johnnnnnn
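(The point in miniature: the explicit recursion

sum' []       = 0
sum' (x : xs) = x + sum' xs

is precisely the pattern that foldr captures once and for all,

sum' = foldr (+) 0

and the same move turns most hand-rolled list recursions into some combination of map, filter, foldr, and friends. The prime on sum' is only there to avoid clashing with the Prelude's sum.)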

Manlio Perillo wrote:
| But this may be really a question of personal taste or experience.
| What is more "natural"?
|
| 1) pattern matching
| 2) recursion
| or
| 1) function composition
| 2) high level functions

Definitely the latter two. They are easier to comprehend (assuming each of the smaller abstractions is already internalized) and more efficient. Arguably, this building-block approach is the whole *point* of Haskell.

- Jake

May I suggest that the most important thing missing from all these versions of the function is a comment? Most of the time I shouldn't *care* how the function works. (And that, for me, is one of the key benefits of Haskell.)

Richard O'Keefe wrote:
May I suggest that the most important thing missing from all these versions of the function is a comment? Most of the time I shouldn't *care* how the function works. (And that, for me, is one of the key benefits of Haskell.)
Although in this case, a proper name and type signature is probably enough. :) - Jake

Jake McArthur wrote:
Richard O'Keefe wrote:
May I suggest that the most important thing missing from all these versions of the function is a comment? Most of the time I shouldn't *care* how the function works. (And that, for me, is one of the key benefits of Haskell.)
Although in this case, a proper name and type signature is probably enough. :)
I trust type signatures much more than comments because I know the compiler actually verifies the type signature. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/

Manlio Perillo wrote:
But this may be really a question of personal taste or experience. What is more "natural"?
1) pattern matching 2) recursion or 1) function composition 2) high level functions
Which is more "natural":

* C-style for-loops (aka assembly while-loops), or
* any modern language's foreach loops (aka iterators)?

Following directly from the Rule of Least Power, if you can get away with foreach then that's what you should use. Why? Because the less power the construct has, the fewer corner cases and generalizations a reader of the code needs to consider. Now, just because iterators exist does not mean that one should never use the more general tool. If you're fighting to break out of your chosen straitjacket, then chances are it's the wrong one to use in the first place; it'd be clearer to use more power and have less fighting.

Both of these conclusions seem quite natural to me, even from before learning Haskell. It seems, therefore, that "naturality" is not the proper metric to discuss. It's oft overlooked, but the fact is that expressivity comes not from more formal power, but from _less_.

* A human's (or any vertebrate's) range of motion is severely crippled when compared to that of an amoeba; and yet it is those limitations which provide the structure necessary to perform greater tasks such as grasping, lifting, jumping, etc.

* Natural language has a limited range of words and syntactic constructs, but gives large-enough building blocks to enable unconstrained communication; whereas a language with a unique word for every utterance (arguably simpler) is impossible to learn.

* Regular expressions (and other classes of automata) have severe limitations on formal power, and yet these constraints enable poly-time algorithms for intersection, union, etc.

* Haskell's type system (sans extensions) is not Turing complete, yet this enables us to infer types rather than requiring annotations or proofs.

The contemporary state of scientific research is focused heavily on the idea of reductionism (the idea of being able to reduce all biology to chemistry, all chemistry to physics, all computer science to mathematics, etc.). But as any systems theorist will tell you, this approach is misguided if the goal is a Theory of Everything. As per the famous book: no matter how much you learn about quarks, that tells you nothing about jaguars. At every step of reduction, there is an increase in formal power and a concomitant loss of information. Even perfect knowledge of quarks and perfect simulation software isn't enough, because you've lost the _abstraction_ that is "jaguar". You can simulate it, emulate it, model it, but you've lost the high-level perspective that says jaguars are different and more interesting than an arbitrary simulation of a collection of quarks. (And it's doubtful we'll ever have the omniscience to get even that far.)

While primitive recursion and case matching are _fundamental_ (that is, at the bottom of a reductionist tower), that does not entail that they are _central_ (that is, a ubiquitous pattern at every resolution of reduction). Church encoding, SKI combinators, the Curry-Howard isomorphism, and the like are also fundamental topics to teach and understand; but they're rarely ones that should be central to a program or library.

Now, many Haskellers (like good scientists) bristle at this fundamental nature of things. And in response we're constantly coming up with new generalizations which have little-enough structure to be descriptive while having big-enough structure to be interesting. If there's too much structure, it's boilerplate and therefore unusable; if there's too little, it has no generality and is therefore unhelpful. But somewhere between those extremes someone has to make a judgment call and decide whether some particular pattern measures up to the metric of being helpful and usable. If it does, then everyone (whose domain it covers) should learn it and use it, because it simplifies programming from a high level of design.

Giants. Shoulders. Etc.

-- Live well, ~wren

wren ng thornton ha scritto:
Manlio Perillo wrote: [...] Following directly from the Rule of Least Power, if you can get away with foreach then that's what you should use. Why? Because the less power the construct has, the fewer corner cases and generalizations a reader of the code needs to consider. Now, just because iterators exist does not mean that one should never use the more general tool. If you're fighting to break out of your chosen straitjacket, then chances are it's the wrong one to use in the first place; it'd be clearer to use more power and have less fighting.
[...]
Note that, as I have already written, I agree with you. And this is one of the reasons I like Haskell.

The main problem, here, is that:

- recursion and pattern matching are explained in every tutorial about functional programming and Haskell. This is the reason why I find them more "natural".

- high-level, Haskell-specific abstractions are *not* explained in normal tutorials or books. The libraries where these concepts are implemented are not well documented. Most of the "documentation" is in research papers, and a "normal" programmer doesn't want to read these papers.

Only in the recent "Real World Haskell" have all these high-level abstractions been properly documented.

Manlio

Manlio Perillo
The main problem, here, is that: - recursion and pattern matching are explained in every tutorial about functional programming and Haskell.
This is the reason why I find them more "natural".
Well, you're going to have a hard time writing a BASIC tutorial without mentioning GOTO. While surely it has to be there, for the sake of completeness of fundamentals, I completely agree that...
- high-level, Haskell-specific abstractions are *not* explained in normal tutorials or books. The libraries where these concepts are implemented are not well documented. Most of the "documentation" is in research papers, and a "normal" programmer doesn't want to read these papers.
Only in the recent "Real World Haskell" have all these high-level abstractions been properly documented.
...GOTO alone doesn't teach you how to write a loop; trust me, I was stuck there for a good while, eons ago.

The Prelude, as well as commonly used functions that should be in there, utterly lacks accompanying documentation. There should be no function without a usage example, as in foldl/sum/product, and no usage without an explanation of why foldl is chosen over foldl' and foldr. Think of a Preludopedia, accompanying the Typeclassopedia: documentation where you don't have to snatch understanding out of assem^H^H^H^H^H^H code written using primitive recursion, to make it a bit easier to see the wood despite all those trees.

PS: Shouldn't zipWith be defined in terms of zip, uncurry and map, instead of

zipWith f (a:as) (b:bs) = f a b : zipWith f as bs
zipWith _ _ _ = []

?

-- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.
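(For the record, the definition the PS asks about would be

zipWith' :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith' f as bs = map (uncurry f) (zip as bs)

which is semantically equivalent to the recursive version, since zip already stops at the shorter list; the prime is only there to avoid shadowing the Prelude's zipWith.)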

Manlio Perillo wrote:
The main problem, here, is that: - recursion and pattern matching are explained in every tutorial about functional programming and Haskell.
This is the reason why I find them more "natural".
- high-level, Haskell-specific abstractions are *not* explained in normal tutorials or books. The libraries where these concepts are implemented are not well documented.
This latter point is indeed the problem. But it may be worth rephrasing a bit. The big problem with the Haskell tutorials I've seen is that they aim to teach orthodoxy rather than orthopraxy. Or to put it less religiously, they teach the nuts and bolts of how the language is _constructed_, instead of teaching the idioms and ideas of how the language is _used_. It's like learning C from Kernighan & Ritchie: a fine book, don't get me wrong, but it teaches the words of the language instead of the community of the speakers. If you've memorized K&R you're still a novice C programmer.

Given our history, this approach made sense. Haskell's been around for a long time, but most of that history has been in academia, where it's assumed that people will know what to do if only they knew how to do it. More recently Haskell has been moving from academic toy to industrial tool, and that shift necessitates a similar shift from teaching the language as a collection of interesting features to teaching the language as a collection of interesting libraries. History hinders this transition: both the internal history of those who know Haskell (and thus can teach it, but only as they know it), and also the external history of the mainstream, which understands imperativistic thinking but not functional declarative thinking (and thus we must teach the features in order to give the understanding necessary for teaching the libraries).

Recently Galois has been focusing on developing the infrastructure necessary for having easy access to libraries. To this day CPAN is the reason why Perl is one of the best languages out there. Other languages have tried emulating that repository, but the only one I've seen that has the community necessary to make it fly has been Hackage; and the development of Cabal/Hackage is very recent and still very bleeding edge (with the scars to prove it). With Galois' support, I think most Haskellers are aware of Hackage now; however, it still hasn't made it into the tutorials in the same way that CPAN is integral to the teaching of Perl.

Real World Haskell is another groundbreaking, but recent, development. It's a great book in itself and groundbreaking for embracing open development in the publishing industry, but it's also the first of this shift from teaching Haskell = Patterns + Recursion + Laziness + Class to teaching modern Haskell in a more holistic, community-oriented way. It's worth reiterating that RWH was not the cause of the shift in the community, but is rather a result of the ongoing shift. The Typeclassopedia is another drop in this river: excellent, recent.

So I agree that most of the tutorials are lagging behind the modern form of Haskell, but I think this is due in part to a very recent change in the growth and direction of the community. As always with avoiding success at all costs, whether we end up the better for it in the end will depend on holding onto enough newcomers who have only ever known this modern Haskell, because they are the ones who will have the proper perspective to write tutorials and teach the language as if it's always been this way. You must be the change you wish to see in the world.
Most of the "documentation" is in research papers, and a "normal" programmer doesn't want to read these papers.
Yes, and no. There is quite a bit of documentation in research papers, and mainstream programmers don't read research. However, this is a big part of what makes the Haskell community what it is. There are plenty of non-academics here, but they have the willingness to read these papers (even if it's out of the ordinary) and the desire to learn radical new things (because they're out of the ordinary). A good deal of the papers these days are eminently readable by the laity, more so than other research papers in computer science or programming languages, IMO.

This is one of the big things that separates Haskell from the mainstream, but it's not something I see going away any time soon. Given the recent surge of interest from the mainstream, I think it's finally time that we take a more proactive approach in trying to teach this aspect as one of the tenets of our community. Presently there's still a "take it or leave it" tenor to these discussions, and that needs to be dispelled before it poisons the relations between the old guard and the young turks.

New tutorials need to find some way of introducing non-academics to the idea that the academy is not an ivory tower, and that part of what makes Haskell cool is the fact that it takes these theoretical ideas and applies them to the real world. It's challenging to fight anti-intellectualism anywhere, but it's become apparent to me that this is something that we need to develop an organized response to.

-- Live well, ~wren

2009/3/25 wren ng thornton
Most of the "documentation" is in research papers, and a "normal" programmer doesn't want to read these papers.
Yes, and no. There is quite a bit of documentation in research papers, and mainstream programmers don't read research. However, this is a big part of what makes the Haskell community what it is. There are plenty of non-academics here, but they have the willingness to read these papers (even if it's out of the ordinary) and the desire to learn radical new things (because they're out of the ordinary).
Yes. BUT ... when I look up the Haddock-generated documentation for a function, I DON'T appreciate it if that is in the form of a hyperlink to a research paper. And that occurs in several of the libraries shipped with GHC for instance. A reference to a research paper is fine to show where the ideas came from, but that is not where the library documentation should be.

Colin Adams wrote:
2009/3/25 wren ng thornton
: Most of the "documentation" is in research papers, and a "normal" programmer doesn't want to read these papers.
Yes, and no. There is quite a bit of documentation in research papers, and mainstream programmers don't read research. However, this is a big part of what makes the Haskell community what it is. There are plenty of non-academics here, but they have the willingness to read these papers (even if it's out of the ordinary) and the desire to learn radical new things (because they're out of the ordinary).
Yes. BUT ...
when I look up the Haddock-generated documentation for a function, I DON'T appreciate it if that is in the form of a hyperlink to a research paper. And that occurs in several of the libraries shipped with GHC for instance.
A reference to a research paper is fine to show where the ideas came from, but that is not where the library documentation should be.
Yeah, that's bad. 'Documentation' like that should be corrected with Extreme Prejudice. -- Live well, ~wren

wren ng thornton
Colin Adams wrote:
2009/3/25 wren ng thornton
: when I look up the Haddock-generated documentation for a function, I DON'T appreciate it if that is in the form of a hyperlink to a research paper. And that occurs in several of the libraries shipped with GHC for instance. A reference to a research paper is fine to show where the ideas came from, but that is not where the library documentation should be.
Yeah, that's bad. 'Documentation' like that should be corrected with Extreme Prejudice.
The main problem with research papers as documentation is the papers usually being outdated wrt. the current library version: Literate Haskell is utterly underused. -- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.

2009/3/27 Achim Schneider
wren ng thornton
wrote: Colin Adams wrote:
2009/3/25 wren ng thornton
: when I look up the Haddock-generated documentation for a function, I DON'T appreciate it if that is in the form of a hyperlink to a research paper. And that occurs in several of the libraries shipped with GHC for instance. A reference to a research paper is fine to show where the ideas came from, but that is not where the library documentation should be.
Yeah, that's bad. 'Documentation' like that should be corrected with Extreme Prejudice.
I think I agree with that (I say I think, as I'm not sure what Extreme Prejudice means).
The main problem with research papers as documentation is the papers usually being outdated wrt. the current library version: Literate Haskell is utterly underused.
That's surely a problem, and a significant one. But what irks me is the time taken to find one small piece of information (how to use a single function). I would guess on average about the time to read 1/3 of the paper (since the back matter needn't be examined).

Colin Adams
2009/3/27 Achim Schneider
: wren ng thornton
wrote: Colin Adams wrote:
A reference to a research paper is fine to show where the ideas came from, but that is not where the library documentation should be.
Yeah, that's bad. 'Documentation' like that should be corrected with Extreme Prejudice.
I think I agree with that (I say I think, as I'm not sure what Extreme Prejuidice means).
Shoot err... rewrite before asking. If in doubt, annihilate. Considering all options, just do it. Pity is a thing for judges, not hackers. Something along those lines.
The main problem with research papers as documentation is the papers usually being outdated wrt. the current library version: Literate Haskell is utterly underused.
That's surely a problem, and a significant one.
But what irks me is the time taken to find one small piece of information (how to use a single function). I would guess on average about the time to read 1/3 of the paper (since the back matter needn't be examined).
Hm. Yes. OTOH, I very much appreciate background information; it usually contains very insightful information about the overall idea and behaviour of a library. I'm by no means a domain expert for any and every library I want to use.

In school, we were required to write both user[1] as well as developer[2] documentation alongside commenting our code. I tended to loathe it, but it's very, very sensible in retrospect.

There was some discussion a while back here on the cafe about enabling users to write additional documentation into a wikised hackage; together with an #haskell-doc-tutor irc channel, we could have an excellent solution to both lacking documentation as well as newbies not being sure where to start and/or being intimidated by pointless usage of (.). Additionally, you get the chance of earning credits and naming and shaming Haskell's godfathers[3].

[1] In the sense of using the code, either as app or library
[2] In the sense of editing/reading the code. Understanding [2] usually involves understanding [1].
[3] Judging from his code, I guess dons' apartment looks just like mine: Lots of left-over bits lying around that you tend to stumble over and are unsure about why they are still there. I swear, someday I'm going to use those two 5 1/4" floppy drives...

-- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.

On Tue, Mar 24, 2009 at 10:32 PM, wren ng thornton wrote:
Both of these conclusions seem quite natural to me, even from before learning Haskell. It seems, therefore, that "naturality" is not the proper metric to discuss. It's oft overlooked, but the fact is that expressivity comes not from more formal power, but from _less_.
* Natural language has a limited range of words and syntactic constructs, but gives you large-enough building blocks to enable unconstrained communication; whereas a language with a unique word for every utterance (arguably simpler) is impossible to learn.
On the other hand, -certain- languages are more expressive than others. As an example, I personally find English far more expressive than both Vietnamese and Japanese, yet English is far more complicated.

Japanese, for example, has exactly one pronunciation for each "alphabet letter". Hence you'll never find pairs like the English words "lead" (to guide someone or something, to give direction) and "lead" (a chemical element), which are spelled the same but pronounced differently. Words that are spelled the same in Japanese are pronounced the same 100% of the time. Furthermore, I find that you are far more limited in your choices of how to form ideas into sentences. In English there might be 20 different ways to phrase the exact same sentence for use in a certain context, where the sentences end up being almost identical except for one or two words changed or shuffled around; in Japanese there would probably be far fewer.

In Vietnamese there's a similar problem, in that there are not very many synonyms at all, and NO conjugations. It is complicated by the fact that it's a tonal language, but on the other hand the tonality is independent of the expressivity, in my experience. It's similar with Chinese: although I can't speak for the expressivity of Chinese, I would not be surprised at all if written Chinese were extremely expressive, but spoken Chinese not so much.

Anyway, the point of all this is that in English you have more freedom and more power, and hence you use (abuse?) the syntax of the language to create words, sentences, and phrases that express very powerful things. Furthermore, almost all English speakers would be able to grasp the full meaning of what you've said.
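
[wren's point about limited building blocks is easy to see in Haskell itself: a handful of Prelude functions, each trivial alone, recombine into quite different programs. A small sketch, with the function names invented for illustration:]

-- Count the words in a string: 'words' splits, 'length' counts,
-- and (.) glues the two together.
wordCount :: String -> Int
wordCount = length . words

-- The same few pieces rearranged give something else entirely:
-- the longest word in the string.
longestWord :: String -> String
longestWord = foldr pick "" . words
  where pick w acc = if length w >= length acc then w else acc
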

2009/3/25 Zachary Turner wrote:
On the other hand, -certain- languages are more expressive than others. As an example, I personally find English far more expressive than both Vietnamese and Japanese, yet English is far more complicated.
Way off topic, but for what it's worth, you can take it as axiomatic that all natural languages are equally expressive, qua languages. They're also equally easy/hard overall; the areas of difficulty are just in different places. Japanese grammar is extraordinarily simple, but achieving mastery of the spoken language *in Japanese society* is next to impossible, because usage reflects social constructions. As you no doubt know, what is not said is sometimes just as expressive as what is said in Japanese; very maddening to a logorrheic American, just as an English speaker's need to explicitly articulate *everything* is no doubt annoying to the Japanese.

Regarding spelling and phonology: the idea that "one symbol, one sound" is somehow optimal is the Myth That Will Not Die. None other than Chomsky himself argued that English orthography is near-optimal for the English language. All writing systems are designed to serve speakers of the language, and many languages are poorly modeled by a one-symbol, one-sound system.

I'm not sure there's a lesson there for formal language designers and programmers, except maybe that the expressiveness (elegance?) of a text usually depends to a great extent on the writer more than the language.

-g

Zachary Turner wrote:
On Tue, Mar 24, 2009 at 10:32 PM, wren ng thornton wrote:
Both of these conclusions seem quite natural to me, even from before learning Haskell. It seems, therefore, that "naturality" is not the proper metric to discuss. It's oft overlooked, but the fact is that expressivity comes not from more formal power, but from _less_.
* Natural language has a limited range of words and syntactic constructs, but gives you large-enough building blocks to enable unconstrained communication; whereas a language with a unique word for every utterance (arguably simpler) is impossible to learn.
On the other hand, -certain- languages are more expressive than others. As an example, I personally find English far more expressive than both Vietnamese and Japanese, yet English is far more complicated.
That's funny, I find Japanese to be far more expressive than English. (The language itself; due to familiarity, I myself am more expressive in English.)

Japanese has sophisticated forms of address that indicate the distance, degree, and style of the speaker's relationship with the listener; in English we can get the point across, but we don't have the formalism, and so it's all a lot more handwaving. Japanese can indicate topic and focus directly, whereas English must resort to bold/italics or syntactic contortions to be precise. Japanese has a wide assortment of pronouns which imply measures of respect, arrogance, disdain, abashment, etc., whereas English is limited to a small number that are only deictic. Japanese has many postpositions which capture abstract comparative relations that are difficult to express concisely in English. Japanese sentential particles can express a wide range of affect, whereas English must rely on intonation and context to determine whether something should be interpreted as compassionate, bonding, insulting, ironic, etc.

Of course it all depends on what exactly you care to express. Japanese has restricted phonology, as you mentioned, though this is only as meaningful as the character set used for variable names in a programming language. Japanese also lacks certain sophisticated distinctions English makes, like definite vs indefinite articles, and singular vs plural vs mass-count nouns.
Words that are spelled the same in Japanese are pronounced the same 100% of the time.
False. There are numerous words which have the same spelling and different pronunciations. For example 今日 can be either /kyou/ "today" or /kon'niti/ "every day; daily". This is one reason why learning to read Japanese is so difficult for westerners. An even bigger reason is that countless words can be spelled in a number of different ways, with each spelling having different nuances and implications (sometimes to the point where the spellings are not interchangeable).
Anyway, the point of all this is that in English you have more freedom and more power, and hence you use (abuse?) the syntax of the language to create words, sentences, and phrases that express very powerful things. Furthermore, almost all English speakers would be able to grasp the full meaning of what you've said.
All natural languages are Thinking-complete. Just like with Turing-complete programming languages, the only difference is where they hide the bodies. There are plenty of things that I as a native speaker of English could say which other natives would grasp but which would confuse many of my non-native yet fluent coworkers.

The Japanese abuse their syntax just as badly as we abuse English, or far far worse if we include the slang of youth culture. The Japanese have long thought of their language as a toy box ripe for experimentation, and you can see the effects of this everywhere.

-- Live well, ~wren

wren ng thornton wrote:
All natural languages are Thinking-complete.
No, they aren't. Falsifying the Sapir-Whorf thesis, I quite often find myself incapable of expressing a certain thought, or, if I succeed, come up with two or more versions in multiple different languages that mean slightly different things and, in retrospect, all fail to fit the thought.

On another scale, it's just a waste of time: why should I spend minutes figuring out how to spell out "Even though X -> Y and X -> nonsense, (Z -> Y) -> nonsense does not necessarily hold" when I already figured that one out?

All thoughts are fundamentally ineffable: therefore, all languages are thinking-incomplete.

Well, I'd say that there is something close to the Perl syndrome, in some sense. After all, code IS usually very smart. The difference is that in Perl all the smartness is about knowing how the computer works, or how the interpreter works. In Haskell, instead, the smartness is about knowing - or inventing - the general setting in which the problem looks less complex. [A small sketch illustrating this follows after the quoted message below.]

On 24 Mar 2009, at 20:41, Manlio Perillo wrote:
Hi.
In these days I'm discussing with some friends, that mainly use Python as programming language, but know well other languages like Scheme, Prolog, C, and so.
These friends are very interested in Haskell, but it seems that the main reason why they don't start to seriously learning it, is that when they start reading some code, they feel the "Perl syndrome".
That is, code written to be "too smart", and that end up being totally illegible by Haskell novice.
I too have this feeling, from time to time.
Since someone is starting to write the Haskell coding style, I really suggest him to take this "problem" into strong consideration.
Manlio
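
[To make Miguel's point concrete, an invented example: checking that parentheses balance. In the direct setting you thread explicit state through a loop; re-framing brackets as +1/-1 turns it into a statement about prefix sums, and the problem itself becomes smaller:]

-- Direct setting: walk the string, maintaining a counter by hand.
balancedLoop :: String -> Bool
balancedLoop = go 0
  where
    go n []       = n == 0
    go n ('(':cs) = go (n + 1) cs
    go n (')':cs) = n > 0 && go (n - 1) cs
    go n (_:cs)   = go n cs

-- Reframed setting: a string balances iff no prefix sum of the
-- bracket values dips below zero and the total comes out to zero.
balanced :: String -> Bool
balanced s = all (>= 0) sums && last sums == 0
  where
    sums = scanl (+) 0 (map value s)
    value '(' = 1
    value ')' = -1
    value _   = 0

[Neither version is "smarter" at the machine's expense; the second is shorter because the setting was chosen so that the problem looks less complex.]
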

On Tue, Mar 24, 2009 at 12:41 PM, Manlio Perillo wrote:
I too have this feeling, from time to time.
So do I, because I haven't had the time to learn what I need to learn in order to read the code smoothly. I find that when I do work out the meaning, most often the style reflects conciseness or expressiveness, not obfuscatory tricks that the language allows.
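
[An invented illustration of that experience: the one-liner below looks cryptic until you know the vocabulary, at which point it reads as plainly as a sentence:]

-- Pair each line of a string with its line number, counting from 1.
numbered :: String -> [(Int, String)]
numbered = zip [1 ..] . lines

['lines' splits the input, 'zip [1 ..]' tags each piece with its position, and (.) runs the two in sequence - conciseness, not obfuscation.]
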
Since someone is starting to write the Haskell coding style, I really suggest him to take this "problem" into strong consideration.
Rule One of the Haskell Coding Style Handbook: learn Haskell first, then worry about style. After all, nobody in her right mind would tackle a French style manual without learning French first. Although I suppose one could argue that learning Haskell in fact involves learning various styles. ;)

-gregg
participants (40)
- Achim Schneider
- Alberto G. Corona
- Bas van Dijk
- Benja Fallenstein
- Chung-chieh Shan
- Claus Reinke
- Clive Brettingham-Moore
- Colin Adams
- Conal Elliott
- Dan Piponi
- Dan Weston
- David Menendez
- Donn Cave
- Erik de Castro Lopo
- Eugene Kirpichov
- Gregg Reynolds
- Gwern Branwen
- Heinrich Apfelmus
- Jake McArthur
- Jake McArthur
- John Meacham
- John Melesky
- Jonathan Cast
- Loup Vaillant
- Lutz Donnerhacke
- Manlio Perillo
- Miguel Mitrofanov
- Peter Verswyvelen
- Richard O'Keefe
- Robin Green
- roconnor@theorem.ca
- Ross Mellgren
- Ryan Ingram
- Simon Marlow
- Sjur Gjøstein Karevoll
- Thomas Hartman
- Tim Newsham
- wren ng thornton
- Yitzchak Gale
- Zachary Turner