
Hi,

I am pleased to announce HLint version 1.2. HLint is a lint-like tool for Haskell that detects and suggests improvements for your code. HLint is compatible with most GHC extensions, supports a wide variety of suggestions, and can be extended with additional user suggestions.

To install: cabal update && cabal install hlint

Web page: http://www-users.cs.york.ac.uk/~ndm/hlint/

Biggest new feature: list recursion suggestions. For example, running HLint over the GHC compiler now suggests things such as:

Example.hs:3:1: Warning: Use foldr
Found:
  seqList [] b = b
  seqList (x : xs) b = x `seq` seqList xs b
Why not:
  seqList xs b = foldr seq b xs

Example.hs:6:1: Warning: Use foldr
Found:
  seqIds [] = ()
  seqIds (id : ids) = seqId id `seq` seqIds ids
Why not:
  seqIds ids = foldr (seq . seqId) () ids

Example.hs:13:1: Warning: Use foldl
Found:
  rev_app [] xs = xs
  rev_app (y : ys) xs = rev_app ys (y : xs)
Why not:
  rev_app ys xs = foldl (flip (:)) xs ys

HLint will automatically detect if you should have used a map, a foldr or a foldl, and suggest how to change your code. In the GHC, darcs and Hoogle code bases there are no obvious map-like functions, which is a good sign :-)

Changes from last time:

* More GHC extensions supported - including record wildcards and fixes to a number of existing extensions. These changes are thanks to Niklas and the haskell-src-exts package.
* Many infix operators are now correctly associated with the right priority.
* All hints are now ignored, warnings or errors. Hopefully this should make it clearer which hints are most important. As part of this, the format for defining ignore files has changed completely - see the manual for the new format.
* Lots of new hints have been added - many from users of HLint.
* Many bugs have been fixed.

Can any follow-up discussion please be directed to haskell-cafe@. All comments, questions and bug reports are welcome!

Neil

ndmitchell:
Hi,
I am pleased to announce HLint version 1.2. HLint is a lint-like tool for Haskell that detects and suggests improvements for your code. HLint is compatible with most GHC extensions, and supports a wide variety of suggestions, and can be extended with additional user suggestions.
To install: cabal update && cabal install hlint
Web page: http://www-users.cs.york.ac.uk/~ndm/hlint/
Biggest new feature: List recursion suggestions. For example, running HLint over the GHC compiler now suggests things such as:
Arch package: http://aur.archlinux.org/packages.php?ID=23099
Making this the 800th Haskell package on Arch.
-- Don

On Sun, 11 Jan 2009, Neil Mitchell wrote:
HLint will automatically detect if you should have used a map, a foldr or a foldl and suggest how to change your code. In the GHC, darcs and Hoogle code bases there are no obvious map-like functions, which is a good sign :-)
I found so many 'map' re-implementations in Haskell libraries, even in those where I thought their programmers must be more experienced than me. Hm, maybe even in libraries by Neil?

Does GHC specialize map? If it doesn't, then a hand-crafted version could be faster.
On Sun, Jan 11, 2009 at 11:44 PM, Henning Thielemann wrote:
On Sun, 11 Jan 2009, Neil Mitchell wrote:
HLint will automatically detect if you should have used a map, a foldr or a foldl and suggest how to change your code. In the GHC, darcs and Hoogle code bases there are no obvious map-like functions, which is a good sign :-)
I found so many 'map' re-implementations in Haskell libraries, even in those, where I thought their programmers must be more experienced than me. Hm, maybe even in libraries by Neil?

Hi
Does GHC specialize map? If it doesn't, then hand crafted version could be faster.
GHC doesn't specialize map, and a hand-crafted one could be faster - but you then wouldn't get foldr/build fusion. In general HLint tries to make the code prettier, but sometimes you will need to deviate from its suggestions when you've profiled etc. To stop HLint warning you just create Hints.hs and include the line "ignore = LennartsSuperFastModule.mySpecialisedMap" - full details in the manual.
I found so many 'map' re-implementations in Haskell libraries, even in those, where I thought their programmers must be more experienced than me. Hm, maybe even in libraries by Neil?
I can't really be blamed for making mistakes before HLint ;-)

Thanks, Neil
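For concreteness, here is a minimal sketch of the Hints.hs file described above. The ignore binding is exactly the line quoted in the message; the comments are editorial, and LennartsSuperFastModule.mySpecialisedMap is the hypothetical example name from the email, not a real module.

-- Hints.hs: parsed by HLint rather than compiled, so the referenced
-- module does not need to be importable here.
ignore = LennartsSuperFastModule.mySpecialisedMap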

Neil Mitchell wrote:
Hi
Does GHC specialize map? If it doesn't, then hand crafted version could be faster.
GHC doesn't specialize map, and a hand-crafted one could be faster - but you then wouldn't get foldr/build fusion. In general HLint tries to make the code prettier, but sometimes you will need to deviate from its suggestions when you've profiled etc. To stop HLint warning you just create Hints.hs and include the line "ignore = LennartsSuperFastModule.mySpecialisedMap" - full details in the manual.
I am really happy with HLint. Being relatively new to the Haskell world, I tend to be slow in finding ready-made solutions, or in using folds for recursive tasks. HLint 1.0 discovered `on` for me, and for that alone I will be infinitely grateful. Now, if HLint 1.2 helps me with the map/fold understanding, I will be 'continuous infinitely' grateful (although real numbers are not representable!)

Best regards,
Zara

ndmitchell:
Hi
Does GHC specialize map? If it doesn't, then hand crafted version could be faster.
GHC doesn't specialize map, and a hand-crafted one could be faster - but you then wouldn't get foldr/build fusion. In general HLint tries to make the code prettier, but sometimes you will need to deviate from its suggestions when you've profiled etc. To stop HLint warning you just create Hints.hs and include the line "ignore = LennartsSuperFastModule.mySpecialisedMap" - full details in the manual.
I found so many 'map' re-implementations in Haskell libraries, even in those, where I thought their programmers must be more experienced than me. Hm, maybe even in libraries by Neil?
I can't really be blamed for making mistakes before HLint ;-)
But GHC tends to inline and specialise map, due to:

"map" [~1] forall f xs. map f xs = build (\c n -> foldr (mapFB c f) n xs)

So that,

main = print (map toUpper "haskell")

Yields:

s :: Addr#
s = "haskell"#

letrec
  unpack_snX :: Int# -> [Char]
  unpack_snX = \ (x :: Int#) ->
    case indexCharOffAddr# s x of i {
      _      -> : @ Char ($wtoUpper i) (unpack_snX (+# x 1));
      '\NUL' -> [] @ Char
    }

Which looks inlined and specialised to my eyes.

-- Don

dons:
ndmitchell:
Hi
Does GHC specialize map? If it doesn't, then hand crafted version could be faster.
GHC doesn't specialize map, and a hand-crafted one could be faster - but you then wouldn't get foldr/build fusion. In general HLint tries to make the code prettier, but sometimes you will need to deviate from its suggestions when you've profiled etc. To stop HLint warning you just create Hints.hs and include the line "ignore = LennartsSuperFastModule.mySpecialisedMap" - full details in the manual.
I found so many 'map' re-implementations in Haskell libraries, even in those, where I thought their programmers must be more experienced than me. Hm, maybe even in libraries by Neil?
I can't really be blamed for making mistakes before HLint ;-)
But GHC tends to inline and specialise map, due to:
"map" [~1] forall f xs. map f xs = build (\c n -> foldr (mapFB c f) n xs)
So that,
main = print (map toUpper "haskell")
Yields:
s :: Addr#
s = "haskell"#

letrec
  unpack_snX :: Int# -> [Char]
  unpack_snX = \ (x :: Int#) ->
    case indexCharOffAddr# s x of i {
      _      -> : @ Char ($wtoUpper i) (unpack_snX (+# x 1));
      '\NUL' -> [] @ Char
    }
Which looks inlined and specialised to my eyes.
Oh, I should note the inlining only happens here since the list constant is a 'build', and map is a build . foldr, so we get build/foldr fusion, and an inlined map as a result.

If we just use map in isolation, no inlining:

A.foo = \ (xs_ala :: [Char]) -> map @ Char @ Char toUpper xs_ala

Whereas with a worker/wrapper version,

map :: (a -> b) -> [a] -> [b]
map f xs = go xs
  where
    go []     = []
    go (x:xs) = f x : go xs
{-# INLINE map #-}

we get an inlined version:

go = \ (ds_dm7 :: [Char]) ->
  case ds_dm7 of wild_B1 {
    [] -> [] @ Char;
    : x_all xs_aln -> : @ Char (toUpper x_all) (A.go xs_aln)
  }

-- Don

On Mon, 2009-01-12 at 01:02 +0100, Lennart Augustsson wrote:
Does GHC specialize map? If it doesn't, then hand crafted version could be faster.
No, because the current definition is recursive and GHC cannot inline recursive functions.

map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs

It has to be manually transformed into a version that is not recursive at the top level:

map :: (a -> b) -> [a] -> [b]
map f = go
  where
    go []     = []
    go (x:xs) = f x : go xs

Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.

Obviously this is not quite the same as specialising map, since it's per use, not per function being mapped. Though specialisation would be just:

mapFoo = map foo

I'm not sure if you'd need {-# NOINLINE mapFoo #-}

This is exactly how the GHC definitions for foldr and foldl work:

foldr k z xs = go xs
  where
    go []     = z
    go (y:ys) = y `k` go ys

foldl f z xs = lgo z xs
  where
    lgo z []     = z
    lgo z (x:xs) = lgo (f z x) xs

Duncan
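To make the inlining step concrete, here is a hedged sketch (the upcase example is invented here, not taken from the thread) of what inlining the 'go'-style map, and then toUpper, effectively produces at a call site:

import Data.Char (toUpper)

-- Call site written against the library map:
upcase :: String -> String
upcase xs = map toUpper xs

-- With the non-recursive wrapper 'map f = go ...', GHC can inline map
-- and then toUpper, so the code behaves roughly like this local loop:
upcase' :: String -> String
upcase' xs = go xs
  where
    go []     = []
    go (c:cs) = toUpper c : go cs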

On Mon, 12 Jan 2009, Duncan Coutts wrote:
On Mon, 2009-01-12 at 01:02 +0100, Lennart Augustsson wrote:
Does GHC specialize map? If it doesn't, then hand crafted version could be faster.
No because the current definition are recursive and ghc cannot inline recursive functions.
map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs
It has to be manually transformed into a version that is not recursive at the top level:
map :: (a -> b) -> [a] -> [b]
map f = go
  where
    go []     = []
    go (x:xs) = f x : go xs
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
Maybe HLint can make such suggestions ...

Hi
No because the current definition are recursive and ghc cannot inline recursive functions.
map :: (a -> b) -> [a] -> [b]
map f = go
  where
    go []     = []
    go (x:xs) = f x : go xs
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
Maybe HLint can make such suggestions ...
HLint would probably just suggest you use a supercompiler*, for which specialisation such as this is easy (http://www.cs.york.ac.uk/~ndm/supero). HLint is about making code prettier, and only in a few cases does it try for faster (mapM -> mapM_), but even there it's more about avoiding a space leak.

Thanks, Neil

* Of course, HLint (and I) should probably add the disclaimer that the chances of that working on your code as it stands are fairly close to 0 - but the ideas and initial research have been started!
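As a minimal sketch of the mapM -> mapM_ case Neil mentions (the example itself is not from the thread): when the results are ignored, mapM still builds and retains the result list, which is the space behaviour the suggestion avoids.

main :: IO ()
main = do
  _ <- mapM print [1 .. 3 :: Int]   -- result list of ()s is built, then thrown away
  mapM_ print [1 .. 3 :: Int]       -- suggested form: no result list is built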

On Mon, 12 Jan 2009, Neil Mitchell wrote:
Hi
No because the current definition are recursive and ghc cannot inline recursive functions.
map :: (a -> b) -> [a] -> [b]
map f = go
  where
    go []     = []
    go (x:xs) = f x : go xs
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
Maybe HLint can make such suggestions ...
HLint would probably just suggest you use a supercompiler*, for which specialisation such as this is easy (http://www.cs.york.ac.uk/~ndm/supero). HLint is about making code prettier,
Actually, I find the 'go' variant prettier, because it clearly shows that the 'f' is not "altered" in the recursion.

On Mon, 2009-01-12 at 15:06 +0100, Henning Thielemann wrote:
It has to be manually transformed into a version that is not recursive at the top level:
map :: (a -> b) -> [a] -> [b]
map f = go
  where
    go []     = []
    go (x:xs) = f x : go xs
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
Maybe HLint can make such suggestions ...
I think HLint's philosophy prefers elegant code to performance hacks. Duncan

On Mon, 12 Jan 2009, Duncan Coutts wrote:
On Mon, 2009-01-12 at 15:06 +0100, Henning Thielemann wrote:
It has to be manually transformed into a version that is not recursive at the top level:
map :: (a -> b) -> [a] -> [b]
map f = go
  where
    go []     = []
    go (x:xs) = f x : go xs
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
Maybe HLint can make such suggestions ...
I think HLint's philosophy prefers elegant code to performance hacks.
I encountered just another instance of 'jumping into the wrong loop' when experimenting with a function using top-level recursion. Thus I summarized the stylistic reasons in favour of local recursion: http://www.haskell.org/haskellwiki/Top-level_vs._local_recursion

On Jan 12, 2009, at 9:01 AM, Duncan Coutts wrote:
No because the current definition are recursive and ghc cannot inline recursive functions.
map :: (a -> b) -> [a] -> [b]
map _ []     = []
map f (x:xs) = f x : map f xs
It has to be manually transformed into a version that is not recursive at the top level:
map :: (a -> b) -> [a] -> [b]
map f = go
  where
    go []     = []
    go (x:xs) = f x : go xs
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
This seems like exactly the sort of mechanical transformation that computers do quickly and accurately, and humans get wrong. Surely it wouldn't be that hard for GHC to transform self recursion in this way (possibly subject to the condition that the result be worth inlining)? [phc did this, and I think it was inherited from Lennart's program transformations.] -Jan-Willem Maessen

2009/1/12 Jan-Willem Maessen
On Jan 12, 2009, at 9:01 AM, Duncan Coutts wrote:
No because the current definition are recursive and ghc cannot inline recursive functions.
....
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
This seems like exactly the sort of mechanical transformation that computers do quickly and accurately, and humans get wrong. Surely it wouldn't be that hard for GHC to transform self recursion in this way (possibly subject to the condition that the result be worth inlining)?
GHC should indeed be doing so. I'm working (on and off) to work out some suitable heuristics and put the transformation into ghc -O2. There are a few wrinkles that still need sorting out, but preliminary indications are that it decreases the runtime of our standard benchmark suite, nofib, by 12% or so. Cheers, Max

On Mon, Jan 12, 2009 at 6:47 PM, Max Bolingbroke
GHC should indeed be doing so. I'm working (on and off) to work out some suitable heuristics and put the transformation into ghc -O2. There are a few wrinkles that still need sorting out, but preliminary indications are that it decreases the runtime of our standard benchmark suite, nofib, by 12% or so.
Great! In the Stream library I'm developing at http://code.haskell.org/Stream I 'closurize' (for lack of a better name) all my functions. Here are a few random examples:

repeat :: a -> Stream a
repeat x = repeat_x
  where
    repeat_x = x ::: repeat_x

cycle :: [a] -> Stream a
cycle xs = cycle_xs
  where
    cycle_xs = foldr (:::) cycle_xs xs

deleteBy :: (a -> a -> Bool) -> a -> Stream a -> Stream a
deleteBy eq x = deleteBy_eq_x
  where
    deleteBy_eq_x (y ::: ys)
      | eq x y    = ys
      | otherwise = y ::: deleteBy_eq_x ys

Closurizing the functions in Data.Stream led to 10% to 250% speedups!

Note that I follow a particular naming convention for the inner worker functions: I use the top-level function name and append the 'closurized' arguments to it, interspersed with underscores.

regards,
Bas

On Mon, 12 Jan 2009 19:43:00 +0100, "Bas van Dijk" wrote:
On Mon, Jan 12, 2009 at 6:47 PM, Max Bolingbroke wrote:
GHC should indeed be doing so. I'm working (on and off) to work out some suitable heuristics and put the transformation into ghc -O2. There are a few wrinkles that still need sorting out, but preliminary indications are that it decreases the runtime of our standard benchmark suite, nofib, by 12% or so.
Great!
In the Stream library I'm developing at http://code.haskell.org/Stream I 'closurize' (for lack of a better name) all my functions. Here are a few random examples:
repeat :: a -> Stream a
repeat x = repeat_x
  where
    repeat_x = x ::: repeat_x

cycle :: [a] -> Stream a
cycle xs = cycle_xs
  where
    cycle_xs = foldr (:::) cycle_xs xs

deleteBy :: (a -> a -> Bool) -> a -> Stream a -> Stream a
deleteBy eq x = deleteBy_eq_x
  where
    deleteBy_eq_x (y ::: ys)
      | eq x y    = ys
      | otherwise = y ::: deleteBy_eq_x ys
Closurizing the functions in Data.Stream lead to 10% to 250% speedups!
Awesome! I tend to use Control.Monad.Fix.fix (which actually has nothing to do with monads, despite the package name) sometimes, for "closurizing" a recursive function. I am curious as to whether the "fix" style of recursive programming is likely to result in the same speedups.

The fix-style equivalent to your repeat above would be something like this:

repeat x = fix $ \me -> x ::: me

(I use "me" for the name of the recursive call, partly because it reminds me that it's a self-call, and partly because I find it amusing)

-- Robin

On Mon, 12 Jan 2009, Robin Green wrote:
I tend to use Control.Monad.Fix.fix (which actually has nothing to do with monads, despite the package name)
That's why it is also available from Data.Function now: http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Function.htm...

On Mon, Jan 12, 2009 at 6:06 PM, Robin Green
The fix-style equivalent to your repeat above, would be something like this:
repeat x = fix $ \me -> x ::: me
Interesting. Your repeat and mine are compiled to the same code:

Data.Stream.repeat :: forall a_aVi. a_aVi -> Data.Stream.Stream a_aVi
Data.Stream.repeat =
  \ (@ a_a2lT) (x_aZ6 :: a_a2lT) ->
    letrec {
      repeat_x_s50K :: Data.Stream.Stream a_a2lT
      repeat_x_s50K = Data.Stream.::: @ a_a2lT x_aZ6 repeat_x_s50K;
    } in repeat_x_s50K

Bas

On Mon, 2009-01-12 at 20:23 +0100, Bas van Dijk wrote:
On Mon, Jan 12, 2009 at 6:06 PM, Robin Green wrote:
The fix-style equivalent to your repeat above, would be something like this:
repeat x = fix $ \me -> x ::: me
Interesting.
The definition of fix is small and non-recursive; in particular it is:

fix f = let x = f x in x

When inlined, which it -will- be if optimizations are enabled, fix (x:::) is -identical- to your definition of repeat.
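Spelling out the reduction Derek describes (the intermediate steps are editorial; repeat and fix are as in the messages above):

-- repeat x = fix (\me -> x ::: me)
--
-- inline  fix f = let r = f r in r :
--   repeat x = let r = (\me -> x ::: me) r in r
--
-- beta-reduce the application:
--   repeat x = let r = x ::: r in r
--
-- which is the same circular ('closurized') definition as the
-- hand-written  repeat x = repeat_x where repeat_x = x ::: repeat_x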

On Mon, 2009-01-12 at 19:43 +0100, Bas van Dijk wrote:
On Mon, Jan 12, 2009 at 6:47 PM, Max Bolingbroke wrote:
GHC should indeed be doing so. I'm working (on and off) to work out some suitable heuristics and put the transformation into ghc -O2. There are a few wrinkles that still need sorting out, but preliminary indications are that it decreases the runtime of our standard benchmark suite, nofib, by 12% or so.
Great!
In the Stream library I'm developing at http://code.haskell.org/Stream I 'closurize' (for lack of a better name)
One name for this transformation (or one very closely related to it) is lambda-dropping, which was chosen to contrast with the better-known transformation, lambda lifting.

On Jan 12, 2009, at 12:47 PM, Max Bolingbroke wrote:
2009/1/12 Jan-Willem Maessen:
On Jan 12, 2009, at 9:01 AM, Duncan Coutts wrote:
No because the current definition are recursive and ghc cannot inline recursive functions.
....
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
This seems like exactly the sort of mechanical transformation that computers do quickly and accurately, and humans get wrong. Surely it wouldn't be that hard for GHC to transform self recursion in this way (possibly subject to the condition that the result be worth inlining)?
GHC should indeed be doing so. I'm working (on and off) to work out some suitable heuristics and put the transformation into ghc -O2. There are a few wrinkles that still need sorting out, but preliminary indications are that it decreases the runtime of our standard benchmark suite, nofib, by 12% or so.
This is excellent news, quite apart from Don's observation that it isn't particularly relevant for map (where we are essentially using RULES to instantiate an alternative definition in terms of foldr/build, if I understand his message rightly). Self-recursion is about so much more than map!

-Jan

Max Bolingbroke wrote:
2009/1/12 Jan-Willem Maessen:
On Jan 12, 2009, at 9:01 AM, Duncan Coutts wrote:
No because the current definition are recursive and ghc cannot inline recursive functions.
....
Then the map can be inlined at the call site and the 'f' inlined into the body of 'go'.
This seems like exactly the sort of mechanical transformation that computers do quickly and accurately, and humans get wrong. Surely it wouldn't be that hard for GHC to transform self recursion in this way (possibly subject to the condition that the result be worth inlining)?
GHC should indeed be doing so. I'm working (on and off) to work out some suitable heuristics and put the transformation into ghc -O2. There are a few wrinkles that still need sorting out, but preliminary indications are that it decreases the runtime of our standard benchmark suite, nofib, by 12% or so.
!!! That's a surprising result - have you looked closely at the places where the transformation is having a big effect to see what's going on? Cheers, Simon

2009/1/13 Simon Marlow
GHC should indeed be doing so. I'm working (on and off) to work out some suitable heuristics and put the transformation into ghc -O2. There are a few wrinkles that still need sorting out, but preliminary indications are that it decreases the runtime of our standard benchmark suite, nofib, by 12% or so.
!!!
That's a surprising result - have you looked closely at the places where the transformation is having a big effect to see what's going on?
Yes, it is rather better than I expected :-)

The main gains seem to come from specialising higher-order functions on particular arguments, like we saw earlier in this thread. There seem to be a number of suitable functions in the standard library that aren't written in static-argument-transformed (SATed) style.

Another gain comes from the nofib program "atom", which has a function a lot like this:

f x y z = (x, y, z) : f x y z

Once this is SATed it becomes a much better function:

f x y z = let f' = (x, y, z) : f' in f'

Which decreases the runtime of atom by 97% :-)

The catch is that things written in this style can actually be worse than their lambda-lifted brethren. This happens for 3 main reasons:

1) SATed functions tend to have case liberation applied to them instead of constructor specialisation. Case liberation kind of sucks in comparison to constructor specialisation, so bad things happen (increased allocations and code size).

2) Carrying around a single variable in the SAT closure just adds indirection to the generated code with no benefits, so it's better to remove that indirection (by lambda lifting) just before going to STG - but be careful not to change the unfolding!

3) More SATing means more expressions are loop invariant. This is usually a good thing, but if you float a loop invariant out of a "cold branch" of a recursive function (a branch that is actually only entered once dynamically) then you end up eagerly allocating a closure for the loop-invariant thing, which may never be entered. This is sort of a bug in our current float-out pass.

Like I said, I'm working on improving the situation with 1 and 2, which need to be resolved to iron out some of the bad cases in nofib. I need to find time to take a look at this though.

Cheers,
Max
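As a further illustration of the static-argument transformation Max describes (this example is editorial, not from the message): the predicate p is a static argument of filter, so the recursion can close over it.

-- Top-level recursion: p is passed along unchanged on every call,
-- and GHC will not inline the recursive function.
filter1 :: (a -> Bool) -> [a] -> [a]
filter1 _ []     = []
filter1 p (x:xs)
  | p x       = x : filter1 p xs
  | otherwise = filter1 p xs

-- SATed version: the loop closes over p, the non-recursive wrapper can
-- be inlined, and p can then be specialised at each call site.
filter2 :: (a -> Bool) -> [a] -> [a]
filter2 p = go
  where
    go []     = []
    go (x:xs)
      | p x       = x : go xs
      | otherwise = go xs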

On Sun, 11 Jan 2009, Neil Mitchell wrote:
I am pleased to announce HLint version 1.2. HLint is a lint-like tool for Haskell that detects and suggests improvements for your code. HLint is compatible with most GHC extensions, and supports a wide variety of suggestions, and can be extended with additional user suggestions.
To install: cabal update && cabal install hlint
Fails for me because of the base-4 dependency - I'm still using GHC-6.8.2. Can HLint suggest view-pattern-free expressions, so that the program also runs on GHC-6.8? :-)

Hi Henning,
To install: cabal update && cabal install hlint
Fails for me, because of the base-4 dependency. - I'm still using GHC-6.8.2. Can HLint suggest view-pattern-free expressions, such that the program also runs on GHC-6.8 ? :-)
HLint is written using view-patterns, so requires GHC 6.10 to compile it. After you have it compiled it's just a standard Haskell binary, so it can be run on any Haskell code and will never suggest view-patterns anyway. Alas, compiling without GHC 6.10 is not possible, but if you can get a binary from elsewhere it should run on your machine (subject to cabal configure stuff being set appropriately, which I don't understand...).

Thanks, Neil

On Thu, 15 Jan 2009, Neil Mitchell wrote:
Hi Henning,
To install: cabal update && cabal install hlint
Fails for me, because of the base-4 dependency. - I'm still using GHC-6.8.2. Can HLint suggest view-pattern-free expressions, such that the program also runs on GHC-6.8 ? :-)
HLint is written using view-patterns so requires GHC 6.10 to compile it. After you have it compiled its just a standard Haskell binary, so can be run on any Haskell code and will never suggest view-patterns anyway.
My question was, whether HLint can help translating HLint to code without view patterns.

On Thu, 15 Jan 2009, Neil Mitchell wrote:
Hi
My question was, whether HLint can help translating HLint to code without view patterns.
Ah, I misunderstood. Yes, it could (in theory), but it can't automatically apply the hints it generates. Upgrading to GHC 6.10 is probably easier :-)
Also throwing away Hugs ... (YHC too ?)

Ah, I misunderstood. Yes, it could (in theory), but it can't automatically apply the hints it generates. Upgrading to GHC 6.10 is probably easier :-)
Also throwing away Hugs ... (YHC too ?)
Yes :-( I normally develop in Hugs; for a change I wanted to try GHCi. It's also a project that has loads of pattern matching at a fairly complex level, so the benefits offered by view-patterns and pattern-guards were just too hard to pass up. I'm also using SYB, and Uniplate on SYB, quite extensively. Now we just need Haskell' to standardise the useful bits (pattern guards, rank-2 types, deriving Data), throw away the junk (n+k patterns, the monomorphism restriction), and I can write beautiful programs for all Haskell thingies.

Thanks, Neil

PS. I think I threw away Yhc when I imported Data.Map :-)

On Thu, 15 Jan 2009, Neil Mitchell wrote:
I normally develop in Hugs, for a change I wanted to try GHCi. It's also a project that has loads of pattern matching at a fairly complex level, so the benefits offered by view-patterns and pattern-guards were just too hard to pass up.
Are you sure that you can't come up with some nice functions like 'maybe' to replace those view patterns by function calls? Did you really try? I remember the recent discussion on pattern combinators here on Haskell Cafe.

Are you sure that you can't come up with some nice functions like 'maybe' to replace those view patterns by function calls? Did you really try? I remember the recent discussion on pattern combinators here on Haskell Cafe.
I could, but it would look more ugly - and I want my code to be beautiful :-) For HLint, view-patterns are something I could live without; SYB is something I couldn't live without. SYB makes me GHC only, view-patterns make me GHC 6.10 only. I could port it to GHC 6.8.3, but I don't think it's worth the effort, and the complication that would ensue.

In general a view-pattern, as used in HLint, can be translated away by:

foo (view -> RHS) ... = ...
foo new_var ... | RHS <- view new_var = ...

But it's more ugly, requires more intermediate variables, isn't as clear, and isn't a generally correct translation (but I think it would almost always work in HLint).

Thanks, Neil
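A self-contained illustration of the translation Neil sketches (the Token type, classify and describe are invented for this example; they are not HLint code):

data Token = Word String | Number Int

classify :: String -> Token
classify s
  | not (null s) && all (`elem` "0123456789") s = Number (read s)
  | otherwise                                   = Word s

-- With view patterns (GHC 6.10), following the 'foo (view -> RHS)' shape:
--   describe (classify -> Number n) = "a number: " ++ show n
--   describe (classify -> Word w)   = "a word: " ++ w

-- Translated away with pattern guards, as described above:
describe :: String -> String
describe s
  | Number n <- classify s = "a number: " ++ show n
  | Word w   <- classify s = "a word: " ++ w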
participants (13):
- Andrew Coppin
- Bas van Dijk
- Derek Elkins
- Don Stewart
- Duncan Coutts
- Henning Thielemann
- Jan-Willem Maessen
- Juan Antonio Zaratiegui Vallecillo, a.k.a. Zara
- Lennart Augustsson
- Max Bolingbroke
- Neil Mitchell
- Robin Green
- Simon Marlow