Pretty little definitions of left and right folds

Hi, Cafe,

For no practical purpose, I wrote new definitions of list folds as chains of partially applied functions, joined by composition. I took the liberty of rearranging the type signature.
foldright :: (a -> b -> b) -> [a] -> b -> b
foldright f = chain
  where chain (a:as) = (f a) . (chain as)
        chain _      = id

foldleft :: (a -> b -> b) -> [a] -> b -> b
foldleft f = chain
  where chain (a:as) = (chain as) . (f a)
        chain _      = id
These definitions are point free, with respect to the "initializer" argument (which is now the last argument). Also, you can see how similar they are to each other, with the difference boiling down to the order of the composition, e.g.:

  foldright (+) [1, 2, 3] 0 == ( (1 +).(2 +).(3 +).id ) 0
  foldleft (+) [1, 2, 3] 0 == ( id.(3 +).(2 +).(1 +) ) 0

We can also see the following identities:

  foldright f as == foldright (.) (map f as) id
  foldleft f as == foldright (flip (.)) (map f as) id

I like that second one, after trying to read another definition of left fold in terms of right fold (in the web book "Real World Haskell").

The type signature, which could be written (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)). Would a "foldable" typeclass make any sense?

Okay, it goes without saying that this is useless dabbling, but have I entertained anyone? Or have I just wasted your time? I eagerly await comments on this, my first posting.

very truly yours,
George Kangas
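[Editor's note: a minimal sketch of the kind of typeclass being asked about, not part of the original post. The names Foldish and foldRight, and the Tree example, are hypothetical, chosen to avoid clashing with the standard Data.Foldable.]

  -- A hypothetical class capturing (a -> (b -> b)) -> (c a -> (b -> b)).
  class Foldish c where
    foldRight :: (a -> b -> b) -> c a -> b -> b

  -- Lists are the motivating instance, using the definition above.
  instance Foldish [] where
    foldRight f = chain
      where chain (a:as) = f a . chain as
            chain _      = id

  -- A binary tree works just as well, composing left subtree, node, right subtree.
  data Tree a = Leaf | Node (Tree a) a (Tree a)

  instance Foldish Tree where
    foldRight _ Leaf         = id
    foldRight f (Node l x r) = foldRight f l . f x . foldRight f r

  -- e.g.  foldRight (+) (Node Leaf 1 (Node Leaf 2 Leaf)) 0 == 3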

On 2008 Jun 20, at 19:15, George Kangas wrote:
The type signature, which could be written (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)). Would a "foldable" typeclass make any sense?
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Foldable.htm...

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell]      allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats]      allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university        KF8NH
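[Editor's note: for reference, not part of Brandon's message, the core of that class in base looks roughly like this; note that Data.Foldable's foldr takes its arguments in a different order than George's foldright, so foldright f xs z corresponds to Data.Foldable.foldr f z xs.]

  class Foldable t where
    foldMap :: Monoid m => (a -> m) -> t a -> m
    foldr   :: (a -> b -> b) -> b -> t a -> b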

On Fri, Jun 20, 2008 at 06:15:20PM -0500, George Kangas wrote:
foldright (+) [1, 2, 3] 0 == ( (1 +).(2 +).(3 +).id ) 0
foldleft (+) [1, 2, 3] 0 == ( id.(3 +).(2 +).(1 +) ) 0
Hi George, This is very cool! I have never thought of folds in quite this way before. It makes a lot of things (such as the identities you point out) obvious and elegant.
We can also see the following identities:
foldright f as == foldright (.) (map f as) id
foldleft f as == foldright (flip (.)) (map f as) id
I like that second one, after trying to read another definition of left fold in terms of right fold (in the web book "Real World Haskell").
The type signature, which could be written (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)). Would a "foldable" typeclass make any sense?
As Brandon points out, you have rediscovered Data.Foldable. =) There's nothing wrong with that, congratulations on discovering it for yourself! But again, I like this way of organizing the type signature: I had never thought of a fold as a sort of 'lift' before. If f :: a -> b -> b, then foldright 'lifts' f to foldright f :: [a] -> b -> b (or C a -> b -> b, more generally).
Okay, it goes without saying that this is useless dabbling, but have I entertained anyone? Or have I just wasted your time? I eagerly await comments on this, my first posting.
Not at all! Welcome, and thanks for posting. -Brent
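[Editor's note: a small illustration of this 'lift' reading, not from the thread; appendViaFold is a made-up name. Lifting the list constructor (:) with foldright, as defined above, yields list append.]

  -- (:) :: a -> [a] -> [a]  lifts to  [a] -> [a] -> [a], which behaves as (++):
  appendViaFold :: [a] -> [a] -> [a]
  appendViaFold = foldright (:)

  -- e.g.  appendViaFold [1, 2, 3] [4, 5] == [1, 2, 3, 4, 5]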

On Fri, 2008-06-20 at 22:31 -0400, Brent Yorgey wrote:
On Fri, Jun 20, 2008 at 06:15:20PM -0500, George Kangas wrote:
foldright (+) [1, 2, 3] 0 == ( (1 +).(2 +).(3 +).id ) 0
foldleft (+) [1, 2, 3] 0 == ( id.(3 +).(2 +).(1 +) ) 0
Hi George,
This is very cool! I have never thought of folds in quite this way before. It makes a lot of things (such as the identities you point out) obvious and elegant.
We can also see the following identities:
foldright f as == foldright (.) (map f as) id
foldleft f as == foldright (flip (.)) (map f as) id
I like that second one, after trying to read another definition of left fold in terms of right fold (in the web book "Real World Haskell").
The type signature, which could be written (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)). Would a "foldable" typeclass make any sense?
As Brandon points out, you have rediscovered Data.Foldable. =) There's nothing wrong with that, congratulations on discovering it for yourself! But again, I like this way of organizing the type signature: I had never thought of a fold as a sort of 'lift' before. If f :: a -> b -> b, then foldright 'lifts' f to foldright f :: [a] -> b -> b (or C a -> b -> b, more generally).
Okay, it goes without saying that this is useless dabbling, but have I entertained anyone? Or have I just wasted your time? I eagerly await comments on this, my first posting.
Not at all! Welcome, and thanks for posting.
Look into the theory of monoids, monoid homomorphisms, M-sets and free monoids.

On Fri, Jun 20, 2008 at 09:52:36PM -0500, Derek Elkins wrote:
On Fri, 2008-06-20 at 22:31 -0400, Brent Yorgey wrote:
On Fri, Jun 20, 2008 at 06:15:20PM -0500, George Kangas wrote:
foldright (+) [1, 2, 3] 0 == ( (1 +).(2 +).(3 +).id ) 0
foldleft (+) [1, 2, 3] 0 == ( id.(3 +).(2 +).(1 +) ) 0
Hi George,
This is very cool! I have never thought of folds in quite this way before. It makes a lot of things (such as the identities you point out) obvious and elegant.
We can also see the following identities:
foldright f as == foldright (.) (map f as) id
foldleft f as == foldright (flip (.)) (map f as) id
I like that second one, after trying to read another definition of left fold in terms of right fold (in the web book "Real World Haskell").
The type signature, which could be written (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)). Would a "foldable" typeclass make any sense?
As Brandon points out, you have rediscovered Data.Foldable. =) There's nothing wrong with that, congratulations on discovering it for yourself! But again, I like this way of organizing the type signature: I had never thought of a fold as a sort of 'lift' before. If f :: a -> b -> b, then foldright 'lifts' f to foldright f :: [a] -> b -> b (or C a -> b -> b, more generally).
Okay, it goes without saying that this is useless dabbling, but have I entertained anyone? Or have I just wasted your time? I eagerly await comments on this, my first posting.
Not at all! Welcome, and thanks for posting.
Look into the theory of monoids, monoid homomorphisms, M-sets and free monoids.
Thanks for the pointers! Here's what I've come up with, after re-reading some Barr-Wells lecture notes.

First, given finite sets A (representing an 'alphabet') and S (representing 'states'), we can describe a finite state machine by a function phi : A x S -> S, which gives 'transition rules' giving a new state for each combination of alphabet character and state. If we squint and wave our hands and ignore the fact that types aren't exactly sets, and most of the types we care about have infinitely many values, this is very much like the Haskell type (a,s) -> s, or (curried) a -> s -> s, i.e. a -> (s -> s). So we can think of a Haskell function phi :: a -> (s -> s) as a sort of 'state machine'.

Also, for a monoid M and set S, an action of M on S is given by a function f : M x S -> S for which

  (1) f(1,s) = s, and
  (2) f(mn,s) = f(m,f(n,s)).

Of course, in Haskell we would write f :: m -> (s -> s), and we would write criteria (1) and (2) as

  (1) f mempty = id
  (2) f (m `mappend` n) = f m . f n

Now look at the type of foldright:

  foldright :: (a -> (s -> s)) -> ([a] -> (s -> s))

We can see that foldright exactly corresponds to the observation that any state machine with alphabet a and states s induces a monoid action on s by the free monoid [a]. It's not hard to check that the function produced by foldright indeed satisfies the requirements to be a monoid action.

First, recall that

  foldright f = chain
    where chain (a:as) = (f a).(chain as)
          chain _      = id

Now, we can easily prove (1):

  (foldright f) []
    = chain []
    = id

The proof of (2) is by induction on the length of m. The base case is obvious, given the proof of (1).

  (foldright f) (m ++ n)
    =   { defn. of foldright, assume m = x:xs }
  chain ((x:xs) ++ n)
    =   { defn. of (++) }
  chain (x:(xs ++ n))
    =   { defn. of chain }
  f x . chain (xs ++ n)
    =   { inductive hypothesis }
  f x . chain xs . chain n
    =   { defn. of chain, associativity of (.) }
  chain (x:xs) . chain n
    =   { defn. of foldright, m }
  (foldright f) m . (foldright f) n

How'd I do? I'm still trying to figure out how the generalization to Traversable fits in here. I'm guessing this is where the monoid homomorphisms come in.

-Brent
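[Editor's note: a concrete instance of the 'state machine' reading, not from the thread, using foldright as defined earlier; Door and step are made-up names.]

  -- A toy two-state machine: a door that toggles on 't' and force-locks on 'l'.
  data Door = Open | Closed deriving (Show, Eq)

  step :: Char -> Door -> Door      -- phi :: a -> (s -> s)
  step 't' Open   = Closed
  step 't' Closed = Open
  step 'l' _      = Closed
  step _   d      = d

  -- foldright step :: String -> Door -> Door is the induced action of the
  -- free monoid String on Door (the rightmost character acts first):
  --   foldright step ""            Open == Open                                          -- law (1)
  --   foldright step ("tt" ++ "l") Open == foldright step "tt" (foldright step "l" Open) -- law (2)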

On Sat, Jun 21, 2008 at 7:11 PM, Brent Yorgey wrote:
First, given finite sets A (representing an 'alphabet') and S (representing 'states'), we can describe a finite state machine by a function phi : A x S -> S, which gives 'transition rules' giving a new state for each combination of alphabet character and state. If we squint and wave our hands and ignore the fact that types aren't exactly sets, and most of the types we care about have infinitely many values, this is very much like the Haskell type (a,s) -> s, or (curried) a -> s -> s, i.e. a -> (s -> s). So we can think of a Haskell function phi :: a -> (s -> s) as a sort of 'state machine'.
Also, for a monoid M and set S, an action of M on S is given by a function f : M x S -> S for which
(1) f(1,s) = s, and
(2) f(mn,s) = f(m,f(n,s)).
Of course, in Haskell we would write f :: m -> (s -> s), and we would write criteria (1) and (2) as
(1) f mempty = id
(2) f (m `mappend` n) = f m . f n
Now look at the type of foldright:
foldright :: (a -> (s -> s)) -> ([a] -> (s -> s))
Wow! I commend you on this excellent, enlightening post!

Luke
We can see that foldright exactly corresponds to the observation that any state machine with alphabet a and states s induces a monoid action on s by the free monoid [a]. It's not hard to check that the function produced by foldright indeed satisfies the requirements to be a monoid action.
First, recall that
foldright f = chain
  where chain (a:as) = (f a).(chain as)
        chain _      = id
Now, we can easily prove (1):
(foldright f) []
  = chain []
  = id
The proof of (2) is by induction on the length of m. The base case is obvious, given the proof of (1).
(foldright f) (m ++ n)
  =   { defn. of foldright, assume m = x:xs }
chain ((x:xs) ++ n)
  =   { defn. of (++) }
chain (x:(xs ++ n))
  =   { defn. of chain }
f x . chain (xs ++ n)
  =   { inductive hypothesis }
f x . chain xs . chain n
  =   { defn. of chain, associativity of (.) }
chain (x:xs) . chain n
  =   { defn. of foldright, m }
(foldright f) m . (foldright f) n
How'd I do? I'm still trying to figure out how the generalization to Traversable fits in here. I'm guessing this is where the monoid homomorphisms come in.
-Brent

On Sat, 2008-06-21 at 21:11 -0400, Brent Yorgey wrote:
On Fri, Jun 20, 2008 at 09:52:36PM -0500, Derek Elkins wrote:
On Fri, 2008-06-20 at 22:31 -0400, Brent Yorgey wrote:
On Fri, Jun 20, 2008 at 06:15:20PM -0500, George Kangas wrote:
foldright (+) [1, 2, 3] 0 == ( (1 +).(2 +).(3 +).id ) 0
foldleft (+) [1, 2, 3] 0 == ( id.(3 +).(2 +).(1 +) ) 0
Hi George,
This is very cool! I have never thought of folds in quite this way before. It makes a lot of things (such as the identities you point out) obvious and elegant.
We can also see the following identities:
foldright f as == foldright (.) (map f as) id
foldleft f as == foldright (flip (.)) (map f as) id
I like that second one, after trying to read another definition of left fold in terms of right fold (in the web book "Real World Haskell").
The type signature, which could be written (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)). Would a "foldable" typeclass make any sense?
As Brandon points out, you have rediscovered Data.Foldable. =) There's nothing wrong with that, congratulations on discovering it for yourself! But again, I like this way of organizing the type signature: I had never thought of a fold as a sort of 'lift' before. If f :: a -> b -> b, then foldright 'lifts' f to foldright f :: [a] -> b -> b (or C a -> b -> b, more generally).
Okay, it goes without saying that this is useless dabbling, but have I entertained anyone? Or have I just wasted your time? I eagerly await comments on this, my first posting.
Not at all! Welcome, and thanks for posting.
Look into the theory of monoids, monoid homomorphisms, M-sets and free monoids.
Thanks for the pointers! Here's what I've come up with, after re-reading some Barr-Wells lecture notes.
First, given finite sets A (representing an 'alphabet') and S (representing 'states'), we can describe a finite state machine by a function phi : A x S -> S, which gives 'transition rules' giving a new state for each combination of alphabet character and state. If we squint and wave our hands and ignore the fact that types aren't exactly sets, and most of the types we care about have infinitely many values, this is very much like the Haskell type (a,s) -> s, or (curried) a -> s -> s, i.e. a -> (s -> s). So we can think of a Haskell function phi :: a -> (s -> s) as a sort of 'state machine'.
Also, for a monoid M and set S, an action of M on S is given by a function f : M x S -> S for which
(1) f(1,s) = s, and
(2) f(mn,s) = f(m,f(n,s)).
Of course, in Haskell we would write f :: m -> (s -> s),
This change is not completely trivial.
and we would write criteria (1) and (2) as
(1) f mempty = id
(2) f (m `mappend` n) = f m . f n
So what does this make f? Hint: What is (s -> s)?

On Sat, Jun 21, 2008 at 09:36:06PM -0500, Derek Elkins wrote:
On Sat, 2008-06-21 at 21:11 -0400, Brent Yorgey wrote:
On Fri, Jun 20, 2008 at 09:52:36PM -0500, Derek Elkins wrote:
On Fri, 2008-06-20 at 22:31 -0400, Brent Yorgey wrote:
On Fri, Jun 20, 2008 at 06:15:20PM -0500, George Kangas wrote:
foldright (+) [1, 2, 3] 0 == ( (1 +).(2 +).(3 +).id ) 0
foldleft (+) [1, 2, 3] 0 == ( id.(3 +).(2 +).(1 +) ) 0
Hi George,
This is very cool! I have never thought of folds in quite this way before. It makes a lot of things (such as the identities you point out) obvious and elegant.
We can also see the following identities:
foldright f as == foldright (.) (map f as) id
foldleft f as == foldright (flip (.)) (map f as) id
I like that second one, after trying to read another definition of left fold in terms of right fold (in the web book "Real World Haskell").
The type signature, which could be written (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)). Would a "foldable" typeclass make any sense?
As Brandon points out, you have rediscovered Data.Foldable. =) There's nothing wrong with that, congratulations on discovering it for yourself! But again, I like this way of organizing the type signature: I had never thought of a fold as a sort of 'lift' before. If f :: a -> b -> b, then foldright 'lifts' f to foldright f :: [a] -> b -> b (or C a -> b -> b, more generally).
Okay, it goes without saying that this is useless dabbling, but have I entertained anyone? Or have I just wasted your time? I eagerly await comments on this, my first posting.
Not at all! Welcome, and thanks for posting.
Look into the theory of monoids, monoid homomorphisms, M-sets and free monoids.
Thanks for the pointers! Here's what I've come up with, after re-reading some Barr-Wells lecture notes.
First, given finite sets A (representing an 'alphabet') and S (representing 'states'), we can describe a finite state machine by a function phi : A x S -> S, which gives 'transition rules' giving a new state for each combination of alphabet character and state. If we squint and wave our hands and ignore the fact that types aren't exactly sets, and most of the types we care about have infinitely many values, this is very much like the Haskell type (a,s) -> s, or (curried) a -> s -> s, i.e. a -> (s -> s). So we can think of a Haskell function phi :: a -> (s -> s) as a sort of 'state machine'.
Also, for a monoid M and set S, an action of M on S is given by a function f : M x S -> S for which
(1) f(1,s) = s, and
(2) f(mn,s) = f(m,f(n,s)).
Of course, in Haskell we would write f :: m -> (s -> s),
This change is not completely trivial.
Hmm... why is that? Is it because of the types-aren't-really-sets thing? Or are there other reasons as well?
and we would write criteria (1) and (2) as
(1) f mempty = id
(2) f (m `mappend` n) = f m . f n
So what does this make f? Hint: What is (s -> s)?
Aha! f is a monoid homomorphism to the monoid of endomorphisms on s! Right? -Brent
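[Editor's note: a small sketch, not from the thread, of the same observation using Endo from Data.Monoid, which wraps s -> s as a monoid under (.) and id; this is essentially how Data.Foldable relates foldr to foldMap. The name foldrightViaEndo is made up, and foldright refers to the definition earlier in the thread.]

  import Data.Monoid (Endo(..), mconcat)

  -- Mapping f :: a -> (s -> s) into Endo s and mconcat-ing the results
  -- composes all the endomorphisms, which recovers foldright:
  foldrightViaEndo :: (a -> s -> s) -> [a] -> s -> s
  foldrightViaEndo f = appEndo . mconcat . map (Endo . f)

  -- e.g.  foldrightViaEndo (+) [1, 2, 3] 0 == foldright (+) [1, 2, 3] 0   -- both 6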

On Sat, 2008-06-21 at 22:48 -0400, Brent Yorgey wrote:
On Sat, Jun 21, 2008 at 09:36:06PM -0500, Derek Elkins wrote:
On Sat, 2008-06-21 at 21:11 -0400, Brent Yorgey wrote:
On Fri, Jun 20, 2008 at 09:52:36PM -0500, Derek Elkins wrote:
On Fri, 2008-06-20 at 22:31 -0400, Brent Yorgey wrote:
On Fri, Jun 20, 2008 at 06:15:20PM -0500, George Kangas wrote:
foldright (+) [1, 2, 3] 0 == ( (1 +).(2 +).(3 +).id ) 0
foldleft (+) [1, 2, 3] 0 == ( id.(3 +).(2 +).(1 +) ) 0
Hi George,
This is very cool! I have never thought of folds in quite this way before. It makes a lot of things (such as the identities you point out) obvious and elegant.
We can also see the following identities:
foldright f as == foldright (.) (map f as) id
foldleft f as == foldright (flip (.)) (map f as) id
I like that second one, after trying to read another definition of left fold in terms of right fold (in the web book "Real World Haskell").
The type signature, which could be written (a -> (b -> b)) -> ([a] -> (b -> b)), suggests generalization to another type constructor C: (a -> (b -> b)) -> (C a -> (b -> b)). Would a "foldable" typeclass make any sense?
As Brandon points out, you have rediscovered Data.Foldable. =) There's nothing wrong with that, congratulations on discovering it for yourself! But again, I like this way of organizing the type signature: I had never thought of a fold as a sort of 'lift' before. If f :: a -> b -> b, then foldright 'lifts' f to foldright f :: [a] -> b -> b (or C a -> b -> b, more generally).
Okay, it goes without saying that this is useless dabbling, but have I entertained anyone? Or have I just wasted your time? I eagerly await comments on this, my first posting.
Not at all! Welcome, and thanks for posting.
Look into the theory of monoids, monoid homomorphisms, M-sets and free monoids.
Thanks for the pointers! Here's what I've come up with, after re-reading some Barr-Wells lecture notes.
First, given finite sets A (representing an 'alphabet') and S (representing 'states'), we can describe a finite state machine by a function phi : A x S -> S, which gives 'transition rules' giving a new state for each combination of alphabet character and state. If we squint and wave our hands and ignore the fact that types aren't exactly sets, and most of the types we care about have infinitely many values, this is very much like the Haskell type (a,s) -> s, or (curried) a -> s -> s, i.e. a -> (s -> s). So we can think of a Haskell function phi :: a -> (s -> s) as a sort of 'state machine'.
Also, for a monoid M and set S, an action of M on S is given by a function f : M x S -> S for which
(1) f(1,s) = s, and
(2) f(mn,s) = f(m,f(n,s)).
Of course, in Haskell we would write f :: m -> (s -> s),
This change is not completely trivial.
Hmm... why is that? Is it because of the types-aren't-really-sets thing? Or are there other reasons as well?
No, it's just that though these types are isomorphic (in our squinted vision), they are still not identical and each is different way of viewing the same thing.
and we would write criteria (1) and (2) as
(1) f mempty = id
(2) f (m `mappend` n) = f m . f n
So what does this make f? Hint: What is (s -> s)?
Aha! f is a monoid homomorphism to the monoid of endomorphisms on s! Right?
Yep. So the monoid action M x S -> S can be curried to give M -> (S -> S), a monoid homomorphism; or we can instead swap the arguments and curry, giving S -> (M -> S). One name for this last form, connected to a totally different field, is a flow: there, M is the monoid of time (the non-negative reals under addition), S is the set of points in space (R^3, say), and the "monoid action" is usually the solution of a differential equation.
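[Editor's note: a tiny Haskell sketch of the flow reading, not from the thread. The names Time, Point, and halveFlow are made up, and discrete time stands in for the non-negative reals so the laws are easy to check.]

  -- Discrete 'time': the natural numbers under addition, acting on a point.
  type Time  = Int        -- assumed non-negative
  type Point = Double

  -- A flow assigns to each starting point a trajectory indexed by time;
  -- here the motion is repeated halving of the coordinate.
  halveFlow :: Point -> Time -> Point
  halveFlow p t = p / 2 ^ t

  -- The action laws, rearranged into flow form:
  --   halveFlow p 0       == p
  --   halveFlow p (t + u) == halveFlow (halveFlow p u) t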
participants (5):
- Brandon S. Allbery KF8NH
- Brent Yorgey
- Derek Elkins
- George Kangas
- Luke Palmer