Algorithmic Advances by Discovering Patterns in Functions


Thanks for this!
On Sat, Sep 15, 2012 at 1:36 AM, Costello, Roger L. wrote:
Hi Folks,
This is a story about discovery. It is a story about advancing the state-of-the-art by discovering patterns in functions.
Scientists look for recurring patterns in nature. Once a pattern is discovered, its essential ingredients are identified and formalized into a law or mathematical equation. Discovery of a pattern is important and those individuals who make such discoveries are rightfully acknowledged in our annals of science.
Scientists are not the only ones who look for recurring patterns; so do functional programmers. Once a pattern is discovered, its essential ingredients are identified and formalized into a function. That function may then become widely adopted by the programming community, elevating it to a new level of capability.
The following is a fantastic example of discovering a pattern in multiple functions, discerning the essential ingredients of the pattern, and replacing the multiple functions with a single superior function. The example is from Richard Bird's book, Introduction to Functional Programming using Haskell.
Before looking at the example let's introduce an important concept in functional programming: partial function application.
A function that takes two arguments may be rewritten as a function that takes one argument and returns a function. The function returned also takes one argument. For example, consider the min function, which returns the minimum of two numbers:
min 2 3 -- returns 2
Notice that min is a function that takes two arguments.
Suppose min is given only one argument:
min 2
[Definition: When fewer arguments are given to a function than it can accept, it is called partial application of the function. That is, the function is partially applied.]
min 2 is a function. It takes one argument. It returns the minimum of the argument and 2. For example:
(min 2) 3 -- returns 2
To see this more starkly, let's assign g to be the function min 2:
let g = min 2
g is now a function that takes one argument and returns the minimum of the argument and 2:
g 3 -- returns 2
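Another way to see this (an aside; the session below is illustrative and assumes GHCi with the standard Prelude) is to ask GHCi for the types. Note how applying min to one argument removes one arrow from the type:
ghci> :type min
min :: Ord a => a -> a -> a
ghci> :type min 2
min 2 :: (Ord a, Num a) => a -> a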
Let's take a second example: the addition operator (+) is a function and it takes two arguments. For example:
2 + 3 -- returns 5
To make it more obvious that (+) is a function, here is the same example in prefix notation:
(+) 2 3 -- returns 5
Suppose we provide (+) only one argument:
(+) 2
That is a partial application of the (+) function.
(+) 2 is a function. It takes one argument. It returns the sum of the argument and 2. For example:
((+) 2) 3 -- returns 5
We can succinctly express (+) 2 as the section:
(+2)
(Strictly, (+2) is shorthand for \x -> x + 2 while (+) 2 is \x -> 2 + x, but because addition is commutative the two behave identically.)
Thus,
(+2) 3 -- returns 5
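Partial application really pays off when a partially applied function is handed to another function. For example (an aside not in the original post), the standard Prelude function map applies a function to every element of a list:
map (+2) [1, 2, 3] -- returns [3, 4, 5]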
Okay, now we are ready to embark on our journey of discovery. We will examine three functions and find their common pattern.
The functions process values from this recursive data type:
data Nat = Zero | Succ Nat
Here are some examples of Nat values:
Zero, Succ Zero, Succ (Succ Zero), Succ (Succ (Succ Zero)), ...
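Nat values quickly become hard to read. As an aside, here is a pair of helper functions for converting between Nat and Int (the names toInt and fromInt are mine, not part of Bird's example):
toInt :: Nat -> Int
toInt Zero = 0
toInt (Succ n) = 1 + toInt n

fromInt :: Int -> Nat  -- assumes a non-negative argument
fromInt 0 = Zero
fromInt k = Succ (fromInt (k - 1))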
The following functions perform addition, multiplication, and exponentiation on Nat values. Examine the three functions. Can you discern a pattern?
-- Addition of Nat values:
m + Zero = m
m + Succ n = Succ (m + n)

-- Multiplication of Nat values:
m * Zero = Zero
m * Succ n = (m * n) + m

-- Exponentiation of Nat values:
m ** Zero = Succ Zero
m ** Succ n = (m ** n) * m
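As written, these operator definitions clash with the Prelude's (+), (*), and (**). To experiment with them in GHCi you can rename them; here is a sketch (the names addN, mulN, and expN are mine):
addN :: Nat -> Nat -> Nat
addN m Zero = m
addN m (Succ n) = Succ (addN m n)

mulN :: Nat -> Nat -> Nat
mulN m Zero = Zero
mulN m (Succ n) = addN (mulN m n) m

expN :: Nat -> Nat -> Nat
expN m Zero = Succ Zero
expN m (Succ n) = mulN (expN m n) m
For example, toInt (expN (fromInt 2) (fromInt 3)) returns 8.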
For each function, think of the left-hand operand m as fixed; each definition then describes a function of the right-hand operand, which is either Zero or Succ n. Writing (m+) for the partial application \x -> m + x (and similarly (m*) and (m**)):
(m+) Zero      (m+) (Succ n)
(m*) Zero      (m*) (Succ n)
(m**) Zero     (m**) (Succ n)
So abstractly there is a function f that takes as argument either Zero or Succ n:
f Zero      f (Succ n)
Given Zero as the argument, each function immediately returns a value:
(m+) Zero = m
(m*) Zero = Zero
(m**) Zero = Succ Zero
Let c denote the value.
So, invoke f with a value for c and Zero:
f c Zero = c
For addition provide m as the value for c:
m + Zero = f m Zero
For multiplication provide Zero as the value for c:
m * Zero = f Zero Zero
For exponentiation provide (Succ Zero) as the value for c:
m ** Zero = f (Succ Zero) Zero
Next, consider how each of the functions (addition, multiplication, exponentiation) deals with Succ n.
Given the argument Succ n, each function processes n and then applies a function to the result:
(m+) (Succ n) = Succ ((m+) n)
(m*) (Succ n) = (+m) ((m*) n)
(m**) (Succ n) = (*m) ((m**) n)
Let h denote the function that is applied to the result.
So, Succ n is processed by invoking f with a value for h and Succ n:
f h (Succ n) = h (f n)
For multiplication provide (+m) as the value for h:
m * Succ n = f (+m) (Succ n)
Recall that (+m) is a function: it takes one argument and returns the sum of the argument and m.
Thus one of the arguments of function f is a function.
f is said to be a higher order function.
[Definition: A higher order function is a function that takes an argument that is a function or it returns a function.]
Continuing on with the other values for h:
For addition provide (Succ) as the value for h:
m + Succ n = f (Succ) (Succ n)
For exponentiation provide (*m) as the value for h:
m ** Succ n = f (*m) (Succ n)
Recap: to process Nat values use these equations:
f c Zero = c
f h (Succ n) = h (f n)
Actually we need to supply f with h, c, and the Nat value:
f h c Zero = c
f h c (Succ n) = h (f h c n)
f's arguments are a function, a value, and a Nat value; the result is a value. So, informally, f has this type signature:
f :: function -> value -> Nat -> value
That type signature is well-known in functional programming. It is the signature of a fold function. Here is some intuition on why it is called the fold function:
Imagine folding a towel. You make your first fold. The second fold builds on top of the first fold. Each fold builds on top of the previous folds. That is analogous to what the function f does. "You make your first fold" is analogous to immediately returning c:
f h c Zero = c
"Each fold builds on top of the previous folds" is analogous to processing Succ n by applying h to the result of processing the n'th Nat value:
f h c (Succ n) = h (f h c n)
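To see the folding in action, here is the expansion (a worked step I am adding, not in the original text) for the Nat value Succ (Succ Zero):
f h c (Succ (Succ Zero))
= h (f h c (Succ Zero))
= h (h (f h c Zero))
= h (h c)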
Recap: In analyzing the functions (addition, multiplication, exponentiation) on Nat we have discovered that they are all doing a fold operation.
That is a remarkable discovery.
Let's now see how to perform addition, multiplication, and exponentiation using f. However, since we now recognize that f is actually the fold function, let's rename it foldn (fold Nat). Here is its type signature and function definition:
foldn :: (a -> a) -> a -> Nat -> a
foldn h c Zero = c
foldn h c (Succ n) = h (foldn h c n)
The functions are implemented as:
m + n = foldn Succ m n
m * n = foldn (+m) Zero n
m ** n = foldn (*m) (Succ Zero) n
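Equivalently, here are fold-based variants of the earlier hypothetical addN, mulN, and expN (again, these names are mine; the backtick sections such as (`addF` m) denote \x -> addF x m, matching (+m) above):
addF, mulF, expF :: Nat -> Nat -> Nat
addF m n = foldn Succ m n
mulF m n = foldn (`addF` m) Zero n
expF m n = foldn (`mulF` m) (Succ Zero) n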
Let's take an example and trace its execution:
Zero + Succ Zero
= foldn Succ Zero (Succ Zero)
= Succ (foldn Succ Zero Zero)
= Succ Zero
The holy trinity of functional programming is:
1. User-defined recursive data types.
2. Recursively defined functions over recursive data types.
3. Proof by induction: show that some property P(n) holds for each element of a recursive data type.
Nat is a user-defined recursive data type. The addition, multiplication, and exponentiation functions are recursively defined functions over Nat.
We saw that the fold function can be defined for the Nat data type. In fact, a suitable fold function can be defined for every recursive data type.
Wow!
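For instance (an illustration of that claim with a type not in the post), here is a fold for a user-defined binary tree type. Each constructor of the type gets a corresponding argument to the fold, just as Zero and Succ correspond to c and h:
data Tree a = Leaf | Node (Tree a) a (Tree a)

foldTree :: (b -> a -> b -> b) -> b -> Tree a -> b
foldTree node leaf Leaf = leaf
foldTree node leaf (Node l x r) = node (foldTree node leaf l) x (foldTree node leaf r)

-- For example, summing the values in a Tree Int:
sumTree :: Tree Int -> Int
sumTree = foldTree (\l x r -> l + x + r) 0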
There are two advantages of writing recursive definitions in terms of fold:
1. The definitions are shorter; rather than having to write down two equations, we have only to write down one.
2. It is possible to prove general properties of fold and use them to prove properties of specific instantiations (see the small sketch below). In other words, rather than having to write down many inductive proofs, we have only to write down one.
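As a small illustration of advantage 2 (a proof sketch of my own, not from Bird's text), here is an inductive proof that foldn Succ Zero n = n for every Nat value n, i.e. that Zero + n = n under the fold-based definition of addition:
Base case: foldn Succ Zero Zero = Zero, by the first equation of foldn.
Inductive step: foldn Succ Zero (Succ n) = Succ (foldn Succ Zero n) = Succ n, by the second equation of foldn and the induction hypothesis.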
Recap: we have approached programming like scientists; namely, we have examined a set of algorithms (the addition, multiplication, and exponentiation functions on Nat), discovered that they share a recurring pattern, and then unified them into a single higher-order algorithm. That is advancing the field. Incremental advancements such as this are what take Computer Science to new heights.
/Roger

Roger,
> Scientists look for recurring patterns in nature. Once a pattern is discovered, its essential ingredients are identified and formalized into a law or mathematical equation.
You could think of lambda calculus as a general language for formalizing and applying patterns. Squint enough and pinning down a pattern starts to look like lambda abstraction; using the pattern, function application.
> Here is some intuition on why it is called the fold function: Imagine folding a towel.
Generally we don't look for insight in the layperson meaning of technical terminology. Sure, folding a list is intuitively like folding a long strip of paper. But folds also apply to trees and even gnarlier datatypes, and there intuition loses steam. Sometimes it's better to internally alpha-rename to a fresh variable to avoid being misled. The meaning lies in the *usage* of the term, so in a way it's not really in English after all.
> We saw that the fold function can be defined for the Nat data type. In fact, a suitable fold function can be defined for every recursive data type.
How would you define a fold for
data S a = S (a -> Bool)
?
-- Kim-Ee

On Sat, 15 Sep 2012, Costello, Roger L. wrote:
Hi Folks,
< ... />
A. V. Borovik has an introduction to mathematical logic, made just of examples, called "Shadows of the Truth": http://www.maths.manchester.ac.uk/~avb/ST.pdf

In the draft 0.819, dated 25 January 2012, which I just now pulled off the Net, chapter 5, which starts on page 45 (61 in my pdf reader), has a discussion of how to define addition, multiplication, and exponentiation of positive integers, starting from (a version of) Peano's Axioms. I looked at the book a few years ago and I remember that in this section there appeared the question: addition and multiplication obey the commutative law and the associative law, but exponentiation does not. Why? I do not see this question posed explicitly in the chapter (I may have missed it), but there does appear the fact: addition and multiplication can be defined for Z/n (by which we mean the integers modulo n) by the "same" definition as is used for the positive integers, but that definition fails for exponentiation.

In Phil Wadler's video "Church's Coincidences" http://www.youtube.com/watch?v=2PJ_DbKGFUA a note by S. C. Kleene is shown. The note lays out a definition of the inverse of the successor function, done for Church's coding of the integers into (some variant, indeed the original) lambda calculus. There was some difficulty in finding the definition.

Yesterday oleg@okmij.org posted to Haskell-cafe a note with the subject "Church vs Boehm-Berarducci encoding of Lists". There the puzzle of how to define cdr in the encoding, and more generally how to do destructuring-bind^W"pattern matching", in the Haskell sense, in the general case, is mentioned.

I think these three things

1. the difference between exponentiation and (addition and multiplication)
2. the difficulty of defining subtraction in Church's encoding of the non-negative integers
3. the difficulty of defining pattern matching in the Church-Boehm-Berarducci encoding

might have something to do with one another.

oo--JS.
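To make point 2 concrete, here is a sketch in Haskell of Church numerals and the pairing trick for the predecessor (all names are mine, the RankNTypes extension is assumed, and this is an illustration rather than Kleene's exact note):
{-# LANGUAGE RankNTypes #-}

-- A Church numeral applies a function s to a starting value z some number of times.
type Church = forall a. (a -> a) -> a -> a

zeroC :: Church
zeroC _ z = z

sucC :: Church -> Church
sucC n s z = s (n s z)

-- The pairing trick: walk up from (z, z) through pairs of (current, predecessor);
-- after n steps the second component holds the predecessor of n.
predC :: Church -> Church
predC n s z = snd (n step (z, z))
  where step (x, _) = (s x, x)

-- Convert to Int for inspection:
toIntC :: Church -> Int
toIntC n = n (+ 1) 0
For example, toIntC (predC (sucC (sucC zeroC))) returns 1, and predC zeroC behaves as zeroC. The indirection through pairs is exactly the "difficulty" referred to above: nothing in the encoding hands you the predecessor directly.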
participants (4)
- Costello, Roger L.
- Darren Grant
- Jay Sulzberger
- Kim-Ee Yeoh