Theory about uncurried functions

Lambda calculus is a nice theory in which every function always has one input and one output. Functions with multiple arguments can be simulated because functions are first class, and hence a function can "return" a function. Multiple outputs cannot be done directly: one must either embed these outputs in some data type, e.g. a tuple, or use continuation passing style. Now, does a similar theory exist of functions that always have one input and one output, but where these inputs and outputs are *always* tuples? Or maybe this does not make any sense?

On Tue, 2009-03-03 at 10:43 +0100, Peter Verswyvelen wrote:
Lambda calculus is a nice theory in which every function always has one input and one output. Functions with multiple arguments can be simulated because functions are first class and hence a function can "return" a function. Multiple outputs cannot be done, one must embed these outputs in some data type, e.g. using a tuple, or one must use continuation passing style.
Now, does a similar theory exist of functions that always have one input and one output, but these inputs and outputs are *always* tuples? Or maybe this does not make any sense?
There's the kappa calculus, and also the related Freyd categories, which are related to arrows. There's also the theory induced by cartesian categories, or the theory induced by (symmetric) monoidal categories (which are strengthenings of Freyd categories). You could probably also formalize such a language yourself.
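To give a feel for the "tuples in, tuples out" style these theories describe, here is a small sketch (my own illustration, not the kappa calculus itself) of the structural maps of a symmetric monoidal category written as plain Haskell functions on tuples; programs in such a setting are compositions of exactly these kinds of rewirings:

```haskell
-- Structural morphisms of a symmetric monoidal category of tuples.
-- The names swap/assoc/unitL are illustrative, not from any library.
swap :: (a, b) -> (b, a)
swap (a, b) = (b, a)

-- Reassociate a nested pair.
assoc :: ((a, b), c) -> (a, (b, c))
assoc ((a, b), c) = (a, (b, c))

-- The unit object () can be dropped from a product.
unitL :: ((), a) -> a
unitL ((), a) = a

main :: IO ()
main = print (assoc ((1, 2), 3))
```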

On Tue, Mar 3, 2009 at 2:43 AM, Peter Verswyvelen
Lambda calculus is a nice theory in which every function always has one input and one output. Functions with multiple arguments can be simulated because functions are first class and hence a function can "return" a function. Multiple outputs cannot be done, one must embed these outputs in some data type, e.g. using a tuple, or one must use continuation passing style.
Now, does a similar theory exist of functions that always have one input and one output, but these inputs and outputs are *always* tuples? Or maybe this does not make any sense?
Well, this is not quite an answer to your question, but the curried multiple-argument convention in Haskell is exactly that: a convention. For example, in ML, most functions which take multiple arguments take them in the form of a tuple. They could just as well be curried, but the culture prefers tuples.
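The point that this is only a convention can be seen directly in Haskell, where the Prelude's curry and uncurry convert between the two styles at will (the names addC and addT below are mine, for illustration):

```haskell
-- The same function in both conventions.
addC :: Int -> Int -> Int          -- curried, Haskell-style
addC x y = x + y

addT :: (Int, Int) -> Int          -- tupled, ML-style
addT = uncurry addC

main :: IO ()
main = print (addC 2 3, addT (2, 3), curry addT 2 3)
```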

Peter Verswyvelen
Lambda calculus is a nice theory in which every function always has one input and one output. Functions with multiple arguments can be simulated because functions are first class and hence a function can "return" a function. Multiple outputs cannot be done, one must embed these outputs in some data type, e.g. using a tuple, or one must use continuation passing style.
Both input and output currying are completely orthogonal, for a sufficiently bent definition of orthogonal:

    forall f :: (a, b) -> c  exists g :: a -> b -> c

likewise:

    forall f :: a -> (b, c)  exists (g :: a -> b, h :: a -> c)

This is, of course, abstract nonsense[1]: usually, splitting a function with multiple return values into two is just a crime against efficiency, if not clarity. I doubt any compiler could be decisive enough to get Sufficiently Smart in this context.
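The output-splitting direction can be sketched concretely; split, unsplit, and divMod' are illustrative names of my own, and the sketch also shows the efficiency cost mentioned: reassembling from the split pair computes f twice.

```haskell
-- Split f :: a -> (b, c) into two single-output functions.
split :: (a -> (b, c)) -> (a -> b, a -> c)
split f = (fst . f, snd . f)

-- Reassemble; note f's work is now duplicated across g and h.
unsplit :: (a -> b) -> (a -> c) -> (a -> (b, c))
unsplit g h x = (g x, h x)

-- A function with two results: quotient and remainder mod 10.
divMod' :: Int -> (Int, Int)
divMod' n = n `divMod` 10

main :: IO ()
main = let (g, h) = split divMod'
       in print (g 57, h 57, unsplit g h 57)
```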
Now, does a similar theory exist of functions that always have one input and one output, but these inputs and outputs are *always* tuples? Or maybe this does not make any sense?
I fear it doesn't, at least if the functions are allowed to be passed and to return tuples containing bottom or unit, as you'd be left with syntactically awkward versions of single-argument functions, or unary tuples, whatever. Generally speaking, functions are more fundamental than product types[2]:

    cons :: a -> b -> ((a -> b -> c) -> c)
    cons a b m = m a b

    car :: ((a -> b -> a) -> c) -> c
    car z = z (\p q -> p)

    cdr :: ((a -> b -> b) -> c) -> c
    cdr z = z (\p q -> q)

...as a comparison, try to define eval and apply in terms of tuples.

OTOH, you might be thinking of things like applicative functors, which are product types[3] and can't (necessarily) be escaped from, and not collapsed, either, as would be the case with monads. These are things that are described in terms of a theory, not a theory themselves, though. The important notion is that inside a specific functor, one part of its type always stays the same (or it wouldn't be that functor anymore... duh).

So, to conclude: you aren't likely to find any fundamental theory restricting inputs and outputs to tuples, but you are going to find fundamental theories that enable you to say very interesting things about functions which have their inputs and outputs restricted to arbitrary types, be it sums or products.

[1] Specifically, distilled from http://golem.ph.utexas.edu/category/2008/03/computer_scientists_needed_now.h... , which is a _very_ good and enlightening read
[2] Stolen straight from Exercise 2.4 in the Wizard Book
[3] Of a type tag and a value. Stop nitpicking. Tagged lists spoiled me.

-- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.

On 3 Mar 2009, at 10:43, Peter Verswyvelen wrote:
Lambda calculus is a nice theory in which every function always has one input and one output. Functions with multiple arguments can be simulated because functions are first class and hence a function can "return" a function. Multiple outputs cannot be done, one must embed these outputs in some data type, e.g. using a tuple, or one must use continuation passing style.
Now, does a similar theory exist of functions that always have one input and one output, but these inputs and outputs are *always* tuples? Or maybe this does not make any sense?
I think most of the replies have already described it, but Church's lambda-calculus is just a minimal theory describing a way to construct functions, which turns out to be quite general, covering all algorithms. The first lambda-calculus he describes in his book does not even include the constant functions - for some reason that is convenient. So if you want to have more, just add it. That is why functional computer languages like Haskell can exist.

As for currying, it builds on the fact that in the category of sets (though other categories may not have it) one has a natural equivalence (the arguments can also be functions)

    psi: Hom(A x B, C) --> Hom(A, Hom(B, C))
         f |-> (a |-> (b |-> f(a, b)))
         ((a, b) |-> g(a)(b)) <-| g

So it simply means that set-theoretic products can be rewritten into iterated functions. (Strictly speaking, currying involves a choice of natural equivalence psi.)

In axiomatic set theory, it is common to construct tuples (i.e., set-theoretic products) so that (x) = x and (x_1, ..., x_n) = ((x_1, ..., x_(n-1)), x_n), and one might set () to the empty set (so that, for example, the real space R^0 has only one point). This recursive formula has slipped into functional languages. Pure math, unless there are special requirements, uses naive set theory, in which that recursion formula would not be used: in computer lingo, it corresponds to the implementation (axiomatic set theory) and is not part of the interface (naive set theory).

By contrast, in lists, [x] is not the same as x. (If S is a set, the set of lists with elements from S is the free monoid on S.)

But you might view f() as passing the object () to f, f(x) as passing x to f, and f(x_1, ..., x_n) as passing (x_1, ..., x_n) to f. So the only addition needed is to add the objects (x_1, ..., x_n), n = 0, 1, 2, ..., to your language. You can still curry the functions if you like - a convention, as already noted.

Hans Aberg

Thank you all for this information. It was very enlightening.
Too bad I don't know category theory, since I think it would give me a
better view on the different forms and essence of "computing".
Maybe this raises a new question: does understanding category theory
makes you a better *programmer*?
On Tue, Mar 3, 2009 at 8:45 PM, Hans Aberg
On 3 Mar 2009, at 10:43, Peter Verswyvelen wrote:
Lambda calculus is a nice theory in which every function always has one input and one output. Functions with multiple arguments can be simulated because functions are first class and hence a function can "return" a function. Multiple outputs cannot be done, one must embed these outputs in some data type, e.g. using a tuple, or one must use continuation passing style.
Now, does a similar theory exist of functions that always have one input and one output, but these inputs and outputs are *always* tuples? Or maybe this does not make any sense?
I think most of the replies have already described it, but Church's lambda-calculus is just a minimal theory describing a way to construct functions, which turns out to be quite general, covering all algorithms. The first lambda-calculus he describes in his book does not even include the constant functions - for some reason that is convenient.
So if you want to have more, just add it. That is why functional computer languages like Haskell can exist.
As for currying, it builds on the fact that in the category of sets (though other categories may not have it) one has a natural equivalence (the arguments can also be functions)

    psi: Hom(A x B, C) --> Hom(A, Hom(B, C))
         f |-> (a |-> (b |-> f(a, b)))
         ((a, b) |-> g(a)(b)) <-| g

So it simply means that set-theoretic products can be rewritten into iterated functions. (Strictly speaking, currying involves a choice of natural equivalence psi.)
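In Haskell, the natural equivalence psi above is exactly the Prelude's curry, with uncurry as its inverse; the names psi and psiInv below are mine, mirroring the notation in the message:

```haskell
-- psi : Hom(A x B, C) -> Hom(A, Hom(B, C)), i.e. Prelude.curry
psi :: ((a, b) -> c) -> (a -> b -> c)
psi f = \a b -> f (a, b)

-- Its inverse, i.e. Prelude.uncurry
psiInv :: (a -> b -> c) -> ((a, b) -> c)
psiInv g = \(a, b) -> g a b

main :: IO ()
main = print (psi fst 1 2, psiInv (+) (1, 2))
```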
In axiomatic set theory, it is common to construct tuples (i.e., set-theoretic products) so that (x) = x and (x_1, ..., x_n) = ((x_1, ..., x_(n-1)), x_n), and one might set () to the empty set (so that, for example, the real space R^0 has only one point). This recursive formula has slipped into functional languages. Pure math, unless there are special requirements, uses naive set theory, in which that recursion formula would not be used: in computer lingo, it corresponds to the implementation (axiomatic set theory) and is not part of the interface (naive set theory).
By contrast, in lists, [x] is not the same as x. (If S is a set, the set of lists with elements from S is the free monoid on S.)
But you might view f() as passing the object () to f, f(x) passes x to f, and f(x_1, ..., x_n) passes (x_1, ..., x_n) to f.
So the only addition needed is to add the objects (x_1, ..., x_n), n = 0, 1, 2, ..., to your language. You can still curry the functions if you like - a convention, as already noted.
Hans Aberg

Not a better programmer, just a better human being.

Peter Verswyvelen wrote:
Thank you all for this information. It was very enlightening.
Too bad I don't know category theory, since I think it would give me a better view on the different forms and essence of "computing".
Maybe this raises a new question: does understanding category theory make you a better *programmer*?

Peter Verswyvelen
Maybe this raises a new question: does understanding category theory make you a better *programmer*?
Possibly yes, possibly no. In my experience, you have to look at how CT is applied to other fields to appreciate its clarity. Doing so, you may succeed in promoting some of your understanding of code to a more general level. I see abstract nonsense as way less fuzzy than lambda-based abstraction, and a lot more flexible (mentally speaking) than type theory, or logic in general. The fact that it encompasses both makes it even more attractive (although each can be expressed in terms of the other, as it stands).

There's not much to understand about CT, anyway: it's actually nearly as trivial as set theory. One part of the benefit starts when you begin to categorise different kinds of categories, in the same way that understanding monads is easiest if you just consider their difference from applicative functors. It's a system inviting you to tackle a problem with scrutiny, neither tempting you to generalise way beyond computability, nor burdening you with formal proof requirements or shackling you to some other ball and chain.

Sadly, almost all texts about CT are absolutely useless: they tend to focus either on pure mathematical abstraction, lacking applicability, or tell you the story of a particular application of CT to a specific topic, losing themselves in detail without providing the bigger picture. That's why I liked that Rosetta Stone paper so much: I still don't understand anything more about physics, but I see how working inside a category with specific features and limitations is exactly the right thing to do for those guys, and why you wouldn't want to do a PL that works in the same category.

Throwing lambda calculus at a problem that doesn't happen to be a DSL or some other language of some sort is a bad idea. I've understood that for some time now, being especially fond of automata[1] to model autonomous, interacting agents, but CT made me grok it. The future will show how far it will pull my thinking out of the Turing tarpit.
[1] Which aren't, at all, objects. Finite automata don't go bottom in any case, at least not if you don't happen to shoot them and their health drops below zero.

On Wed, Mar 4, 2009 at 3:38 PM, Achim Schneider
There's not much to understand about CT, anyway: It's actually nearly as trivial as set theory.
You mean that theory which predicts the existence of infinitely many infinities; in fact, for any cardinal, there are at least that many cardinals? That theory in which aleph_1 and 2^aleph_0 are definitely comparable, but we provably cannot compare them? The theory which has omega_0 < omega_1 < omega_2 < ... < omega_omega < ..., where obviously omega_alpha is much larger than alpha... except for when it catches its tail and omega_alpha = alpha for some crazy-ass alpha. I don't think set theory is trivial in the least. I think it is complicated, convoluted, often anti-intuitive and nonconstructive. Category theory is much more trivial, and that's what makes it powerful. (Although training yourself to think categorically is quite difficult, I'm finding.)
One part of the benefit starts when you begin to categorise different kinds of categories, in the same way that understanding monads is easiest if you just consider their difference from applicative functors. It's a system inviting you to tackle a problem with scrutiny, neither tempting you to generalise way beyond computability, nor burdening you with formal proof requirements or shackling you to some other ball and chain.
Sadly, almost all texts about CT are absolutely useless: they tend to focus either on pure mathematical abstraction, lacking applicability, or tell you the story of a particular application of CT to a specific topic, losing themselves in detail without providing the bigger picture. That's why I liked that Rosetta Stone paper so much: I still don't understand anything more about physics, but I see how working inside a category with specific features and limitations is exactly the right thing to do for those guys, and why you wouldn't want to do a PL that works in the same category.
Throwing lambda calculus at a problem that doesn't happen to be a DSL or some other language of some sort is a bad idea. I've understood that for some time now, being especially fond of automata[1] to model autonomous, interacting agents, but CT made me grok it. The future will show how far it will pull my thinking out of the Turing tarpit.
[1] Which aren't, at all, objects. Finite automata don't go bottom in any case, at least not if you don't happen to shoot them and their health drops below zero.
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

Ha. There's even a wiki page on the paradoxes of set theory
http://en.wikipedia.org/wiki/Paradoxes_of_set_theory
If I recall correctly, a math professor once told me that it is not yet
proven whether the cardinality of the power set of the natural numbers is
larger than, smaller than, or equal to the cardinality of the real numbers...
But that was many many years ago, so don't shoot me if I'm wrong :)
2009/3/4 Luke Palmer
On Wed, Mar 4, 2009 at 3:38 PM, Achim Schneider
wrote: There's not much to understand about CT, anyway: It's actually nearly as trivial as set theory.
You mean that theory which predicts the existence of infinitely many infinities; in fact, for any cardinal, there are at least that many cardinals? That theory in which aleph_1 and 2^aleph_0 are definitely comparable, but we provably cannot compare them? The theory which has omega_0 < omega_1 < omega_2 < ... < omega_omega < ..., where obviously omega_alpha is much larger than alpha... except for when it catches its tail and omega_alpha = alpha for some crazy-ass alpha.
I don't think set theory is trivial in the least. I think it is complicated, convoluted, often anti-intuitive and nonconstructive.
Category theory is much more trivial, and that's what makes it powerful. (Although training yourself to think categorically is quite difficult, I'm finding)
One part of the benefit starts when you begin to categorise different kinds of categories, in the same way that understanding monads is easiest if you just consider their difference from applicative functors. It's a system inviting you to tackle a problem with scrutiny, neither tempting you to generalise way beyond computability, nor burdening you with formal proof requirements or shackling you to some other ball and chain.
Sadly, almost all texts about CT are absolutely useless: they tend to focus either on pure mathematical abstraction, lacking applicability, or tell you the story of a particular application of CT to a specific topic, losing themselves in detail without providing the bigger picture. That's why I liked that Rosetta Stone paper so much: I still don't understand anything more about physics, but I see how working inside a category with specific features and limitations is exactly the right thing to do for those guys, and why you wouldn't want to do a PL that works in the same category.
Throwing lambda calculus at a problem that doesn't happen to be a DSL or some other language of some sort is a bad idea. I've understood that for some time now, being especially fond of automata[1] to model autonomous, interacting agents, but CT made me grok it. The future will show how far it will pull my thinking out of the Turing tarpit.
[1] Which aren't, at all, objects. Finite automata don't go bottom in any case, at least not if you don't happen to shoot them and their health drops below zero.

Am Donnerstag, 5. März 2009 13:09 schrieb Peter Verswyvelen:
Ha. There's even a wiki page on the paradoxes of set theory http://en.wikipedia.org/wiki/Paradoxes_of_set_theory
If I recall correctly, a math professor once told me that it is not yet proven whether the cardinality of the power set of the natural numbers is larger than, smaller than, or equal to the cardinality of the real numbers... But that was many many years ago, so don't shoot me if I'm wrong :)
In standard NBG set theory, it is easy to prove that card(P(N)) == card(R).

On 5 Mar 2009, at 13:29, Daniel Fischer wrote:
In standard NBG set theory, it is easy to prove that card(P(N)) == card(R).
No, it is an axiom: Cohen showed in 1963 (mentioned in Mendelson, "Introduction to Mathematical Logic") that the continuum hypothesis (CH) is independent of NBG+(AC)+(Axiom of Restriction), where AC is the axiom of choice. Thus you can assume CH or its negation (which is intuitively somewhat strange).

AC is independent of NBG, so you can assume it or its negation (also intuitively strange), though GCH (generalized CH, for any cardinality) + NBG implies AC (a result by Sierpinski 1947 and Specker 1954). GCH says that for any set x, there are no cardinalities between card x and card 2^x (the power-set cardinality).

Since card ω < card R by Cantor's diagonal method, and card R <= card 2^ω since R can be constructed out of binary sequences (and since the interval [0, 1] and R can be shown to have the same cardinality), GCH implies card R = card 2^ω. (Here, ω is a lower-case omega, denoting the first infinite ordinal.)

Hans Aberg

Am Donnerstag, 5. März 2009 14:58 schrieb Hans Aberg:
On 5 Mar 2009, at 13:29, Daniel Fischer wrote:
In standard NBG set theory, it is easy to prove that card(P(N)) == card(R).
No, it is an axiom: Cohen showed in 1963 (mentioned in Mendelson, "Introduction to Mathematical Logic") that the continuum hypothesis (CH) is independent of NBG+(AC)+(Axiom of Restriction), where AC is the axiom of choice.
Yes, but the continuum hypothesis is 2^Aleph_0 == Aleph_1, which is something quite different from 2^Aleph_0 == card(R). You can show the latter easily with the Cantor-Bernstein theorem, independently of CH or AC.
Thus you can assume CH or its negation (which is intuitively somewhat strange). AC is independent of NBG, so you can assume it or its negation (also intuitively strange), though GCH (generalized CH, for any cardinality) + NBG implies AC (a result by Sierpinski 1947 and Specker 1954). GCH says that for any set x, there are no cardinalities between card x and card 2^x (the power-set cardinality). Since card ω < card R by Cantor's diagonal method, and card R <= card 2^ω since R can be constructed out of binary sequences (and since the interval [0, 1] and R can be shown to have the same cardinality), GCH implies card R = card 2^ω. (Here, ω is a lower-case omega, denoting the first infinite ordinal.)
Hans Aberg

Am Donnerstag, 5. März 2009 15:12 schrieb Daniel Fischer:
Yes, but the continuum hypothesis is 2^Aleph_0 == Aleph_1, which is quite something different from 2^Aleph_0 == card(R).
You can show the latter easily with the Cantor-Bernstein theorem, independent of CH or AC.
Just to flesh this out a bit: let f : P(N) -> R be given by

    f(M) = sum [2 * 3^(-k) | k <- M]

f is easily seen to be injective. Define g : (0,1) -> P(N) by writing

    x = sum [a_k * 2^(-k) | k in N \ {0}, a_k in {0,1}, infinitely many a_k = 1]

and then

    g(x) = {k | a_k = 1}

Again, g is clearly an injection. Now the Cantor-Bernstein theorem asserts there is a bijection between the two sets.
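The injection f can be evaluated numerically on finite subsets; this is only a sketch of my own (the use of Double and lists-as-sets is illustrative). Distinct sets land on distinct points of the Cantor set, since f produces base-3 expansions using only the digits 0 and 2, which is why f is injective:

```haskell
-- f(M) = sum [2 * 3^(-k) | k <- M], restricted to finite M.
f :: [Int] -> Double
f m = sum [2 * 3 ** (negate (fromIntegral k)) | k <- m]

main :: IO ()
main = mapM_ (print . f) [[1], [2], [1, 2]]
```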

On 5 Mar 2009, at 15:23, Daniel Fischer wrote:
Just to flesh this out a bit:
let f : P(N) -> R be given by f(M) = sum [2*3^(-k) | k <- M ] f is easily seen to be injective.
define g : (0,1) -> P(N) by let x = sum [a_k * 2^(-k) | k in N \ {0}, a_k in {0,1}, infinitely many a_k = 1] and then g(x) = {k | a_k = 1}
again clearly g is an injection. Now the Cantor-Bernstein theorem asserts there is a bijection between the two sets.
I think one just defines an equivalence relation of elements mapped to each other by a finite number of iterations of f o g and g o f. The equivalence classes are then at most countable, so one can set up a bijection on each equivalence class (easy for at most countable sets) and then paste them together. The axiom of choice is probably implicit here. Hans Aberg
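This chain-chasing idea can be made concrete on a decidable toy instance; everything below (the choice f n = 2n, g n = 2n + 1, and the names endsInB and h) is my own illustration, not from the thread. Each element starts a chain of alternating preimages; if the chain dies in B we use g's inverse, otherwise f. Here the chains strictly decrease, so they always terminate and no choice is needed:

```haskell
-- Injections f, g : N -> N with f n = 2n (as f : A -> B) and
-- g n = 2n + 1 (as g : B -> A), where A = B = N.
fInv, gInv :: Int -> Maybe Int
fInv b = if even b then Just (b `div` 2) else Nothing
gInv a = if odd a then Just ((a - 1) `div` 2) else Nothing

-- Does the preimage chain starting at a (in A) terminate in B?
endsInB :: Int -> Bool
endsInB a = case gInv a of
  Nothing -> False                 -- chain stops in A
  Just b  -> case fInv b of
    Nothing -> True                -- chain stops in B
    Just a' -> endsInB a'          -- keep chasing (values shrink)

-- The Cantor-Bernstein bijection assembled from the two classes.
h :: Int -> Int
h a | endsInB a = (a - 1) `div` 2  -- g^(-1)(a)
    | otherwise = 2 * a            -- f(a)

main :: IO ()
main = print (map h [0 .. 7])
```

For example, h 3 = 1 because 3's chain dies in B, while h 5 = 10 because 5's chain eventually stops in A.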

Am Donnerstag, 5. März 2009 16:55 schrieb Hans Aberg:
On 5 Mar 2009, at 15:23, Daniel Fischer wrote:
Just to flesh this out a bit:
let f : P(N) -> R be given by f(M) = sum [2*3^(-k) | k <- M ] f is easily seen to be injective.
define g : (0,1) -> P(N) by let x = sum [a_k * 2^(-k) | k in N \ {0}, a_k in {0,1}, infinitely many a_k = 1] and then g(x) = {k | a_k = 1}
again clearly g is an injection. Now the Cantor-Bernstein theorem asserts there is a bijection between the two sets.
I think one just defines an equivalence relation of elements mapped to each other by a finite number of iterations of f o g and g o f. The equivalence classes are then at most countable, so one can set up a bijection on each equivalence class (easy for at most countable sets) and then paste them together. The axiom of choice is probably implicit here.
Cantor-Bernstein doesn't require choice (may be different for intuitionists). http://en.wikipedia.org/wiki/Cantor-Bernstein_theorem
Hans Aberg
Cheers, Daniel

On 5 Mar 2009, at 17:06, Daniel Fischer wrote:
Cantor-Bernstein doesn't require choice (may be different for intuitionists). http://en.wikipedia.org/wiki/Cantor-Bernstein_theorem
Yes, that is right; Mendelson says that. I find it hard to figure out when it is used, as it is so intuitive. Mendelson says AC is in fact equivalent to proving that for all x, y: card x <= card y or card y <= card x. Hans Aberg

On 5 Mar 2009, at 15:12, Daniel Fischer wrote:
No, it is an axiom: Cohen showed in 1963 (mentioned in Mendelson, "Introduction to Mathematical Logic") that the continuum hypothesis (CH) is independent of NBG+(AC)+(Axiom of Restriction), where AC is the axiom of choice.
Yes, but the continuum hypothesis is 2^Aleph_0 == Aleph_1, which is quite something different from 2^Aleph_0 == card(R).
Yes, right, card R = 2^Aleph_0, as you said, and Aleph_1 is defined as the smallest cardinal greater than Aleph_0. Hans Aberg

Luke Palmer
I don't think set theory is trivial in the least. I think it is complicated, convoluted, often anti-intuitive and nonconstructive.
Waaaaaaagh! I mean trivial in the mathematical sense, as in how far away from the axioms you are. The other kind of "triviality" of set theory just proves the point I made about CT vs. lambda calculus.

To wrap up: while formalising, there is always a tradeoff between the complexity of the theory you're using and the complexity of applying it to some specific topic. Category theory hits a very, very sweet spot there.

On Tue, 3 Mar 2009, Peter Verswyvelen wrote:
Now, does a similar theory exist of functions that always have one input and one output, but these inputs and outputs are *always* tuples? Or maybe this does not make any sense?
I don't think one can forbid currying. It's just a question of whether people use it commonly or not. But that was already said in this thread. Somewhat related: http://haskell.org/haskellwiki/Composing_functions_with_multiple_values
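In the spirit of that wiki page, multiple-value functions compose cleanly with the Arrow combinators on ordinary functions, no currying required anywhere (minMax and range are hypothetical names of my own):

```haskell
import Control.Arrow ((&&&))

-- Fan out two results from one input.
minMax :: [Int] -> (Int, Int)
minMax = minimum &&& maximum

-- Consume both results at once: max - min.
range :: [Int] -> Int
range = uncurry subtract . minMax

main :: IO ()
main = print (minMax [3, 1, 4, 1, 5], range [3, 1, 4, 1, 5])
```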

On Wed, 2009-03-04 at 01:35 +0100, Henning Thielemann wrote:
On Tue, 3 Mar 2009, Peter Verswyvelen wrote:
Now, does a similar theory exist of functions that always have one input and one output, but these inputs and outputs are *always* tuples? Or maybe this does not make any sense?
I don't think one can forbid currying.
Sure you can: just work in a category with finite products (and possibly finite coproducts) but no exponentials. Of course, then your language is first order --- and this doesn't do anything to stop partial application, which is still easy --- but it's quite possible. jcc
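To make that concrete, here is a hypothetical sketch of such a first-order arrow language as a Haskell GADT: it has identity, composition, and finite products, but no constructor for exponentials (no lambda, no application), so currying is simply not expressible inside it. The names Arr, Pair, and eval are my own:

```haskell
{-# LANGUAGE GADTs #-}

-- Arrows of a tiny first-order language with finite products.
data Arr a b where
  Id   :: Arr a a
  Comp :: Arr b c -> Arr a b -> Arr a c
  Fst  :: Arr (a, b) a
  Snd  :: Arr (a, b) b
  Pair :: Arr a b -> Arr a c -> Arr a (b, c)
  Prim :: (a -> b) -> Arr a b          -- embed primitive operations

-- Interpret arrows as Haskell functions.
eval :: Arr a b -> a -> b
eval Id         = id
eval (Comp g f) = eval g . eval f
eval Fst        = fst
eval Snd        = snd
eval (Pair f g) = \a -> (eval f a, eval g a)
eval (Prim f)   = f

main :: IO ()
main = print (eval (Pair Snd Fst) (1 :: Int, 'x'))
```

Note there is deliberately no constructor of type `Arr (a, b) c -> Arr a (b -> c)`: adding one would reintroduce exponentials.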
participants (10)
- Achim Schneider
- Colin Adams
- Dan Weston
- Daniel Fischer
- Derek Elkins
- Hans Aberg
- Henning Thielemann
- Jonathan Cast
- Luke Palmer
- Peter Verswyvelen