Category Theory woes

Hi all,

I'm trying to learn Haskell and have come across Monads. I kind of understand monads now, but I would really like to understand where they come from. So I got a copy of Barr and Wells's Category Theory for Computing Science, Third Edition, but the book has really left me dumbfounded. It's a good book, but I'm just having trouble with the proofs in Chapter 1--let alone reading the rest of the text.

Are there any references to things like "Hom Sets" and "Hom Functions" in the literature somewhere, and how to use them? The only book I know that uses them is this one.

Has anyone else found it frustratingly difficult to find easy-to-digest material on Category Theory? The chapter that I'm stuck on is actually labelled Preliminaries, so I reason that if I can't do this, then there's not much hope for me understanding the rest of the book...

Maybe there are books on Discrete Maths or Algebra or Set Theory that deal more with Hom Sets and Hom Functions?

Thanks,

Mark Spezzano

I should probably add that I am trying various proofs that involve injective and surjective properties of Hom Sets and Hom Functions. Does anyone know what Hom stands for? I need a text for a newbie.

Mark

Hom(A, B) is just the set of morphisms from A to B.
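Since this is haskell-cafe, here is a small sketch of that idea in Haskell terms (the names Hom, covHom and contraHom are mine, not Barr & Wells's): in the category of Haskell types and functions, the hom set Hom a b is essentially the function type a -> b, and composition gives the induced maps between hom sets that the book presumably calls Hom functions.

type Hom a b = a -> b

-- Post-composition with g :: b -> c induces a map Hom a b -> Hom a c
-- (the covariant direction):
covHom :: Hom b c -> Hom a b -> Hom a c
covHom g f = g . f

-- Pre-composition with h :: a' -> a induces a map Hom a b -> Hom a' b
-- (the contravariant direction):
contraHom :: Hom a' a -> Hom a b -> Hom a' b
contraHom h f = f . h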

You may try Pierce's "Basic Category Theory for Computer Scientists" or
Awodey's "Category Theory", whose style is rather introductory. Both of them
(I think) have a chapter about functors where they explain the Hom functor
and related topics.
Alvaro.

2010/2/2 Álvaro García Pérez
You may try Pierce's "Basic Category Theory for Computer Scientists" or Awodey's "Category Theory", whose style is rather introductory. Both of them (I think) have a chapter about functors where they explain the Hom functor and related topics.
I think Awodey's book is pretty fantastic, actually, but I'd avoid Pierce. Unlike "Types and Programming Languages", I think "Basic Category Theory..." is a bit eccentric in its presentation and doesn't help the reader build intuition.

I have written an overview of various category theory books, which you may find useful, at the following site:

Learning Haskell through Category Theory, and Adventuring in Category Land: Like Flatterland, Only About Categories
http://dekudekuplex.wordpress.com/2009/01/16/learning-haskell-through-catego...

Hope this helps.

-- Benjamin L. Russell

On Sun, 07 Feb 2010 01:38:08 +0900, "Benjamin L. Russell" wrote:
I have written an overview of various category theory books, which you may find useful, at the following site:
Learning Haskell through Category Theory, and Adventuring in Category Land: Like Flatterland, Only About Categories http://dekudekuplex.wordpress.com/2009/01/16/learning-haskell-through-catego...
Hope this helps.
It does. Does anybody have any opinions on Pitt, "Category Theory and Computer Science"? Brian

Mark Spezzano wrote:
I need a text for a newbie.
While the other books suggested are excellent, I think that they would be hard going if you find Barr & Wells difficult. The simplest introduction to the ideas of category theory that I know is "Conceptual Mathematics" by F W Lawvere & S H Schanuel. There are a great many online resources including many good books on category theory. But Barr & Wells is one of the best for application to Computing. ael

Mark Spezzano
Does anyone know what Hom stands for?
'Hom' stands for 'homomorphism' -- a way of changing (morphism) between two structures while keeping some information the same (homo-). Any algebra text will define morphisms aplenty -- homomorphisms, epimorphisms, monomorphisms, and the like. These are maps on groups that preserve group operations (or on rings that preserve ring operations, etc.).

In a topology text, you will find information on what are called continuous functions; they're morphisms too, in disguise. You can find a thinner disguise when you look at continuously invertible continuous functions, which are called homeomorphisms. If you proceed to differential geometry, you'll see smooth maps -- they're morphisms too, and the invertible ones are called diffeomorphisms.

This-morphisms, that-morphisms -- if you're trying to come up with a general theory that describes all of them, it's natural just to call them 'morphisms'; but, as with the word 'colonel', the word and the symbol come to us via different routes, so that 'Hom(omorphism)' survives instead as the abbreviation.

The crucial point in learning category theory is the realisation that, despite all the fancy terminology, it is at heart nothing but a way of talking about groups, rings, topological spaces, partial orders, etc. -- all at once, so no wonder it seems abstract!
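To make the "preserve the operations" point concrete in Haskell (a purely illustrative sketch; the property names are mine): length is a monoid homomorphism from lists under (++) to integers under addition, because it sends the unit to the unit and the operation to the operation. Both properties are QuickCheck-able, but they need nothing beyond the Prelude:

-- length as a homomorphism from the monoid (lists, ++, []) to (Int, +, 0)

-- the unit is preserved
prop_unit :: Bool
prop_unit = length ([] :: [Int]) == 0

-- the operation is preserved
prop_op :: [Int] -> [Int] -> Bool
prop_op xs ys = length (xs ++ ys) == length xs + length ys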

Mark Spezzano
Maybe there are books on Discrete maths or Algebra or Set Theory that deal
more with Hom Sets and Hom Functions?
Googling "haskell category theory" I got: http://en.wikibooks.org/wiki/Haskell/Category_theory http://www.haskell.org/haskellwiki/Category_theory and many others. The latter has a list of books. Perhaps people could update with books they are familiar with and add comments? Dominic.

On Tue, Feb 2, 2010 at 5:26 AM, Mark Spezzano
Hi all,
Has anyone else found it frustratingly difficult to find easy-to-digest material on Category theory? The Chapter that I'm stuck on is actually labeled Preliminaries and so I reason that if I can't
I've looked through at least a dozen. For neophytes, the best of the bunch BY FAR is Goldblatt, Topoi: The Categorial Analysis of Logic (http://digital.library.cornell.edu/cgi/t/text/text-idx?c=math;cc=math;q1=Gol...). Don't be put off by the title. He not only explains the stuff, but he explains the problems that motivated the invention of the stuff. He doesn't cover monads, but he covers all the basics very clearly, so once you've got that down you can move to another author for monads. -gregg

On Feb 16, 2010, at 9:43 AM, Gregg Reynolds wrote:
I've looked through at least a dozen. For neophytes, the best of the bunch BY FAR is Goldblatt, Topoi: the categorial analysis of logic . Don't be put off by the title. He not only explains the stuff, but he explains the problems that motivated the invention of the stuff. He doesn't cover monads, but he covers all the basics very clearly, so once you've got that down you can move to another author for monads.
He does cover monads, briefly. They're called "triples" in this context, and the chapter on interpretations of intuitionistic logic depends on functorial/monadic techniques. If I remember correctly, he uses the techniques and abstracts from them.

I haven't seen anybody mentioning «Joy of Cats» by Adámek, Herrlich & Strecker: it is available online, and is very well-equipped with thorough explanations, examples, exercises & funny illustrations, I would say best of university lecture style: http://katmat.math.uni-bremen.de/acc/. (Actually, the name of the book is a joke on the set theorists' book «Joy of Set», which again is a joke on «Joy of Sex», which I once found in my parents' bookshelf... ;-))

Another alternative: personally, I had difficulties with the somewhat arbitrary terminology, at times a hindrance to intuitive understanding -- and found intuitive access by programming examples, and the book was «Computational Category Theory» by Rydeheard & Burstall, also now available online at http://www.cs.man.ac.uk/~david/categories/book/, done with the functional language ML. Later I translated parts of it to Haskell, which was great fun, and the book's content is more beginner level than any other book I've seen yet.

There is also a programming language project dedicated to category theory, «Charity», at the University of Calgary: http://pll.cpsc.ucalgary.ca/charity1/www/home.html.

Any volunteers in doing a RENAME REFACTORING of category theory together with me?? ;-))

Cheers,

Nick

On Thu, Feb 18, 2010 at 04:27, Nick Rudnick wrote:
I haven't seen anybody mentioning «Joy of Cats» by Adámek, Herrlich & Strecker:
It is available online, and is very well-equipped with thorough explanations, examples, exercises & funny illustrations, I would say best of university lecture style: http://katmat.math.uni-bremen.de/acc/. (Actually, the name of the book is a joke on the set theorists' book «Joy of Set», which again is a joke on «Joy of Sex», which I once found in my parents' bookshelf... ;-))
This book reads quite nicely! I love the illustrations that pervade the technical description, providing comedic relief. I might have to go back and re-learn CT... again. Excellent recommendation!

For those looking for resources on category theory, here are my collected references: http://www.citeulike.org/user/spl/tag/category-theory

Sean

IM(H??)O, a really introductory book on category theory is still to be written -- if category theory is really that fundamental (which I believe, due to its lifting of restrictions usually implicit in 'orthodox maths'), then it should find a reflection in our everyday common sense, shouldn't it?

In this case, I would regard it as desirable -- in best refactoring manner -- to identify a wording in this language, instead of the abuse of terminology quite common in maths, e.g.

* the definition of open/closed sets in topology, with the boundary elements of a closed set to a considerable extent regardable as facing an «outside» (so that reversing these terms could even appear more intuitive, or «bordered» instead of closed and «unbordered» instead of open), or

* the abuse of abandoning imaginary notions in favour of persons' last names in tribute to successful mathematicians... Actually, that pupils get to know a certain lemma as «Zorn's lemma» does not raise public consciousness of Mr. Zorn (even among mathematicians, I am afraid) very much, does it?

* 'folkloristic' dropping of terminology -- even in Germany, where the term «ring» seems to originate from, for at least a century nobody has had the least idea it once had an alternative meaning «gang, band, group» -- which still seems unsatisfactory...

Here computing science has explored ways to do much better than this, and it might be time category theory is claimed by computer scientists in this regard. Once such a project has succeeded, I bet, mathematicians will themselves pick up this work to get into category theory... ;-)

As an example, let's play a little:

Arrows: Arrows are more fundamental than objects; in fact, categories may be defined with arrows only (see the Haskell sketch below). Although I like the term arrow (more than 'morphism'), I intuitively would find the term «reference» less in conflict with the actual intention, as this term

* is very general,

* reflects well dual asymmetry,

* does harmoniously transcend the atomic/structured object perspective -- an object may be in reference to another *by* a substructure (in the beginning, I was quite confused by the lack of explicit explanation in this regard, as «arrow/morphism» at least to me implied object mapping XOR collection mapping).

Categories: In everyday language, a category is a completely different thing, without the least association with a reference system that has a composition which is reflexive and associative. To identify a more intuitive term, we can ponder its properties:

* reflexivity: This I would interpret as «the references of a category may be regarded as a certain generalization of id», saying that references inside a category represent some kind of similarity (which in the most restrictive cases is equality).

* associativity: This I would interpret as «you can *fold* it», i.e. the behaviour is invariant to the order of composing references to composite references -- leading to «the behaviour is completely determined by the lower-level reference structure» and therefore «derivations from the lower level are possible».

Here, finding an appropriate term seems more delicate; maybe a neologism would do good work. Here one proposal:

* consequence / ?consequentiality? : Pro: Reflects well reflexivity, associativity and duality; describing categories as «structures of (inner) consequence» seems to fit exceptionally well. The pictorial meaning of a «con-sequence» may well reflect the graphical structure. Gives a fine picture of the «intermediating forces» in observation and the «psychologism» becoming possible (-> cf. CCCs, Toposes). Con: Personalized meaning has an association with somewhat unfriendly behaviour.

Anybody to drop a comment on this?

Cheers,

Nick
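The "categories may be defined with arrows only" point is exactly what Haskell's Control.Category captures; here is a minimal sketch (the class mirrors the one in base, while Fn and its instance are my own illustration, not anything from the thread):

import Prelude hiding (id, (.))

-- Essentially the class from Control.Category: objects appear only as type
-- indices, everything is stated with arrows.
class Category cat where
  id  :: cat a a                        -- an identity arrow at every object
  (.) :: cat b c -> cat a b -> cat a c  -- composition of arrows

-- Expected laws (the "reflexivity"/"associativity" above, in disguise):
--   f . id == f,  id . f == f,  f . (g . h) == (f . g) . h

-- Ordinary Haskell functions, wrapped in a newtype, form one such category:
newtype Fn a b = Fn (a -> b)

instance Category Fn where
  id              = Fn (\x -> x)
  (Fn g) . (Fn f) = Fn (\x -> g (f x))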

On 18 Feb 2010, at 14:48, Nick Rudnick wrote:
* the definition of open/closed sets in topology with the boundary elements of a closed set to considerable extent regardable as facing to an «outside» (so that reversing these terms could even appear more intuitive, or «bordered» instead of closed and «unbordered» instead of open),
I take "closed" as coming from being closed under limit operations - the origin from analysis. A closure operation c is defined by the property c(c(x)) = c(x). If one takes c(X) = the set of limit points of X, then it is the smallest closed set under this operation. The closed sets X are those that satisfy c(X) = X. Naming the complements of the closed sets open might have been introduced as an opposite of closed. Hans

Hi Hans,

agreed, but, in my eyes, you directly point to the problem:

* doesn't this just delegate the problem to the topic of limit operations, i.e., in how far is the term «closed» here more perspicuous?

* that's (for a very simple concept) the way that maths prescribes:

+ historical background: «I take "closed" as coming from being closed under limit operations - the origin from analysis.»

+ definition backtracking: «A closure operation c is defined by the property c(c(x)) = c(x). If one takes c(X) = the set of limit points of X, then it is the smallest closed set under this operation. The closed sets X are those that satisfy c(X) = X. Naming the complements of the closed sets open might have been introduced as an opposite of closed.»

418 bytes in my file system... how many in my brain...? Is it efficient, inevitable? The most fundamentalist justification I heard in this regard is: «It keeps people off from thinking they could go without the definition...»

Meanwhile, we backtrack definition trees filling books, no, even more... In my eyes, this comes equal to claiming: «You have nothing to understand this beyond the provided authoritative definitions -- your understanding is done by strictly following these.»

Back to the case of open/closed: given we have an idea about sets, we in most cases are able to derive the concept of two disjoint sets facing each other ourselves, don't we? The only lore missing is just a Bool: which term fits which idea? With a reliable terminology using «bordered/unbordered», there is no ambiguity, and we can pass on reading, without any additional effort. Picking such an opportunity thus may save a lot of time and even error -- allowing you to utilize your individual knowledge and experience. I have hope that this approach would be of great help in learning category theory.

All the best,

Nick

On Thursday 18 February 2010 19:19:36, Nick Rudnick wrote:
Hi Hans,
agreed, but, in my eyes, you directly point to the problem:
* doesn't this just delegate the problem to the topic of limit operations, i.e., in how far is the term «closed» here more perspicuous?
It's fairly natural in German, abgeschlossen: closed, finished, complete; offen: open, ongoing.
* that's (for a very simple concept)
That concept (open and closed sets, topology more generally) is *not* very simple. It has many surprising aspects.
the way that maths prescribes: + historical background: «I take "closed" as coming from being closed under limit operations - the origin from analysis.» + definition backtracking: «A closure operation c is defined by the property c(c(x)) = c(x).
Actually, that's incomplete; missing are
- c(x) contains x
- c(x) is minimal among the sets containing x with y = c(y).
If one takes c(X) = the set of limit points of
Not limit points, "Berührpunkte" (touching points).
X, then it is the smallest closed set under this operation. The closed sets X are those that satisfy c(X) = X. Naming the complements of the closed sets open might have been introduced as an opposite of closed.»
418 bytes in my file system... how many in my brain...? Is it efficient, inevitable? The most fundamentalist justification I heard in this regard is: «It keeps people off from thinking the could go without the definition...» Meanwhile, we backtrack definition trees filling books, no, even more... In my eyes, this comes equal to claiming: «You have nothing to understand this beyond the provided authoritative definitions -- your understanding is done by strictly following these.»
But you can't understand it except by familiarising yourself with the definitions and investigating their consequences. The name of a concept can only help you remember what the definition was. Choosing "obvious" names tends to be misleading, because there usually are things satisfying the definition which do not behave like the "obvious" name implies.
Back to the case of open/closed, given we have an idea about sets -- we in most cases are able to derive the concept of two disjunct sets facing each other ourselves, don't we? The only lore missing is just a Bool: Which term fits which idea? With a reliable terminology using «bordered/unbordered», there is no ambiguity, and we can pass on reading, without any additional effort.
And we'd be very wrong. There are sets which are simultaneously open and closed. It is bad enough with the terminology as is, throwing in the boundary (which is an even more difficult concept than open/closed) would only make things worse.
Picking such an opportunity thus may save a lot of time and even error -- allowing you to utilize your individual knowledge and experience. I
When learning a formal theory, individual knowledge and experience (except coming from similar enough disciplines) tend to be misleading more than helpful.
have hope that this approach would be of great help in learning category theory.

On 18 Feb 2010, at 20:20, Daniel Fischer wrote:
+ definition backtracking: «A closure operation c is defined by the property c(c(x)) = c(x).
Actually, that's incomplete, ...
That's right, it is just the idempotency relation.
...missing are - c(x) contains x - c(x) is minimal among the sets containing x with y = c(y).
It suffices*) with a lattice L with relation <= (inclusion in the case of sets) satisfying
i. x <= y implies c(x) <= c(y)
ii. x <= c(x) for all x in L.
iii. c(c(x)) = x.
Hans
*) The definition in a book on lattice theory by Balbes & Dwinger.

On Thursday 18 February 2010 21:47:02, Hans Aberg wrote:
On 18 Feb 2010, at 20:20, Daniel Fischer wrote:
+ definition backtracking: «A closure operation c is defined by the property c(c(x)) = c(x).
Actually, that's incomplete, ...
That's right, it is just the idempotency relation.
...missing are - c(x) contains x - c(x) is minimal among the sets containing x with y = c(y).
It suffices*) with a lattice L with relation <= (inclusion in the case of sets) satifying i. x <= y implies c(x) <= c(y) ii. x <= c(x) for all x in L. iii. c(c(x)) = x.
Typo, iii. c(c(x)) = c(x), of course. If we replace "set" by "lattice element" and "contains" by ">=", the definitions are equivalent. The one you quoted is better, though.
Hans
*) The definition in a book on lattice theory by Balbes & Dwinger.
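To see those three conditions in code (a toy example of my own, not anything from Balbes & Dwinger): take finite sets of integers ordered by inclusion, and let c close a set under negation. The names c, prop_extensive and prop_idempotent are made up for illustration.

import qualified Data.Set as Set
import Data.Set (Set)

-- A toy closure operator: close a set of integers under negation.
c :: Set Int -> Set Int
c xs = Set.union xs (Set.map negate xs)

-- i.   monotone:   xs `Set.isSubsetOf` ys  implies  c xs `Set.isSubsetOf` c ys
-- ii.  extensive:  xs `Set.isSubsetOf` c xs
-- iii. idempotent: c (c xs) == c xs
prop_extensive, prop_idempotent :: [Int] -> Bool
prop_extensive  xs = let s = Set.fromList xs in s `Set.isSubsetOf` c s
prop_idempotent xs = let s = Set.fromList xs in c (c s) == c s

-- The "closed" elements, those with c x == x, are exactly the symmetric sets here.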

On 18 Feb 2010, at 22:06, Daniel Fischer wrote:
...missing are - c(x) contains x - c(x) is minimal among the sets containing x with y = c(y).
It suffices*) with a lattice L with relation <= (inclusion in the case of sets) satifying i. x <= y implies c(x) <= c(y) ii. x <= c(x) for all x in L. iii. c(c(x)) = x.
Typo, iii. c(c(x)) = c(x), of course.
Sure.
If we replace "set" by "lattice element" and "contains" by ">=", the definitions are equivalent.
Right.
The one you quoted is better, though.
It is a powerful concept. I think of a function closure as what one gets when adding all an expression binds to, though I'm not sure that is why it is called a closure. Hans

On Feb 18, 2010, at 1:28 PM, Hans Aberg wrote:
It is a powerful concept. I think of a function closure as what one gets when adding all an expression binds to, though I'm not sure that is why it is called a closure.
It's because a monadic morphism into the same type carrying around data is a closure operator on the type. It's basically a direct sum of the "inner" type, and the "data" type.
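One way to read that remark (my gloss, hedged): the monad operations line up with the closure-operator conditions -- fmap with monotonicity, return with x <= c(x), join with c(c(x)) = c(x); indeed, a monad on a poset viewed as a category is precisely a closure operator. A small, purely illustrative check with the list monad:

import Control.Monad (join)

-- "data carried around", nested twice
nested :: [[[Int]]]
nested = [[[1,2],[3]], [[4]]]

-- idempotence flavour: collapsing twice, in either order, agrees
collapse1, collapse2 :: [Int]
collapse1 = join (join nested)       -- [1,2,3,4]
collapse2 = join (fmap join nested)  -- [1,2,3,4]

-- extensivity flavour: return embeds, join removes the extra layer again
embedded :: [Int]
embedded = join (return [1,2,3])     -- [1,2,3]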

Daniel Fischer wrote:
On Thursday 18 February 2010 19:19:36, Nick Rudnick wrote:
Hi Hans,
agreed, but, in my eyes, you directly point to the problem:
* doesn't this just delegate the problem to the topic of limit operations, i.e., in how far is the term «closed» here more perspicuous?
It's fairly natural in German, abgeschlossen: closed, finished, complete; offen: open, ongoing.
* that's (for a very simple concept)
That concept (open and closed sets, topology more generally) is *not* very simple. It has many surprising aspects.
«concept» is a word of many meanings; to become more specific: Its *definition* is...
the way that maths prescribes: + historical background: «I take "closed" as coming from being closed under limit operations - the origin from analysis.» + definition backtracking: «A closure operation c is defined by the property c(c(x)) = c(x).
Actually, that's incomplete, missing are - c(x) contains x - c(x) is minimal among the sets containing x with y = c(y).
Even more workload to master... This strengthens the thesis that definition recognition requires a considerable amount of one's effort...
If one takes c(X) = the set of limit points of
Not limit points, "Berührpunkte" (touching points).
X, then it is the smallest closed set under this operation. The closed sets X are those that satisfy c(X) = X. Naming the complements of the closed sets open might have been introduced as an opposite of closed.»
418 bytes in my file system... how many in my brain...? Is it efficient, inevitable? The most fundamentalist justification I heard in this regard is: «It keeps people off from thinking the could go without the definition...» Meanwhile, we backtrack definition trees filling books, no, even more... In my eyes, this comes equal to claiming: «You have nothing to understand this beyond the provided authoritative definitions -- your understanding is done by strictly following these.»
But you can't understand it except by familiarising yourself with the definitions and investigating their consequences. The name of a concept can only help you remembering what the definition was. Choosing "obvious" names tends to be misleading, because there usually are things satisfying the definition which do not behave like the "obvious" name implies.
So if you state that the used definitions are completely unpredictable so that they have to be studied completely -- which already ignores that human brain is an analogous «machine» --, you, by information theory, imply that these definitions are somewhat arbitrary, don't you? This in my eyes would contradict the concept such definition systems have about themselves. To my best knowledge it is one of the best known characteristics of category theory that it revealed in how many cases maths is a repetition of certain patterns. Speaking categorically, it is good practice to transfer knowledge from one domain to another, once the required isomorphisms could be established. This way, category theory itself has successfully torn down borders between several subdisciplines of maths and beyond. I just propose to expand the same to common sense matters...
Back to the case of open/closed, given we have an idea about sets -- we in most cases are able to derive the concept of two disjunct sets facing each other ourselves, don't we? The only lore missing is just a Bool: Which term fits which idea? With a reliable terminology using «bordered/unbordered», there is no ambiguity, and we can pass on reading, without any additional effort.
And we'd be very wrong. There are sets which are simultaneously open and closed. It is bad enough with the terminology as is, throwing in the boundary (which is an even more difficult concept than open/closed) would only make things worse.
Really? As «open == not closed» can similarly be implied, bordered/unbordered even in this concern remains at least equal...
Picking such an opportunity thus may save a lot of time and even error -- allowing you to utilize your individual knowledge and experience. I
When learning a formal theory, individual knowledge and experience (except coming from similar enough disciplines) tend to be misleading more than helpful.
Why does the opposite work well for computing science? All the best, Nick

On Feb 18, 2010, at 4:49 PM, Nick Rudnick wrote:
Why does the opposite work well for computing science?
Does it? I remember a peer trying to convince me to use "the factory pattern" in a language that supports functors. I told him I would do my task my way, and he could change it later if he wanted. He told me an hour later he tried a trivial implementation, and found that the source was twice as long as my REAL implementation, split across multiple files in an unattractive way, all while losing conceptual clarity. He immediately switched to using functors too. He didn't even know he wanted a functor, because the name "factory" clouded his interpretation. Software development is full of people inventing creative new ways to use the wrong tool for the job.
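For the haskell-cafe audience, a rough analogue of that point in Haskell (my own sketch; the original discussion was presumably about ML-style module functors, and Store, putKV, getKV, newMemStore and demo are names invented here for illustration): instead of a "factory" that builds the right implementation, the code is simply parameterised by a record of the operations it needs, and swapping implementations means passing a different record.

import Data.IORef (newIORef, readIORef, modifyIORef)

-- The "interface" is an ordinary value: a record of operations.
data Store k v = Store
  { putKV :: k -> v -> IO ()
  , getKV :: k -> IO (Maybe v)
  }

-- One concrete implementation, backed by an IORef'd association list.
newMemStore :: Eq k => IO (Store k v)
newMemStore = do
  ref <- newIORef []
  return Store
    { putKV = \k v -> modifyIORef ref ((k, v) :)
    , getKV = \k -> lookup k `fmap` readIORef ref
    }

-- Client code that, in the "factory" version, was spread over several classes:
demo :: IO (Maybe Int)
demo = do
  store <- newMemStore
  putKV store "answer" 42
  getKV store "answer"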

Hi Alexander, please be more specific -- what is your proposal? Seems as if you had more to say... Nick

On Friday 19 February 2010 01:49:05, Nick Rudnick wrote:
Daniel Fischer wrote:
On Thursday 18 February 2010 19:19:36, Nick Rudnick wrote:
Hi Hans,
agreed, but, in my eyes, you directly point to the problem:
* doesn't this just delegate the problem to the topic of limit operations, i.e., in how far is the term «closed» here more perspicuous?
It's fairly natural in German, abgeschlossen: closed, finished, complete; offen: open, ongoing.
* that's (for a very simple concept)
That concept (open and closed sets, topology more generally) is *not* very simple. It has many surprising aspects.
«concept» is a word of many meanings; to become more specific: Its *definition* is...
It isn't. You can make it look simple (Given a topology T, a set V is called "open in T" if V is an element of T.) by moving all of the difficult parts to the other definitions, but the entire group of definitions contains a nontrivial amount of difficulties (I've seen fairly bright students take a couple of weeks to wrap their heads around it, even though they've been familiar with the stuff in the context of Euclidean space.).
the way that maths prescribes: + historical background: «I take "closed" as coming from being closed under limit operations - the origin from analysis.» + definition backtracking: «A closure operation c is defined by the property c(c(x)) = c(x).
Actually, that's incomplete, missing are - c(x) contains x - c(x) is minimal among the sets containing x with y = c(y).
Even more workload to master... This strengthens the thesis that definition recognition requires a considerable amount of one's effort...
I don't know what "recognition" should mean here, but certainly, understanding a definition, its (near but not trivial) consequences and its purpose requires considerable effort. Especially if it's an abstract and very general definition.
If one takes c(X) = the set of limit points of
Not limit points, "Berührpunkte" (touching points).
X, then it is the smallest closed set under this operation. The closed sets X are those that satisfy c(X) = X. Naming the complements of the closed sets open might have been introduced as an opposite of closed.»
418 bytes in my file system... how many in my brain...? Is it efficient, inevitable? The most fundamentalist justification I heard in this regard is: «It keeps people off from thinking the could go without the definition...» Meanwhile, we backtrack definition trees filling books, no, even more... In my eyes, this comes equal to claiming: «You have nothing to understand this beyond the provided authoritative definitions -- your understanding is done by strictly following these.»
But you can't understand it except by familiarising yourself with the definitions and investigating their consequences. The name of a concept can only help you remembering what the definition was. Choosing "obvious" names tends to be misleading, because there usually are things satisfying the definition which do not behave like the "obvious" name implies.
So if you state that the used definitions are completely unpredictable
I don't.
so that they have to be studied completely
Many definitions contain details which you probably wouldn't think about before you've banged your head against a wall very hard several times because you didn't know such details even existed. If you decide to ignore the hard work and experience that have gone into the carefully crafted definitions, you are bound to make the same mistakes, run up the same blind alleys as those who have shaped the definition to what it now is.
-- which already ignores that human brain is an analogous «machine» --,
What is an "analogous machine", and why would such a machine not be suitable for studying definitions?
you, by information theory, imply that these definitions are somewhat arbitrary, don't you?
In a sense, of course the definitions are completely arbitrary. You could go ahead and define whatever you wish. But of course, some definitions are more useful than others, so the definitions in use aren't very arbitrary, they're mostly the ones determined to be most useful. The names given to the defined concepts are more arbitrary. You could call an open set a Pangalactic Gargleblaster and a closed set a Ravenous Bugblatter Beast of Traal. Mathematically, it would make no difference. It would just be harder to remember which was which. A good name invokes enough imagery to remind the hearer/reader what the definition was [not the details, but the general idea], but not so much as to give false ideas about the consequences of the definition.
This in my eyes would contradict the concept such definition systems have about themselves.
To my best knowledge it is one of the best known characteristics of category theory that it revealed in how many cases maths is a repetition of certain patterns.
Backwards. Category Theory is a product of the realisation of how often certain patterns appear in different guise in different parts of mathematics. Of course, once started, it revealed many more.
Speaking categorically it is good practice to transfer knowledge from on domain to another, once the required isomorphisms could be established. This way, category theory itself has successfully torn down borders between several subdisciplines of maths and beyond.
I just propose to expand the same to common sense matters...
Back to the case of open/closed, given we have an idea about sets -- we in most cases are able to derive the concept of two disjunct sets facing each other ourselves, don't we? The only lore missing is just a Bool: Which term fits which idea? With a reliable terminology using «bordered/unbordered», there is no ambiguity, and we can pass on reading, without any additional effort.
And we'd be very wrong. There are sets which are simultaneously open and closed. It is bad enough with the terminology as is, throwing in the boundary (which is an even more difficult concept than open/closed) would only make things worse.
Really? As «open == not closed» can similarly be implied, bordered/unbordered even in this concern remains at least equal...
I'm not saying current terminology is optimal - it isn't. But to replace established terminology, the proposed replacement should be clearly superior. I don't think bordered/unbordered fits that criterion (especially since the topological term is boundary, not border).
Picking such an opportunity thus may save a lot of time and even error -- allowing you to utilize your individual knowledge and experience. I
When learning a formal theory, individual knowledge and experience (except coming from similar enough disciplines) tend to be misleading more than helpful.
Why does the opposite work well for computing science?
Does it?
All the best,
Nick
To you too, Daniel

A place in the hall of fame, and thank you for mentioning clopen... ;-)

Just wanting to present open/closed as an example of improvable maths terminology, I overlooked this even more evident defect in it and even copied it into my improvement proposal, bordered/unbordered: it is questionable style to name two properties as an antagonistic pair if they can occur combined...! Accordingly, it is more elegant to draw such terms from independent domains.

This subject seems to drive me crazy... I actually pondered on an improvement, and came to: «faceless» in replacement of «open». Rough explanation: the «limit» of a closed set can be the limit of another closed set that may even share only this limit -- a faceless set has -- under the given perspective -- no such part to «face» to beyond. Any comments?

But the big question is now: what (non-antagonistic) name can be found for the other property?? Any ideas...?? Ergonomic terminology does not come for free; giving a quick answer here would be «maths style» with replacing...

Cheers,

Nick

Michael Matsko wrote:
Nick,
Actually, a clopen set is one that is both closed and open, not one that is neither. Except in the case of half-open intervals, I can't remember talking much in topology about sets with a partial boundary.
Alexander Solla wrote:
Clopen means a set is both closed and open, not that it's "partially bordered".
Daniel Fischer wrote:
And we'd be very wrong. There are sets which are simultaneously open and closed. It is bad enough with the terminology as is, throwing in the boundary (which is an even more difficult concept than open/closed) would only make things worse.

On Feb 18, 2010, at 10:19 AM, Nick Rudnick wrote:
Back to the case of open/closed, given we have an idea about sets -- we in most cases are able to derive the concept of two disjunct sets facing each other ourselves, don't we? The only lore missing is just a Bool: Which term fits which idea? With a reliable terminology using «bordered/unbordered», there is no ambiguity, and we can pass on reading, without any additional effort.
There are sets that only partially contain their boundary. They are neither open nor closed, in the usual topology. Consider (0,1] in the Real number line. It contains 1, a boundary point. It does not contain 0. It is not an open set OR a closed set in the usual topology for R.
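For the record, the standard computation behind this example (usual topology on R), in LaTeX notation:

A = (0,1] \subseteq \mathbb{R}, \qquad \operatorname{int}(A) = (0,1), \qquad \overline{A} = [0,1], \qquad \partial A = \overline{A} \setminus \operatorname{int}(A) = \{0,1\}.

Since A \neq \operatorname{int}(A), the set is not open; since A \neq \overline{A}, it is not closed.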

Hi Alexander,

my actual posting was about rename refactoring category theory; closed/open was just presented as an example of suboptimal terminology in maths. But of course, bordered/unbordered would be extended by e.g. «partially bordered», and the same holds.

Cheers,

Nick

On Feb 18, 2010, at 2:08 PM, Nick Rudnick wrote:
my actual posting was about rename refactoring category theory; closed/open was just presented as an example for suboptimal terminology in maths. But of course, bordered/unbordered would be extended by e.g. «partially bordered» and the same holds.
And my point was that your terminology was suboptimal for just the same reasons. The difficulty of mathematics is hardly the funny names.

Perhaps you're not familiar with the development of Category theory. Hans Aberg gave a brief development. Basically, Category theory is the RESULT of the refactoring you're asking about. Category theory's beginnings are found in work on differential topology (where functors and higher order constructs took on a life of their own), and the unification of topology, lattice theory, and universal algebra (in order to ground that higher order stuff). Distinct models and notions of computation were unified, using arrows and objects.

Now, you could have a legitimate gripe about current category theory terminology. But I am not so sure. We can "simplify" lots of things. Morphisms can become arrows or functions. Auto- can become "self-". "Homo-" can become "same-". Functors can become "Category arrows". Does it help? You tell me.

But if we're ever going to do anything interesting with Category theory, we're going to have to go into the realm of dealing with SOME kind of algebra. We need examples, and the mathematically tractable ones have names like "group", "monoid", "ring", "field", "sigma-algebras", "lattices", "logics", "topologies", "geometries". They are arbitrary names, grounded in history. Any other choice is just as arbitrary, if not more so. The closest thing algebras have to a unique name is their signature -- basically their axiomatization -- or a long descriptive name in terms of arbitrary names and adjectives ("the Cartesian product of a Cartesian closed category and a groupoid with groupoid addition induced by...."). The case for Pareto efficiency is here: is changing the name of these kinds of structures wholesale a win for efficiency? The answer is "no". Everybody would have to learn the new, arbitrary names, instead of just some people having to learn the old arbitrary names.

Let's compare this to the "monad fallacy". It is said every beginner Haskell programmer writes a monad tutorial, and often falls into the "monad fallacy" of thinking that there is only one interpretation for monadism. Monads are relatively straightforward. Their power comes from the fact that many different kinds of things are "monadic" -- sequencing, state, function application. What name should we use for monads instead? Which interpretation must we favor, despite the fact that others will find it counter-intuitive? Or should we choose to not favor one, and just pick a new arbitrary name?
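To ground the "many different kinds of things are monadic" point in code (standard examples, nothing controversial; liftM2 comes from Control.Monad): one generic program written against the Monad interface, read three different ways simply by choosing the monad.

import Control.Monad (liftM2)

-- one generic program against the Monad interface
addPair :: Monad m => m Int -> m Int -> m Int
addPair = liftM2 (+)

-- ... read as possible failure:
ex1 :: Maybe Int
ex1 = addPair (Just 1) Nothing        -- Nothing

-- ... read as nondeterminism:
ex2 :: [Int]
ex2 = addPair [1,2] [10,20]           -- [11,21,12,22]

-- ... read as sequenced effects:
ex3 :: IO Int
ex3 = addPair (return 1) (return 41)  -- returns 42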

Alexander Solla wrote:
On Feb 18, 2010, at 2:08 PM, Nick Rudnick wrote:
my actual posting was about rename refactoring category theory; closed/open was just presented as an example for suboptimal terminology in maths. But of course, bordered/unbordered would be extended by e.g. «partially bordered» and the same holds.
And my point was that your terminology was suboptimal for just the same reasons. The difficulty of mathematics is hardly the funny names.
:-) Criticism... Criticism is good at this place... Opens up things...
Perhaps you're not familiar with the development of Category theory. Hans Aberg gave a brief development. Basically, Category theory is the RESULT of the refactoring you're asking about. Category theory's beginnings are found in work on differential topology (where functors and higher order constructs took on a life of their own), and the unification of topology, lattice theory, and universal algebra (in order to ground that higher order stuff). Distinct models and notions of computation were unified, using arrows and objects.
Now, you could have a legitimate gripe about current category theory terminology. But I am not so sure. We can "simplify" lots of things. Morphisms can become arrows or functions. Auto- can become "self-". "Homo-" can become "same-". Functors can become "Category arrows". Does it help? You tell me.
I think I understand what you mean and completely agree... The project in my imagination is different, I read on...
But if we're ever going to do anything interesting with Category theory, we're going to have to go into the realm of dealing with SOME kind of algebra. We need examples, and the mathematically tractable ones have names like "group", "monoid", "ring", "field", "sigma-algebras", "lattices", "logics", "topologies", "geometries". They are arbitrary names, grounded in history. Any other choice is just as arbitrary, if not more so. The closest thing algebras have to a unique name is their signature -- basically their axiomatization -- or a long descriptive name in terms of arbitrary names and adjectives ("the Cartesian product of a Cartesian closed category and a groupoid with groupoid addition induced by...."). The case for Pareto efficiency is here: is changing the name of these kinds of structures wholesale a win for efficiency? The answer is "no". Everybody would have to learn the new, arbitrary names, instead of just some people having to learn the old arbitrary names.
Ok...
Let's compare this to the "monad fallacy". It is said every beginner Haskell programmer write a monad tutorial, and often falls into the "monad fallacy" of thinking that there is only one interpretation for monadism. Monads are relatively straightforward. Their power comes from the fact that many different kinds of things are "monadic" -- sequencing, state, function application. What name should we use for monads instead? Which interpretation must we favor, despite the fact that others will find it counter-intuitive? Or should we choose to not favor one, and just pick a new arbitrary name?
The short answer: if the work I imagine would be done by exchanging a word here and there on the quick -- it would again be maths style, with the difference only in justifying it with naivety instead of resignation.

The idea I have is different: DEEP CONTEMPLATION stands at the beginning, gathering the constructive criticism of the sharpest minds possible, hard discussions and debates full of temperament -- all of this already rewarding in itself. The participants are united in the spirit to create a masterpiece, and to explore details in depths for which time was missing before. It could be great fun for everybody to improve one's deep intuition of category theory.

This book might be comparable to a programming language, hypertext like a wikibook and maybe in development forever. It will have an appendix (or later a special mode) with a translation of all new terms into the original ones.

I do believe deeply that this is possible. For all the criticism of Bourbaki -- I was among the generation of pupils taught set theory in elementary school; looking back, I regard it as a rewarding effort. Why should category theory not be able to achieve the same, maybe with other means than plastic chips?

All the best,

Nick

On 18 Feb 2010, at 19:19, Nick Rudnick wrote:
agreed, but, in my eyes, you directly point to the problem:
* doesn't this just delegate the problem to the topic of limit operations, i.e., in how far is the term «closed» here more perspicuous?
* that's (for a very simple concept) the way that maths prescribes: + historical background: «I take "closed" as coming from being closed under limit operations - the origin from analysis.» + definition backtracking: «A closure operation c is defined by the property c(c(x)) = c(x). If one takes c(X) = the set of limit points of X, then it is the smallest closed set under this operation. The closed sets X are those that satisfy c(X) = X. Naming the complements of the closed sets open might have been introduced as an opposite of closed.»
418 bytes in my file system... how many in my brain...? Is it efficient, inevitable?
Yes, it is efficient conceptually. The idea of closed sets led to topology, and in combination with abstractions of differential geometry led to cohomology theory, which needed category theory, solving problems in number theory, used in a computer language called Haskell using a feature called Currying, named after a logician and mathematician, though only one person. Hans

Hans Aberg wrote:
On 18 Feb 2010, at 19:19, Nick Rudnick wrote:
agreed, but, in my eyes, you directly point to the problem:
* doesn't this just delegate the problem to the topic of limit operations, i.e., in how far is the term «closed» here more perspicuous?
* that's (for a very simple concept) the way that maths prescribes: + historical background: «I take "closed" as coming from being closed under limit operations - the origin from analysis.» + definition backtracking: «A closure operation c is defined by the property c(c(x)) = c(x). If one takes c(X) = the set of limit points of X, then it is the smallest closed set under this operation. The closed sets X are those that satisfy c(X) = X. Naming the complements of the closed sets open might have been introduced as an opposite of closed.»
418 bytes in my file system... how many in my brain...? Is it efficient, inevitable?
Yes, it is efficient conceptually. The idea of closed sets led to topology, and in combination with abstractions of differential geometry led to cohomology theory, which needed category theory, solving problems in number theory, used in a computer language called Haskell using a feature called Currying, named after a logician and mathematician, though only one person.
It is SUCCESSFUL, NO MATTER... :-)
But I spoke about efficiency, in the Pareto sense (http://en.wikipedia.org/wiki/Pareto_efficiency)... Can we say that the way in which things are done now cannot be improved??

Hans, yours was the most specific response to my actual intention -- could I clear up the reference thing for you?

All the best,

Nick

On 18 Feb 2010, at 23:02, Nick Rudnick wrote:
418 bytes in my file system... how many in my brain...? Is it efficient, inevitable?
Yes, it is efficient conceptually. The idea of closed sets let to topology, and in combination with abstractions of differential geometry led to cohomology theory which needed category theory solving problems in number theory, used in a computer language called Haskell using a feature called Currying, named after a logician and mathematician, though only one person. It is SUCCESSFUL, NO MATTER... :-)
But I spoke about efficiency, in the Pareto sense (http://en.wikipedia.org/wiki/Pareto_efficiency )... Can we say that the way in which things are done now cannot be improved??
Hans, you were the most specific response to my actual intention -- could I clear up the reference thing for you?
That seems to be an economic theory version of utilitarianism - the problem is that when dealing with concepts there may be no optimizing function to agree upon. There is an Occam's razor one may try to apply in the case of axiomatic systems, but one then finds it may be more practical to use one that is not minimal. As for the naming problem, it is more of a linguistic problem: the names were somehow handed down by tradition, and it may be difficult to change them. For example, there is a rumor that "kangaroo" means "I do not understand" in a native language; assuming this to be true, it might be difficult to change it. Mathematicians, though, stick to their own concepts and definitions individually. For example, I had conversations with one who calls monads "triads", and then one has to cope with that. Hans

On 18 Feb 2010, at 23:02, Nick Rudnick wrote:
418 bytes in my file system... how many in my brain...? Is it efficient, inevitable?
Yes, it is efficient conceptually. The idea of closed sets led to topology, which in combination with abstractions of differential geometry led to cohomology theory, which needed category theory to solve problems in number theory, which in turn is used in a computer language called Haskell, which uses a feature called Currying, named after a logician and mathematician, though only one person. It is SUCCESSFUL, NO MATTER... :-)
But I spoke about efficiency, in the Pareto sense (http://en.wikipedia.org/wiki/Pareto_efficiency)... Can we say that the way in which things are done now cannot be improved??
Hans, you were the most specific response to my actual intention -- could I clear up the reference thing for you?
That seems to be an economic theory version of utilitarianism - the problem is that when dealing with concepts there may be no optimizing function to agree upon. There is an Occam's razor one may try to apply in the case of axiomatic systems, but one then finds it may be more practical to use one that is not minimal. Exactly. By this I justify my questioning of the inviolability of the current state of the art of maths terminology -- an open discussion should be allowed at any time...
As for the naming problem, it is more of a linguistic problem: the names were somehow handed down by tradition, and it may be difficult to change them. For example, there is a rumor that "kangaroo" means "I do not understand" in a native language; assuming this to be true, it might be difficult to change it. Fully agreed. This is indeed a hard problem, and I fully agree if you say that, for maths, trying this is for people with a fondness for Speakers' Corner... :-)) But for category theory, a subject (too!) many people are complaining about, blind to its beauty, such a book -- ideally in children's language and illustrations, of course with a dictionary of the original terminology in the appendix! -- could be of great positive influence on category theory itself. And the deep contemplation encompassing the *collective* creation should be most rewarding in itself -- a journey without haste into the depths of the subject.
Hans Aberg wrote:
Mathematicians though stick to their own concepts and definitions individually. For example, I had conversations with one who calls monads "triads", and then one has to cope with that.
Yes. But isn't it also an enrichment in some way? All the best, Nick

On 19 Feb 2010, at 00:05, Nick Rudnick wrote:
Mathematicians though stick to their own concepts and definitions individually. For example, I had conversations with one who calls monads "triads", and then one has to cope with that.
Yes. But isn't it also an enrichment in some way?
Yes, one must be able to choose notation that fits with the notions. A similar situation exists in the case of computer languages, having their own syntax. Hans

On Feb 19, 2010, at 11:22 AM, Hans Aberg wrote:
As for the naming problem, it is more of a linguistic problem: the names were somehow handed down by tradition, and it may be difficult to change them. For example, there is a rumor that "kangaroo" means "I do not understand" in a native language; assuming this to be true, it might be difficult to change it.
OED entry for kangaroo, n; etymology: [Stated to have been the name in a native Australian language. Cook and Banks believed it to be the name given to the animal by the natives at Endeavour River, Queensland, and there is later affirmation of its use elsewhere. On the other hand, there are express statements to the contrary (see quotations below), showing that the word, if ever current in this sense, was merely local, or had become obsolete. The common assertion that it really means ‘I don't understand’ (the supposed reply of the native to his questioner) seems to be of recent origin and lacks confirmation. ...] Turning to the Wikipedia article, we find "The word kangaroo derives from the Guugu Yimidhirr word gangurru, referring to a grey kangaroo" and "A common myth about the kangaroo's English name is that 'kangaroo' was a Guugu Yimidhirr phrase for "I don't understand you." According to this legend, Captain James Cook and naturalist Sir Joseph Banks were exploring the area when they happened upon the animal. They asked a nearby local what the creatures were called. The local responded "Kangaroo", meaning "I don't understand you", which Cook took to be the name of the creature. The Kangaroo myth was debunked in the 1970s by linguist John B. Haviland in his research with the Guugu Yimidhirr people." See the wikipedia page for references, especially Haviland's article. It's time this urban legend was forgotten.

On 19 Feb 2010, at 00:52, Richard O'Keefe wrote:
Turning to the Wikipedia article, we find "The word kangaroo derives from the Guugu Yimidhirr word gangurru, referring to a grey kangaroo"
Thanks, particularly for giving the name of the native language. Hope the Wikipedia article can be trusted. :-)
It's time this urban legend was forgotten.
Not at all, there are sites specializing in such things: http://www.snopes.com/ http://www.snopes.com/inboxer/hoaxes/computer.asp Hans

On Thursday 18 February 2010 14:48:08, Nick Rudnick wrote:
even in Germany, where the term «ring» seems to originate from, for at least a century nobody has had the least idea that it once had an alternative meaning «gang, band, group»,
Wrong. The term "Ring" is still in use with that meaning in composites like Schmugglerring, Autoschieberring, ...

Hi Daniel, ;-)) agreed, but is the word «Ring» itself in use? The same goes for the English language... de.wikipedia says: «The name /ring/ does not refer to something visually ring-shaped, but to an organized association of elements into a whole. This sense of the word has otherwise largely been lost in German. Some older names of associations (such as Deutscher Ring or Weißer Ring) or expressions like "Verbrecherring" [criminal ring] still point to this meaning. The concept of the ring goes back to Richard Dedekind; the name /Ring/, however, was introduced by David Hilbert.» (http://de.wikipedia.org/wiki/Ringtheorie) How many students every year, worldwide, are left wondering what «the hollow» in a ring is supposed to be, ever since Hilbert made this unreflective choice of words by just picking another word from the semantic field of «collection»? Although not a mathematician, I have attended several maths lectures out of interest, and had the same problem. Then I began asking everybody I could reach -- and even maths professors could not tell me why this thing is called a «ring». Thanks for your examples: a «gang» {of smugglers|car thieves} shows that even the original meaning -- once known -- does not reflect the characteristics of the mathematical structure. Cheers, Nick Daniel Fischer wrote:
On Thursday 18 February 2010 14:48:08, Nick Rudnick wrote:
even in Germany, where the term «ring» seems to originate from, for at least a century nobody has had the least idea that it once had an alternative meaning «gang, band, group»,
Wrong. The term "Ring" is still in use with that meaning in composites like Schmugglerring, Autoschieberring, ...

On Thursday 18 February 2010 17:10:08, Nick Rudnick wrote:
Hi Daniel,
;-)) agreed, but is the word «Ring» itself in use?
Of course, many people wear rings on their fingers. Oh - you meant "in the sense of gang/group"? It still appears as part of the name of some groups as a word of its own, otherwise, I can at the moment only recall its use in compounds.
The same goes for the English language... de.wikipedia says:
«The name /ring/ does not refer to something visually ring-shaped, but to an organized association of elements into a whole.
I don't know whether that's correct. It may be, but then the French "anneau" is a horrible mistranslation.
This sense of the word has otherwise largely been lost in German. Some older names of associations (such as Deutscher Ring or Weißer Ring) or expressions like "Verbrecherring" [criminal ring] still point to this meaning. The concept of the ring goes back to Richard Dedekind; the name /Ring/, however, was introduced by David Hilbert.» (http://de.wikipedia.org/wiki/Ringtheorie)
How many students every year, worldwide, are left wondering what «the hollow» in a ring is supposed to be, ever since Hilbert made this unreflective choice of words,
You know, a "field" is a "Körper" [body] in German ("corps" in French); a "Ring" is a "Körper" with a hole in it (no division in general).
by just picking another word from the semantic field of «collection»? Although not a mathematician, I have attended several maths lectures out of interest, and had the same problem. Then I began asking everybody I could reach -- and even maths professors could not tell me why this thing is called a «ring».
That's often a problem with things that were named by Germans in the nineteenth or early twentieth century. They had pretty undecipherable ways of choosing metaphors and coming up with weird associations.
Thanks for your examples: a «gang» {of smugglers|car thieves} shows that even the original meaning -- once known -- does not reflect the characteristics of the mathematical structure.
Cheers,
Nick
Daniel Fischer wrote:
On Thursday 18 February 2010 14:48:08, Nick Rudnick wrote:
even in Germany, where the term «ring» seems to originate from, for at least a century nobody has had the least idea that it once had an alternative meaning «gang, band, group»,
Wrong. The term "Ring" is still in use with that meaning in composites like Schmugglerring, Autoschieberring, ...

On Feb 19, 2010, at 3:55 AM, Daniel Fischer wrote:
On Thursday 18 February 2010 14:48:08, Nick Rudnick wrote:
even in Germany, where the term «ring» seems to originate from, for at least a century nobody has had the least idea that it once had an alternative meaning «gang, band, group»,
Wrong. The term "Ring" is still in use with that meaning in composites like Schmugglerring, Autoschieberring, ...
The mathematical ring is OED ring n1 sense 12. The group of people sense is sense 11, immediately above it. "Drug ring" is still in use. I'd always assumed "ring" was generalised from Z[n].

On Friday 19 February 2010 00:24:23, Richard O'Keefe wrote:
On Feb 19, 2010, at 3:55 AM, Daniel Fischer wrote:
On Thursday 18 February 2010 14:48:08, Nick Rudnick wrote:
even in Germany, where the term «ring» seems to originate from, for at least a century nobody has had the least idea that it once had an alternative meaning «gang, band, group»,
Wrong. The term "Ring" is still in use with that meaning in composites like Schmugglerring, Autoschieberring, ...
The mathematical ring is OED ring n1 sense 12. The group of people sense is sense 11, immediately above it. "Drug ring" is still in use. I'd always assumed "ring" was generalised from Z[n].
As in "cyclic group", arrange the numbers in a ring like on a clockface? Maybe. As far as I know, the term "ring" (in the mathematical sense) first appears in chapter 9 - Die Zahlringe des Körpers - of Hilbert's "Die Theorie der algebraischen Zahlkörper". Unfortunately, Hilbert gives no hint why he chose that name (Dedekind, who coined the term "Körper", called these structures "Ordnung" [order]).

On 19 Feb 2010, at 00:55, Daniel Fischer wrote:
I'd always assumed "ring" was generalised from Z[n].
As in "cyclic group", arrange the numbers in a ring like on a clockface? Maybe. As far as I know, the term "ring" (in the mathematical sense) first appears in chapter 9 - Die Zahlringe des Körpers - of Hilbert's "Die Theorie der algebraischen Zahlkörper". Unfortunately, Hilbert gives no hint why he chose that name (Dedekind, who coined the term "Körper", called these structures "Ordnung" [order]).
The Wikipedia article "Ring" says he used it for a specific one where the elements somehow cycled back. Hans

On Friday 19 February 2010 10:42:59, Hans Aberg wrote:
On 19 Feb 2010, at 00:55, Daniel Fischer wrote:
I'd always assumed "ring" was generalised from Z[n].
As in "cyclic group", arrange the numbers in a ring like on a clockface? Maybe. As far as I know, the term "ring" (in the mathematical sense) first appears in chapter 9 - Die Zahlringe des Körpers - of Hilbert's "Die Theorie der algebraischen Zahlkörper". Unfortunately, Hilbert gives no hint why he chose that name (Dedekind, who coined the term "Körper", called these structures "Ordnung" [order]).
The Wikipedia article "Ring" says he used it for a specific one where the elements somehow cycled back.
Hans
Yes. And I deem a) the English Wikipedia a more reliable source of information [concerning things mathematical] than the German one, b) Harvey Cohn more trustworthy than either Wikipedia. But a quick look at Hilbert's paper didn't reveal the property Cohn mentioned (according to wp), nor any explanation by Hilbert of why he chose the term. So I remain in doubt. Cheers, Daniel

On 19 Feb 2010, at 12:12, Daniel Fischer wrote:
...As far as I know, the term "ring" (in the mathematical sense) first appears in chapter 9 - Die Zahlringe des Körpers - of Hilbert's "Die Theorie der algebraischen Zahlkörper". Unfortunately, Hilbert gives no hint why he chose that name (Dedekind, who coined the term "Körper", called these structures "Ordnung" [order]).
The Wikipedia article "Ring" says he used it for a specific one where the elements somehow cycled back.
Yes. And I deem a) the English Wikipedia a more reliable source of information [concerning things mathematical] than the German one, b) Harvey Cohn more trustworthy than either Wikipedia. But a quick look at Hilbert's paper didn't reveal the property Cohn mentioned (according to wp), nor any explanation by Hilbert of why he chose the term. So I remain in doubt.
The term "group" was introduced Évariste Galois, though he meant what we call a cancellative monoid, but since they are finite, have inverses. So perhaps Hilbert made play on that word: a group is a small number of people, a ring larger, like a gang. Hans

On Thu, Feb 18, 2010 at 7:48 AM, Nick Rudnick wrote:
IM(H??)O, a really introductory book on category theory is still to be written -- if category theory is really that fundamental (which I believe, due to its lifting of restrictions usually implicit in 'orthodox maths'), then it should find a reflection in our everyday common sense, shouldn't it?
Goldblatt works for me.
* the definition of open/closed sets in topology, where the boundary elements of a closed set can to a considerable extent be regarded as facing an «outside» (so that reversing these terms could even appear more intuitive, or «bordered» instead of closed and «unbordered» instead of open),
Both have a border, just in different places.
As an example, let's play a little:
Arrows: Arrows are more fundamental than objects; in fact, categories may be defined with arrows only. Although I like the term arrow (more than 'morphism'), I intuitively would find the term «reference» less in conflict with the actual intention, as this term
Arrows don't refer.
Categories: In everyday language, a category is a completely different thing, without the least
Not necessarily (for Kantians, Aristotelians?) If memory serves, MacLane says somewhere that he and Eilenberg picked the term "category" as an explicit play on the same term in philosophy. In general I find mathematical terminology well-chosen and revealing, if one takes the trouble to do a little digging. If you want to know what terminological chaos really looks like, try linguistics. -g
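The remark above that categories may be defined with arrows only has a direct echo in Haskell: the Category class in Control.Category mentions nothing but identity arrows and composition. A minimal sketch -- the Fun wrapper is invented here just to spell out an instance by hand; ordinary functions already have one in base:

  import Prelude hiding (id, (.))
  import Control.Category

  -- A category is presented purely by its arrows: identities plus an
  -- associative composition. Objects never appear explicitly.
  newtype Fun a b = Fun { runFun :: a -> b }

  instance Category Fun where
    id            = Fun (\x -> x)
    Fun g . Fun f = Fun (\x -> g (f x))

  main :: IO ()
  main = print (runFun (Fun (+ 1) . Fun (* 2)) 10)   -- 21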

Gregg Reynolds wrote:
On Thu, Feb 18, 2010 at 7:48 AM, Nick Rudnick <joerg.rudnick@t-online.de> wrote:
IM(H??)O, a really introductory book on category theory is still to be written -- if category theory is really that fundamental (which I believe, due to its lifting of restrictions usually implicit in 'orthodox maths'), then it should find a reflection in our everyday common sense, shouldn't it?
Goldblatt works for me. Incidentally, I have Goldblatt here, although I haven't read it before -- you agree with me it's far away from everyday common sense, even for a hobby coder?? I mean, this is not «Head first categories», is it? ;-)) With «everyday common sense» I did not mean «a mathematician's everyday common sense», but that of, e.g., a housewife or a child...
But I have now become curious about Goldblatt...
* the definition of open/closed sets in topology, where the boundary elements of a closed set can to a considerable extent be regarded as facing an «outside» (so that reversing these terms could even appear more intuitive, or «bordered» instead of closed and «unbordered» instead of open),
Both have a border, just in different places.
Which elements form the border of an open set??
As an example, let's play a little:
Arrows: Arrows are more fundamental than objects; in fact, categories may be defined with arrows only. Although I like the term arrow (more than 'morphism'), I intuitively would find the term «reference» less in conflict with the actual intention, as this term
Arrows don't refer.
A *referrer* (object) refers to a *referee* (object) by a *reference* (arrow).
Categories: In everyday language, a category is a completely different thing, without the least
Not necessarily (for Kantians, Aristotelians?)
Are you sure...?? See http://en.wikipedia.org/wiki/Categories_(Aristotle) ...
If memory serves, MacLane says somewhere that he and Eilenberg picked the term "category" as an explicit play on the same term in philosophy. In general I find mathematical terminology well-chosen and revealing, if one takes the trouble to do a little digging. If you want to know what terminological chaos really looks like, try linguistics. ;-) For linguistics, granted... In regard to «a little digging», don't you think terminology work takes a great share of the effort, especially in interdisciplinary work? Wouldn't it be great to be able to drop, say, 20% or even more of such effort and be able to progress more fluidly?
-g

On Thursday 18 February 2010 19:55:31, Nick Rudnick wrote:
Gregg Reynolds wrote:
On Thu, Feb 18, 2010 at 7:48 AM, Nick Rudnick <joerg.rudnick@t-online.de> wrote:
IM(H??)O, a really introductory book on category theory is still to be written -- if category theory is really that fundamental (which I believe, due to its lifting of restrictions usually implicit in 'orthodox maths'), then it should find a reflection in our everyday common sense, shouldn't it?
Goldblatt works for me.
Incidentally, I have Goldblatt here, although I haven't read it before -- you agree with me it's far away from everyday common sense, even for a hobby coder?? I mean, this is not «Head first categories», is it? ;-)) With «everyday common sense» I did not mean «a mathematician's everyday common sense», but that of, e.g., a housewife or a child...
Doesn't work. You need a lot of training in abstraction to learn very abstract concepts. Joe Sixpack's common sense isn't prepared for that.
But I have now become curious about Goldblatt...
* the definition of open/closed sets in topology, where the boundary elements of a closed set can to a considerable extent be regarded as facing an «outside» (so that reversing these terms could even appear more intuitive, or «bordered» instead of closed and «unbordered» instead of open),
Both have a border, just in different places.
Which elements form the border of an open set??
The boundary of an open set is the boundary of its complement. The boundary may be empty (happens if and only if the set is simultaneously open and closed, "clopen", as some say).
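For concreteness (standard topology definitions, not something introduced in this thread): the boundary of A is closure(A) \ interior(A) = closure(A) ∩ closure(complement of A), an expression symmetric in A and its complement, so boundary(A) = boundary(complement of A); and A is clopen exactly when closure(A) = A = interior(A), i.e. when the boundary is empty. In the real line, for example, the open interval (0,1) and the closed interval [0,1] have the same boundary {0,1} -- the boundary points of the open interval simply lie outside it.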
As an example, let's play a little:
Arrows: Arrows are more fundamental than objects; in fact, categories may be defined with arrows only. Although I like the term arrow (more than 'morphism'), I intuitively would find the term «reference» less in conflict with the actual intention, as this term
Arrows don't refer.
A *referrer* (object) refers to a *referee* (object) by a *reference* (arrow).
Doesn't work for me. Not in Ens (sets, maps), Grp (groups, homomorphisms), Top (topological spaces, continuous mappings), Diff (differential manifolds, smooth mappings), ... .

On Thu, Feb 18, 2010 at 1:31 PM, Daniel Fischer wrote:
On Thursday 18 February 2010 19:55:31, Nick Rudnick wrote:
Gregg Reynolds wrote:
-- you agree with me it's far away from everyday common sense, even for a hobby coder?? I mean, this is not «Head first categories», is it? ;-)) With «everyday common sense» I did not mean «a mathematician's everyday common sense», but that of, e.g., a housewife or a child...
Doesn't work. You need a lot of training in abstraction to learn very abstract concepts. Joe Sixpack's common sense isn't prepared for that.
True enough, but I also tend to think that with a little imagination even many of the most abstract concepts can be illustrated with intuitive, concrete examples, and it's a fun (to me) challenge to try to come up with them. For example, associativity can be nicely illustrated in terms of donning socks and shoes - it's not hard to imagine putting socks into shoes before putting feet into socks. A little weird, but easily understandable. My guess is that with a little effort one could find good concrete examples of at least category, functor, and natural transformation. Hmm, how is a cake-mixer like a cement-mixer? They're structurally and functionally isomorphic. Objects in the category Mixer?
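The socks-and-shoes picture can be written down literally; a tiny Haskell sketch in which all the types and functions are invented for the illustration:

  -- Three arrows: bare foot -> socked -> shod -> laced.
  data Bare   = Bare
  data Socked = Socked
  data Shod   = Shod
  data Laced  = Laced deriving (Eq, Show)

  sock :: Bare -> Socked
  sock Bare = Socked

  shoe :: Socked -> Shod
  shoe Socked = Shod

  lace :: Shod -> Laced
  lace Shod = Laced

  -- Associativity of composition: how we group the steps does not matter.
  leftGrouping, rightGrouping :: Bare -> Laced
  leftGrouping  = (lace . shoe) . sock
  rightGrouping = lace . (shoe . sock)

  main :: IO ()
  main = print (leftGrouping Bare == rightGrouping Bare)   -- True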
Both have a border, just in different places.
Which elements form the border of an open set??
The boundary of an open set is the boundary of its complement. The boundary may be empty (happens if and only if the set is simultaneously open and closed, "clopen", as some say).
Right, that was what I meant; the point being that "boundary" (or border, or periphery or whatever) is not sufficient to capture the idea of closed v. open.
-g

Gregg Reynolds wrote:
On Thu, Feb 18, 2010 at 1:31 PM, Daniel Fischer <daniel.is.fischer@web.de> wrote:
On Thursday 18 February 2010 19:55:31, Nick Rudnick wrote:
Gregg Reynolds wrote:
-- you agree with me it's far away from everyday common sense, even for a hobby coder?? I mean, this is not «Head first categories», is it? ;-)) With «everyday common sense» I did not mean «a mathematician's everyday common sense», but that of, e.g., a housewife or a child...
Doesn't work. You need a lot of training in abstraction to learn very abstract concepts. Joe Sixpack's common sense isn't prepared for that.
True enough, but I also tend to think that with a little imagination even many of the most abstract concepts can be illustrated with intuitive, concrete examples, and it's a fun (to me) challenge to try to come up with them. For example, associativity can be nicely illustrated in terms of donning socks and shoes - it's not hard to imagine putting socks into shoes before putting feet into socks. A little weird, but easily understandable. My guess is that with a little effort one could find good concrete examples of at least category, functor, and natural transformation. Hmm, how is a cake-mixer like a cement-mixer? They're structurally and functionally isomorphic. Objects in the category Mixer? :-) This comes close to what I mean -- the beauty of category theory does not end at the borders of mathematical subjects...
IMHO we are just beginning the discovery of the categorical world beyond mathematics, and I think many findings original to computer science, but less so to maths, may be of value there. And I am definitely more optimistic about «Joe Sixpack's common sense», which still surpasses a good deal of what is possible with AI -- no categories at all there?? I can't believe that...
Both have a border, just in different places.
Which elements form the border of an open set??
The boundary of an open set is the boundary of its complement. The boundary may be empty (happens if and only if the set is simultaneously open and closed, "clopen", as some say).
Right, that was what I meant; the point being that "boundary" (or border, or periphery or whatever) is not sufficient to capture the idea of closed v. open.
;-)) I did not claim «bordered» is the best choice, I just said closed/open is NOT... IMHO this also does not affect what I understand as a refactoring -- just imagine Coq had a refactoring browser; all combinations of terms would be possible as before, wouldn't they? But it was not my aim to begin enumerating all variations of «bordered», «unbordered», «partially ordered» and STOP... Should I now QUICKLY come up with a counterpart to «clopen»? That would be «MATHS STYLE»...! I neither say that finding an appropriate word here is a quick shot, nor do I claim that trying is ridiculous because it is impossible. I think it is WORK, which is to be done in OPEN DISCUSSION -- and that, in the long run, the result might be rewarding, just as the effort put into a rename refactoring proves rewarding. ;-)) Trying a refactored category theory (with a dictionary in the appendix...) might open access to many interesting people and subjects otherwise out of reach. And deeply contemplating terminology cannot hurt, at the least... All the best, Nick

Hi, wow, a topic specific response, at last... But I wish you would be more specific... ;-)
A *referrer* (object) refers to a *referee* (object) by a *reference* (arrow).
Doesn't work for me. Not in Ens (sets, maps), Grp (groups, homomorphisms), Top (topological spaces, continuous mappings), Diff (differential manifolds, smooth mappings), ... .
Why not begin with SET and functions... Every human has a certain age, so that there is a function, ageOf :: Human -> Int, which can be regarded as a certain kind of reference relationship between Human and Int, in that by ageOf,
* Int reflects a certain aspect of Human, and, on the other hand,
* the structure of Human can be traced to Int.
Please tell me the aspect you feel uneasy with, and please give me your opinion whether (in case of accepting this) you would rather choose to consider Human as referrer and Int as referee, or the opposite -- for I think this is a deep question. Thank you in advance, Nick
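To make the example concrete in Haskell (Human, birthYear and the 2010 cut-off are invented for the toy; the only point is that ageOf is an arrow from one object to another):

  -- A function between two types/sets, i.e. an arrow in the sense above.
  data Human = Human { name :: String, birthYear :: Int }

  ageOf :: Human -> Int
  ageOf h = 2010 - birthYear h   -- 2010: the year of this thread, for the toy

  main :: IO ()
  main = print (ageOf (Human "Mark" 1980))   -- 30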

On Feb 19, 2010, at 2:48 PM, Nick Rudnick wrote:
Please tell me the aspect you feel uneasy with, and please give me your opinion whether (in case of accepting this) you would rather choose to consider Human as referrer and Int as referee, or the opposite -- for I think this is a deep question.
I've read enough philosophy to be wary of treating "reference" as a simple concept. And linguistically, "referees" are people you find telling rugby players "naughty naughty". Don't you mean "referrer" and "referent"? Of course a basic point about language is that the association between sounds and meanings is (for the most part) arbitrary. Why should the terminology of mathematics be any different? Why is a "small dark floating cloud, indicating rain", called a "water-dog"? Water, yes, but dog? Why are the brackets at each end of a fire-place called "fire-dogs"? Why are unusually attractive women called "foxes" (the females of that species being "vixens", and both sexes smelly)? What's the logic in doggedness being a term of praise but bitchiness of opprobrium? We can hope for mathematical terms to be used consistently, but asking for them to be transparent is probably too much to hope for. (We can and should use intention-revealing names in a program, but doing it across the totality of all programs is something never achieved and probably never achievable.)

Richard O'Keefe wrote:
On Feb 19, 2010, at 2:48 PM, Nick Rudnick wrote:
Please tell me the aspect you feel uneasy with, and please give me your opinion whether (in case of accepting this) you would rather choose to consider Human as referrer and Int as referee, or the opposite -- for I think this is a deep question. I've read enough philosophy to be wary of treating "reference" as a simple concept. And linguistically, "referees" are people you find telling rugby players "naughty naughty". Don't you mean "referrer" and "referent"? Yes, thanks. I am not a native English speaker, and in my mother tongue a referent is somebody who refers, so my guess was off... Such statements are exactly what I was looking for... So, as a reference is directed, it is possible to distinguish
referrer ::= the one which does refer to s.th.
referent ::= one which is referred to by s.th.
Of course a basic point about language is that the association between sounds and meanings is (for the most part) arbitrary.
I would rather like to say it is not strictly determined, as an evolutionary tendency towards, say, ergonomy cannot be overlooked, can it?
Why should the terminology of mathematics be any different? ;-) Realizing an evolutionary tendency towards ergonomy is my subject... Why is a "small dark floating cloud, indicating rain", called a "water-dog"? Water, yes, but dog? Why are the brackets at each end of a fire-place called "fire-dogs"? Why are unusually attractive women called "foxes" (the females of that species being "vixens", and both sexes smelly)? :-)) The shape of the genitals, which might come into the associative imagination of the hopeful observer?? (The same with cats, bears, etc.) [... desperately afraid of getting kicked out of this mailing list ;-))]
Thanks for this beautiful example and, honestly, I ask again whether we may regard this as «just noise»: on the contrary, aren't such usages paradigmatic examples of memes, which, as products of memetic evolution, should be studied for their motivational value? Let me guess: our cerebral language system is highly coupled with our intentional system, so that it helps learning to have motivating «animation» enclosed... Isn't this in use in contemporary learning environments...? The problem I see is that common maths claims an exception in claiming that, in its domain, namings are no more than noise -- possibly motivated by an extreme rejection of anything between «strictly formally determined» and «noise». This standpoint again does not take account of the developments in the foundations of mathematics of at least the last century -- put roughly, it comes close to Hilbert's programme... To my mind, all of the breakthroughs of the last decades -- like incompleteness, strange attractors, algorithmic information theory, CCCs, and not least computing science itself with metaprogramming, soft computing, its linear types/modes and monads (!) -- have to do with constructs which free themselves from such claims of ex ante predetermination. Isn't category theory pretty much a part of all this?
What's the logic in doggedness being a term of praise but bitchiness of opprobrium? Sexism...??
We can hope for mathematical terms to be used consistently, but asking for them to be transparent is probably too much to hope for. (We can and should use intention-revealing names in a program, but doing it across the totality of all programs is something never achieved and probably never achievable.) We have jokers: evolutionary media, like markdown or even stylesheets, may allow us to switch and translate in a moment, and offer many more useful gimmicks... Online collaboration platforms...
And we can stay pragmatic: if we can reach a (broad, by my estimate...) public, which originally would have to say «the book has really left me dumbfounded» (as the originator of this thread did), and offer them an entertaining, intuitive way -- why not even in a self-configurable way? -- category theory could be introduced to contemporary culture. Personally, I can't accept statements like (in another posting) «You need a lot of training in abstraction to learn very abstract concepts. Joe Sixpack's common sense isn't prepared for that.» Instead, I think that there is good evidence to believe that there are lots of isomorphisms to be found between everyday life and the terminology and concepts of category theory -- *not* to be confused with its *applications to maths*... And, to close in your figurative style: which woman gets hurt by a change of clothes? Cheers, Nick

On Feb 21, 2010, at 8:13 AM, Nick Rudnick wrote:
Of course a basic point about language is that the association between sounds and meanings is (for the most part) arbitrary. I would rather like to say it is not strictly determined, as an evolutionary tendency towards, say, ergonomy cannot be overlooked, can it?
I see no evolutionary tendency towards "ergonomy" in the world's languages. *Articulatory economy*, yes. Bits are always eroding off words. But as for any other kind of "ergonomy", well, people have been talking for a very long time, so such a tendency should have had *some* effect by now, surely? While there seem to be some deep unities, the world's languages are a very diverse lot. Let's face it, who really _needs_ a paucal number? Or 12 noun classes? Or 16 cases? Maori manages just fine without any of those things.
Why should the terminology of mathematics be any different? ;-) Realizing an evolutionary tendency towards ergonomy is my subject...
Does such a tendency exist? If it does, why should we aid and abet a tendency which, to the extent it exists in biology, excels in producing parasites?
Thanks for this beautiful example and, honestly, I ask again whether we may regard this as «just noise»: on the contrary, aren't such usages paradigmatic examples of memes, which, as products of memetic evolution, should be studied for their motivational value?
Quite possibly. But ergonomics is *not* the driver.
The problem I see is that common maths claims an exception in claiming that, in its domain, namings are no more than noise
No such thing. Quick now, what semantics is transparently revealed in the names "birch, beech, spruce, fir, oak"? To a native speaker of English, these things mean what they mean by convention and nothing else. (If we can trust the OED etymology, "beech" may have originally meant "a tree with edible fruit", but this is very far from transparent to a native speaker of modern English. And spruce trees are so called not because they are particularly spruce but because they came from Prussia, again, very far from transparent to a native speaker of M.E.)
And, to close in your figurative style:
Which woman gets hurt by a change of clothes?
The one whose new clothes fit her worse, of course.

It's not quite what you're asking for, but a very similar idea is the SHE
preprocessor: http://personal.cis.strath.ac.uk/~conor/pub/she/
Dan
On Mon, Feb 22, 2010 at 8:08 PM, Paul Brauner wrote:
Hello,
I remember seeing something like
typedata T = A | B
somewhere, where A and B are type constructors, but I can't find it in the GHC docs. Have I been dreaming, or is it some hidden feature?
Paul

On Tue, Feb 23, 2010 at 02:08:12AM +0100, Paul Brauner wrote:
Hello,
I remember seeing something like
typedata T = A | B
somewhere, where A and B are type constructors, but I can't find it in the GHC docs. Have I been dreaming, or is it some hidden feature?
No, this doesn't* exist. -Brent * yet!

On Tue, Feb 23, 2010 at 1:08 AM, Paul Brauner wrote:
Hello,
I remember seeing something like
typedata T = A | B
somewhere, where A and B are type constructors, but I can't find it in the GHC docs. Have I been dreaming, or is it some hidden feature?
Paul
This is similar to what you are thinking of: http://hackage.haskell.org/trac/ghc/wiki/KindSystem ...but it's not implemented (yet).
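For what it's worth, GHC later gained something close to this effect, well after this thread, with the DataKinds extension: an ordinary data declaration is also promoted, so its constructors become type-level constructors. A rough sketch (Tagged and the example names are invented here):

  {-# LANGUAGE DataKinds, KindSignatures #-}

  -- With DataKinds, T is also a kind and A and B are types of kind T,
  -- which is roughly what "typedata T = A | B" was meant to introduce.
  data T = A | B

  data Tagged (t :: T) = Tagged String

  onlyA :: Tagged 'A
  onlyA = Tagged "tagged with the promoted constructor A"

  main :: IO ()
  main = case onlyA of Tagged s -> putStrLn s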

On Friday 19 February 2010 02:48:59, Nick Rudnick wrote:
Hi,
wow, a topic specific response, at last... But I wish you would be more specific... ;-)
A *referrer* (object) refers to a *referee* (object) by a *reference* (arrow).
Doesn't work for me. Not in Ens (sets, maps), Grp (groups, homomorphisms), Top (topological spaces, continuous mappings), Diff (differential manifolds, smooth mappings), ... .
Why not begin with SET and functions...
Sorry, too many Bourbakists in my ancestry, Ens == SET (french: ensemble).
Every human has a certain age, so that there is a function, ageOf :: Human -> Int, which can be regarded as a certain kind of reference relationship between Human and Int, in that by ageOf,
I fail to see a reference here. In particular, I don't see how the one object (set of humans) refers to the other object (set of integers). I suppose the word reference doesn't mean the same for us. For me, a reference is an alias (as in e.g. Java's reference types) or a mention/allusion/citation (as in e.g. "The first verse of this poem is a reference to Macbeth's famous monologue 'Is this a dagger ...'"), a couple of other things I can't now put into english words. None of which I deem similar to a function from one set to another.
* Int reflects a certain aspect of Human,
Okay.
and, on the other hand, * the structure of Human can be traced to Int.
I don't understand that.
Please tell me the aspect you feel uneasy with, and please give me your opinion whether (in case of accepting this) you would rather choose to consider Human as referrer and Int as referee, or the opposite -- for I think this is a deep question.
Thank you in advance,
Nick
participants (20)
- A E Lawrence
- Alexander Solla
- Ben Millwood
- Benjamin L. Russell
- Brent Yorgey
- briand@aracnet.com
- Creighton Hogg
- Daniel Fischer
- Daniel Peebles
- Dominic Steinitz
- Gregg Reynolds
- Hans Aberg
- L Spice
- Mark Spezzano
- Miguel Mitrofanov
- Nick Rudnick
- Paul Brauner
- Richard O'Keefe
- Sean Leather
- Álvaro García Pérez