
Peter Verswyvelen wrote:
> Maybe this raises a new question: does understanding category theory make you a better *programmer*?
Possibly yes, possibly no. In my experience, you have to look at how CT is applied to other fields to appreciate its clarity. Doing so, you may succeed in promoting some of your understanding of code to a more general level. I see abstract nonsense as way less fuzzy than lambda-based abstraction, and a lot more flexible (mentally speaking) than type theory or logic in general. The fact that it encompasses both makes it even more attractive (although, as things stand, you can express each of them in terms of the other).

There's not much to understand about CT anyway: it's actually nearly as trivial as set theory. Part of the benefit starts when you begin to categorise different kinds of categories, in the same way that understanding monads is easiest if you just consider how they differ from applicative functors (a small Haskell sketch of that difference is at the bottom of this mail). It's a system that invites you to tackle a problem with scrutiny, neither tempting you to generalise way beyond computability, nor burdening you with formal proof requirements or shackling you to some other ball and chain.

Sadly, almost all texts about CT are absolutely useless: they tend either to focus on pure mathematical abstraction, lacking applicability, or to tell the story of one particular application of CT to a specific topic, losing themselves in detail without providing the bigger picture. That's why I liked that Rosetta Stone paper so much: I still don't understand anything more about physics, but I see how working inside a category with specific features and limitations is exactly the right thing to do for those guys, and why you wouldn't want to do a PL that works in the same category.

Throwing lambda calculus at a problem that doesn't happen to be a DSL or some other kind of language is a bad idea. I seem to have understood that for some time now, being especially fond of automata[1] to model autonomous, interacting agents (a sketch of that, too, is at the bottom), but CT made me grok it. The future will show how far it pulls my thinking out of the Turing tarpit.

[1] Which aren't, at all, objects. Finite automata don't go bottom in any case, at least not unless you happen to shoot them and their health drops below zero.
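To make the applicative/monad point concrete, here is a tiny, purely illustrative Haskell sketch; the names (applicativeExample, monadExample) are mine, invented for this mail, and it's only meant to show the one difference I'm talking about:

-- With Applicative, the shape of the computation is fixed up front:
-- both sides of <*> are built before anything runs, so neither side
-- can inspect the other's result.
applicativeExample :: Maybe Int
applicativeExample = (+) <$> Just 2 <*> Just 3            -- Just 5

-- With Monad, >>= feeds a result into a function that *chooses* the
-- rest of the computation, so later structure can depend on earlier
-- values.
monadExample :: Maybe Int
monadExample = Just 2 >>= \x ->
                 if even x then Just (x * 10) else Nothing  -- Just 20

main :: IO ()
main = print (applicativeExample, monadExample)

Nothing deep, but "compare it to its nearest neighbour" is exactly the kind of understanding I mean.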
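And the automaton thing, equally sketchy: the Auto/agent names below are made up for this mail, it's just the usual Mealy-style construction, not anybody's library.

-- An automaton: feed it one input, get an output plus the next
-- automaton. No shared state, no bottom, just one step after another.
newtype Auto i o = Auto { step :: i -> (o, Auto i o) }

-- A toy agent with a health counter; it reports whether it's still
-- alive and never needs bottom to say so.
agent :: Int -> Auto Int String
agent health = Auto $ \dmg ->
  let health' = health - dmg
      report  | health' > 0 = "alive (" ++ show health' ++ " hp)"
              | otherwise   = "dead"
  in (report, agent (max 0 health'))

-- Drive an automaton over a list of inputs, collecting its outputs.
runAuto :: Auto i o -> [i] -> [o]
runAuto _ []       = []
runAuto a (x : xs) = let (y, a') = step a x in y : runAuto a' xs

main :: IO ()
main = mapM_ putStrLn (runAuto (agent 10) [3, 4, 5])
-- alive (7 hp)
-- alive (3 hp)
-- dead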