Functional programmer's intuition for adjunctions?

Hi,

Is there an intuition that can be used to explain adjunctions to functional programmers, even if the match isn't necessarily 100% perfect (like natural transformations and polymorphic functions)?

Thanks,
Edsko

On Tue, 2008-03-04 at 17:16 +0000, Edsko de Vries wrote:
Hi,
Is there an intuition that can be used to explain adjunctions to functional programmers, even if the match isn't necessarily 100% perfect (like natural transformations and polymorphic functions)?
Well, when you pretend Hask is Set, many things can be discussed fairly directly. F is left adjoint to U, written F -| U, if Hom(FA,B) is naturally isomorphic to Hom(A,UB), natural in A and B. A natural transformation over Set is just a polymorphic function. So we have an adjunction if we have two functions:

    phi    :: (F a -> b) -> (a -> U b)
    phiInv :: (a -> U b) -> (F a -> b)

such that phi . phiInv = id and phiInv . phi = id, and F and U are instances of Functor. The unit and counit, respectively, are then just phi id and phiInv id.

You can work through several examples using this framework, though it gets difficult when other categories are hard to model. Discussing and proving some properties of adjunctions (namely continuity properties) would also help.

Of course, this is a concrete example using basic ideas of programming and not some "intuitive analogy". I feel comfortable working with adjunctions, but I don't have some general analogy that I use.
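This pair of functions can be packaged up as a Haskell type class. The class below is only an illustrative sketch (the names `Adjunction`, `phi`, `phiInv`, `unit`, `counit` are not claimed from any library), showing how the unit and counit fall out as `phi id` and `phiInv id`:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- A sketch of an adjunction F -| U as a natural isomorphism of hom-sets.
-- Each functor determines the other, hence the functional dependencies.
class (Functor f, Functor u) => Adjunction f u | f -> u, u -> f where
  phi    :: (f a -> b) -> (a -> u b)   -- one half of the isomorphism
  phiInv :: (a -> u b) -> (f a -> b)   -- its inverse

-- The unit a -> U (F a) is phi applied to the identity on F a.
unit :: Adjunction f u => a -> u (f a)
unit = phi id

-- The counit F (U b) -> b is phiInv applied to the identity on U b.
counit :: Adjunction f u => f (u b) -> b
counit = phiInv id
```

Any concrete instance then has to make `phi . phiInv = id` and `phiInv . phi = id` hold; the class itself cannot enforce those laws.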

On Tue, Mar 04, 2008 at 11:58:38AM -0600, Derek Elkins wrote:
On Tue, 2008-03-04 at 17:16 +0000, Edsko de Vries wrote:
Hi,
Is there an intuition that can be used to explain adjunctions to functional programmers, even if the match isn't necessarily 100% perfect (like natural transformations and polymorphic functions)?
Well when you pretend Hask is Set, many things can be discussed fairly directly.
F is left adjoint to U, F -| U, if Hom(FA,B) is naturally isomorphic to Hom(A,UB), natural in A and B. A natural transformation over Set is just a polymorphic function. So we have an adjunction if we have two functions:
phi :: (F a -> b) -> (a -> U b) and phiInv :: (a -> U b) -> (F a -> b)
such that phi . phiInv = id and phiInv . phi = id and F and U are instances of Functor.
The unit and counit, respectively, are then just phi id and phiInv id.
Sure, but that doesn't really explain what an adjunction *is*. For me, it helps to think of a natural transformation as a polymorphic function: it gives me an intuition about what a natural transformation is. Specializing the definition of an adjunction for Hask (or Set) helps in understanding the *definitions*, but not the *intention*, if that makes any sense.

Edsko

Well, we have at least one very useful example of an adjunction. It's called "curry". See, if X is some arbitrary type, you can define

    type F = (,X)
    instance Functor F where fmap f (a, x) = (f a, x)

    type G = (->) X
    instance Functor G where fmap f h = \x -> f (h x)

Now we have the adjunction:

    phi :: ((a, X) -> b) -> (a -> (X -> b))
    phi = curry

    phiInv :: (a -> (X -> b)) -> ((a, X) -> b)
    phiInv = uncurry

On 4 Mar 2008, at 21:30, Edsko de Vries wrote:
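For concreteness, here is the same adjunction written out with plain Prelude functions. The monomorphic `Int` types and the names `add`, `unitC`, `counitC` are chosen arbitrarily for illustration:

```haskell
-- phi and phiInv for F a = (a, X) and G b = X -> b are just curry and uncurry.
add :: (Int, Int) -> Int
add (a, x) = a + x

add' :: Int -> Int -> Int
add' = curry add            -- phi add

add'' :: (Int, Int) -> Int
add'' = uncurry add'        -- phiInv (phi add); pointwise equal to add

-- The unit and counit of the adjunction, specialised:
unitC :: a -> x -> (a, x)   -- phi id, i.e. curry id: pair up with the argument
unitC = curry id

counitC :: (x -> b, x) -> b -- phiInv id, i.e. uncurry id: function application
counitC = uncurry id
```

So for this adjunction the unit is pairing and the counit is application, which is the usual beta-reduction picture of currying.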
On Tue, Mar 04, 2008 at 11:58:38AM -0600, Derek Elkins wrote:
On Tue, 2008-03-04 at 17:16 +0000, Edsko de Vries wrote:
Hi,
Is there an intuition that can be used to explain adjunctions to functional programmers, even if the match isn't necessarily 100% perfect (like natural transformations and polymorphic functions)?
Well when you pretend Hask is Set, many things can be discussed fairly directly.
F is left adjoint to U, F -| U, if Hom(FA,B) is naturally isomorphic to Hom(A,UB), natural in A and B. A natural transformation over Set is just a polymorphic function. So we have an adjunction if we have two functions:
phi :: (F a -> b) -> (a -> U b) and phiInv :: (a -> U b) -> (F a -> b)
such that phi . phiInv = id and phiInv . phi = id and F and U are instances of Functor.
The unit and counit, respectively, are then just phi id and phiInv id.
Sure, but that doesn't really explain what an adjunction *is*. For me, it helps to think of a natural transformation as a polymorphic function: it gives me an intuition about what a natural transformation is. Specializing the definition of an adjunction for Hask (or Set) helps understanding the *definitions*, but not the *intention*, if that makes any sense..
Edsko

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

On Tue, 2008-03-04 at 19:01 +0000, Dominic Steinitz wrote:
Well, we have at least one very useful example of adjunction. It's called "curry". See, if X is some arbitrary type, you can define
This adjunction is the one that makes a category cartesian closed.
And the monad for it gives rise to the state monad. And the other adjunction, relating to exponentials and symmetry, gives rise to the continuation monad.
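The state-monad claim can be made concrete: composing the two adjoint functors gives U(F a) = X -> (a, X), which is exactly the state monad's carrier, with return coming from the adjunction's unit and join from its counit. A sketch with illustrative names (not the real Control.Monad.State):

```haskell
-- UF a = x -> (a, x): the state monad arising from (,x) -| (->) x.
newtype State x a = State { runState :: x -> (a, x) }

instance Functor (State x) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

-- return is the adjunction's unit a -> U (F a), i.e. curry id.
ret :: a -> State x a
ret a = State (\s -> (a, s))

-- join applies the counit F (U b) -> b (uncurried application)
-- in the middle of U F U F a to collapse it to U F a.
joinS :: State x (State x a) -> State x a
joinS m = State (\s -> let (inner, s') = runState m s in runState inner s')
```

Every adjunction F -| U gives a monad on UF in this way (and a comonad on FU); the state monad is the instance for the currying adjunction.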

Well, I think it's really cool to be sitting in a cafe exchanging some cute facts from category theory we happen to know. Girls would definitely like it.

On 5 Mar 2008, at 03:33, Derek Elkins wrote:
On Tue, 2008-03-04 at 19:01 +0000, Dominic Steinitz wrote:
Well, we have at least one very useful example of adjunction. It's called "curry". See, if X is some arbitrary type, you can define
This adjunction is the one that makes a category cartesian closed.
and the monad for it gives rise to the state monad.
And the other adjunction relating to exponentials and symmetry gives rise to the continuation monad.
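The continuation-monad half of the claim can be sketched the same way: the composite functor sends a to (a -> r) -> r. The names below are illustrative (not the real Control.Monad.Cont), and the construction is only sketched here, not derived from the exponential adjunction in full:

```haskell
-- The continuation monad: a is sent to (a -> r) -> r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont g) = Cont (\k -> g (k . f))

-- unit: "feed me to the continuation".
retC :: a -> Cont r a
retC a = Cont (\k -> k a)

-- join: run the outer computation with a continuation
-- that runs the inner one.
joinC :: Cont r (Cont r a) -> Cont r a
joinC m = Cont (\k -> runCont m (\inner -> runCont inner k))
```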

On Tue, 2008-03-04 at 18:30 +0000, Edsko de Vries wrote:
On Tue, Mar 04, 2008 at 11:58:38AM -0600, Derek Elkins wrote:
On Tue, 2008-03-04 at 17:16 +0000, Edsko de Vries wrote:
Hi,
Is there an intuition that can be used to explain adjunctions to functional programmers, even if the match isn't necessarily 100% perfect (like natural transformations and polymorphic functions)?
Well when you pretend Hask is Set, many things can be discussed fairly directly.
F is left adjoint to U, F -| U, if Hom(FA,B) is naturally isomorphic to Hom(A,UB), natural in A and B. A natural transformation over Set is just a polymorphic function. So we have an adjunction if we have two functions:
phi :: (F a -> b) -> (a -> U b) and phiInv :: (a -> U b) -> (F a -> b)
such that phi . phiInv = id and phiInv . phi = id and F and U are instances of Functor.
The unit and counit, respectively, are then just phi id and phiInv id.
Sure, but that doesn't really explain what an adjunction *is*. For me, it helps to think of a natural transformation as a polymorphic function: it gives me an intuition about what a natural transformation is. Specializing the definition of an adjunction for Hask (or Set) helps in understanding the *definitions*, but not the *intention*, if that makes any sense.
I explicitly mentioned this at the end of my first email:

"""
Of course, this is a concrete example using basic ideas of programming and not some "intuitive analogy". I feel comfortable working with adjunctions, but I don't have some general analogy that I use.
"""

I'm suggesting that trying to find such an analogy may be more trouble than it is worth. The best analogy I've heard is to relate the problem of finding an adjunction to optimization problems. Personally, I find representability to be simpler, more useful, and easier to get an intuition about. Adjunction is then a particular case of parameterized representability.

G'day all.
Quoting Derek Elkins:
Of course, this is a concrete example using basic ideas of programming and not some "intuitive analogy". I feel comfortable working with adjunctions, but I don't have some general analogy that I use.
I think this is important.

The concept of an adjunction isn't like that of a natural transformation. In Haskell, natural transformations are functions that respect the structure of functors. Since you can't avoid respecting the structure of functors (the language won't let you do otherwise), you get natural transformations for free. (Free as in theorems, not free as in beer.)

Adjunctions, on the other hand, you have to make yourself. As such, they're more like monads. I use at least three distinct pictures when I'm working with monads:

- Overloaded semicolon.
- Functorial container (e.g. lists).
- Term substitution system.

...but even that doesn't fully cover all the possibilities that monads give you. Monads are what they are, and you use them when it seems to make sense to implement the Monad interface for them.

It's sometimes only obvious that an interface is conformed to after the event. For example, consider Data.Supply:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/value-supply-0.1

It's clear in retrospect that Supply is a comonad, but probably neither the paper authors nor the package author, smart as they are, noticed this at the time of writing, because you need experience with comonads to identify them. I think it's the same with adjunctions.

Having said that, I think it makes sense to come up with some example pictures, much like the example pictures of monads that people use. Looking at those examples again:

    phi    :: (F a -> b) -> (a -> U b)
    phiInv :: (a -> U b) -> (F a -> b)

One thing that springs to mind is that an adjunction could connect monads and their associated comonads. Is that a good picture?

Cheers,
Andrew Bromage

ajb-2 wrote:
In Haskell, natural transformations are functions that respect the structure of functors. Since you can't avoid respecting the structure of functors (the language won't let you do otherwise), you get natural transformations for free. (Free as in theorems, not free as in beer.)
It's worth noting that polymorphism is one of those unavoidable, albeit hidden, functors. Polymorphizing a function over "forall a" can be thought of as lifting it via the ((->) T_a) functor, where T_a is the type corresponding to the type variable a. E.g. reverse really has the signature T_a -> [a] -> [a]. But ((->) T_a) is right adjoint to ((,) T_a), which just happens to be the analogous type-explicit way of representing existentials. Adjunctions are a useful tool to describe and analyze such dualities precisely. ajb-2 wrote:
Adjunctions, on the other hand, you have to make yourself. As such, they're more like monads.
Constructing adjunctions, comprising as they do a pair of functors, does seem like double the work of a single monad. Duality, OTOH, is a powerful guiding principle and may well be easier than working with the monad laws directly. And besides, you get a comonad for free. ajb-2 wrote:
One thing that springs to mind is that an adjunction could connect monads and their associated comonads. Is that a good picture?
Definitely.

Edsko asked:
Is there an intuition that can be used to explain adjunctions to functional programmers, even if the match isn't necessarily 100% perfect (like natural transformations and polymorphic functions)?
I think there's a catch, because many interesting examples of adjunctions involve multiple distinct categories. So there's a sense in which you have to go outside of programming to explain what they are, apart from a few simple examples like currying. But if you're familiar with the category of monoids and monoid homomorphisms, then we can generate another example.

One intuition is the notion of trying to approximate an object in one category using objects in another. For example, consider the category of monoids. How best can we approximate an arbitrary type in a monoid? Suppose our type, T, has elements a, b and c. We could try to represent this as a monoid. But monoids should have products and an identity. So if the monoid contains a, b and c, it should also contain 1, ab, bc, abcba and so on. And what should ab equal? Might it be the same as bc? The simplest strategy is to assume that all of these products are distinct, and approximate the type T with the monoid containing 1, a, b, c and all their products, where no element equals any other unless the monoid laws say so (e.g. 1a = a1 = a). This is called the *f*ree monoid generated by T, and we can write it F(T).

Now go the other way: given a monoid, S, how can we map it back to a type? There's an obvious idea: just make a type whose elements are the *u*nderlying elements of the monoid, discarding the multiplication rule. Call this U(S). (We're treating Hask like Set here.)

So what's U(F(T))? T gets mapped to the free monoid generated by T, and mapped back to the elements of this monoid. In other words, the elements of U(F(T)) are (possibly empty) strings of elements of T. So UF is the list type constructor.

Any homomorphism, f, between monoids is completely determined once you know where a set of generators of the monoid map under the homomorphism, and vice versa. All of the other elements can be deduced using f(1) = 1 and f(xy) = f(x)f(y).
So if F(a) is the free monoid generated by a, then a homomorphism F(a) -> b is completely determined by a function a -> U(b), and vice versa. We have an isomorphism

    (F(a) -> b) <-> (a -> U(b))

and (F,U) forms an adjunction, according to Derek's definition.

So intuitively, what's going on is that Haskell lists emerge as the result of an attempt to approximate types with monoids, and adjunctions formalise what we mean by 'approximation'. When we go from a type, to a monoid, and back again, we don't end up where we started, but at a new type.

Other adjunctions can be seen in this way. But because different examples use different categories, it's hard to picture a uniform way of viewing adjunctions that spans them all.

It's also no coincidence now that lists form a monad...
--
Dan
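This isomorphism can be written down directly in Haskell, with a Monoid constraint standing in for "being an object of Mon" since Haskell can't quote a second category directly. The names phiM and phiInvM are made up for this sketch:

```haskell
-- A monoid homomorphism out of the free monoid [a] is completely
-- determined by where the generators (singleton lists) go, and vice versa.
phiM :: ([a] -> b) -> (a -> b)
phiM h = h . (: [])        -- restrict the homomorphism to generators

phiInvM :: Monoid b => (a -> b) -> ([a] -> b)
phiInvM = foldMap          -- extend via f(1) = 1 and f(xy) = f(x)f(y)
```

Note that `phiInvM (: [])` is the identity on lists (the `phiInv . phi = id` direction on this hom-set), and `phiInvM id :: Monoid b => [b] -> b` is `mconcat`; instantiated at lists themselves, that is `concat`, the list monad's join.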
participants (7)
- ajb@spamcop.net
- Dan Piponi
- Derek Elkins
- Dominic Steinitz
- Edsko de Vries
- Kim-Ee Yeoh
- Miguel Mitrofanov