
Wow, this blew up overnight. Thank you Olaf and allies for attending to this topic. But I think there is a hidden mismatch of premises. So let me take a step back and look at things again, in order, from the ground up.
> Hi Ignat, would you slow down a bit and tell us the idea behind your question?
>
> You seem to expect that Map forms a Cartesian closed category, and want to know whether a specific property of CCCs holds ("functors of shape (a ⇸ -) are distributive").
>
> Could you explain that expectation step by step? As a first step, describe the category made from Maps more precisely. I mean: (1) what are the objects of that category? (2) what are the morphisms? (3) how are the identity morphisms and the composition of morphisms defined?
>
> The next step would be the "Cartesian" part. This category should have direct products of any finite number of objects. What exactly are these, and how do you define the projection morphisms (the equivalents of `fst` and `snd` in your category) and the product of morphisms (the equivalent of `&&&`)?
>
> It's too soon to think about CCC while these points remain unclear.
This is most fair. Being a software engineer and not a mathematician, I was hoping to avoid scrutiny of what I was taking to be an accepted baseline. It did not play out as I hoped, so let us rewind. As I offered previously:
> The category **Haskell** of Haskell types and total functions is not a suitable category for these entities, because you cannot represent subsets in it, and subsets are kinda the whole point of the exercise. So I look at it as though **Haskell** is a subcategory of **Set**, and **Finite Set** — the category of finite sets and finite maps — is another, although not entirely disjoint, subcategory.
I take **Finite Set** to be Cartesian closed, with the same product and exponential objects as its supercategory **Set**. Objections? None, excellent.

Now to the mentioned mismatch. There are two philosophies:

1. Let us model perfect Mathematics in an imperfect programming language as best we can.
2. Let us have only as much Mathematics as we can so that our programming language remains perfect.

I employ approach №1 here. Also, I already have a good idea of what I want to do, and I already have code that works. This makes proofs of non-existence somewhat less effective, in the sense that they cannot discourage me from exploring the matter. If the idea cannot work in general, I need to know in which cases it _does_ work.

Actually, I find the idea that finite sets and functions may be represented in Haskell very attractive in general, so I think it is worth the effort to realize it in Haskell as well as possible, even if a perfect realization is unattainable _(and it is)_. That is not to say that I am going to ignore all objections. But I do want to recover as much as can be recovered. If not a category, maybe a semigroupoid? If not a monoid, maybe a semigroup? If we cannot model it in types, can we still use it with proofs kept outside the type system? Maybe we can use a clever normalization to exclude pathologies? And so on.

In particular, I do not invoke the `Maybe` machinery — instead I say that evaluating a finite map at a value outside the set of its keys is an error, and it is the responsibility of the programmer to prove that no such evaluations occur, even though they may pass the type checker. Yes, I give up type safety.

## Now to the technical details.

We do not have subtypes in Haskell, but we do have subsets and finite maps — it is only that the type system does not know which subsets and finite maps are compatible. So we depart from the category **Haskell** of types and total functions, and we allow instead that some values may be objects of our category **X** — notably the values of type `Set α`.

* A function may be restricted to a finite set via `fromSet ∷ (k → a) → Set k → Map k a`, so there is an arrow in **X** for every pair of an arrow of **Haskell** and a finite subset of an object of **Haskell** that has a total ordering _(or even mere decidable equality, if we may sacrifice performance)_.
* The identity arrow for a given finite set is then the restriction of `id` to that set.
* Finite maps compose, via `fmap (g !) f`, when the domain of one subsumes the range of the other.
* The terminal object is the unique singleton of type `Set ( )`, or any uniquely isomorphic singleton of a type with a single nullary constructor.
* The product of two finite sets is given by `cartesianProduct`, and the projections are the appropriate restrictions of the usual `fst` and `snd`.
* The exponential object is the finite set of all maps with a given domain and range.

I went on and invented the definitions witnessing the requisite isomorphisms:

    curry   = mapKeysWith union fst ∘ mapWithKey (singleton ∘ snd)
    uncurry = unions ∘ mapWithKey (mapKeys ∘ (,))

They pass every example except the one Tom gave earlier, with an empty map.

This establishes a Cartesian closed category of somewhat idealized entities. Can the realization of this category in Haskell be perfected to exclude pathologies by construction? Perhaps it is enough to normalize maps with `dropEmpty` before comparison? Seems worth some research.
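To make the construction easier to poke at, here is a minimal, self-contained sketch of how I read it, in terms of `Data.Map` and `Data.Set` from a reasonably recent `containers`. The names `identity`, `compose`, `productObject`, `projections`, `curry'`, `uncurry'` and the concrete definition of `dropEmpty` are my own illustrative choices, not something established in the thread, and nothing here enforces that domains and ranges actually line up; that remains the programmer's obligation, as stated above.

```haskell
-- A sketch only: arrows of X are plain Data.Map values, and it is the
-- caller's obligation to keep domains and ranges compatible.
import           Data.Map (Map, (!))
import qualified Data.Map as Map
import           Data.Set (Set)
import qualified Data.Set as Set

-- Identity arrow on a finite set: the restriction of id to that set.
identity :: Set k -> Map k k
identity = Map.fromSet id

-- Composition g ∘ f, meaningful only when every value of f is a key of g;
-- otherwise (!) is a runtime error, which is the type safety we gave up.
compose :: Ord b => Map b c -> Map a b -> Map a c
compose g f = fmap (g !) f

-- Product object and its two projections: restrictions of the usual
-- fst and snd to the cartesian product of the two key sets.
productObject :: Set a -> Set b -> Set (a, b)
productObject = Set.cartesianProduct

projections :: Set a -> Set b -> (Map (a, b) a, Map (a, b) b)
projections xs ys = (Map.fromSet fst p, Map.fromSet snd p)
  where p = Set.cartesianProduct xs ys

-- The curry / uncurry candidates from the post, with the signatures
-- I believe they have.
curry' :: (Ord a, Ord b) => Map (a, b) c -> Map a (Map b c)
curry' = Map.mapKeysWith Map.union fst . Map.mapWithKey (Map.singleton . snd)

uncurry' :: (Ord a, Ord b) => Map a (Map b c) -> Map (a, b) c
uncurry' = Map.unions . Map.mapWithKey (Map.mapKeys . (,))

-- The empty-map pathology: a key whose inner map is empty is forgotten, e.g.
--   uncurry' (Map.fromList [('x', Map.empty :: Map Int Bool)])  ==  Map.empty
-- so curry' (uncurry' g) cannot recover the key 'x'.  One possible reading of
-- the dropEmpty normalization is to discard such degenerate entries before
-- comparing maps:
dropEmpty :: Map a (Map b c) -> Map a (Map b c)
dropEmpty = Map.filter (not . Map.null)
```

Whether identifying maps up to this kind of normalization really recovers the exponential laws is exactly the open question at the end of the post; the sketch is only meant to make that easy to test.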