
Also, I have to admit I was a bit handwavy here; I meant P in a
metatheoretic sense, that is "P(a) is some type which contains 'a' as a
free variable", and thus the 'theorem' is really a collection of theorems
parametrized on the P you choose.
For example, P(a) could be "Show a & a -> Int"; in that case we get the
theorem
exists a. (Show a, a -> Int)
<=>
forall r. (forall a. Show a => (a -> Int) -> r) -> r
as witnessed by the following code (using the ExistentialQuantification and
RankNTypes extensions):
data P = forall a. Show a => MkP (a -> Int)

type CPS_P r = (forall a. Show a => (a -> Int) -> r) -> r

isoR :: P -> forall r. CPS_P r
isoR (MkP f) k = k f
  -- pattern matching on MkP brings a fresh type T into scope,
  -- along with f :: T -> Int and the constraint Show T.
  -- k :: forall a. Show a => (a -> Int) -> r
  -- so, k {T} f :: r

isoL :: (forall r. CPS_P r) -> P
isoL k = k (\x -> MkP x)
  -- k :: forall r. (forall a. Show a => (a -> Int) -> r) -> r
  -- k {P} :: (forall a. Show a => (a -> Int) -> P) -> P
  -- MkP :: forall a. Show a => (a -> Int) -> P
  -- therefore, k {P} MkP :: P
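For instance, a quick round trip through the encoding (the names 'packed' and 'packed'' are just illustrative):

packed :: P
packed = MkP (length . show :: Bool -> Int)   -- picks a = Bool

packed' :: P
packed' = isoL (isoR packed)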
Aside: the type 'exists a. (Show a, a -> Int)' is a bit odd, and is another reason we don't have first-class existentials in Haskell. The 'forall' side uses currying, (a -> b -> r) <=> ((a, b) -> r), which works because the constraint arrow => can be modeled by dictionary passing. But we don't have a simple way to represent the dictionary (Show a) as a component of a tuple.
One answer is to pack it up in another "existential"; I find this a bit of
a misnomer since there's nothing existential about this data type aside
from the dictionary:
data ShowDict a = Show a => MkShowDict
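Pattern matching on MkShowDict brings the Show dictionary back into scope; for example, a little helper like this (the name is just for illustration) typechecks with ExistentialQuantification:

showWith :: ShowDict a -> a -> String
showWith MkShowDict x = show x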
Then the theorem translation is a bit more straightforward:
data P = forall a. MkP (ShowDict a, a -> Int)

type CPS_P r = (forall a. (ShowDict a, a -> Int) -> r) -> r

-- theorem: P <=> forall r. CPS_P r

isoR :: P -> forall r. CPS_P r
isoR (MkP x) k = k x

isoL :: (forall r. CPS_P r) -> P
isoL k = k (\x -> MkP x)
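And the currying / dictionary-passing step from the aside can be spelled out directly; these two helpers (hypothetical names, just a sketch) convert a consumer between the constraint form and the ShowDict form:

curryDict :: (forall a. (ShowDict a, a -> Int) -> r)
          -> (forall a. Show a => (a -> Int) -> r)
curryDict k f = k (MkShowDict, f)

uncurryDict :: (forall a. Show a => (a -> Int) -> r)
            -> (forall a. (ShowDict a, a -> Int) -> r)
uncurryDict k (MkShowDict, f) = k f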
-- ryan
On Sat, Aug 18, 2012 at 8:24 PM, wren ng thornton wrote:
On 8/17/12 12:54 AM, Alexander Solla wrote:
On Thu, Aug 16, 2012 at 8:07 PM, wren ng thornton wrote:
Though bear in mind we're discussing second-order quantification here, not first-order.
Can you expand on what you mean here? I don't see two kinds of quantification in the type language (at least, reflexively, in the context of what we're discussing). In particular, I don't see how to quantify over predicates for (or sets of, via the extensions of the predicates) types.
Is Haskell's 'forall' doing double duty?
Nope, it's the "forall" of mathematics doing double duty :)
Whenever doing quantification, there's always some domain being quantified over, though all too often that domain is left implicit; whence lots of confusion over the years. And, of course, there's the scope of the quantification, and the entire expression. For example, consider the expression:
forall a. P(a)
The three important collections to bear in mind are:
(1) the collection of things a ranges over
(2) the collection of things P(a) belongs to
(3) the collection of things forall a. P(a) belongs to
So if we think of P as a function from (1) to (2), and name the space of such functions (1->2), then we can think of the quantifier as a function from (1->2) to (3).
When you talk to logicians about quantifiers, the first thing they think of is so-called "first-order" quantification. That is, given the above expression, they think that the thing being quantified over is a collection of "individuals" who live outside of logic itself[1]. For example, we could be quantifying over the natural numbers, or over the kinds of animals in the world, or any other extra-logical group of entities.
In Haskell, when we see the above expression we think that the thing being quantified over is some collection of types[2]. But, that means when we think of P as a function it's taking types and returning types! So the thing you're quantifying over and the thing you're constructing are from the same domain[3]! This gets logicians flustered and so they call it "second-order" (or more generally, "higher-order") quantification. If you assert the primacy of first-order logic, it makes sense right? In the first-order case we're quantifying over individuals; in the second-order case we're quantifying over collections of individuals; so third-order would be quantifying over collections of collections of individuals; and on up to higher orders.
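In Haskell terms, a rough sketch of that picture (with RankNTypes; the names Endo1 and Forall are just for illustration):

newtype Endo1 a = Endo1 (a -> a)          -- P, a function from types to types
newtype Forall p = Forall (forall a. p a) -- the quantifier: it maps a
                                          -- type-to-type function to a type

polyId :: Forall Endo1                    -- "forall a. P(a)" is itself a type,
polyId = Forall (Endo1 id)                -- here inhabited by the identity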
Personally, I find the names "first-order" and "second-order" rather dubious--- though the distinction is a critical one to make. Part of the reason for its dubiousness can be seen when you look at PTSes which make explicit that (1), (2), and (3) above can each be the same or different in all combinations. First-order quantification is the sort of thing you get from Pi/Sigma types in dependently typed languages like LF; second-order quantification is the sort of thing you get from the forall/exists quantifiers in polymorphic languages like System F. But the Barendregt cube makes it clear that these are *orthogonal* features. Neither one of them comes "first" or "second"; they just do very different things. Logicians thought first of first-order quantification, so they consider that primitive and worry about the complications of dealing with second-/higher-order quantification. Until recently type theory was focused on polymorphism of various sorts, so that's considered primitive or at least easier, whereas dependent types are exotic and worrisome/hard.
[1] This is also the root of the distinction between "functions", "predicates", and "logical connectives" in traditional logic. Functions map individuals to individuals; predicates map individuals to propositions/truth-values; and connectives map propositions/TVs to propositions/TVs. Though, traditionally, the set of connectives is taken to be small and fixed whereas functions and predicates are left to be defined by a "signature".
[2] Namely monotypes. If we allowed quantified variables to be instantiated with polymorphic types, then we'd get "impredicative" quantification. Impredicativity can be both good and bad, depending on what you're up to.
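For example (a small sketch; GHC accepts this with the ImpredicativeTypes extension, though how well it works has varied by version):

ids :: [forall a. a -> a]   -- the list's element type is itself a polytype
ids = [id, \x -> x]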
[3] Well, they're only the same domain for impredicative quantification. Otherwise the collection of types you're constructing is "bigger" than the collection you're quantifying over.
-- Live well, ~wren