
I've noticed over the - okay, over the months - that some folks enjoy the puzzle-like qualities of programming in the type system (poor Oleg, he's become #haskell's answer to the "Chuck Norris" meme commonly encountered in MMORPGs).

Anyway... are there any languages out there whose term-level programming resembles Haskell type-level programming, and if so, would a deliberate effort to appeal to that resemblance be an advantage (leaving out for now the hair-pulling effort that such a change would entail)? Or, better yet, is there an Interest Group or committee (Working, or not) that is looking at a coherent architecture or design for a possible future version of Haskell (no offense to Tim Sheard's excellent Ωmega project)?

Walt "BMeph" Rorie-Baety

"A mountain that eats people? I want one!" - Richard, of LFGComic.com

On Fri, Jun 25, 2010 at 2:26 PM, Walt Rorie-Baety wrote:
I've noticed over the - okay, over the months - that some folks enjoy the puzzle-like qualities of programming in the type system (poor Oleg, he's become #haskell's answer to the "Chuck Norris" meme commonly encountered in MMORPGs).
Anyway,... are there any languages out there whose term-level programming resembles Haskell type-level programming, and if so, would a deliberate effort to appeal to that resemblance be an advantage (leaving out for now the hair-pulling effort that such a change would entail)?
I'm not a Prolog programmer, but I've heard that using type classes to do your computations leads to code that resembles Prolog.

You might enjoy looking at dependently typed languages. In those languages the term level and type level have the same computing power, so your programs will go between the levels at times. In Agda they even share the same syntax, I think.

Jason
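To give a flavor of the Prolog resemblance, here is a small sketch (untested; the names are just for illustration) of type-level addition with multi-parameter type classes and functional dependencies, where each instance reads like a Horn clause:

{-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances,
             UndecidableInstances #-}

data Z        -- the numeral zero, as a type
data S n      -- successor, as a type constructor

-- 'Add m n r' plays the role of the Prolog predicate add(M, N, R)
class Add m n r | m n -> r
instance              Add Z     n n
instance Add m n r => Add (S m) n (S r)

Asking GHC to solve a constraint like Add (S Z) (S Z) r improves r to S (S Z), much as the Prolog query ?- add(s(z), s(z), R). computes R = s(s(z)).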

Jason Dagit wrote:
On Fri, Jun 25, 2010 at 2:26 PM, Walt Rorie-Baety wrote:
I've noticed over the - okay, over the months - that some folks enjoy the puzzle-like qualities of programming in the type system (poor Oleg, he's become #haskell's answer to the "Chuck Norris" meme commonly encountered in MMORPGs).
Anyway,... are there any languages out there whose term-level programming resembles Haskell type-level programming, and if so, would a deliberate effort to appeal to that resemblance be an advantage (leaving out for now the hair-pulling effort that such a change would entail)?
I'm not a prolog programmer, but I've heard that using type classes to do your computations leads to code that resembles prolog.
Indeed. If you like the look of Haskell's type-level programming, you should look at logic programming languages based on Prolog. Datalog gives a well understood fragment of Prolog. ECLiPSe[1] extends Prolog with constraint programming. Mercury[2], lambda-Prolog[3], and Dyna give a more modern take on the paradigm.

If you're just a fan of logic variables and want something more Haskell-like, there is Curry[4]. In a similar vein there's also AliceML[5], which gives a nice futures/concurrency story to ML. AliceML started out on the same VM as Mozart/Oz[6], which has similar futures, though a different overall programming style.

And, as Jason said, if you're just interested in having the same programming style at both term and type levels, then you should look into dependently typed languages. Agda is the most Haskell-like, Epigram draws heavily from the Haskell community, and Coq comes more from the ML tradition. There's a menagerie of others too, once you start looking.

[1] http://eclipse-clp.org/ is currently down, but can be accessed at http://87.230.22.228/
[2] http://www.mercury.csse.unimelb.edu.au/
[3] http://www.lix.polytechnique.fr/~dale/lProlog/
[4] http://www-ps.informatik.uni-kiel.de/currywiki/
[5] http://www.ps.uni-saarland.de/alice/
[6] http://www.mozart-oz.org/

-- Live well, ~wren

Are any of those compatible with Haskell, so that we could mix code in that language with Haskell code?

Cheers,
Greg

On 6/25/10 9:49 PM, wren ng thornton wrote:
Jason Dagit wrote:
On Fri, Jun 25, 2010 at 2:26 PM, Walt Rorie-Baety wrote:
I've noticed over the - okay, over the months - that some folks enjoy the puzzle-like qualities of programming in the type system (poor Oleg, he's become #haskell's answer to the "Chuck Norris" meme commonly encountered in MMORPGs).
Anyway,... are there any languages out there whose term-level programming resembles Haskell type-level programming, and if so, would a deliberate effort to appeal to that resemblance be an advantage (leaving out for now the hair-pulling effort that such a change would entail)?
I'm not a prolog programmer, but I've heard that using type classes to do your computations leads to code that resembles prolog.
Indeed. If you like the look of Haskell's type-level programming, you should look at logic programming languages based on Prolog. Datalog gives a well understood fragment of Prolog. ECLiPSe[1] extends Prolog with constraint programming. Mercury[2], lambda-Prolog[3], and Dyna give a more modern take on the paradigm.
If you're just a fan of logic variables and want something more Haskell-like, there is Curry[4]. In a similar vein there's also AliceML[5] which gives a nice futures/concurrency story to ML. AliceML started out on the same VM as Mozart/Oz[6], which has similar futures, though a different overall programming style.
And, as Jason said, if you're just interested in having the same programming style at both term and type levels, then you should look into dependently typed languages. Agda is the most Haskell-like, Epigram draws heavily from the Haskell community, and Coq comes more from the ML tradition. There's a menagerie of others too, once you start looking.
[1] http://eclipse-clp.org/ is currently down, but can be accessed at http://87.230.22.228/
[2] http://www.mercury.csse.unimelb.edu.au/
[3] http://www.lix.polytechnique.fr/~dale/lProlog/
[4] http://www-ps.informatik.uni-kiel.de/currywiki/
[5] http://www.ps.uni-saarland.de/alice/
[6] http://www.mozart-oz.org/

Gregory Crosswhite wrote:
On 6/25/10 9:49 PM, wren ng thornton wrote:
[1] http://eclipse-clp.org/ is currently down, but can be accessed at http://87.230.22.228/ [2] http://www.mercury.csse.unimelb.edu.au/ [3] http://www.lix.polytechnique.fr/~dale/lProlog/ [4] http://www-ps.informatik.uni-kiel.de/currywiki/ [5] http://www.ps.uni-saarland.de/alice/ [6] http://www.mozart-oz.org/
Are any of those compatible with Haskell, so that we could mix code in that language with Haskell code?
Your best bets would be Agda and Curry. I'm not familiar enough with either of them to know what sort of FFI or cross-linking they support, but both are (by design) rather similar to Haskell. For Curry, it may also vary depending on the compiler. With all the others, interaction will be restricted to generic FFI support.

-- Live well, ~wren

On 26 June 2010 08:07, Andrew Coppin wrote:
Out of curiosity, what the hell does "dependently typed" mean anyway?
How about: "The result type of a function may depend on the argument value" (rather than just the argument type).

The quoted bit, rather than the parenthesized bit, is from Lennart Augustsson's "Cayenne - a language with dependent types". The papers on Cayenne might be an interesting starting point: the field was less mature at that time, so the early papers had more explaining to do.

Best wishes
Stephen
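To make the slogan concrete, here is a rough sketch (untested) of the nearest GHC approximation, using a GADT "singleton" so the checker can see which value was passed; in a genuinely dependently typed language like Cayenne or Agda no such scaffolding is needed:

{-# LANGUAGE EmptyDataDecls, GADTs, TypeFamilies #-}

data TTrue     -- type-level stand-ins for the two Bool values
data TFalse

data SBool b where          -- ties each value to its type-level image
  STrue  :: SBool TTrue
  SFalse :: SBool TFalse

type family P b             -- the result type, as a function of the argument
type instance P TTrue  = Int
type instance P TFalse = String

f :: SBool b -> P b         -- the result type depends on the argument value
f STrue  = 42
f SFalse = "hello"

Here f STrue has type Int while f SFalse has type String.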

Stephen Tetley wrote:
On 26 June 2010 08:07, Andrew Coppin wrote:
Out of curiosity, what the hell does "dependently typed" mean anyway?
How about:
"The result type of a function may depend on the argument value" (rather than just the argument type)
Hmm. This sounds like one of those things where the idea is simple, but the consequences are profound... (I have once or twice wanted to do this in fact, but I don't recall why now.)

Andrew Coppin wrote:
Stephen Tetley wrote:
On 26 June 2010 08:07, Andrew Coppin wrote:
Out of curiosity, what the hell does "dependently typed" mean anyway?
How about:
"The result type of a function may depend on the argument value" (rather than just the argument type)
Hmm. This sounds like one of those things where the idea is simple, but the consequences are profound...
The simplest fully[1] dependently typed formalism is one of the many variants of LF. LF is an extension of the simply typed lambda calculus, extending the arrow type constructor to be ((x:a) -> b) where the variable x is bound to the argument value and has scope over b. In order to make use of this, we also allow type constructors with dependent kinds; for example, with the type family (P : A -> *) we could have a function (f : (x:A) -> P x). Via the Curry--Howard isomorphism this gives us first-order intuitionistic predicate calculus (whereas STLC is 1st-order propositional calculus). The switch from atomic propositions to predicates is where the profundity lies.

A common extension to LF is to add dependent pairs, generalizing the product type constructor to be ((x:a) * b), where the variable x is bound to the first component of the pair and has scope over b. This extension is rather trivial in the LF setting, but it can cause unforeseen complications in more complex formalisms.

Adding dependencies is orthogonal to adding polymorphism or to adding higher-orderness. Though "orthogonal" should probably be said with scare-quotes. In the PTS presentation of Barendregt's lambda cube these three axes are indeed syntactically orthogonal. However, in terms of formal power, the lambda cube isn't really a cube as such. There are numerous shortcuts and wormholes connecting far reaches in obscure non-Euclidean ways. The cube gives a nice overview to start from, but don't confuse the map for the territory.

One particular limitation of LF worth highlighting is that, even though term-level values may *occur* in types, they may not themselves *be* types. That is, in LF, we can't have any function like (g : (x:a) -> x). In the Calculus of Constructions (CC)[2], this restriction is lifted in certain ways, and that's when the distinction between term-level and type-level really breaks down. Most current dependently typed languages (Coq, Agda, Epigram, LEGO, NuPRL) are playing around somewhere near here, whereas older languages tended to play around closer to LF or ITT (e.g., Twelf).

And as a final note, GADTs and type families are forms of dependent types, so GHC Haskell is indeed a dependently typed language (of sorts). They're somewhat kludgy in Haskell because of the way they require code duplication for term-level and type-level definitions of "the same" function (see the sketch below). In "real" dependently typed languages it'd be cleaner to work with these abstractions since we could pass terms into the type level directly; however, that cleanliness comes with other burdens, such as requiring that all functions be provably terminating. Managing to balance cleanliness and the requirements of type checking is an ongoing research area. (Unless you take the Cayenne route and just stop caring about whether type checking will terminate :)

[1] Just as Hindley--Milner is an interesting stopping point between STLC and System F, there are also systems between STLC and LF.
[2] In terms of formal power: CC == F_omega + LF == ITT + SystemF.

-- Live well, ~wren
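To illustrate that last point, here is a sketch (untested) of the duplication in GHC Haskell: addition written once as a type family and again as a term function over singleton naturals, with the two copies kept in sync by hand:

{-# LANGUAGE EmptyDataDecls, GADTs, TypeFamilies #-}

data Z
data S n

type family Plus m n                      -- the type-level copy
type instance Plus Z     n = n
type instance Plus (S m) n = S (Plus m n)

data Nat n where                          -- naturals reflected at both levels
  Zero :: Nat Z
  Succ :: Nat n -> Nat (S n)

plus :: Nat m -> Nat n -> Nat (Plus m n)  -- the term-level copy, again
plus Zero     n = n
plus (Succ m) n = Succ (plus m n)

In a dependently typed language, a single definition of addition would serve both roles.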

It means that not only can values have types, types can have values. An example of the uses of a dependent type would be to encode the length of a list in its type. Due to the Curry-Howard isomorphism, dependent types open the gateway for lots of type-level theorem proving.
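For a tiny taste of that theorem proving, already available in GHC (a sketch, untested): an equality proposition as a GADT, with symmetry and transitivity proved as ordinary programs:

{-# LANGUAGE GADTs #-}

data Equal a b where     -- the proposition "a equals b"
  Refl :: Equal a a      -- its only proof: reflexivity

sym :: Equal a b -> Equal b a        -- a program whose type is a theorem
sym Refl = Refl

trans :: Equal a b -> Equal b c -> Equal a c
trans Refl Refl = Refl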
On 26 June 2010 17:07, Andrew Coppin wrote:
wren ng thornton wrote:
And, as Jason said, if you're just interested in having the same programming style at both term and type levels, then you should look into dependently typed languages.
Out of curiosity, what the hell does "dependently typed" mean anyway?

Liam O'Connor wrote:
It means that not only can values have types, types can have values.
Uh, don't types have values *now*?
An example of the uses of a dependent type would be to encode the length of a list in its type.
Oh, right. So you mean that as well as being able to say "Foo Bar", you can say "Foo 7", where 7 is (of course) a value rather than a type. (?)

http://www.cs.nott.ac.uk/~txa/publ/ydtm.pdf

Andrew Coppin wrote:
Liam O'Connor wrote:
It means that not only can values have types, types can have values.
Uh, don't types have values *now*?
An example of the uses of a dependent type would be to encode the length of a list in its type.
Oh, right. So you mean that as well as being able to say "Foo Bar", you can say "Foo 7", where 7 is (of course) a value rather than a type. (?)
-- Tony Morris http://tmorris.net/

Tony Morris wrote:
Ah yes, it was definitely Epigram I looked at. The intro to this looked promising, but by about 3 pages in, I had absolutely no clue what on Earth the text is talking about...

I prefer Agda to Epigram, but different strokes for different folks.

In Agda, you could define a list indexed by its size like this:

data Vec (A : Set) : ℕ → Set where
  []  : Vec A 0
  _∷_ : ∀ {n : ℕ} → A → Vec A n → Vec A (1 + n)

So, we have a Vec data type, and on the type level it is a function from some type A (which itself is of type Set) and a natural number (the length) to a new type (of type Set).

The empty list is defined as a zero-length vector, and cons therefore increases the type-level length of the vector by one. Using this method, Agda can be used to make a fully safe "head" implementation that is statically verified not to crash:

head : ∀ {A : Set} {n : ℕ} → Vec A (1 + n) → A
head (x ∷ xs) = x

This uses the type system to ensure that the vector includes at least one element.

Note that a similar thing can be achieved in Haskell with the right extensions; however, type-level naturals must be used:

{-# LANGUAGE EmptyDataDecls, GADTs #-}

data Z
data S n

data Vec a n where
  Empty :: Vec a Z
  Cons  :: a -> Vec a n -> Vec a (S n)

safeHead :: Vec a (S n) -> a
safeHead (Cons x xs) = x

(note: not tested)

The main difference here between Haskell and Agda is that in Agda the types themselves are typed, and the types contain real naturals, not fake ones like in Haskell.

Cheers.
~Liam
On 26 June 2010 22:04, Andrew Coppin wrote:
Tony Morris wrote:
Ah yes, it was definitely Epigram I looked at. The intro to this looked promising, but by about 3 pages in, I had absolutely no clue what on Earth the text is talking about...

On 6/26/10 07:28, Andrew Coppin wrote:
Uh, don't types have values *now*?
To the extent that's true now, they're "baked in"; you can't do anything to/with them.
Oh, right. So you mean that as well as being able to say "Foo Bar", you can say "Foo 7", where 7 is (of course) a value rather than a type. (?)
A bit more than that: imagine now that you can (a) replace that 7 with a variable and (b) do math on it in a type declaration:
-- borrowing 'typevar from ML for a moment... (SList 'a l) is a sized list of ('a) of length (l)
sListConcat :: SList 'a l1 -> SList 'a l2 -> SList 'a (l1 + l2)
Just for starters.

-- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH
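That pseudo-Haskell is, in fact, nearly within reach of GHC today; a sketch (untested, names invented) using a type family for the (+):

{-# LANGUAGE EmptyDataDecls, GADTs, TypeFamilies #-}

data Z
data S n

type family Add m n                       -- type-level (+)
type instance Add Z     n = n
type instance Add (S m) n = S (Add m n)

data SList a n where                      -- sized lists
  Nil  :: SList a Z
  Cons :: a -> SList a n -> SList a (S n)

sListConcat :: SList a m -> SList a n -> SList a (Add m n)
sListConcat Nil         ys = ys
sListConcat (Cons x xs) ys = Cons x (sListConcat xs ys)

The checker verifies that the output length really is the sum of the input lengths.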

Brandon S Allbery KF8NH wrote:
On 6/26/10 07:28 , Andrew Coppin wrote:
Oh, right. So you mean that as well as being able to say "Foo Bar", you can say "Foo 7", where 7 is (of course) a value rather than a type. (?)
A bit more than that: imagine now that you can (a) replace that 7 with a variable and (b) do math on it in a type declaration.
Right, I see. So is there a specific reason why Haskell isn't dependently typed then?

On Sat, Jun 26, 2010 at 11:23 AM, Andrew Coppin wrote:
Brandon S Allbery KF8NH wrote:
On 6/26/10 07:28, Andrew Coppin wrote:
Oh, right. So you mean that as well as being able to say "Foo Bar", you can say "Foo 7", where 7 is (of course) a value rather than a type. (?)
A bit more than that: imagine now that you can (a) replace that 7 with a variable and (b) do math on it in a type declaration.
Right, I see. So is there a specific reason why Haskell isn't dependently typed then?

Or you could ask, So is there a specific reason why C isn't a functional language?

On 6/26/10 14:58, Jason Dagit wrote:
On Sat, Jun 26, 2010 at 11:23 AM, Andrew Coppin wrote:
Brandon S Allbery KF8NH wrote:
A bit more than that: imagine now that you can (a) replace that 7 with a variable and (b) do math on it in a type declaration.
So is there a specific reason why Haskell isn't dependently typed then?
Or you could ask, So is there a specific reason why C isn't a functional language?
More to the point, Haskell was a bit too frozen in stone when dependent type theory reached the point of being implementable.

As a case in point, you'll notice that in my sized-list example in pseudo-Haskell I had to drag in syntax from ML to distinguish type variables from value variables. Hard to escape that with Haskell as it currently exists --- but in a proper dependently typed system, there is no such distinction: types aren't different kinds of things from values. (Or in the usual lingo, types are first class values in dependently-typed languages.) Compare my example to the Agda example someone else posted; Agda is a proper dependently typed language, and the value and type variables are treated exactly the same way.

-- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH

So is there a specific reason why Haskell isn't dependently typed then?
Or you could ask, So is there a specific reason why C isn't a functional language?
More to the point, Haskell was a bit too frozen in stone when dependent type theory reached the point of being implementable.
Right. So, in summary, the answer is "historical circumstance"?

(I was wondering whether it was history, or whether it's impossible to implement dependently-typed languages, or some other reason or...)

On 6/28/10 15:04, Andrew Coppin wrote:
More to the point, Haskell was a bit too frozen in stone when dependent type theory reached the point of being implementable.
Right. So, in summary, the answer is "historical circumstance"?
(I was wondering whether it was history, or whether it's impossible to implement dependently-typed languages, or some other reason or...)
I think you chose to chop off a partial answer to that.

Andrew Coppin wrote:
Right, I see.
So is there a specific reason why Haskell isn't dependently typed then?
One problem with dependent types as I understand it is that type inference is not guaranteed to terminate.

Erik
--
Erik de Castro Lopo
http://www.mega-nerd.com/

On Sat, Jun 26, 2010 at 6:55 PM, Erik de Castro Lopo wrote:
One problem with dependent types as I understand it is that type inference is not guaranteed to terminate.
Full type inference is undecidable in most interesting type systems anyway. It's possible for the simply-typed λ-calculus, but the best you can do beyond that is probably the Damas/Hindley/Milner algorithm, which covers a (rather useful) subset of System F. This is the heart of Haskell's type inference, but some GHC extensions make type inference undecidable, such as RankNTypes.

Type inference being undecidable is only a problem insofar as it requires adding explicit type annotations until the remaining types can be inferred.

- C.
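A small illustration of the RankNTypes case (a sketch, untested): the signature below can be checked but never inferred, so removing it makes GHC reject the definition:

{-# LANGUAGE RankNTypes #-}

-- the argument must itself be polymorphic, and only an
-- explicit annotation can say so
both :: (forall a. a -> a) -> (Int, Bool)
both f = (f 0, f True)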

On Sat, Jun 26, 2010 at 12:07 AM, Andrew Coppin wrote:
wren ng thornton wrote:
And, as Jason said, if you're just interested in having the same programming style at both term and type levels, then you should look into dependently typed languages.
Out of curiosity, what the hell does "dependently typed" mean anyway?
The types can depend on values. For example, have you ever noticed we have families of functions in Haskell like zip, zip3, zip4, ..., and liftM, liftM2, ...?

Each of them has a type that fits into a pattern, mainly the arity increases. Now imagine if you could pass a natural number to them and have the type change based on that instead of making new versions and incrementing the name. That of course, is only the tip of the iceberg.

Haskell's type system is sufficiently expressive that we can encode many instances of dependent types by doing some extra work. Even the example I just gave can be emulated. See this paper:
http://www.brics.dk/RS/01/10/
Also SHE:
http://personal.cis.strath.ac.uk/~conor/pub/she/

Then there are languages like Coq and Agda that support dependent types directly. There you can return a type from a function instead of a value.

Jason
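For the curious, the trick in that paper goes roughly like this (a sketch from memory, untested; see the paper for the authoritative version). Numerals are encoded as list transformers, and each successor zips one more list in:

zap :: [a -> b] -> [a] -> [b]    -- zipWith ($): apply functions pointwise
zap (f:fs) (x:xs) = f x : zap fs xs
zap _      _      = []

zero :: [a] -> [a]               -- the numeral 0
zero = id

suc :: ([b] -> c) -> [a -> b] -> [a] -> c   -- successor
suc n fs xs = n (zap fs xs)

zipWithN n f = n (repeat f)      -- the numeral picks the arity

-- zipWithN (suc zero)       negate [1,2,3]      ==> [-1,-2,-3]
-- zipWithN (suc (suc zero)) (+)    [1,2] [3,4]  ==> [4,6]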

On Sat, Jun 26, 2010 at 12:27 AM, Jason Dagit wrote:
On Sat, Jun 26, 2010 at 12:07 AM, Andrew Coppin <andrewcoppin@btinternet.com> wrote:
wren ng thornton wrote:
And, as Jason said, if you're just interested in having the same programming style at both term and type levels, then you should look into dependently typed languages.
Out of curiosity, what the hell does "dependently typed" mean anyway?
The types can depend on values. For example, have you ever noticed we have families of functions in Haskell like zip, zip3, zip4, ..., and liftM, liftM2, ...?
Each of them has a type that fits into a pattern, mainly the arity increases. Now imagine if you could pass a natural number to them and have the type change based on that instead of making new versions and incrementing the name. That of course, is only the tip of the iceberg. Haskell's type system is sufficiently expressive that we can encode many instances of dependent types by doing some extra work. Even the example I just gave can be emulated. See this paper: http://www.brics.dk/RS/01/10/
Also SHE: http://personal.cis.strath.ac.uk/~conor/pub/she/
Then there are languages like Coq and Agda that support dependent types directly. There you can return a type from a function instead of a value.
I just realized I may not have made this point sufficiently clear. Much of the reason we need to do extra work in Haskell to emulate dependent types is that types are not first class in Haskell. We can't write terms that manipulate types (type-level functions). Instead we use type classes as sets of types, and newtypes/data in place of type-level functions. But in dependently typed languages, types are first class.

Along this line, HList would also serve as a good example of what you would/could do in a dependently typed language, by showing you how to emulate it in Haskell.

Jason

We most certainly do have type-level functions. See type families.
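For instance, something like this (untested) is a type-level function defined by pattern matching on types:

{-# LANGUAGE TypeFamilies #-}

type family Swap p
type instance Swap (a, b) = (b, a)

-- Swap (Int, Bool) reduces to (Bool, Int) during type checking:
swapped :: Swap (Int, Bool)
swapped = (True, 0)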
Cheers.
~Liam
On 26 June 2010 17:33, Jason Dagit wrote:
On Sat, Jun 26, 2010 at 12:27 AM, Jason Dagit wrote:
On Sat, Jun 26, 2010 at 12:07 AM, Andrew Coppin wrote:
wren ng thornton wrote:
And, as Jason said, if you're just interested in having the same programming style at both term and type levels, then you should look into dependently typed languages.
Out of curiosity, what the hell does "dependently typed" mean anyway?
The types can depend on values. For example, have you ever noticed we have families of functions in Haskell like zip, zip3, zip4, ..., and liftM, liftM2, ...? Each of them has a type that fits into a pattern, mainly the arity increases. Now imagine if you could pass a natural number to them and have the type change based on that instead of making new versions and incrementing the name. That of course, is only the tip of the iceberg. Haskell's type system is sufficiently expressive that we can encode many instances of dependent types by doing some extra work. Even the example I just gave can be emulated. See this paper: http://www.brics.dk/RS/01/10/ Also SHE: http://personal.cis.strath.ac.uk/~conor/pub/she/ Then there are languages like Coq and Agda that support dependent types directly. There you can return a type from a function instead of a value.
I just realized, I may not have made this point sufficiently clear. Much of the reason we need to do extra work in Haskell to emulate dependent types is because types are not first class in Haskell. We can't write terms that manipulate types (type level functions). Instead we use type classes as sets of types and newtypes/data in place of type level functions. But, in dependently typed languages types are first class. Along this line, HList would also serve as a good example of what you would/could do in a dependently typed language by showing you how to emulate it in Haskell. Jason

Jason Dagit wrote:
On Sat, Jun 26, 2010 at 12:07 AM, Andrew Coppin wrote:
Out of curiosity, what the hell does "dependently typed" mean anyway?
The types can depend on values. For example, have you ever noticed we have families of functions in Haskell like zip, zip3, zip4, ..., and liftM, liftM2, ...?
Each of them has a type that fits into a pattern, mainly the arity increases. Now imagine if you could pass a natural number to them and have the type change based on that instead of making new versions and incrementing the name.
Right. I see. (I think...)
Then there are languages like Coq and Agda that support dependent types directly. There you can return a type from a function instead of a value.
I think I looked at Coq (or was it Epigram?) and found it utterly incomprehensible. Whoever wrote the document I was reading was obviously very comfortable with advanced mathematical abstractions which I've never even heard of.

It's a bit like trying to learn Prolog from somebody who thinks that the difference between first-order and second-order logic is somehow "common knowledge". (FWIW, I have absolutely no clue what that difference is. But if you show me a few Prolog examples, I get the gist of what it does and why that's useful.)

On Jun 26, 2010, at 4:33 AM, Andrew Coppin wrote:
It's a bit like trying to learn Prolog from somebody who thinks that the difference between first-order and second-order logic is somehow "common knowledge".
A first-order logic quantifies over values, and a second-order logic quantifies over values and sets of values (i.e., types, predicates, etc.). The latter lets you express things like "For every property P, P x". Notice that this expression "is equivalent" to Haskell's bottom type "a".

Indeed, Haskell is a weak second-order language. Haskell's language of values, functions, and function application is a first-order language.

Alexander Solla wrote:
On Jun 26, 2010, at 4:33 AM, Andrew Coppin wrote:
It's a bit like trying to learn Prolog from somebody who thinks that the difference between first-order and second-order logic is somehow "common knowledge".
A first order logic quantifies over values, and a second order logic quantifies over values and sets of values (i.e., types, predicates, etc). The latter lets you express things like "For every property P, P x". Notice that this expression "is equivalent" to Haskell's bottom type "a". Indeed, Haskell is a weak second-order language. Haskell's language of values, functions, and function application is a first-order language.
I have literally no idea what you just said. It's like, I have C. J. Date on the shelf, and the whole chapter on the Relational Calculus just made absolutely no sense to me because I don't understand the vocabulary.

On Jun 26, 2010, at 11:21 AM, Andrew Coppin wrote:
A first order logic quantifies over values, and a second order logic quantifies over values and sets of values (i.e., types, predicates, etc). The latter lets you express things like "For every property P, P x". Notice that this expression "is equivalent" to Haskell's bottom type "a". Indeed, Haskell is a weak second-order language. Haskell's language of values, functions, and function application is a first-order language.
I have literally no idea what you just said.
It's like, I have C. J. Date on the shelf, and the whole chapter on the Relational Calculus just made absolutely no sense to me because I don't understand the vocabulary.
Let's break it down then. A language is a set of "terms" or "expressions", together with rules for putting terms together (resulting in "sentences", in the logic vocabulary). A "logic" is a language with "rules of inference" that let you transform sets of sentences into new sentences.

Quantification is a tricky thing to describe. In short, if a language can "quantify over" something, it means that you can have variables "of that kind". So, Haskell can quantify over integers, since we can have variables like "x :: Integer". More generally, Haskell's run-time language quantifies over "values".

From this point of view, Haskell is TWO languages that interact nicely (or rather, a second-order language). First, there is the "term-level" run-time language. This is the stuff that gets evaluated when you actually run a program. It can quantify over values and functions. And we can express function application (a rule of inference to combine a function and a value, resulting in a new value). Second, there is the type language, which relies on specific keywords: data, class, instance, newtype, type, (::). Using these keywords, we can quantify over types. That is, the constructs they enable take types as variables.

However, notice that a type is "really" a collection of values. For example, as the Gentle Introduction to Haskell says, we should think of the type Integer as being:

data Integer = 0 | 1 | -1 | 2 | -2 | ...

despite the fact that this isn't how it's really implemented. The Integer type is "just" an enumeration of the integers.

Putting this all together and generalizing a bit, a second-order language is a language with two distinct, unmixable kinds of variables, where one kind of variable represents a collection of things that can fill in the other kind of variable.
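The two languages sit side by side in even the smallest program; a trivial sketch:

-- the type language: 'a' and 'b' are variables ranging over types
data Pair a b = MkPair a b

-- the term language: 'x' and 'y' are variables ranging over values
swap :: Pair a b -> Pair b a
swap (MkPair x y) = MkPair y x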

Andrew Coppin wrote:
I think I looked at Coq (or was it Epigram?) and found it utterly incomprehensible. Whoever wrote the document I was reading was obviously very comfortable with advanced mathematical abstractions which I've never even heard of.
One of the things I've found when dealing with--- er, reading the documentation for languages like Coq, is that the class of problems which motivate the need to move to such powerful formalisms are often so abstract that it is hard to come up with simple practical examples of why anyone should care. There are examples everywhere, but complex machinery is only motivated by complex problems, so it's hard to find nice simple examples. In particular, I've noticed that once you start *using* Coq (et al.) and trying to prove theorems about your programs, the subtle issues that motivate the complex machinery begin to make sense. For example, there's a lot of debate over whether Axiom K is a good thing or not. Just reading the literature doesn't shed any light on the real ramifications of having the axiom vs not; you really need to go about trying to prove definitional equalities and seeing the places where you can't proceed without it, before you can appreciate what all the hubbub is about.
It's a bit like trying to learn Prolog from somebody who thinks that the difference between first-order and second-order logic is somehow "common knowledge". (FWIW, I have absolutely no clue what that difference is.
First-order means you can only quantify over simple things. Second-order means you can also quantify over quantification, as it were. For example, the type system of simply-typed lambda calculus is 1st-order intuitionistic propositional logic, and System F (i.e., STLC + rank-n polymorphism) is 2nd-order. (F_omega is higher-order, but that's one of those wormholes in the lambda cube.)

Though "higher-order" is a much abused term, which may cause some of the confusion. The higher-orderness discussed above has to do with quantification within types, which has to do with higher-orderness of the related logics. But we can also talk about a different sort of higher-orderness, namely what distinguishes System F from F_omega, or distinguishes LF from ITT.

In STLC we add a simple language of types at a layer above the term layer, in order to make sure the term layer behaves itself. After hacking around we eventually decide it'd be cool to have functions at the type level. But how do we make sure that our types are well-formed? Well, we add a new layer of simple "types" on top of the type layer --- which gives us a second-order system. We could repeat the process again once we decide we want kind-level functions too.[1]

To take a different track, in cognitive science people talk about "theory of mind", i.e. the idea that we each theorize that other people have minds, desires, beliefs, etc. A first-order theory of mind is when we attribute a mind to other people. A second-order theory of mind is when we attribute a theory of mind to other people (i.e., we believe that they believe that we have a mind). Etc. In epistemic/doxastic logics we can talk about first-order knowledge/beliefs, that is beliefs in simple propositions. But we can also talk about second-order beliefs, that is beliefs about beliefs. Etc.

[1] See Tim Sheard's Omega for the logical conclusion of this process.

-- Live well, ~wren

wren ng thornton wrote:
Andrew Coppin wrote:
I think I looked at Coq (or was it Epigram?) and found it utterly incomprehensible. Whoever wrote the document I was reading was obviously very comfortable with advanced mathematical abstractions which I've never even heard of.
One of the things I've found when dealing with--- er, reading the documentation for languages like Coq, is that the class of problems which motivate the need to move to such powerful formalisms are often so abstract that it is hard to come up with simple practical examples of why anyone should care. There are examples everywhere, but complex machinery is only motivated by complex problems, so it's hard to find nice simple examples.
Yeah. I think a similar problem is probably why most of the existing GHC extensions don't have comprehensible documentation. It's like "if you don't understand why this is useful, you probably don't even need it anyway". Except that that still doesn't explain the opaque language. (Other than that saying "this lets you put a thingy on the wotsit holder" would be even *more* opaque...)
It's a bit like trying to learn Prolog from somebody who thinks that the difference between first-order and second-order logic is somehow "common knowledge". (FWIW, I have absolutely no clue what that difference is.
First-order means you can only quantify over simple things. Second-order means you can also quantify over quantification, as it were.
Sure. I get the idea that "second-order" means "like first-order, but you can apply it to itself" (in most contexts, anyway). But what the heck does "quantification" mean?

Sadly, I suspect that it would be like asking where Maidenhead is. ("Hey, where *is* Maidenhead anyway?" "Oh, it's near Slough." "Uh, where's Slough?" "It's in Berkshire." "...and where's Berkshire?" "It's near Surrey." "OK, forget I asked.")

Andrew Coppin wrote:
wren ng thornton wrote:
Andrew Coppin wrote:
It's a bit like trying to learn Prolog from somebody who thinks that the difference between first-order and second-order logic is somehow "common knowledge". (FWIW, I have absolutely no clue what that difference is.
First-order means you can only quantify over simple things. Second-order means you can also quantify over quantification, as it were.
Sure. I get the idea that "second-order" means "like first-order, but you can apply it to itself" (in most contexts, anyway).
Kind of. More accurately, you can apply it to itself once. That is, continuing the quantification version:

First we have a set S. In the 0th-order world we can only pick out particular elements of S by name; we can't say "oh, for any old S". In the 1st-order world we are allowed to quantify over elements of S (i.e., pick out subsets). However, in so doing, we implicitly create a new set S' which contains the names of all possible 1st-order quantifications over S. And in the 1st-order setting we have no way of quantifying over S'; we must pick out elements of S' by name. But in the 2nd-order setting we can use 2nd-order quantification in order to say "any old element of S'"[1]. Of course, this will implicitly create a set S'' of 2nd-order quantifications...

In practice, there seems to be something magical about 2nd-order systems. That is, while there are differences between 2nd-order and higher-order (i.e., >=2 including the limit), these differences tend to be trivial/obvious, whereas the leap from 1st-order systems to 2nd-order systems causes remarkable and catastrophic[2] changes. So technically 2nd-order systems are restricted to only quantifying over their 1st-order subsystems, but they introduce the notion of self-reference, which is as different from quantification as quantification is from naming.
But what the heck does "quantification" mean?
In the simplest sense, it's the ability to enumerate some collection. Though that only raises more questions: "which collection?", "how big of a collection can it be?" or equivalently "how powerful of an enumerator does it use?", "is it guaranteed to yield *every* element of the collection?",... In an only somewhat more general sense, it's a question philosophers (and logicians) have argued over for centuries.

But more apropos, the distinction between orders is really a distinction about the power/nature of a system/theory. Quantification over sets is an example of a theory where we'd want to distinguish orders, but it isn't the only one. The categorization of lambda calculi into layers of terms, types, kinds, etc. is another example (where each layer is an order). The other example I have about Theory of Mind is yet another example, and perhaps the clearest for explaining what the distinction is ultimately trying to get at. In 1st-order theories we can generate hypotheses about the world, but in 2nd-/higher-order theories we can also generate hypotheses about our theories. Again, the 2nd-order introduces the notion of self-reference, the idea of a theory containing itself as an object, and the infinite regress of impredicativity.[3]

[1] And we can still use 1st-order quantification in order to enumerate S. Frequently mathematicians don't bother distinguishing the different varieties of quantification available in a given system; so long as we know what the upper limit is, we use the same notation for everything under that limit. Sometimes, however, we do need to distinguish the different quantifiers. For example, this shows up in the stratification of universes to avoid impredicativity in Coq, Agda, etc... this is usually hidden/implicit from the programmer, though it shows up in the compiler/type checker.

[2] That is, world-altering. Not necessarily bad.

[3] Though no one can really say what "impredicativity" *is* either ;)

-- Live well, ~wren

On Sat, Jun 26, 2010 at 3:27 AM, Jason Dagit wrote:
The types can depend on values. For example, have you ever noticed we have families of functions in Haskell like zip, zip3, zip4, ..., and liftM, liftM2, ...? Each of them has a type that fits into a pattern, mainly the arity increases. Now imagine if you could pass a natural number to them and have the type change based on that instead of making new versions and incrementing the name. That of course, is only the tip of the iceberg.
That's also a degenerate example, because it doesn't actually require dependent types. What you have here are types dependent on *numbers*, not *values* specifically. That type numbers are rarely built into non-dependently-typed languages is another matter; encoding numbers (inefficiently) as types is mind-numbingly simple in Haskell, not requiring even any exciting GHC extensions (although encoding arithmetic on those numbers will soon pile the extensions on).

Furthermore, families of functions of the flavor you describe are doubly degenerate examples: The simplest encoding for type numbers are the good old Peano numerals, expressed as nested type constructors like Z, S Z, S (S Z), and so on... but the arity of a function is *already* expressed as nested type constructors like [b] -> ([a] -> [(b, a)]), [c] -> ([b] -> ([a] -> [(c, b, a)])), and such! The only complication here is that the "zero" type changes for each number[0], but in a very practical sense these functions already encode type numbers.

Many alleged uses for dependent types are similarly trivial--using them only as a shortcut for doing term-like computations on types. With sufficient sweat, tears, and GHC extensions, most if not all of said degenerate examples can be encoded directly at the type level. For those following along at home, here's a quick cheat-sheet on playing with type programs in GHC:

- Type constructors build new type "values"
- Type classes in general associate types with term values inhabiting them
- Type families and MPTCs with fundeps provide functions on types
- When an instance is selected, everything in its context is "evaluated"
- Instance selection that depends on the result of another type function provides a sort of lazy evaluation (useful for control structures)
- Getting anything interesting done usually needs UndecidableInstances plus Oleg's TypeEq and TypeCast classes

Standard polymorphism also involves functions on types in a sense, but doesn't allow computation on them. As a crude analogy, one could think of type classes as allowing "pattern matching" on types, whereas parametric polymorphism can only bind types to generic variables without inspecting constructors.
Haskell's type system is sufficiently expressive that we can encode many instances of dependent types by doing some extra work.
Encoding *actual* instances of dependent types in some fashion is indeed possible, but it's a bit trickier. The examples you cite deal largely with making the degenerate cases more pleasant to work with: the paper does it by a charming bit of Church-ish encoding that weaves the desired number-indexed function right into the definition of the zero and successor, and she... well, she's a sight to behold to be sure, but at the end of the day she's not really doing all that much either.

Since any value knowable at compile-time can be lifted to the type level one way or another, non-trivial faux-dependent types must depend on values not known until run-time--which of course means that the exact type will be unknown until run-time as well, i.e., an existential type. For instance, Oleg's uses of existential types to provide static guarantees about some property of run-time values[1] can usually be reinterpreted as a rather roundabout way of replicating something most naturally expressed by actual dependent types.

Which isn't to say that type-level programming isn't useful, of course--just that if you know the actual type at compile-time, you don't really need dependent types.

- C.

[0] This is largely because of how Haskell handles tuples--the result of a zipN function is actually another type number, not a zero, but there's no simple way to find the successor of a tuple. More sensible, from a number types perspective, would be to construct tuples using () as zero and (_, n) as successor. This would give us zip0 :: [()], zip1 :: [a] -> [(a, ())], zip2 :: [b] -> ([a] -> [(b, (a, ()))]), and so on. The liftM and zipWith functions avoid this issue.

[1] See http://okmij.org/ftp/Haskell/types.html#branding and http://okmij.org/ftp/Haskell/regions.html for instance.
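The existential move looks like this in practice (a sketch, untested): a list whose length is only discovered at run-time gets its index hidden behind an existential:

{-# LANGUAGE EmptyDataDecls, ExistentialQuantification, GADTs #-}

data Z
data S n

data Vec a n where
  Nil  :: Vec a Z
  Cons :: a -> Vec a n -> Vec a (S n)

data SomeVec a = forall n. SomeVec (Vec a n)   -- length hidden

fromList :: [a] -> SomeVec a    -- the length isn't known until run-time
fromList []     = SomeVec Nil
fromList (x:xs) = case fromList xs of
                    SomeVec v -> SomeVec (Cons x v)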

On Sat, Jun 26, 2010 at 3:49 AM, wren ng thornton wrote:
Jason Dagit wrote:
On Fri, Jun 25, 2010 at 2:26 PM, Walt Rorie-Baety wrote:
I've noticed over the - okay, over the months - that some folks enjoy the puzzle-like qualities of programming in the type system (poor Oleg, he's become #haskell's answer to the "Chuck Norris" meme commonly encountered in MMORPGs).
Anyway,... are there any languages out there whose term-level programming resembles Haskell type-level programming, and if so, would a deliberate effort to appeal to that resemblance be an advantage (leaving out for now the hair-pulling effort that such a change would entail)?
I'm not a prolog programmer, but I've heard that using type classes to do your computations leads to code that resembles prolog.
Indeed. If you like the look of Haskell's type-level programming, you should look at logic programming languages based on Prolog. Datalog gives a well understood fragment of Prolog. ECLiPSe[1] extends Prolog with constraint programming. Mercury[2], lambda-Prolog[3], and Dyna give a more modern take on the paradigm.
It's interesting how C++ is imperative at the term level and functional at the type level, while Haskell is functional at the term level and similar to logic programming at the type level. Given imperative, functional, and logic programming, that's nine possible combinations of paradigms at the term and type level. How many of them do we have examples for?

imperative/functional: C++
functional/logic: Haskell
functional/functional: Agda etc. (?)
imperative/imperative: Smalltalk/Ruby?
If you're just a fan of logic variables and want something more Haskell-like, there is Curry[4]. In a similar vein there's also AliceML[5] which gives a nice futures/concurrency story to ML. AliceML started out on the same VM as Mozart/Oz[6], which has similar futures, though a different overall programming style.
And, as Jason said, if you're just interested in having the same programming style at both term and type levels, then you should look into dependently typed languages. Agda is the most Haskell-like, Epigram draws heavily from the Haskell community, and Coq comes more from the ML tradition. There's a menagerie of others too, once you start looking.
[1] http://eclipse-clp.org/ is currently down, but can be accessed at http://87.230.22.228/ [2] http://www.mercury.csse.unimelb.edu.au/ [3] http://www.lix.polytechnique.fr/~dale/lProlog/ [4] http://www-ps.informatik.uni-kiel.de/currywiki/ [5] http://www.ps.uni-saarland.de/alice/ [6] http://www.mozart-oz.org/
-- Live well, ~wren
-- Work is punishment for failing to procrastinate effectively.

Hello Gábor, Saturday, June 26, 2010, 4:29:28 PM, you wrote:
It's interesting how C++ is imperative at the term level and functional at the type level
Or logic? It supports indeterminate choice.

--
Best regards,
Bulat mailto:Bulat.Ziganshin@gmail.com

On Fri, Jun 25, 2010 at 02:26:54PM -0700, Walt Rorie-Baety wrote:
I've noticed over the - okay, over the months - that some folks enjoy the puzzle-like qualities of programming in the type system (poor Oleg, he's become #haskell's answer to the "Chuck Norris" meme commonly encountered in MMORPGs).
Anyway,... are there any languages out there whose term-level programming resembles Haskell type-level programming, and if so, would a deliberate effort to appeal to that resemblance be an advantage (leaving out for now the hair-pulling effort that such a change would entail)?
As several people have pointed out, type-level programming in Haskell resembles logic programming a la Prolog -- however, this actually only applies to type-level programming using multi-parameter type classes with functional dependencies [1] (which was until recently the only way to do it). Type-level programming using the newer type families [2] (which are equivalent in power [3]) actually lets you program in a functional style, much more akin to defining value-level functions in Haskell.

However, all of this type-level programming is fairly *untyped* -- the only kinds available are * and (k1 -> k2) where k1 and k2 are kinds [4], so type-level programming essentially takes place in the simply typed lambda calculus with only a single base type, except you can't write explicit lambdas.

I'm currently working on a project with Simon Peyton-Jones, Dimitrios Vytiniotis, Stephanie Weirich, and Steve Zdancewic on enabling *typed* functional programming at the type level in GHC, very much inspired by Conor McBride's SHE preprocessor [5]. I've got a blog post in the works explaining our goals in much more detail; hopefully I'll be able to get that up in the next day or two.

-Brent

[1] http://haskell.org/haskellwiki/Functional_dependencies
[2] http://haskell.org/haskellwiki/Type_families
[3] http://www.haskell.org/pipermail/haskell-cafe/2009-February/055890.html
[4] Leaving out GHC's special ? and ?? kinds which aren't really of interest to the type-level programmer.
[5] http://personal.cis.strath.ac.uk/~conor/pub/she/
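You can see this impoverished kind language directly in GHCi, which prints something like:

ghci> :kind Maybe
Maybe :: * -> *
ghci> :kind Either Int
Either Int :: * -> *
ghci> :kind Int
Int :: *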

Brent Yorgey wrote:
As several people have pointed out, type-level programming in Haskell resembles logic programming a la Prolog -- however, this actually only applies to type-level programming using multi-parameter type classes with functional dependencies [1] (which was until recently the only way to do it).
Type-level programming using the newer type families [2] (which are equivalent in power [3]) actually lets you program in a functional style, much more akin to defining value-level functions in Haskell.
I did wonder what the heck a "type function" is or why you'd want one. And then a while later I wrote some code along the lines of

class Collection c where
  type Element c :: *
  empty :: c -> Bool
  first :: c -> Element c

So now it's like Element is a function that takes a collection type and returns the type of its elements - a *type function*. Suddenly the usual approach of doing something like

class Collection c where
  empty :: c x -> Bool
  first :: c x -> x

seems like a clumsy kludge, and the first snippet seems much nicer. (I gather that GHC's implementation of all this is still "fragile" though? Certainly I regularly get some very, very strange type errors if I try to use this stuff...)

The latter approach relies on "c" having a particular kind, and the element type being a type argument (rather than static), and in a specific argument position, and so on. So you can construct a class that works for *one* type of element, or for *every* type of element, but not for only *some*. The former approach (is that type families or associated types or...? I get confused with all the terms for nearly the same thing...) seems much cleaner to me. I never liked FDs in the first place.

Not only is Element now a function, but you define it as a sort of case-by-case pattern match:

instance Collection ByteString where
  type Element ByteString = Word8

instance Collection [x] where
  type Element [x] = x

instance Ord x => Collection (Set x) where
  type Element (Set x) = x

So far, I haven't seen any other obvious places where this technology might be useful (other than monads - which won't work). Then again, I haven't looked particularly hard either. ;-)
However, all of this type-level programming is fairly *untyped*
Yeah, there is that.
-- the only kinds available are * and (k1 -> k2)
Does # not count?
so type-level programming essentially takes place in the simply typed lambda calculus with only a single base type, except you can't write explicit lambdas.
Uh... if you say so? o_O
I'm currently working on a project with Simon Peyton-Jones, Dimitrios Vytiniotis, Stephanie Weirich, and Steve Zdancewic on enabling *typed* functional programming at the type level in GHC
Certainly sounds interesting...

Andrew Coppin wrote:
I did wonder what the heck a "type function" is or why you'd want one. And then a while later I wrote some code along the lines of
class Collection c where
  type Element c :: *
  empty :: c -> Bool
  first :: c -> Element c
So now it's like Element is a function that takes a collection type and returns the type of its elements - a *type function*.
Suddenly the usual approach of doing something like
class Collection c where
  empty :: c x -> Bool
  first :: c x -> x
seems like a clumsy kludge, and the first snippet seems much nicer. (I gather that GHC's implementation of all this is still "fragile" though? Certainly I regularly get some very, very strange type errors if I try to use this stuff...)
Adding type functions introduces a lot of theoretical complexity to the type system. It's very easy to end up losing decidability of type inference/checking, not getting the power/properties you want, or both. I get the impression that folks are still figuring out exactly how to balance these issues in Haskell.

For example, should type functions be injective (i.e. (F x = F y) ==> (x = y)) or not? If we make them injective then that rules out writing non-injective functions. But if we don't, then we cripple the type inferencer. This is why there's a distinction between associated datatypes (injective) vs associated type aliases (not).
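A quick illustration of the injectivity point (a sketch, untested):

{-# LANGUAGE TypeFamilies #-}

-- a type (alias) family: NOT injective, since F Int = F Char = Bool,
-- so from (F x = Bool) the checker cannot recover x
type family F a
type instance F Int  = Bool
type instance F Char = Bool

-- a data family: injective by construction, because every instance
-- declares a brand-new type
data family G a
data instance G Int  = GInt
data instance G Char = GChar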
the only kinds available are * and (k1 -> k2)
Does # not count?
The other kinds like #, ?, and ?? are just implementation details in GHC. By and large they are not theoretically significant. Also, other compilers (e.g. jhc) have different kinding systems for optimizations.

Some folks have advocated for distinguishing the kind of proper types from the kind of type indices. This is theoretically significant, and I think it's a fabulous idea. But it hasn't been implemented AFAIK.

-- Live well, ~wren
participants (15)
- Alexander Solla
- Andrew Coppin
- Brandon S Allbery KF8NH
- Brent Yorgey
- Bulat Ziganshin
- C. McCann
- Erik de Castro Lopo
- Gregory Crosswhite
- Gábor Lehel
- Jason Dagit
- Liam O'Connor
- Stephen Tetley
- Tony Morris
- Walt Rorie-Baety
- wren ng thornton