The Definition of Haskell

Why doesn't Haskell (or at least "core" Haskell) have a formal definition and safety result the way Standard ML does [1], [2]? Is it just that no one has gotten around to producing them, or because there are fundamental difficulties that stand in the way of completing such a project?

Todd Wilson

[1] Robin Milner, Mads Tofte, Robert Harper, and David MacQueen. The Definition of Standard ML (Revised). MIT Press, 1997.
[2] Daniel K. Lee, Karl Crary, and Robert Harper. "Mechanizing the Metatheory of Standard ML". http://www.cs.cmu.edu/~dklee/tslf/

Dunno about safety results, but https://www.haskell.org/onlinereport/haskell2010/ exists (next revision expected in 2020, IIRC). https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/lang.html documents GHC's extensions to the standard, and https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/bugs.html#ha... its deviations from the standard, some of which will likely be in the next standard.

Some things, such as how typeclasses are implemented, are not specified in the standard, so that alternative implementations can choose to implement them differently. GHC uses dictionary/record parameters; JHC did whole-program compilation, so all non-polymorphically-recursive typeclass methods could be resolved at compile time (I don't know offhand how it handled polymorphic recursion).
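To illustrate the dictionary idea (a rough sketch of the general scheme, with made-up names, not GHC's actual generated code), a class constraint can be desugared into an ordinary record parameter that is passed explicitly:

    -- The overloaded version:
    class MyEq a where
      eq :: a -> a -> Bool

    member :: MyEq a => a -> [a] -> Bool
    member x = any (eq x)

    -- Roughly what a dictionary-passing implementation turns it into:
    -- the class becomes a record of methods, the constraint becomes an
    -- extra argument, and an instance becomes a concrete record value.
    data MyEqDict a = MyEqDict { eqD :: a -> a -> Bool }

    memberD :: MyEqDict a -> a -> [a] -> Bool
    memberD d x = any (eqD d x)

    myEqIntDict :: MyEqDict Int
    myEqIntDict = MyEqDict { eqD = (==) }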
--
brandon s allbery kf8nh
sine nomine associates
allbery.b@gmail.com  ballbery@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad
http://sinenomine.net

SPJ may have answered part of it in his talk Escape from the Ivory Tower: https://www.youtube.com/watch?v=re96UgMk6GQ&feature=share from about 29 minutes in.

On Sat, Dec 16, 2017 at 9:14 PM Brandon Allbery wrote:
Dunno about safety results, but https://www.haskell.org/onlinereport/haskell2010/ exists (next revision expected in 2020, IIRC).
Although this can be considered a definition of the language, it is by no means a formal definition of the static and dynamic semantics of the language in a form amenable to representation in systems like Coq, where metatheory might be checked.
On Sat, Dec 16, 2017 at 9:22 PM Baojun Wang wrote:
SPJ may have answered part of it in his talk Escape from the Ivory Tower: https://www.youtube.com/watch?v=re96UgMk6GQ&feature=share from about 29 minutes in.
Nice talk; thanks for the reference. To summarize: formalization, which is a lot of work, leads to standardization and a reluctance to change the language, and one of Haskell's strong points is that it is constantly evolving.

I see SPJ's point, but he makes another point in the same video, which is that Haskell has a small, internal core language to which everything must elaborate. This would seem to make formalization even more attractive: formalize and prove safety for the core language once and for all, and then specify the elaborations and check their properties modularly, which should work well even in the face of language evolution.
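To sketch the shape of that idea (toy types of my own invention, nothing to do with GHC's actual Core): you formalize a small core language once, and each surface-language construct is specified only by how it elaborates into it.

    -- The small core language you would formalize and prove safe once:
    data Core
      = Var String
      | Lit Integer
      | Lam String Core
      | App Core Core
      | Con String [Core]                     -- saturated constructor application
      | Case Core [(String, [String], Core)]  -- (constructor, binders, branch)
      deriving Show

    -- A surface construct is then defined only by its elaboration into Core,
    -- so extending the surface language needs no new metatheory:
    elabIf :: Core -> Core -> Core -> Core
    elabIf cond thn els = Case cond [("True", [], thn), ("False", [], els)]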

Hello,
Many parts of the language are formally specified in papers written by SPJ or by others who worked closely on the feature in question. For example, certain modifications to the type system are described here [1].
Check out SPJ's Microsoft Research page for other papers [2]. I seem to recall reading a formal definition of Haskell Core, but can't recall whether it was in one of these papers or not.
Cheers,
Jon
[1]: https://www.microsoft.com/en-us/research/publication/practical-aspects-evide...
[2]: https://www.microsoft.com/en-us/research/people/simonpj/

Hi Todd,

The GHC repository includes a PDF, with its source, formalizing the Core language (no proofs); the rules are notably written in the parseable .ott format: https://github.com/ghc/ghc/blob/master/docs/core-spec

Li-yao

Todd Wilson wrote:
I see SPJ's point, but he makes another point in the same video, which is that Haskell has a small, internal core language to which everything must elaborate, so this would seem to make formalization even more attractive: formalize and prove safety for the core language once and for all, and then specify the elaborations and check their properties modularly, which should work well even in the face of language evolution.
Yes – and Core is indeed formalised quite carefully (by Richard Eisenberg). See https://ghc.haskell.org/trac/ghc/ticket/14572#comment:8
Simon

Thanks, all, this was indeed what I was looking for, with the reference to Ott as an added bonus. It looks like formalizing and proving safety for this system in a proof assistant would make a good project.
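For concreteness, the safety result here would be the usual pair of theorems about the core language's typing and reduction relations, stated schematically below (the actual Core judgments carry more structure, e.g. coercions and kinds):

    \text{Progress:}\quad \vdash e : \tau \;\Rightarrow\; e \text{ is a value} \;\lor\; \exists e'.\; e \longrightarrow e'
    \text{Preservation:}\quad \vdash e : \tau \;\land\; e \longrightarrow e' \;\Rightarrow\; \vdash e' : \tau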
--Todd

On Dec 18, 2017, at 12:27 PM, Todd Wilson wrote:
It looks like formalizing and proving safety for this system in a proof assistant would make a good project.
It would, but formalizing a variant is already underway. See https://repository.brynmawr.edu/cgi/viewcontent.cgi?article=1076&context=compsci_pubs for a recent paper discussing (among other things) a formalization of a dependently-typed version of Core. Much more development in the formalization has taken place since that paper was written, but you’ll have to contact Stephanie Weirich (sweirich@cis.upenn.edu) for the latest, as I’m not involved at the ground level of the formalization.

Richard

On 2017-12-18 11:08 AM, Richard Eisenberg wrote:
… formalizing a variant is already underway. See https://repository.brynmawr.edu/cgi/viewcontent.cgi?article=1076&context=compsci_pubs for a recent paper discussing (among other things) a formalization of a dependently-typed version of Core. Much more development in the formalization has taken place since that paper was written, but you’ll have to contact Stephanie Weirich (sweirich@cis.upenn.edu) for the latest, as I’m not involved at the ground level of the formalization.
The past two issues of Haskell Weekly News (https://haskellweekly.news/) have mentioned research from some of the same people:

- Total Haskell is reasonable Coq (https://arxiv.org/abs/1711.09286): "We would like to use the Coq proof assistant to mechanically verify properties of Haskell programs. To that end, we present a tool, named hs-to-coq, that translates total Haskell programs into Coq programs via a shallow embedding." (A small illustration of the totality restriction follows below.)

- Finding bugs in Haskell code by proving it (https://www.joachim-breitner.de/blog/734-Finding_bugs_in_Haskell_code_by_pro...): "We have used hs-to-coq on various examples, as described in our CPP’18 paper, but it is high time to use it for real."
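As a tiny illustration of the "total" restriction such tools care about (my own example, not taken from the paper): a structurally recursive, everywhere-defined function has a direct counterpart under a shallow embedding, while a partial one needs special treatment.

    -- Total: terminates and is defined on every input, so it can be
    -- translated directly (e.g. as a structurally recursive fixpoint).
    myLength :: [a] -> Int
    myLength []       = 0
    myLength (_ : xs) = 1 + myLength xs

    -- Partial: undefined on [], so a translation has to reject it or adapt
    -- it, e.g. by returning a Maybe or requiring a proof of non-emptiness.
    myHead :: [a] -> a
    myHead (x : _) = x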
participants (8)
- Baojun Wang
- Brandon Allbery
- Jonathan Paugh
- Li-yao Xia
- Neil Mayhew
- Richard Eisenberg
- Simon Peyton Jones
- Todd Wilson