Haskell.org GSoC

Hi,
I noticed last year Haskell.org was a mentoring organization for Google's Summer of Code, and I barely noticed some discussion about it applying again this year :)
I participated for GCC in 2008 and would like to try again this year; while I'm still active for GCC and will surely stay so, I'd like to see something new at least for GSoC. And Haskell.org would surely be a very, very nice organization.
Since I discovered there's more than just a lot of imperative languages that are nearly all the same, I love to do some programming in Prolog, Scheme and of course Haskell. However, so far this has only been some toy programs and nothing "really useful"; I'd like to change this (as well as learn more about Haskell during the projects).
Here are some ideas for developing Haskell packages (that would hopefully be of general use to the community) as possible projects:
- Numerics, like basic linear algebra routines, numeric integration and other basic algorithms of numerical mathematics.
- A basic symbolic maths package; I've no idea how far one could take this as a single GSoC project, but it would surely be a very interesting task. Alternatively or in combination, one could try to use an existing free CAS package as the engine.
- Graphs.
- Some simulation routines from physics, though I don't really have an idea of what exactly would be best to implement here.
- A logic programming framework. I know there's something like that for Scheme; in my experience, some problems are best expressed logically with Prolog-style backtracking/predicates and unification. This could make such formulations usable from inside a Haskell program. This is surely also a very interesting project.
What do you think about these ideas? I'm pretty sure some of these are already implemented, but I also hope some would be new and really of some use to the community. Do you think something would be especially nice to have and is currently missing?
Thanks for your comments,
Daniel

d:
Hi,
I noticed last year Haskell.org was a mentoring organization for Google's Summer of Code, and I barely noticed some discussion about it applying again this year :)
I participated for GCC in 2008 and would like to try again this year; while I'm still active for GCC and will surely stay so, I'd like to see something new at least for GSoC. And Haskell.org would surely be a very, very nice organization.
Since I discovered there's more than just a lot of imperative languages that are nearly all the same, I love to do some programming in Prolog, Scheme and of course Haskell. However, so far this was only some toy programs and nothing "really useful"; I'd like to change this (as well as learning more about Haskell during the projects).
Here are some ideas for developing Haskell packages (that would hopefully be of general use to the community) as possible projects:
- Numerics, like basic linear algebra routines, numeric integration and other basic algorithms of numeric mathematics.
I think a lot of the numerics stuff is now covered by libraries (see e.g. haskell-blas, haskell-lapack, haskell-fftw)
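To make the "numeric integration" item concrete, here is a minimal sketch in plain Haskell (not tied to any of the libraries above; the function name is made up for illustration) of a composite trapezoidal-rule integrator, roughly the level of routine Daniel seems to have in mind:

-- Composite trapezoidal rule: approximate the integral of f over [a, b]
-- using n equally sized subintervals.
trapezoid :: (Double -> Double) -> Double -> Double -> Int -> Double
trapezoid f a b n = h * (0.5 * (f a + f b) + sum (map f interior))
  where
    h        = (b - a) / fromIntegral n
    interior = [a + h * fromIntegral k | k <- [1 .. n - 1]]

-- Example: the integral of sin over [0, pi] is 2.
-- trapezoid sin 0 pi 1000  ==>  approximately 2.0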
- A basic symbolic maths package; I've no idea how far one could do this as a single GSoC project, but it would surely be a very interesting task. Alternatively or in combination, one could try to use an existing free CAS package as engine.
Interesting, but niche, imo.
- Graphs.
- Some simulation routines from physics, though I've not really an idea what exactly one should implement here best.
True graphs (the data structure) are still a weak point! There's no canonical graph library for Haskell.
- A logic programming framework. I know there's something like that for Scheme; in my experience, there are some problems best expressed logically with Prolog-style backtracking/predicates and unification. This could help use such formulations from inside a Haskell program. This is surely also a very interesting project.
Interesting, lots of related work, hard to state the benefits to the community though.
What do you think about these ideas? I'm pretty sure there are already some of those implemented, but I also hope some would be new and really of some use to the community. Do you think something would be especially nice to have and is currently missing?
Think about how many people would benefit. For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users. -- Don

On Wednesday, 11 February 2009 at 18:51, Don Stewart wrote:
For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users.
You mean, everyone should be able to mess about with my documentation? This would be similar to giving everyone commit rights to my repositories or allowing everyone to edit the code of my published libraries. What is the good thing about that? Best wishes, Wolfgang

g9ks157k:
On Wednesday, 11 February 2009 at 18:51, Don Stewart wrote:
For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users.
You mean, everyone should be able to mess about with my documentation? This would be similar to give everyone commit rights to my repositories or allow everyone to edit the code of my published libraries. What is the good thing about that?
No one said anything about unrestricted commit rights ... we're not crazy ... what if it were more like, say, RWH's wiki .. where comments go to editors to incorporate ... -- Don

From: haskell-cafe-bounces@haskell.org [mailto:haskell-cafe-bounces@haskell.org] On Behalf Of Don Stewart
You mean, everyone should be able to mess about with my documentation? This would be similar to give everyone commit rights to my repositories or allow everyone to edit the code of my published libraries. What is the good thing about that?
No one said anything about unrestricted commit rights ... we're not crazy ... what if it were more like, say, RWH's wiki .. where comments go to editors to incorporate ...
Yes. PostgreSQL online docs have a similar feature, although their implementation isn't perhaps as nice as RWH's (I like RWH's inline comments feature). See e.g.
http://www.postgresql.org/docs/8.3/interactive/tutorial-createdb.html
http://www.postgresql.org/docs/8.3/interactive/functions-string.html
http://www.postgresql.org/docs/8.3/interactive/indexes-multicolumn.html
http://www.postgresql.org/docs/8.3/interactive/textsearch-tables.html
Alistair

Don Stewart wrote:
No one said anything about unrestricted commit rights ... we're not crazy ... what if it were more like, say, RWH's wiki .. where comments go to editors to incorporate ...
By the way, the PHP documentation has such a comment feature, see for example http://www.php.net/manual/en/function.preg-match.php But I'm not sure whether this form of commenting is the best way to write/improve documentation. (The many proposed regular expressions for validating e-mail addresses sure have a certain hilarity to them...) Regards, apfelmus -- http://apfelmus.nfshost.com

On Thursday, 12 February 2009 at 09:15, you wrote:
g9ks157k:
On Wednesday, 11 February 2009 at 18:51, Don Stewart wrote:
For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users.
You mean, everyone should be able to mess about with my documentation? This would be similar to give everyone commit rights to my repositories or allow everyone to edit the code of my published libraries. What is the good thing about that?
No one said anything about unrestricted commit rights ... we're not crazy ... what if it were more like, say, RWH's wiki .. where comments go to editors to incorporate ...
“Wiki” sounds like you could edit the documentation of the package. This wouldn’t necessarily mean that these modifications are written back to the repository. However, it would mean that Hackage is no longer showing the real documentation of the repository but what others made of it by wiki editing. I would dislike this. However, a commenting system like that of RWH is a very different thing. People would not edit the actual documentation (so the documentation is not a wiki). People would just add comments. This might be a good idea. Best wishes, Wolfgang

Wolfgang Jeltsch
On Wednesday, 11 February 2009 at 18:51, Don Stewart wrote:
For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users.
You mean, everyone should be able to mess about with my documentation? This would be similar to give everyone commit rights to my repositories or allow everyone to edit the code of my published libraries. What is the good thing about that?
The fact that you can mark a revision to be the official one and, in case you e.g. have a wiki yourself, disable the hackage wiki?

On Thursday, 12 February 2009 at 09:20, Achim Schneider wrote:
Wolfgang Jeltsch
wrote:
On Wednesday, 11 February 2009 at 18:51, Don Stewart wrote:
For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users.
You mean, everyone should be able to mess about with my documentation? This would be similar to give everyone commit rights to my repositories or allow everyone to edit the code of my published libraries. What is the good thing about that?
The fact that you can mark a revision to be the official one and, in case you e.g. have a wiki yourself, disable the hackage wiki?
What do you mean by “disabling the Hackage wiki”? Best wishes, Wolfgang

Wolfgang Jeltsch
On Thursday, 12 February 2009 at 09:20, Achim Schneider wrote:
Wolfgang Jeltsch
wrote:
On Wednesday, 11 February 2009 at 18:51, Don Stewart wrote:
For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users.
You mean, everyone should be able to mess about with my documentation? This would be similar to give everyone commit rights to my repositories or allow everyone to edit the code of my published libraries. What is the good thing about that?
The fact that you can mark a revision to be the official one and, in case you e.g. have a wiki yourself, disable the hackage wiki?
What do you mean by “disabling the Hackage wiki”?
Well, disabling the wiki function of the documentation on Hackage and redirecting to somewhere else instead. The point is that it's certainly possible to have both maintainer-guaranteed integrity and the possibility of low-barrier contributions at the same time.

Wolfgang Jeltsch wrote:
Don Stewart wrote:
For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users.
You mean, everyone should be able to mess about with my documentation? This would be similar to give everyone commit rights to my repositories or allow everyone to edit the code of my published libraries. What is the good thing about that?
Well, it's excellent if you haven't written said documentation yet. ;) But yes, the package author should have the last word about what goes in and what does not. The goal is simply to drastically lower the bar of participation. Regards, apfelmus -- http://apfelmus.nfshost.com

On Thu, Feb 12, 2009 at 2:42 AM, Heinrich Apfelmus < apfelmus@quantentunnel.de> wrote:
Don Stewart wrote:
For example, if all the haddocks on hackage.org were a wiki, and interlinked, every single package author would benefit, as would all users.
Wolfgang Jeltsch wrote:
You mean, everyone should be able to mess about with my documentation? This would be similar to give everyone commit rights to my repositories or allow everyone to edit the code of my published libraries. What is the good thing about that?
Well, it's excellent if you haven't written said documentation yet. ;)
But yes, the package author should have the last word about what goes in and what does not. The goal is simply to drastically lower the bar of participation.
Something like AnnoCPAN would be a good middle-ground here; i.e. differentiate between "official" package documentation and user annotations, but make them both visible. Luke

On Thursday, 12 February 2009 at 10:49, Luke Palmer wrote:
Something like AnnoCPAN would be a good middle-ground here; i.e. differentiate between "official" package documentation and user annotations, but make them both visible.
And give visitors of the Hackage website the choice to not see the comments at all. Best wishes, Wolfgang

Don Stewart wrote:
- Graphs.
True graphs (the data structure) are still a weak point! There's no canonical graph library for Haskell.
That sounds interesting... What do you mean by "no canonical" library? Are there already ones but just no "standard" one? But in this case, I don't think adding yet another one will help :D Or isn't there a "real" general graph library?
If so, this would be a nice thing to do :) I could look at existing ones (like Boost's graphs) to get a feeling for what an interface might look like and what functionality to implement.
BTW, is there some sort of "project hosting" specifically for such Haskell projects? Or should I go with sourceforge (for instance) for developing this, if I gave it a try?
Thanks,
Daniel

Daniel Kraft asked:
That sounds interesting... What do you mean by "no canonical" library? Are there already ones but just no "standard" one? But in this case, I don't think adding yet another one will help :D Or isn't there a "real" general graph library?
My impression is that there is no standard one, but then again I've only used Haskell + a graph library once.
BTW, is there some sort of "project hosting" specifically for such Haskell projects? Or should I go with sourceforge (for instance) for developing this, if I gave it a try?
Get a community.haskell.org account once you are ready to start a repo; it can not only host your repo (ex: http://community.haskell.org/~tommd/pureMD5) but also allows you to upload packages to hackage.haskell.org.
Thomas

On Thursday, 12 February 2009 at 15:34, Thomas DuBuisson wrote:
Daniel Kraft asked:
That sounds interesting... What do you mean by "no canonical" library? Are there already ones but just no "standard" one? But in this case, I don't think adding yet another one will help :D Or isn't there a "real" general graph library?
My impression is that there is no standard one, but then again I've only used Haskell + a graph library once.
BTW, is there some sort of "project hosting" specifically for such Haskell projects? Or should I go with sourceforge (for instance) for developing this, if I gave it a try?
Get a community.haskell.org account once you are ready to start a repo, it can not only host your repo (ex: http://community.haskell.org/~tommd/pureMD5) but also allows you to upload packages to hackage.haskell.org.
I already have a Hackage account. Can this be readily used as a community.haskell.org account? If not, what if I get a community account? Do I then have two accounts for Hackage access? Best wishes, Wolfgang

On Thu, Feb 12, 2009 at 04:10:21PM +0100, Wolfgang Jeltsch wrote:
On Thursday, 12 February 2009 at 15:34, Thomas DuBuisson wrote:
Daniel Kraft asked:
That sounds interesting... What do you mean by "no canonical" library? Are there already ones but just no "standard" one? But in this case, I don't think adding yet another one will help :D Or isn't there a "real" general graph library?
My impression is that there is no standard one, but then again I've only used Haskell + a graph library once.
BTW, is there some sort of "project hosting" specifically for such Haskell projects? Or should I go with sourceforge (for instance) for developing this, if I gave it a try?
Get a community.haskell.org account once you are ready to start a repo, it can not only host your repo (ex: http://community.haskell.org/~tommd/pureMD5) but also allows you to upload packages to hackage.haskell.org.
I already have a Hackage account. Can this be readily used as a community.haskell.org account? If not, what if I get a community account. Do I have two accounts for Hackage access then?
No, they are two separate things. A Hackage account just lets you upload things to Hackage. A community.haskell.org account lets you log into code.haskell.org (a completely different server than Hackage), host projects there, and so on. -Brent

On Saturday, 14 February 2009 at 16:59, Brent Yorgey wrote:
On Thu, Feb 12, 2009 at 04:10:21PM +0100, Wolfgang Jeltsch wrote:
On Thursday, 12 February 2009 at 15:34, Thomas DuBuisson wrote:
Get a community.haskell.org account once you are ready to start a repo, it can not only host your repo (ex: http://community.haskell.org/~tommd/pureMD5) but also allows you to upload packages to hackage.haskell.org.
I already have a Hackage account. Can this be readily used as a community.haskell.org account? If not, what if I get a community account. Do I have two accounts for Hackage access then?
No, they are two separate things. A Hackage account just lets you upload things to Hackage. A community.haskell.org account lets you log into code.haskell.org (a completely different server than Hackage), host projects there, and so on.
But Thomas DuBuisson wrote that you can upload packages to HackageDB with your community.haskell.org account (see above). Is this wrong? Best wishes, Wolfgang

g9ks157k:
On Saturday, 14 February 2009 at 16:59, Brent Yorgey wrote:
On Thu, Feb 12, 2009 at 04:10:21PM +0100, Wolfgang Jeltsch wrote:
On Thursday, 12 February 2009 at 15:34, Thomas DuBuisson wrote:
Get a community.haskell.org account once you are ready to start a repo, it can not only host your repo (ex: http://community.haskell.org/~tommd/pureMD5) but also allows you to upload packages to hackage.haskell.org.
I already have a Hackage account. Can this be readily used as a community.haskell.org account? If not, what if I get a community account. Do I have two accounts for Hackage access then?
No, they are two separate things. A Hackage account just lets you upload things to Hackage. A community.haskell.org account lets you log into code.haskell.org (a completely different server than Hackage), host projects there, and so on.
But Thomas DuBuisson wrote that you can upload packages to HackageDB with your community.haskell.org account (see above). Is this wrong?
Yes. Register for community accounts (shell, trac, darcs etc) http://community.haskell.org/ (or you can use github if all you need is a repo). To get hackage upload perms, http://hackage.haskell.org/packages/accounts.html -- Don

Daniel Kraft
Don Stewart wrote:
- Graphs.
True graphs (the data structure) are still a weak point! There's no canonical graph library for Haskell.
That sounds interesting... What do you mean by "no canonical" library? Are there already ones but just no "standard" one? But in this case, I don't think adding yet another one will help :D Or isn't there a "real" general graph library?
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/fgl ?

Don Stewart wrote:
- Graphs.
True graphs (the data structure) are still a weak point! There's no canonical graph library for Haskell.
Daniel Kraft wrote:
That sounds interesting... What do you mean by "no canonical" library? Are there already ones but just no "standard" one? But in this case, I don't think adding yet another one will help :D Or isn't there a "real" general graph library?
I would also like to see a project working on a new graph library. Currently, there is at least Data.Graph (just one module, package containers), based on Array adjacency lists, and the functional graph library (package fgl).
I think a good general purpose graph library is tricky though:
- There are lots of variants of graphs (trees, bipartite, acyclic, undirected, simple, edge-labeled etc.); it is hard to find an adequate and easy-to-use abstraction.
- There is no single 'best' implementation (mutable vs. immutable etc.).
- It's hard to find good traversal and zipper abstractions, though fgl has some nice ones.
- The complexity of algorithms varies greatly depending on the particular kind of graph.
Anyway, that's why it is challenging and interesting.
If so, this would be a nice thing to do :) I could look at existing ones (like Boost's graphs) to get a feeling for what an interface might look like and what functionality to implement.
BTW, is there some sort of "project hosting" specifically for such Haskell projects? Or should I go with sourceforge (for instance) for developing this, if I gave it a try?
code.haskell.org / community.haskell.org provides webspace, trac, mailing-list, darcs.
benedikt

Benedikt Huber
I would also like to see a project working on a new graph library. [...] I think a good general purpose graph library is tricky though:
And please, let us have a library that is scalable! This means there should be one (or several) benchmark application(s) that do non-trivial work on graphs of non-trivial sizes. -k -- If I haven't seen further, it is by standing in the footprints of giants

Benedikt writes:
I think a good general purpose graph library is tricky though:
- There are lots of variants of graphs (trees, bipartite, acyclic, undirected, simple, edge-labeled etc.); it is hard to find an adequate and easy-to-use abstraction.
- There is no single 'best' implementation (mutable vs. immutable etc.).
- It's hard to find good traversal and zipper abstractions, though fgl has some nice ones.
- The complexity of algorithms varies greatly depending on the particular kind of graph.
Anyway, that's why it is challenging and interesting.
Hear, hear! From what I can tell, fgl is the closest thing I've seen to a usable remotely-general-purpose graph library in any language. With all apologies to Siek, et al., the Boost graph library is exceedingly complex, to the point of being unusable (one man's opinion) to all but the most ardent lovers of C++ template metaprogramming. That's no knock on its designers; it's just that the graph problem provides an exceptionally challenging tension between managing complexity and providing acceptable performance, on top of trying to achieve respectable generality.
It seems to me that anyone who aspires to author a Haskell graph library that can be regarded as canonical really ought to first know fgl inside and out, and probably ought to consider extending that library, vs. starting from scratch.
In fact, it was my current work with non-trivial graph problems that ultimately led me to justify switching to using Haskell for my day job a couple of months ago (after having investigated and abandoned other possible solutions in other languages that would have been a much easier sell to my higher-ups). So I have a pressing professional interest in hearing others weigh in on this topic, and particularly on the benefits/shortcomings of fgl. I'd especially like to hear from Martin Erwig on this topic. Martin? You listening?
--Tracy

Benedikt Huber wrote:
I would also like to see a project working on a new graph library. Currently, there is at least Data.Graph (just one Module, package containers), based on Array - adjacency lists, and the functional graph library (package fgl).
I don't know those, but "functional graph library" suggests it is meant to be something like what we're discussing here (and this is also what Google returns for Haskell + graph library). What's the problem with it or in which way is it different?
I think a good general purpose graph library is tricky though: - There are lot of variants of graphs (trees, bipartite, acyclic, undirected, simple, edge labeled etc.), hard to find adequate and easy to use abstraction.
Well, while an undirected tree / acyclic graph can obviously be represented as a "tree structure", all the others would revolve around having nodes and for each node a list of edges to neighbouring nodes; at least concept-wise, wouldn't they? Then one could build an abstraction on top of this with a node class able to name neighbouring nodes or something like that to get rid of a fixed representation (and allow working on graphs constructed on the fly as opposed to being stored in memory), but I think this would be the basic way to go. Everything else (whatever could benefit from a substantially different representation) could go in a separate library or specialisation. Or do I miss something here? (No expert with regards to graphs and their use-cases yet.)
code.haskell.org / community.haskell.org provides webspace, trac, mailing-list, darcs.
Thanks for the pointer! Yours, Daniel
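To make Daniel's "node class able to name neighbouring nodes" idea a bit more concrete, here is a minimal, hypothetical sketch (class and type names are made up): the same interface serves both a graph stored in memory and a graph generated on the fly.

import qualified Data.Map as Map

-- Minimal interface: a graph only has to name the neighbours of a node.
class GraphLike g where
  neighbours :: Ord v => g v -> v -> [v]

-- A graph stored in memory as adjacency lists.
newtype AdjGraph v = AdjGraph (Map.Map v [v])

instance GraphLike AdjGraph where
  neighbours (AdjGraph m) v = Map.findWithDefault [] v m

-- A graph constructed on the fly from a successor function,
-- e.g. the (infinite) Collatz graph on the positive integers.
newtype FnGraph v = FnGraph (v -> [v])

instance GraphLike FnGraph where
  neighbours (FnGraph f) = f

collatz :: FnGraph Integer
collatz = FnGraph (\n -> if even n then [n `div` 2] else [3 * n + 1])

Whether such a one-method interface is rich enough for real algorithms is exactly the kind of design question discussed in the rest of the thread.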

Daniel Kraft wrote:
Benedikt Huber wrote:
I would also like to see a project working on a new graph library. Currently, there is at least Data.Graph (just one Module, package containers), based on Array - adjacency lists, and the functional graph library (package fgl).
I don't know those, but "functional graph library" suggests it is meant to be something like what we're discussing here (and this is also what Google returns for Haskell + graph library). What's the problem with it or in which way is it different?
(I hope the following explanation is correct, apologies otherwise.) The fgl is built upon the concept of Inductive Graphs, described in Martin Erwig's "Inductive Graphs and Functional Graph Algorithms" (http://web.engr.oregonstate.edu/~erwig/papers/InductiveGraphs_JFP01.pdf). The basic idea is that for traversals, you don't mark visited nodes, but decompose the graph like this:
(nodeContext, restGraph) = matchAny graph
{- or -}
(nodeContext, restGraph) = match nodeId graph
This is in some sense 'more functional' than relying on tables for marking visited nodes. DFS e.g. is implemented as (comments for clarification)
xdfsWith :: Graph gr =>
            (Context a b -> [Node]) ->  -- neighbours
            (Context a b -> c) ->       -- compute result
            [Node] ->                   -- start
            gr a b ->                   -- the graph
            [c]                         -- result list
xdfsWith _ _ [] _ = []                      -- no more nodes to visit
xdfsWith _ _ _ g | isEmpty g = []           -- empty graph
xdfsWith d f (v:vs) g = case match v g of   -- decompose graph
  -- compute result, add neighbours to todo list, recursive dfs
  (Just c, g')  -> f c : xdfsWith d f (d c ++ vs) g'
  -- couldn't find node, continue
  (Nothing, g') -> xdfsWith d f vs g'
There's certainly more to say about the fgl, just read the paper. Now for many applications this is fine, and the fgl is indeed a fine library. But obviously there is some overhead in matching, and the API is sometimes a little difficult to understand. And it's not the only way to do graph algorithms in Haskell.
I think a good general purpose graph library is tricky though: - There are lots of variants of graphs (trees, bipartite, acyclic, undirected, simple, edge-labeled etc.); it is hard to find an adequate and easy-to-use abstraction.
Well, while an undirected tree / acyclic graph can obviously be represented as a "tree structure"
Acyclic graphs are not tree-like. The point about trees is that sometimes it is useful to view them as general purpose graphs, so they should also be represented in a graph library. However, in my opinion the library should also have a specialized representation of trees, as most algorithms are much simpler (like containers' Data.Tree).
all the others would revolve around having nodes and for each node a list of edges to neighbouring nodes; at least concept-wise, wouldn't they?
From an implementation point of view, yes: adjacency lists, as arrays or trees, and matrices. But there is a wide range of possibilities for algorithms, from monadic implementations (using ST/UArray) to inductive graphs. For efficient algorithms, something like Array's freeze/thaw seems to be a good idea, too.
benedikt
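For readers who have not used fgl, a small usage sketch (assuming the fgl package's Data.Graph.Inductive API; the toy graph is made up) may make the inductive view above more concrete:

import Data.Graph.Inductive

-- A little directed graph: 1 -> 2, 1 -> 3, 2 -> 3, 3 -> 4.
g :: Gr String ()
g = mkGraph [ (1,"a"), (2,"b"), (3,"c"), (4,"d") ]
            [ (1,2,()), (1,3,()), (2,3,()), (3,4,()) ]

main :: IO ()
main = do
  -- One valid depth-first order of the nodes reachable from node 1.
  print (dfs [1] g)
  -- Decompose the graph at node 1, as xdfsWith does internally.
  case match 1 g of
    (Just ctx, _rest) -> print ctx
    (Nothing, _)      -> putStrLn "node 1 not in the graph"

Here match returns the full context of node 1 (its label plus incoming and outgoing adjacencies) together with the remaining graph, which is exactly the decomposition step in the DFS code quoted above.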

Daniel Kraft wrote:
Hi,
I noticed last year Haskell.org was a mentoring organization for Google's Summer of Code, and I barely noticed some discussion about it applying again this year :)
I participated for GCC in 2008 and would like to try again this year; while I'm still active for GCC and will surely stay so, I'd like to see something new at least for GSoC. And Haskell.org would surely be a very, very nice organization.
Since I discovered there's more than just a lot of imperative languages that are nearly all the same, I love to do some programming in Prolog, Scheme and of course Haskell. However, so far this was only some toy programs and nothing "really useful"; I'd like to change this (as well as learning more about Haskell during the projects).
Here are some ideas for developing Haskell packages (that would hopefully be of general use to the community) as possible projects:
- Numerics, like basic linear algebra routines, numeric integration and other basic algorithms of numeric mathematics.
I have some unsorted routines for that: http://darcs.haskell.org/htam/src/Numerics/
- A basic symbolic maths package; I've no idea how far one could do this as a single GSoC project, but it would surely be a very interesting task. Alternatively or in combination, one could try to use an existing free CAS package as engine.
DoCon?
- Graphs.
There was some discussion here about improved API of fgl: http://haskell.org/pipermail/libraries/2008-February/009241.html
- A logic programming framework. I know there's something like that for Scheme; in my experience, there are some problems best expressed logically with Prolog-style backtracking/predicates and unification. This could help use such formulations from inside a Haskell program. This is surely also a very interesting project.
LogicT ?
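As a hedged illustration of the "Prolog-style backtracking" point: even without a dedicated framework, the list monad already gives a small taste of it (the logict package's Control.Monad.Logic adds fair interleaving on top of this; not shown here). The example below is a made-up one:

import Control.Monad (guard)

-- "Find x, y, z with x <= y <= z <= n and x^2 + y^2 == z^2",
-- written as a nondeterministic, backtracking search.
triples :: Int -> [(Int, Int, Int)]
triples n = do
  x <- [1 .. n]
  y <- [x .. n]
  z <- [y .. n]
  guard (x * x + y * y == z * z)
  return (x, y, z)

-- triples 20 yields (3,4,5), (5,12,13), (6,8,10), (8,15,17), ...

A full logic programming framework would add logic variables and unification on top of such a nondeterminism monad; LogicT addresses the control side (backtracking, fair interleaving) of the same problem.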

Hi, Henning Thielemann wrote:
Daniel Kraft wrote:
I noticed last year Haskell.org was a mentoring organization for Google's Summer of Code, and I barely noticed some discussion about it applying again this year :)
I participated for GCC in 2008 and would like to try again this year; while I'm still active for GCC and will surely stay so, I'd like to see something new at least for GSoC. And Haskell.org would surely be a very, very nice organization.
Since I discovered there's more than just a lot of imperative languages that are nearly all the same, I love to do some programming in Prolog, Scheme and of course Haskell. However, so far this was only some toy programs and nothing "really useful"; I'd like to change this (as well as learning more about Haskell during the projects).
- A basic symbolic maths package; I've no idea how far one could do this as a single GSoC project, but it would surely be a very interesting task. Alternatively or in combination, one could try to use an existing free CAS package as engine.
DoCon?
Hm, I've only read a little on their webpage; what I was thinking of was to implement a very basic package just to do some symbolic integration or equation solving to be embedded in some other calculation, and DoCon sounds like a more abstract, mathematical system. So maybe there could still be some interest in a basic symbolic expression package that could be backed by a CAS library like GiNaC or some other one (better yet, pluggable) instead of implementing a full CAS itself. Daniel

On Fri, 13 Feb 2009, Daniel Kraft wrote:
Henning Thielemann wrote:
DoCon?
hm, I've only read a little on their webpage; what I was thinking of was to implement a very basic package just to do some symbolic integration or equation solving to be embedded in some other calculation, and DoCon sounds like a more abstract, mathematical system.
So maybe there could still be some interest in a basic symbolic expression package that could be backed by a CAS library like GiNaC or some other one (better yet, pluggable) instead of implementing a full CAS itself.
So it seems to be useful to make the distinction:
* Computer algebra is doing advanced arithmetic on advanced mathematical objects like polynomials, Galois groups, function compositions (for integration). This is what DoCon is about (integration excluded).
* Symbolic manipulation is what you are after, where symbolic manipulation often involves computer algebra behind the scenes (i.e. translating symbolic expressions to polynomials, function compositions).
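To illustrate the "symbolic manipulation" side in Haskell terms, here is a minimal, hypothetical sketch (all names made up): a tiny expression type with symbolic differentiation and a trivial simplifier, i.e. the kind of core a basic package could grow around.

data Expr
  = Const Double
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Eq, Show)

-- Symbolic derivative with respect to a variable name.
diff :: String -> Expr -> Expr
diff _ (Const _) = Const 0
diff x (Var y)   = Const (if x == y then 1 else 0)
diff x (Add a b) = Add (diff x a) (diff x b)
diff x (Mul a b) = Add (Mul (diff x a) b) (Mul a (diff x b))  -- product rule

-- A few algebraic identities to keep results readable.
simplify :: Expr -> Expr
simplify (Add a b) = case (simplify a, simplify b) of
  (Const 0, b')      -> b'
  (a', Const 0)      -> a'
  (Const m, Const n) -> Const (m + n)
  (a', b')           -> Add a' b'
simplify (Mul a b) = case (simplify a, simplify b) of
  (Const 0, _)       -> Const 0
  (_, Const 0)       -> Const 0
  (Const 1, b')      -> b'
  (a', Const 1)      -> a'
  (Const m, Const n) -> Const (m * n)
  (a', b')           -> Mul a' b'
simplify e = e

-- Example: simplify (diff "x" (Add (Mul (Var "x") (Var "x")) (Const 3)))
-- evaluates to Add (Var "x") (Var "x"), i.e. the derivative x + x.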

I think the recent discussion about advanced markup for Haddock documentation could yield a Summer of Code project. I still like my suggestion to use Haskell code as description for math formulas and I like Wolfgang's idea to use an existing tool like Template Haskell for conversion from Haskell code to an output format (TeX, MathML, or whatever): http://haskell.org/pipermail/haskell-cafe/2009-February/055358.html

Daniel Kraft wrote:
Do you think something would be especially nice to have and is currently missing?
Have type class aliases been implemented yet? This proposal (or parts of it) seems like a very useful compiler extension to have, and might be an interesting GSoC project. http://repetae.net/recent/out/classalias.html Kind regards, Martijn.

On Tuesday, 17 February 2009 at 01:13, Martijn van Steenbergen wrote:
Daniel Kraft wrote:
Do you think something would be especially nice to have and is currently missing?
Have type class aliases been implemented yet? This proposal (or parts of it) seems like a very useful compiler extension to have, and might be an interesting GSoC project.
http://repetae.net/recent/out/classalias.html
Kind regards,
Martijn.
It would be great to have something like class aliases implemented. However, the proposal(s) should be reviewed and discussed before someone starts implementing them. Once something is implemented, it is difficult to change it.
It’s similar to libraries. Several libraries were implemented as part of research and their developers didn’t seem to think very deeply about choosing identifiers and such things. Later, these libraries were in wider use and changing the identifiers got problematic. I think of such things like the DPH identifiers. In my opinion, it’s bad practice to include single letters in identifiers to denote namespace. Parallel.filter would be much better than filterP.
That said, I want to stress that class aliases have gone unimplemented for far too long. My dream is that we thoroughly improve library interfaces, and class aliases could be important for doing so without breaking compatibility very much. So we should probably also think about what kinds of library interface changes would be desirable in order to know what features a class alias extension should provide. Some things that come to my mind immediately (see the sketch below for the first point):
* making Applicative a superclass of Monad
* getting rid of MonadPlus (use (Alternative m, Monad m) instead of (MonadPlus m) or, with another extension, even something like (forall a. Monoid (m a), Monad m))
* getting rid of ugly Monoid method names (empty, append, concat or something totally different instead of mempty, mappend, mconcat)
* redesigning numeric classes
Best wishes,
Wolfgang
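As a small, hedged illustration of why the Applicative/Monad point interacts with something like class aliases: today, every author of a Monad has to write the Functor and Applicative instances as mechanical boilerplate by hand, and making Applicative a superclass without an alias-like mechanism would force exactly this edit on every existing instance. (Parser here is a made-up example type.)

import Control.Applicative
import Control.Monad (ap, liftM)

newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Monad Parser where
  return x = Parser (\s -> Just (x, s))
  Parser p >>= f = Parser (\s -> case p s of
                                   Nothing      -> Nothing
                                   Just (x, s') -> runParser (f x) s')

-- Pure boilerplate: both instances are fully determined by the Monad one.
instance Functor Parser where
  fmap = liftM

instance Applicative Parser where
  pure  = return
  (<*>) = ap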

Wolfgang Jeltsch wrote:
* making Applicative a superclass of Monad
* getting rid of MonadPlus (use (Alternative m, Monad m) instead of (MonadPlus m) or, with another extension, even something like (forall a. Monoid (m a), Monad m))
* getting rid of ugly Monoid method names (empty, append, concat or something totally different instead of mempty, mappend, mconcat)
* redesigning numeric classes
Let's make these ideas more concrete and add them to the Other Prelude, if they haven't been already! http://www.haskell.org/haskellwiki/The_Other_Prelude Martijn.

On Tuesday, 17 February 2009 at 21:01, you wrote:
Wolfgang Jeltsch wrote:
* making Applicative a superclass of Monad
* getting rid of MonadPlus (use (Alternative m, Monad m) instead of (MonadPlus m) or, with another extension, even something like (forall a. Monoid (m a), Monad m))
* getting rid of ugly Monoid method names (empty, append, concat or something totally different instead of mempty, mappend, mconcat)
* redesigning numeric classes
Let's make these ideas more concrete and add them to the Other Prelude, if they haven't been already!
Is this really good? First, I’m not only talking about prelude stuff but also stuff from other libraries. Second, the page “The Other Prelude” states right at the beginning that this project is (just) a fun project. However, I mean my comment very seriously. My agenda is not “This would be nice but won’t be realized anyway.” but “I really want to see this implemented.”. Best wishes, Wolfgang

Something that hit me tonight: Last GSoC gave us GHC compiler plugins. We have examples, but no documented significant uses, suitable for production code. Plugins, in essence, as I understand them, let us extend the type system in useful ways. Haskell has libraries for units [1], but no lightweight (i.e. without simulated dependent types or a dsl) way to embed units in Haskell calculations. Units in a functional language are possible, and implemented in, e.g., F# [2].
The punchline: With GHC plugins, it should be possible, and reasonable, to add a proper unit system for Haskell. Given a suitable SoC candidate, I'd love to see this, and if it worked reasonably enough, I'm sure that I would use it.
Cheers,
Sterl.
[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dimensional,
    http://liftm.wordpress.com/2007/06/03/scientificdimension-type-arithmetic-and-physical-units-in-haskell/,
    http://hackage.haskell.org/cgi-bin/hackage-scripts/package/caldims
[2] http://blogs.msdn.com/andrewkennedy/archive/2008/09/04/units-of-measure-in-f-part-three-generic-units.aspx

2009/2/18 Sterling Clover
Something that hit me tonight: Last GSoC gave us GHC compiler plugins. We have examples, but no documented significant uses, suitable for production code. Plugins, in essence, as I understand them, let us extend the type system in useful ways. Haskell has libraries for units[1], but no lightweight (i.e. without simulated dependent types or a dsl) way to embed units in Haskell calculations. Units in a functional language are possible, and implemented in, e.g., F# [2]
The punchline: With GHC plugins, it should be possible, and reasonable, to add a proper unit system for Haskell.
Alas, GHC plugins cannot change the type system - only meddle with the compilation strategy or analyse the code and suchlike. I'm working with Simon Peyton Jones to get plugins integrated fully into GHC (parts of it have been committed already) but we're both busy and so progress is slow. I don't think any GSoC projects should take a dependency on it being both integrated into GHC and stable in time for the summer. I agree this would be a cool feature :-). Fully pluggable compilers are hard work though, and would require at the very least very large amounts of GHC refactoring. Cheers, Max

On Feb 18, 2009, at 4:43 AM, Max Bolingbroke wrote:
2009/2/18 Sterling Clover
: The punchline: With GHC plugins, it should be possible, and reasonable, to add a proper unit system for Haskell.
Alas, GHC plugins cannot change the type system - only meddle with the compilation strategy or analyse the code and suchlike. I'm working with Simon Peyton Jones to get plugins integrated fully into GHC (parts of it have been commited already) but we're both busy and so progress is slow. I don't think any GSoC projects should take a dependency on it being both integrated into GHC and stable in time for the summer.
Thanks for the update on plugins! I look forward to trying them out from the GHC mainline at some point. I don't think that units as I envision them would need to mess with the type system directly, but could be implemented simply as a static analysis step, such as you describe. Units really do something quite different from standard types, and unit-correctness seems orthogonal to type correctness. A simple plugin would, e.g., allow unit annotations as distinct from type annotations (although, with an appropriate preprocessor step, they could, with cleverness, be written jointly), and simply track unit usage throughout the code to ensure correctness. It would, I assume, only need to operate on things which belonged to the Num typeclass to begin with, and assume that all unannotated numbers were scalars. Even lacking any sophisticated features, I would find this very useful, and certainly easier to envision than a more general extension to the type system that made unit tracking feasible, but yet, somehow, didn't veer fully into either simulated or actual dependent typing. So I guess I'll just keep this idea in mind and pitch it for next year's GSoC. :-) Cheers, Sterl.

On Thu, 19 Feb 2009, Sterling Clover wrote:
Thanks for the update on plugins! I look forward to trying them out from the GHC mainline at some point. I don't think that units as I envision them would need to mess with the type system directly, but could be implemented simply as a static analysis step, such as you describe.
I think it is widely agreed that all these static checkers (e.g. for totality of functions) fill the gap of a type system that is not strong enough to handle these cases. Have you ever tried one of the units-by-types libraries so far? They catch many bugs and are already quite natural in usage.
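To give a flavour of what a units-by-types library catches (a hand-rolled sketch, not the actual API of dimensional or NumericPrelude; all names are made up), a single phantom type parameter is already enough to reject adding metres to seconds at compile time:

{-# LANGUAGE EmptyDataDecls #-}

data Metre
data Second

newtype Quantity unit = Quantity Double
  deriving (Eq, Show)

-- Addition only works on quantities of the same unit.
qAdd :: Quantity u -> Quantity u -> Quantity u
qAdd (Quantity x) (Quantity y) = Quantity (x + y)

distance :: Quantity Metre
distance = Quantity 100

time :: Quantity Second
time = Quantity 9.58

ok :: Quantity Metre
ok = qAdd distance distance

-- wrong = qAdd distance time   -- rejected by the type checker

Real libraries go much further, tracking unit exponents at the type level so that multiplication and division also get meaningful types; this sketch cannot do that.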

On Tue, 17 Feb 2009, Sterling Clover wrote:
Something that hit me tonight: Last GSoC gave us GHC compiler plugins.
Never heard of it. Sometimes I thought it would be nice to modify or extend GHC's error messages via libraries in order to make them feel more like domain specific languages. E.g. instead of, or in addition to, saying 'type A inferred, but type B expected' it could state 'you have probably made the common error of using function f instead of (uncurry f)'. Is this possible with compiler plugins?
Plugins, in essence, as I understand them, let us extend the type system in useful ways. Haskell has libraries for units[1], but no lightweight (i.e. without simulated dependent types or a dsl) way to embed units in Haskell calculations. Units in a functional language are possible, and implemented in, e.g., F# [2]
I think units as separate extensions are not a good goal. The type system should be made strong enough to handle this application without hassle.
[1] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dimensional, http://liftm.wordpress.com/2007/06/03/scientificdimension-type-arithmetic-an..., http://hackage.haskell.org/cgi-bin/hackage-scripts/package/caldims
There is also both dynamic and static unit checking in NumericPrelude: http://haskell.org/haskellwiki/Physical_units

On Wednesday, 11 February 2009 at 11:12 +0100, Daniel Kraft wrote:
Hi,
Hi,
Do you think something would be especially nice to have and is currently missing?
In my humble opinion, Haskell presently falls short of its promises because it does not embed theorem proving. QuickCheck-like testing is truly great, but cannot replace proofs to produce bug-free software. With use of equational reasoning + induction, one can already prove a huge amount of useful properties of a Haskell program [1].
The idea would be to extend Haskell with a syntax to write such proofs, and implement support for it in the GHC compiler. This could look like:
module Integer where
..
theorem read_parses_what_show_shows :
  (a :: Integer, Show a, Read a) => show . read a = id a
proof
  axiom
In the case above, programmers may resort to the "axiom" keyword, which would at least have the merit of explicitly documenting assumptions. For axioms, one would have to fall back to QuickCheck and consorts, so these tools would not be made obsolete ;)
Another example:
theorem : (xs :: [a], ys :: [a], f :: a -> b) =>
  length (map f (xs ++ ys)) = length xs + length ys
proof
  length (map f (xs ++ ys))
    = length (xs ++ ys)
    = {- use length++ -}
      length xs + length ys
Theorems can be named, like "length++" (which is not shown here). Successfully proven theorems would be automatically added to the current theorem database and could be reused in further proofs in the same scope. The compiler could even be instructed to make use of this database to optimize the program. This theorem prover could also be used to ensure the soundness of refinement, the rewrite of an obvious version of a function/module into a more efficient but less obvious version.
Furthermore, the compiler could be instructed to generate a "proof obligation", which would have to be discharged by the programmer.
instance Read Integer where ...
instance Show Integer where ...
constraint read_parses_what_show_shows where
  (Read a, Show a) => read . show a = id a
The compiler would complain if, for any couple of Read/Show instances of the same data type, the constraint is not proved to be satisfied.
There is more to it, of course. "Tactics" would be needed to make this practical. Hopefully, at this stage, this project would catch the interest of the "academics", and development would take off, until we get an almost automated theorem prover.
Haskell is a nice, mature and efficient programming language. By its very nature it could also become a nice executable formal specification language, provided there is tool support. This would be quite unique [2] and there really would be no excuse not to use it in one step or another of the development process :)
Hope I didn't utter nonsense.
Sylvain
PS A package even exists that may serve as a basis for this work:
http://www.haskell.org/haskellwiki/Haskell_Equational_Reasoning_Assistant
[1] I currently think that equational reasoning + induction is effectively enough to prove every theorem on Haskell programs. Please, somebody knowledgeable, correct me if I am wrong?
[2] I know of "B", but it is not "nice".
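For comparison, the property behind read_parses_what_show_shows is exactly the kind of thing one tests (rather than proves) today with QuickCheck; a minimal sketch, assuming only the standard Test.QuickCheck interface:

import Test.QuickCheck

-- Tested on random Integers, not proved for all of them.
prop_readShow :: Integer -> Bool
prop_readShow x = read (show x) == x

main :: IO ()
main = quickCheck prop_readShow

Sylvain's point is precisely that such a property could graduate from a tested conjecture to a checked theorem.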

On Thursday, 19 February 2009 at 02:22, sylvain wrote:
Haskell is a nice, mature and efficient programming language. By its very nature it could also become a nice executable formal specification language, provided there is tool support.
Wouldn’t it be better to achieve the goals you describe with a dependently typed programming language? Best wishes, Wolfgang

2009/2/19 Wolfgang Jeltsch
On Thursday, 19 February 2009 at 02:22, sylvain wrote:
Haskell is a nice, mature and efficient programming language. By its very nature it could also become a nice executable formal specification language, provided there is tool support.
Wouldn't it be better to achieve the goals you describe with a dependently typed programming language?
But I wonder how a dependently typed language can express certain properties, for example the distributive property between the operations + and * in a ring, or simply the fact that show(read x)==x. As far as I understand, a dependent type system can restrict the set of values for which a function applies, but it cannot express complex relationships between operations. My knowledge of dependent types is very limited. I would like to be wrong on this. The point is that being able to automatically check such properties at the class level (or module level) is critical, to make sure that derived instances agree with them. This would be very good for feeling confident and programming at higher levels of abstraction.

On Thursday, 19 February 2009 at 11:35, Alberto G. Corona wrote:
2009/2/19 Wolfgang Jeltsch
On Thursday, 19 February 2009 at 02:22, sylvain wrote:
Haskell is a nice, mature and efficient programming language. By its very nature it could also become a nice executable formal specification language, provided there is tool support.
Wouldn't it be better to achieve the goals you describe with a dependently typed programming language?
But I wonder how a dependent typed language can express certain properties, for example, the distributive property between the operation + and * in a ring or simply the fact that show(read x)==x. As far as I understand, a dependent type system can restrict the set of values for wich a function apply, but it can not express complex relationships between operations.
Each type system corresponds to some constructive logic and can enforce properties which are expressible in that logic. Dependent type systems correspond to higher order predicate logics or so and therefore can express almost everything you want. An Agda or Epigram type can cover a complete behavioral specification like “This function turns a list into its sorted equivalent.” Best wishes, Wolfgang
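Haskell cannot state the full "sorted equivalent" specification that Wolfgang mentions, but a small GADT sketch (hypothetical types, using GHC's GADTs and type families) hints at the general idea of putting part of a specification into a type, where the compiler checks it:

{-# LANGUAGE GADTs, EmptyDataDecls, TypeFamilies #-}

data Zero
data Succ n

-- Lists whose length is recorded in their type.
data Vec n a where
  Nil  :: Vec Zero a
  Cons :: a -> Vec n a -> Vec (Succ n) a

-- Type-level addition of the phantom lengths.
type family Plus m n
type instance Plus Zero     n = n
type instance Plus (Succ m) n = Succ (Plus m n)

-- The type states (and GHC checks) that append adds the lengths.
append :: Vec m a -> Vec n a -> Vec (Plus m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

In a fully dependently typed language such as Agda or Epigram, the same idea scales up to properties like "the output is a sorted permutation of the input".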

Hello,
but to specify that “this function turns a list into its sorted equivalent” would probably require one to specify e.g. "sort" in terms of the type system and to write code that actually does the sorting. The first task is much like specifying what a sorted list is in first-order logic (much like a Prolog program) and the second task is a regular functional program.
If this is correct, dependent types would become more useful if the first task could be done by the compiler, which is probably impossible in general. Am I right?
Best wishes,
Torsten
-----Original Message-----
From: haskell-cafe-bounces@haskell.org [mailto:haskell-cafe-bounces@haskell.org] On Behalf Of Wolfgang Jeltsch
Sent: Thursday, 19 February 2009 14:22
To: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Haskell.org GSoC
On Thursday, 19 February 2009 at 11:35, Alberto G. Corona wrote:
2009/2/19 Wolfgang Jeltsch
On Thursday, 19 February 2009 at 02:22, sylvain wrote:
Haskell is a nice, mature and efficient programming language. By its very nature it could also become a nice executable formal specification language, provided there is tool support.
Wouldn't it be better to achieve the goals you describe with a dependently typed programming language?
But I wonder how a dependent typed language can express certain properties, for example, the distributive property between the operation + and * in a ring or simply the fact that show(read x)==x. As far as I understand, a dependent type system can restrict the set of values for wich a function apply, but it can not express complex relationships between operations.
Each type system corresponds to some constructive logic and can enforce properties which are expressible in that logic. Dependent type systems correspond to higher order predicate logics or so and therefore can express almost everything you want. An Agda or Epigram type can cover a complete behavioral specification like “This function turns a list into its sorted equivalent.” Best wishes, Wolfgang

On Friday, 20 February 2009 at 09:42, you wrote:
Hello,
but to specify that “this function turns a list into its sorted equivalent” would probably require to specify e.g. "sort" in terms of the type system and to write code that actually does the sorting. The first task is much like specifying what a sorted list is in first-order-logic (much like a Prolog program) and the second task is a regular functional program.
If this is correct, dependent types would become more useful if the first task could be done by the compiler - which is probably impossible in general.
I might not really understand you. Do you mean the compiler should be able to infer the specification from the implementation? In a dependently-typed programming language this would just mean type inference, since specifications are types. However, type inference isn’t possible for dependent type systems. Personally, I think it makes more sense to start with a specification (type) and then provide implementations. Systems like Agda and Epigram can actually use the rich information from the types to assist you in writing your implementation. Best wishes, Wolfgang

"Kemps-Benedix Torsten"
Hello,
but to specify that “this function turns a list into its sorted equivalent” would probably require to specify e.g. "sort" in terms of the type system and to write code that actually does the sorting. The first task is much like specifying what a sorted list is in first-order-logic (much like a Prolog program) and the second task is a regular functional program.
If this is correct, dependent types would become more useful if the first task could be done by the compiler - which is probably impossible in general.
Am I right?
Have a look at
http://www.cs.nott.ac.uk/~wss/Publications/ThePowerOfPi.pdf
http://www.cs.nott.ac.uk/~txa/publ/pisigma.pdf
http://www.cs.kent.ac.uk/people/staff/sjt/TTFP/
You probably want to give type signatures... as you want the compiler to help you to construct a valid proof of what you claimed in it. It isn't much help if the compiler just tells you "that code you gave, it denotes _|_", if you get my meaning.

Yes, this is indeed the point. Right now we express properties that a type class should obey, but there is no compiler assistance to prove or verify them. A compiler aware of "properties" can do additional symbolic manipulation to increase performance. Surely there has been some research in this area already. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101 On Feb 19, 2009, at 3:35 AM, Alberto G. Corona wrote:
2009/2/19 Wolfgang Jeltsch
On Thursday, 19 February 2009 at 02:22, sylvain wrote:
Haskell is a nice, mature and efficient programming language. By its very nature it could also become a nice executable formal specification language, provided there is tool support.
Wouldn't it be better to achieve the goals you describe with a dependently typed programming language?
But I wonder how a dependent typed language can express certain properties, for example, the distributive property between the operation + and * in a ring or simply the fact that show(read x)==x. As far as I understand, a dependent type system can restrict the set of values for wich a function apply, but it can not express complex relationships between operations. My knowledge on dependent types is very limited. I would like to be wrong on this.
The point is that permits the automatic checking of such properties at the class level (or module level) are critical, to make sure that derived instances agree with that. This would be very good to feel confident and program at higuer levels of abstraction.

Alberto G. Corona wrote:
Wouldn't it be better to achieve the goals you describe with a dependently typed programming language?
But I wonder how a dependent typed language can express certain properties, for example, the distributive property between the operation + and * in a ring or simply the fact that show(read x)==x. As far as I understand, a dependent type system can restrict the set of values for wich a function apply, but it can not express complex relationships between operations.
In a dependently typed language, propositions can be encoded as types, and a value of such a type is a proof that the theorem holds, so one could imagine something like:
class ReadShow a where
  show :: a -> String
  read :: String -> Maybe a
  law  :: forall (x : String), read (show x) = x
instance ReadShow Int where
  show x = <construct string representation of x>
  read x = <try to parse x>
  law    = <proof of property>
Note that the forall quantifies over values in a type, so this is indeed dependently typed. With the new support for type classes in Coq 8.2, you can write stuff like that.
Require Import List.
Class Monoid (A : Type) := {
  empty : A;
  concat : A -> A -> A;
  left_ident : forall (a : A), a = concat empty a;
  right_ident : forall (a : A), a = concat a empty;
  assoc : forall (a b c : A), concat (concat a b) c = concat a (concat b c)
}.
Instance list_monoid : Monoid (list A) := {
  empty := nil;
  concat := fun x y => x ++ y;
  left_ident := fun a => refl_equal (nil ++ a);
  right_ident := @app_nil_end A;
  assoc := @app_ass A
}.
The class declaration lists the laws, the instance declaration has to give the proofs. Since these facts about lists are provided by the standard library, this is easy in this example. Of course, one can use the proof mode to prove the laws with a proof script. For example, here is the Coq counterpart of the Monoid instance for Maybe that we have in Haskell:
    Instance option_monoid_lifted (A : Type) (MA : Monoid A) : Monoid (option A) := {
      empty  := None;
      concat := fun x y => match (x, y) with
                           | (None, b)        => b
                           | (a, None)        => a
                           | (Some a, Some b) => Some (concat a b)
                           end
    }.
    Proof.
      auto.
      destruct a; auto.
      destruct a; destruct b; destruct c; auto.
      f_equal. apply assoc.
    Defined.
Tillmann
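For reference, here is the Haskell side that the Coq option_monoid_lifted instance mirrors. This is only a sketch written for a recent GHC (where Monoid requires a Semigroup instance); the newtype M is introduced purely to avoid clashing with the instance base already provides for Maybe. Note that, unlike the Coq instance above, nothing obliges us to supply proofs of the laws.

    module MaybeMonoidSketch where

    -- A standalone copy of the Maybe monoid, wrapped in a newtype.
    newtype M a = M (Maybe a) deriving (Eq, Show)

    instance Semigroup a => Semigroup (M a) where
      M Nothing  <> b          = b
      a          <> M Nothing  = a
      M (Just x) <> M (Just y) = M (Just (x <> y))

    instance Semigroup a => Monoid (M a) where
      mempty = M Nothing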

Unfortunately the "proofs" in dependently typed languages are extremely long and tedious to write. Some kind of compiler proofing tool could ease the pain, but I do not think it has low enough complexity for a GSoC project. Regards, John A. De Goes N-BRAIN, Inc. The Evolution of Collaboration http://www.n-brain.net | 877-376-2724 x 101 On Feb 19, 2009, at 1:37 AM, Wolfgang Jeltsch wrote:
On Thursday, 19 February 2009, 02:22, sylvain wrote:
Haskell is a nice, mature and efficient programming language. By its very nature it could also become a nice executable formal specification language, provided there is tool support.
Wouldn’t it be better to achieve the goals you describe with a dependently typed programming language?
Best wishes,
Wolfgang

On Thursday, 19 February 2009, 14:50, John A. De Goes wrote:
Unfortunately the "proofs" in dependently typed languages are extremely long and tedious to write. Some kind of compiler proofing tool could ease the pain, but I do not think it has low enough complexity for a GSoC project.
I was not saying that I want such a thing done as a GSoC project. I just wanted to say that if one wants a programming language with an integrated proof language, it might be better to put work into Agda or Epigram instead of extending Haskell.

Best wishes,
Wolfgang

Unfortunately the "proofs" in dependently typed languages are extremely long and tedious to write. Some kind of compiler proofing tool could ease the pain, but I do not think it has low enough complexity for a GSoC project.
I wouldn't say that. Here's the complete proof script in Coq proving the theorem that was originally proposed: length (map f (xs ++ ys)) = length xs + length ys. It weighs in at about 30 lines, although I could probably get it down to less than 10. The proofs maybe look a bit unfamiliar if you haven't seen Coq before, but they are hardly "extremely long and tedious to write".

I can understand that raw proof *terms* in type theory can be long and painful to write. But that's like saying Haskell is bad because it's hard to understand ghc-core.

Wouter

    Require Import List.

    Variables a b : Set.
    Variable f : a -> b.

    Lemma lengthMap : forall (xs : list a),
      length (map f xs) = length xs.
    Proof.
      intros. induction xs; trivial.
      simpl; rewrite IHxs. reflexivity.
    Qed.

    Lemma appendLength : forall (xs ys : list a),
      length (xs ++ ys) = length xs + length ys.
    Proof.
      intros. induction xs; trivial.
      simpl; rewrite IHxs. reflexivity.
    Qed.

    Lemma main : forall (xs ys : list a),
      length (map f (xs ++ ys)) = length xs + length ys.
    Proof.
      intros. rewrite lengthMap. rewrite appendLength. reflexivity.
    Qed.

This could look like:
    module Integer where
      ..
      theorem read_parses_what_show_shows :
        (a :: Integer, Show a, Read a) => (read . show) a = id a
      proof axiom
There are several problems with this approach. For example, I can show:

    const 0 (head []) = 0

But if I pretend that I don't know that Haskell is lazy:

    const 0 (head []) = const 0 (error ....) = error ...

Which would allow me to substitute each occurrence of 0 with "error" - which probably isn't a good idea. So to do proper equational reasoning in a lazy language you need to be extremely careful with evaluation order. Predicting the evaluation order of code generated by "ghc -O2 Main.hs" is non-trivial, to say the least. To make matters worse, as you're working in a language with general recursion, you have to fight quite hard to avoid introducing inconsistencies in your proof language.

Alberto wrote:
As far as I understand, a dependent type system can restrict the set of values to which a function applies, but it cannot express complex relationships between operations. My knowledge of dependent types is very limited; I would like to be wrong about this.
I'm sorry, but you're wrong. Dependent types can encode the kind of equational proofs Sylvain mentioned perfectly adequately. Lennart Augustsson has a nice note explaining the principle in the context of Cayenne:

    http://www.cs.chalmers.se/~augustss/cayenne/eqproof.ps

The good news is: in languages like Coq and Agda you can write total functional programs, like map or ++, verify such properties and extract Haskell code.

Hope this helps,
Wouter
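A small runnable illustration of the const 0 (head []) example above (a sketch; the module and binding names are invented): under lazy evaluation the argument is never demanded, while forcing it first with ($!) simulates the strict reading and raises the error.

    module LazyVsStrict where

    import Control.Exception (SomeException, evaluate, try)

    -- Lazy Haskell: the argument of const is never demanded, so this is 0.
    lazyResult :: Int
    lazyResult = const 0 (head [])

    -- Forcing the argument to WHNF first simulates a strict application;
    -- evaluating this raises "Prelude.head: empty list".
    strictResult :: Int
    strictResult = const 0 $! head []

    main :: IO ()
    main = do
      print lazyResult                     -- prints 0
      r <- try (evaluate strictResult)
             :: IO (Either SomeException Int)
      print r                              -- prints Left <the head error>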

* Wouter Swierstra
There are several problems with this approach.
For example, I can show:
const 0 (head []) = 0
But if I pretend that I don't know that Haskell is lazy:
const 0 (head []) = const 0 (error ....) = error ...
Where does the last equality come from?
Which would allow me to substitute each occurrence of 0 with "error" - which probably isn't a good idea. So to do proper equational reasoning in a lazy language you need to be extremely careful with evaluation order.
Evaluation order matters for operational semantics, not for axiomatic semantics. And even in operational semantics, the Church–Rosser theorem should prevent getting different results (e.g. 0 and error) for different evaluation orders. Please correct me if I'm wrong.

--
Roman I. Cheplyaka :: http://ro-che.info/
"Don't let school get in the way of your education." - Mark Twain

On Thu, 2009-02-19 at 23:06 +0200, Roman Cheplyaka wrote:
* Wouter Swierstra [2009-02-19 11:58:38+0100]

There are several problems with this approach.
For example, I can show:
const 0 (head []) = 0
But if I pretend that I don't know that Haskell is lazy:
const 0 (head []) = const 0 (error ....) = error ...
Where does the last equality come from?
Pretending Haskell is strict. It would be an axiom of Strict Haskell, were someone to write such a thing, that

    forall a b (x :: String) (f :: a -> b). f (error x) = error x :: b

with the side condition that f is a lambda. Then we would know that, if f is a lambda of arity > 1 (or a constant defined to be such a lambda), then (f a), where a is a value (such as 0), is equal to some lambda; so by congruence and the equation above, we get (f a (error x) = error x) for all values a.

Which doesn't change the point that any proof-checker for *Haskell* worth its salt would reject any alleged proof of (const 0 (error x) = error x).

jcc

Roman Cheplyaka wrote:

Evaluation order matters for operational semantics, not for axiomatic semantics. And even in operational semantics, the Church–Rosser theorem should prevent getting different results (e.g. 0 and error) for different evaluation orders.
Let's consider

    omega = omega

    const omega 42

which is evaluated to 42 in Haskell, but is nonterminating in a strict language. I would expect every kind of semantics to account for this difference.

Regarding Church-Rosser. First, it says that if you can reduce a term into both P and Q, then there exists a term R so that both P and Q can be reduced to it. That doesn't mean that your particular evaluation order will ever do this reduction. Instead, it may keep doing other reductions forever. Second, who says Church-Rosser holds for Haskell?

Tillmann

* Tillmann Rendel
Roman Cheplyaka wrote:

Evaluation order matters for operational semantics, not for axiomatic semantics. And even in operational semantics, the Church–Rosser theorem should prevent getting different results (e.g. 0 and error) for different evaluation orders.
Let's consider
    omega = omega

    const omega 42
I guess you meant "const 42 omega".
which is evaluated to 42 in Haskell, but is nonterminating in a strict language. I would expect every kind of semantics to account for this difference.
It's slightly different. I understand that error ".." and omega have the same denotation, but the difference is that omega does not have a normal form while error ".." is in normal form. So non-termination of "const 42 omega" in a strict language is not surprising (we know that strict evaluation does not always find a normal form, even if one exists), but "const 42 (error ...) = error ..." means that different evaluation orders give us different normal forms, which is denied by Church-Rosser.
Second, who says Church-Rosser holds for Haskell?
Now I see that the magic of exceptions can break it. Is this what you mean?

--
Roman I. Cheplyaka :: http://ro-che.info/
"Don't let school get in the way of your education." - Mark Twain
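The (corrected) "const 42 omega" example is easy to try out. A minimal sketch, with the module and binding names invented here: GHC prints 42 because the unused argument is never evaluated, whereas a call-by-value order would have to evaluate omega first and loop forever.

    module Omega where

    -- A value with no normal form: forcing it never terminates.
    omega :: a
    omega = omega

    -- Lazy evaluation discards the unused argument, so this prints 42.
    -- A strict (call-by-value) evaluation order would diverge instead.
    main :: IO ()
    main = print (const 42 omega)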
participants (24)

- Achim Schneider
- Alberto G. Corona
- Bayley, Alistair
- Benedikt Huber
- Brent Yorgey
- Daniel Kraft
- Don Stewart
- Heinrich Apfelmus
- Henning Thielemann
- John A. De Goes
- Jonathan Cast
- Kemps-Benedix Torsten
- Ketil Malde
- Luke Palmer
- Martijn van Steenbergen
- Max Bolingbroke
- Roman Cheplyaka
- Sterling Clover
- sylvain
- Thomas DuBuisson
- Tillmann Rendel
- Tracy Wadleigh
- Wolfgang Jeltsch
- Wouter Swierstra