Re: [Haskell] Re: [Haskell-cafe] ANN: Haddock version 2.1.0

2008/5/2 Simon Marlow
David Waern wrote:
No it doesn't, but it's on the TODO list. It needs a fix in GHC.
By the way, I'm going to experiment with doing the parsing of comments on the Haddock side instead of in GHC. If that works out, we won't have to fix these things in GHC anymore.
Sounds great - along the lines that we discussed on cvs-ghc a while back?
Yes, something along the lines of separately parsing the comments and recording their source locations, and then trying to match them with the source locations of the AST nodes. I don't think there are any lexical rules for where Haddock comments can exist, so it should work. David
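As a rough sketch of that matching step (invented types, with bare line numbers standing in for GHC's SrcSpans, and a simple nearest-following-declaration rule, so none of this is actual Haddock or GHC code):

    import Data.List (sortOn)

    -- Hypothetical, simplified stand-ins for source locations and AST nodes;
    -- GHC uses full SrcSpans rather than bare line numbers.
    data Comment = Comment { cLine :: Int, cText :: String } deriving Show
    data Decl    = Decl    { dLine :: Int, dName :: String } deriving Show

    -- Attach each documentation comment to the first declaration starting at or
    -- after the comment's line; comments with no following declaration are dropped.
    attach :: [Comment] -> [Decl] -> [(Decl, Comment)]
    attach comments decls =
      [ (d, c)
      | c <- sortOn cLine comments
      , d : _ <- [dropWhile ((< cLine c) . dLine) (sortOn dLine decls)]
      ]

    main :: IO ()
    main = mapM_ print (attach [Comment 1 "-- | the main entry point"]
                               [Decl 2 "main", Decl 5 "helper"])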

2008/5/2 Simon Marlow
David Waern wrote:
No it doesn't, but it's on the TODO list. It needs a fix in GHC.
By the way, I'm going to experiment with doing the parsing of comments on the Haddock side instead of in GHC. If that works out, we won't have to fix these things in GHC anymore.
Sounds great - along the lines that we discussed on cvs-ghc a while back?
Yes, something along the lines of separately parsing the comments and recording their source locations, and then trying to match them with the source locations of the AST nodes.
yay!-) i hope that the haddock-independent part (parsing, preserving, and accessing comments) becomes part of the GHC API in a form that would fix trac ticket #1886, then we could finally start writing (ghc) haskell source-to-source transformations without losing pragmas or comments! losing layout would still be a pain, but that could be dealt with later - at least the code would remain functional under some form of (pretty . id . parse). please keep us posted about your experiments, claus

2008/5/2 Claus Reinke
2008/5/2 Simon Marlow
David Waern wrote:
No it doesn't, but it's on the TODO list. It needs a fix in GHC.
By the way, I'm going to experiment with doing the parsing of comments on the Haddock side instead of in GHC. If that works out, we won't have to fix these things in GHC anymore.
Sounds great - along the lines that we discussed on cvs-ghc a while back?
Yes, something along the lines of separately parsing the comments and recording their source locations, and then trying to match them with the source locations of the AST nodes.
yay!-) i hope that the haddock-independent part (parsing, preserving, and accessing comments) becomes part of the GHC API in a form that would fix trac ticket #1886, then we could finally start writing (ghc) haskell source-to-source transformations without losing pragmas or comments! losing layout would still be a pain, but that could be dealt with later - at least the code would remain functional under some form of (pretty . id . parse).
Hmm. When it comes to Haddock, things are simpler than in a refactoring situation, since we don't need to know exactly where the comments appear in the concrete syntax. The original Haddock parser is very liberal in where you can place comments. For example, it doesn't matter if you place a comment before or after a comma in a record field list, it is still attached to the previous (or next, depending on the type of comment) field. I need to take another look at the grammar to confirm that this is true in general, though. But anyway, my plan was to do this entirely in Haddock, not do the "preserving" part that you mention, and not do anything to GHC. David
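For concreteness, this is the kind of flexibility being described; the record below is an invented example, and both comments end up documenting a field whichever side of the comma they sit on:

    -- Both fields get documented, whichever side of the comma the comment is on.
    data Config = Config
      { verbosity :: Int      -- ^ documents 'verbosity' (comment after the field)
      , -- | documents 'outputDir' (comment before the field, after the comma)
        outputDir :: FilePath
      }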

David Waern wrote:
2008/5/2 Claus Reinke
2008/5/2 Simon Marlow
David Waern wrote:
No it doesn't, but it's on the TODO list. It needs a fix in GHC.
By the way, I'm going to experiment with doing the parsing of comments on the Haddock side instead of in GHC. If that works out, we won't have to fix these things in GHC anymore.
Sounds great - along the lines that we discussed on cvs-ghc a while back?
Yes, something along the lines of separately parsing the comments and recording their source locations, and then trying to match them with the source locations of the AST nodes.
yay!-) i hope that the haddock-independent part (parsing, preserving, and accessing comments) becomes part of the GHC API in a form that would fix trac ticket #1886, then we could finally start writing (ghc) haskell source-to-source transformations without losing pragmas or comments! losing layout would still be a pain, but that could be dealt with later - at least the code would remain functional under some form of (pretty . id . parse).
Hmm. When it comes to Haddock, things are simpler than in a refactoring situation, since we don't need to know exactly where the comments appear in the concrete syntax. The original Haddock parser is very liberal in where you can place comments. For example, it doesn't matter if you place a comment before or after a comma in a record field list, it is still attached to the previous (or next, depending on the type of comment) field. I need to take another look at the grammar to confirm that this is true in general, though. But anyway, my plan was to do this entirely in Haddock, not do the "preserving" part that you mention, and not do anything to GHC.
So basically you want to run a lexer over the source again to collect all the comments? You really want to use GHC's lexer, because otherwise you have to write another lexer. So a flag to GHC's lexer that says whether it should return comments or not seems like a reasonable way to go. But if you're doing that, you might as well have the parser collect all the comments off to the side during parsing, to avoid having to lex the file twice, right? Cheers, Simon
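One way to picture "collecting the comments off to the side" is to divert comment tokens out of the stream before the parser sees them; the token type below is invented for the sketch, and the real version would live inside GHC's lexer/parser monad rather than over a plain list:

    import Data.Either (partitionEithers)

    -- Invented token type; GHC's real lexer produces something much richer.
    data Token = TComment String | TOther String
      deriving Show

    -- One pass over the token stream: comments are kept aside, everything
    -- else is what the parser would actually consume.
    splitComments :: [Token] -> ([String], [Token])
    splitComments = partitionEithers . map classify
      where
        classify (TComment s) = Left s
        classify t            = Right t

    main :: IO ()
    main = print (splitComments
                   [TOther "module", TComment "-- | some doc", TOther "where"])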

2008/5/8 Simon Marlow
So basically you want to run a lexer over the source again to collect all the comments?
Yes.
You really want to use GHC's lexer, because otherwise you have to write another lexer.
I don't mind writing a lexer that just collects the comments. It should be simpler than a full Haskell lexer, right? It wouldn't need to handle layout, for instance. Using GHC is also a good option.
So a flag to GHC's lexer that says whether it should return comments or not seems like a reasonable way to go. But if you're doing that, you might as well have the parser collect all the comments off to the side during parsing, to avoid having to lex the file twice, right?
Yes. David
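For what it's worth, a naive standalone comment collector might look like the sketch below; it only knows about -- line comments and nested {- -} block comments, and deliberately ignores string literals, layout and language extensions, which is exactly where the next messages show it breaks down:

    -- Naive comment collector: scans raw source and returns the text of each
    -- comment. Ignorant of string literals, operators like -->, layout and
    -- language extensions, so it is only a sketch of the idea.
    collectComments :: String -> [String]
    collectComments = go
      where
        go ('-':'-':rest) = let (c, rest') = break (== '\n') rest
                            in ("--" ++ c) : go rest'
        go ('{':'-':rest) = let (c, rest') = block 1 "" rest
                            in ("{-" ++ c) : go rest'
        go (_:rest)       = go rest
        go []             = []

        -- Handle nested {- -} comments, tracking the nesting depth;
        -- the accumulator holds the comment text in reverse.
        block :: Int -> String -> String -> (String, String)
        block 0 acc rest           = (reverse acc, rest)
        block n acc ('-':'}':rest) = block (n - 1) ("}-" ++ acc) rest
        block n acc ('{':'-':rest) = block (n + 1) ("-{" ++ acc) rest
        block n acc (x:rest)       = block n (x : acc) rest
        block _ acc []             = (reverse acc, [])

    main :: IO ()
    main = mapM_ putStrLn
             (collectComments "f x = x -- identity\n{- block {- nested -} -}\n")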

David Waern wrote:
2008/5/8 Simon Marlow
So basically you want to run a lexer over the source again to collect all the comments?
Yes.
You really want to use GHC's lexer, because otherwise you have to write another lexer.
I don't mind writing a lexer that just collects the comments. It should be simpler than a full Haskell lexer, right? It wouldn't need to handle layout, for instance. Using GHC is also a good option.
I'm not sure it's that much easier to write a lexer that just collects comments. For example, is there a comment here?
3#--foo
With -XMagicHash it is (3# followed by a comment), but without -XMagicHash it is not (3 followed by the operator #--). You have to implement a significant chunk of the options that GHC supports to get it right. I'd say it's probably easier to work with GHC's lexer. Cheers, Simon

2008/5/9 Simon Marlow
David Waern wrote:
2008/5/8 Simon Marlow
So basically you want to run a lexer over the source again to collect all the comments?
Yes.
You really want to use GHC's lexer, because otherwise you have to write another lexer.
I don't mind writing a lexer that just collects the comments. It should be simpler than a full Haskell lexer, right? It wouldn't need to handle layout, for instance. Using GHC is also a good option.
I'm not sure it's that much easier to write a lexer that just collects comments. For example, is there a comment here?
3#--foo
with -XMagicHash it is (3# followed by a comment), but without -XMagicHash it is not (3 followed by the operator #--). You have to implement a significant chunk of the options that GHC supports to get it right. I'd say it's probably easier to work with GHC's lexer.
Ah, I didn't think about the GHC options that change the lexical syntax. You're right, using the GHC lexer should be easier. David

Ah, I didn't think about the GHC options that change the lexical syntax. You're right, using the GHC lexer should be easier.
and, if you do that, you could also make the GHC lexer squirrel away the comments (including pragmas, if they aren't already in the AST) someplace safe, indexed by, or at least annotated with, their source locations, and make this comment/pragma storage available via the GHC API. (1a)

then, we'd need a way to merge those comments and pragmas back into the output during pretty printing, and we'd have made the first small step towards source-to-source transformations: making code survive semantically intact over (pretty . parse). (1b)

that would still not quite fulfill the GHC API comment ticket (*), but that was only a quick sketch, not a definite design. it might be sufficient to let each GHC API client do its own search to associate bits of comment/pragma storage with bits of AST. if i understand you correctly, you are going to do (1a), so if you could add that to the GHC API, we'd only need (1b) to go from useable-for-analysis-and-extraction to useable-for-transformation.

is that going to be a problem?

claus

(*) knowing the source location of some piece of AST is not sufficient for figuring out whether it has any immediately preceding or following comments (there might be other AST fragments in between, closer to the next comment). but, if one knows the nearest comment segment for each piece of AST, one could then build a map where the closest AST pieces are mapped to (Just commentID), and the other AST pieces are mapped to Nothing.
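A toy version of the map sketched in (*), with invented types (bare line numbers instead of SrcSpans, Ints as comment ids): each AST piece is mapped to Just a comment id only if it is the closest piece to that comment, and to Nothing otherwise:

    import qualified Data.Map as Map
    import Data.List (minimumBy)
    import Data.Ord (comparing)

    type NodeId    = Int
    type CommentId = Int

    -- Nodes and comments carry a start line as a crude stand-in for a SrcSpan.
    nearestCommentMap :: [(NodeId, Int)] -> [(CommentId, Int)]
                      -> Map.Map NodeId (Maybe CommentId)
    nearestCommentMap nodes comments =
      Map.fromList [ (n, commentFor line) | (n, line) <- nodes ]
      where
        -- A node is given a comment only if it is the node closest to it.
        commentFor line =
          case [ cid | (cid, cline) <- comments, closestNodeLine cline == line ] of
            (cid:_) -> Just cid
            []      -> Nothing
        -- Line of the node nearest to a comment (assumes at least one node).
        closestNodeLine cline =
          snd (minimumBy (comparing (\(_, l) -> abs (l - cline))) nodes)

    main :: IO ()
    main = print (nearestCommentMap [(1, 3), (2, 10)] [(100, 2)])
    -- prints: fromList [(1,Just 100),(2,Nothing)]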

2008/5/9 Claus Reinke
Ah, I didn't think about the GHC options that change the lexical syntax. You're right, using the GHC lexer should be easier.
and, if you do that, you could also make the GHC lexer squirrel away the comments (including pragmas, if they aren't already in the AST) someplace safe, indexed by, or at least annotated with, their source locations, and make this comment/ pragma storage available via the GHC API. (1a)
then, we'd need a way to merge those comments and pragmas back into the output during pretty printing, and we'd have made the first small step towards source-to-source transformations: making code survive semantically intact over (pretty . parse). (1b)
that would still not quite fulfill the GHC API comment ticket (*), but that was only a quick sketch, not a definite design. it might be sufficient to let each GHC API client do its own search to associate bits of comment/pragma storage with bits of AST. if i understand you correctly, you are going to do (1a), so if you could add that to the GHC API, we'd only need (1b) to go from useable-for-analysis-and-extraction to useable-for-transformation.
is that going to be a problem?
I'll have a look to see if doing 1a) is possible without too much work. And then if I actually implement something, adding it to the GHC API shouldn't be a problem. David

Feel free to CC me or the ticket with things like that. I'll be
working on this for this year's GSoC and it'd be helpful to find out
what I should tackle first.
On Fri, May 9, 2008 at 8:30 PM, Claus Reinke
Ah, I didn't think about the GHC options that change the lexical syntax. You're right, using the GHC lexer should be easier.
and, if you do that, you could also make the GHC lexer squirrel away the comments (including pragmas, if they aren't already in the AST) someplace safe, indexed by, or at least annotated with, their source locations, and make this comment/ pragma storage available via the GHC API. (1a)
then, we'd need a way to merge those comments and pragmas back into the output during pretty printing, and we'd have made the first small step towards source-to-source transformations: making code survive semantically intact over (pretty . parse). (1b)
that would still not quite fulfill the GHC API comment ticket (*), but that was only a quick sketch, not a definite design. it might be sufficient to let each GHC API client do its own search to associate bits of comment/pragma storage with bits of AST. if i understand you correctly, you are going to do (1a), so if you could add that to the GHC API, we'd only need (1b) to go from useable-for-analysis-and-extraction to useable-for-transformation.
is that going to be a problem?
claus
(*) knowing the source location of some piece of AST is not sufficient for figuring out whether it has any immediately preceding or following comments (there might be other AST fragments in between, closer to the next comment). but, if one knows the nearest comment segment for each piece of AST, one could then build a map where the closest AST pieces are mapped to (Just commentID), and the other AST pieces are mapped to Nothing.

Feel free to CC me or the ticket with things like that. I'll be working on this for this year's GSoC and it'd be helpful to find out what I should tackle first.
Hi Thomas,

thanks, I was wondering about your project. Is there a project page documenting the issues/tickets you look at, and particularly the plan of attack as it changes in the face of reality?-) I've found

http://code.google.com/soc/2008/haskell/appinfo.html?csaid=4189AF2C8AE5E25A

which covers a lot of ground, and some interesting issues, but is so general (and design- rather than application-driven) that I've been worried about how much of it you'll manage (and with which priorities), given that the GHC API is indeed exposed rather than designed and may thus interfere with your good intentions in unexpected ways.

Also, there are very different user needs out there: some want just analysis or some information extraction, some want source transformation capabilities, some want a stable portable hs-plugins replacement, some want to work with backends, etc. You can't please everyone, but until your focus is known, people can't easily complain about your priorities.

IMHO, trying to support a semantics- and comment-preserving roundtrip in (pretty . parse) would be a good way to start (David says he's going to look at the extracting comments/pragmas part, but they still need to be put back in during pretty printing). It sounds simple, and if it is, it will enable a lot more usage of the GHC API; and if it turns out not to be simple, you'll have a first sign of what you're up against, and can adjust your expectations!-)

Making yourself available as you've done here, "I'm here; I'm going to work on this now; please cc me if you want to express your priorities", sounds like a good way to pull together the many strands of interests relating to the GHC API. Now we all have to dust off our old "wouldn't it be nice if the API could do this and that"s.

Perhaps something similar to what the type family folks are doing would help: use the ticket tracker for individual issues, have test cases that demonstrate the issues and their resolution, have more detailed documents online elsewhere, and a single wiki page to tie everything together (making it easier to find relevant tickets and the state of the art).

[cf http://hackage.haskell.org/trac/ghc/wiki/TypeFunctionsStatus ]

Over the years, quite a few issues have been raised as tickets/email/source comments, so collecting them would be a good way to get an idea of what is needed; deciding which of those issues would take how much effort would be a first useful contribution; and seeing which of these you intend to tackle would give the community at large a better chance to comment on your priorities in relation to their needs.

I also hope you are in touch with Chaddaï - the port of HaRe to the GHC API did not make it as a GSoC project, but I understand he is going to do some work in this direction anyway.

Looking forward to an improved GHC API!-)
Claus

ps. here are some first entries for your list, and for other interested parties following along (I'd be very interested to hear about your progress):
- http://code.google.com/soc/2008/haskell/appinfo.html?csaid=4189AF2C8AE5E25A (project outline)
- http://hackage.haskell.org/trac/ghc/ticket/1467 (GHC API: expose separate compilation stages; your main ticket so far?)
- concerning exposed phases, it would also be useful if the interface was more uniform (eg., AST, typed AST, ..)
- search for NOTE in ghc/compiler/main/GHC.hs for some related notes from an earlier GHC/HaRe meeting
- is it possible to use standalone deriving to get a generic programming framework over the ASTs without blowing up GHC's code for its own use (deriving Data, etc.)?
- http://www.haskell.org/pipermail/haskell-cafe/2008-May/042616.html (GHC API: how to get the typechecked AST?)
- http://hackage.haskell.org/trac/ghc/ticket/1886 (GHC API should preserve and provide access to comments)
- dynamic loading of Haskell code, ala hs-plugins, but without the version/platform issues (GHCi has to be able to do this anyway, but it would be nice to have the ugly bits hidden, such as unsafeCast#, or whatever it was). that might require a standard for typeReps, if I recall correctly..
- is there a way to reduce version-skew for clients of the GHC API? (currently, there is no stability guaranteed at all, so if you don't want to live with lots of #ifdefs and breakage, you keep delaying your fantastic GHC API-based projects "until the dust settles")
- I'm sure there have been many more, but that's the problem: not all these issues have been collected as they were raised; even if you don't tackle all of them, it would be nice if you could collect all of them
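The (pretty . parse) roundtrip can be stated as a property; the miniature "language" below is a stand-in for Haskell and the parse/pretty functions are invented, but it shows the shape of the check: comments must survive the trip.

    -- A toy illustration of the (pretty . parse) roundtrip, using a miniature
    -- language of one item per line instead of Haskell itself.
    data Item = Comment String | Decl String
      deriving (Eq, Show)

    -- "Parsing": lines starting with "--" are comments, everything else a decl.
    parse :: String -> [Item]
    parse = map classify . lines
      where
        classify l | take 2 l == "--" = Comment l
                   | otherwise        = Decl l

    -- Pretty printing puts comments back exactly where they were.
    pretty :: [Item] -> String
    pretty = unlines . map render
      where
        render (Comment c) = c
        render (Decl d)    = d

    -- The property: code survives (pretty . parse) with its comments intact.
    roundtripOk :: String -> Bool
    roundtripOk src = parse (pretty (parse src)) == parse src

    main :: IO ()
    main = print (roundtripOk "-- | the identity\nf x = x")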

2008/5/15 Claus Reinke
Feel free to CC me or the ticket with things like that. I'll be working on this for this year's GSoC and it'd be helpful to find out what I should tackle first.
IMHO, trying to support a semantics- and comment-preserving roundtrip in (pretty . parse) would be a good way to start (David says he's going to look at the extracting comments/pragmas part, but they still need to be put back in during pretty printing). It sounds simple, and if it is, it will enable a lot more usage of the GHC API; and if it turns out not to be simple, you'll have a first sign of what you're up against, and can adjust your expectations!-)
I also hope you are in touch with Chaddaï - the port of HaRe to the GHC API did not make it as a GSoC project, but I understand he is going to do some work in this direction anyway.
- http://hackage.haskell.org/trac/ghc/ticket/1886 (GHC API should preserve and provide access to comments)
Well, not in touch until now; I was waiting a little bit to see in which direction this project is going to evolve. There are plenty of interesting improvements one could bring to the GHC API. For my project, of course, I'm most interested in the comment-preserving part. I intended to do it independently, as Haddock does now, but if the GHC API were to get native support for it, that would be great (for all kinds of other applications too). I'll keep in touch. -- Chaddaï

Claus, thanks for taking the time to articulate all this; speaking as the mentor for the project, this material is invaluable. I'd particularly like to see a wiki page collecting issues, plans and priorities too.

My own priority is to have the compilation phases exposed. One (selfish) reason for this is that it will force a number of refactorings and cleanups inside GHC that we've had on the radar for some time. As soon as there's a wiki page up I can start downloading some of the contents of my whiteboard onto it :-)

Keeping track of comments in the parser sounds like a high priority to me, because we have an active customer (Haddock) to drive the design and test it. Another active customer is Yi; as I understand it, they are using the GHC API to provide the features we had in Visual Haskell. This will be useful for driving the aspects of the design that IDEs need.

Cheers, Simon

Claus Reinke wrote:
thanks, I was wondering about your project. Is there a project page documenting the issues/tickets you look at, and particularly the plan of attack as it changes in the face of reality?-) I've found
http://code.google.com/soc/2008/haskell/appinfo.html?csaid=4189AF2C8AE5E25A
which covers a lot of ground, and some interesting issues, but is so general (and design- rather than application-driven) that I've been worried about how much of it you'll manage (and with which priorities), given that the GHC API is indeed exposed rather than designed and may thus interfere with your good intentions in unexpected ways.
Also, there are very different user needs out there: some want just analysis or some information extraction, some want source transformation capabilities, some want a stable portable hs-plugins replacement, some want to work with backends, etc. You can't please everyone, but until your focus is known, people can't easily complain about your priorities.
IMHO, trying to support a semantics- and comment-preserving roundtrip in (pretty . parse) would be a good way to start (David says he's going to look at the extracting comments/pragmas part, but they still need to be put back in during pretty printing). It sounds simple, and if it is, it will enable a lot more usage of the GHC API; and if it turns out not to be simple, you'll have a first sign of what you're up against, and can adjust your expectations!-)
Making yourself available as you've done here "I'm here; I'm going to work on this now; please cc me if you want to express your priorities" sounds like a good way to pull together the many strands of interests relating to the GHC API. Now we all have to dust off our old "wouldn't it be nice if the API could do this and that"s.
Perhaps something similar to what the type family folks are doing would help: use the ticket tracker for individual issues, have test cases that demonstrate the issues and their resolution, have more detailed documents online elsewhere, and a single wiki page to tie everything together (making it easier to find relevant tickets and the state of the art).
[cf http://hackage.haskell.org/trac/ghc/wiki/TypeFunctionsStatus ]
over the years, quite a few issues have been raised as tickets/ email/source comments, so collecting them would be a good way to get an idea of what is needed, deciding which of those issues would take how much effort would be a first useful contribution, and seeing which of these you intend to tackle would give the community at large a better chance to comment on your priorities in relation to their needs.
I also hope you are in touch with Chaddaï - the port of HaRe to the GHC API did not make it as a GSoC project, but I understand he is going to do some work in this direction anyway.
Looking forward to an improved GHC API!-) Claus
ps. here are some first entries for your list, and for other interested parties following along (I'd be very interested to hear about your progress):
- http://code.google.com/soc/2008/haskell/appinfo.html?csaid=4189AF2C8AE5E25A (project outline)
- http://hackage.haskell.org/trac/ghc/ticket/1467 (GHC API: expose separate compilation stages; your main ticket so far?)
- concerning exposed phases, it would also be useful if the interface was more uniform (eg., AST, typed AST,..)
- search for NOTE in ghc/compiler/main/GHC.hs for some related notes from an earlier GHC/HaRe meeting
- is it possible to use standalone deriving to get a generic programming framework over the ASTs without blowing up GHC's code for its own use (deriving Data, etc.)?
- http://www.haskell.org/pipermail/haskell-cafe/2008-May/042616.html (GHC API: how to get the typechecked AST?)
- http://hackage.haskell.org/trac/ghc/ticket/1886 (GHC API should preserve and provide access to comments)
- dynamic loading of Haskell code, ala hs-plugins, but without the version/platform issues (GHCi has to be able to do this anyway, but it would be nice to have the ugly bits hidden, such as unsafeCast#, or whatever it was). that might require a standard for typeReps, if I recall correctly..
- is there a way to reduce version-skew for clients of the GHC API? (currently, there is no stability guaranteed at all, so if you don't want to live with lots of #ifdefs and breakage, you keep delaying your fantastic GHC API-based projects "until the dust settles")
- I'm sure there have been many more, but that's the problem: not all these issues have been collected as they were raised; even if you don't tackle all of them, it would be nice if you could collect all of them

My own priority is to have the compilation phases exposed. One (selfish) reason for this is that it will force a number of refactorings and cleanups inside GHC, that we've had on the radar for some time. As soon as there's a wiki page up I can start downloading some of the contents of my whiteboard onto it :-)
This aspect is going to affect my own project, GHC plugins. Plugins need to be able to register their own compilation phases and when they should be run with the compiler.

A nice way to do this might be to encode the current GHC stage inter-dependencies in code. Plugins could then add their own stages with similar dependency information, and finally GHC would compute a topological sort based on the constraints as the actual order in which to run stages. These codified dependencies would complement any documentation-based approach. I sketched a very rough idea of what that could look like at http://hackage.haskell.org/trac/ghc/wiki/Plugins (see note [Declarative Core Pass Placement]).

I don't have the bandwidth to seriously think about these issues until after exams, but there are other GHC-API-related things that need to happen for plugins:
- Expose the Core representation with documentation
- Expose and document internal functions for manipulating Core (e.g. CoreUtil, DsUtil)

I'm happy to do this work myself, but I need to be sure it's relatively coordinated with Thomas' work and the intentions for the compiler passes so we don't step on each other's toes. Cheers, Max
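A very small sketch of what such declarative pass placement could look like (the pass names and the registration style are invented; see the wiki note above for Max's actual proposal): each pass declares which passes must run before it, and the final ordering is a topological sort of those constraints.

    import Data.Graph (graphFromEdges, topSort)

    -- Each pass declares its name and the passes that must run before it.
    data Pass = Pass { passName :: String, runsAfter :: [String] }

    -- Built-in passes plus one registered by a hypothetical plugin;
    -- everything here is illustrative, not GHC's real pass set.
    passes :: [Pass]
    passes =
      [ Pass "occurrence-analysis" []
      , Pass "simplify"            ["occurrence-analysis"]
      , Pass "my-plugin-pass"      ["simplify"]
      , Pass "tidy"                ["simplify", "my-plugin-pass"]
      ]

    -- Topologically sort the passes so every declared predecessor runs first.
    passOrder :: [Pass] -> [String]
    passOrder ps = map nameOf (topSort graph)
      where
        (graph, fromVertex, _) =
          graphFromEdges [ (p, passName p, mustPrecede p) | p <- ps ]
        -- An edge goes from a pass to every pass that lists it in 'runsAfter',
        -- so topSort puts prerequisites before their dependents.
        mustPrecede p = [ passName q | q <- ps, passName p `elem` runsAfter q ]
        nameOf v = let (_, n, _) = fromVertex v in n

    main :: IO ()
    main = mapM_ putStrLn (passOrder passes)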

2008/5/15 Claus Reinke
- is it possible to use standalone deriving to get a generic programming framework over the ASTs without blowing up GHC's code for its own use (deriving Data, etc.)?
Speaking of generics, I'm working on deriving Data.Traversable for GHC's abstract syntax using the derive package (I should give most credit to Twan here -- he has been modifying the derive package to make this possible). A package of some form that exports these instances should be useful to GHC API clients. David
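As an illustration of what such derived instances buy you, here is a toy AST with standalone-derived Functor/Foldable/Traversable/Data instances, using today's deriving extensions as a stand-in for what the derive package generates; an effectful renaming pass then falls out of traverse:

    {-# LANGUAGE StandaloneDeriving, DeriveFunctor, DeriveFoldable,
                 DeriveTraversable, DeriveDataTypeable #-}

    import Data.Data (Data)

    -- A toy expression type parameterised over its identifier type, very
    -- loosely in the spirit of GHC's AST; the real thing is far larger.
    data Expr id
      = Var id
      | App (Expr id) (Expr id)
      | Lam id (Expr id)
      deriving Show

    -- Standalone deriving lets the instances live outside the defining module,
    -- so GHC itself does not have to carry them for its own use.
    deriving instance Functor     Expr
    deriving instance Foldable    Expr
    deriving instance Traversable Expr
    deriving instance Data id => Data (Expr id)

    -- With Traversable, an effectful rename-every-identifier pass is 'traverse'.
    renameAll :: Applicative f => (a -> f b) -> Expr a -> f (Expr b)
    renameAll = traverse

    main :: IO ()
    main = print =<< renameAll rename (Lam "x" (App (Var "f") (Var "x")))
      where
        rename v = do putStrLn ("renaming " ++ v)
                      pure (v ++ "'")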

Thanks a lot for your comprehensive response, Claus! Per your suggestion, I started a GHC wiki page at http://hackage.haskell.org/trac/ghc/wiki/GhcApiStatus. I added your comments and I will continue to add more things as I find them.

I am closely following the Yi project and I am aware that the HaRe project needs some help from my side. I am interested in both projects becoming more usable and useful, so I'll be looking forward to concrete requests from both sides.

I am also aware of Max's project. We certainly have to look out not to get in each other's way, but as I understand it Max will work on lower-level transformations, while I will concentrate on the front end. We should therefore be able to keep our work fairly separate.
thanks, I was wondering about your project. Is there a project page documenting the issues/tickets you look at, and particularly the plan of attack as it changes in the face of reality?-) I've found
http://code.google.com/soc/2008/haskell/appinfo.html?csaid=4189AF2C8AE5E25A
which covers a lot of ground, and some interesting issues, but is so general (and design- rather than application-driven) that I've been worried about how much of it you'll manage (and with which priorities), given that the GHC API is indeed exposed rather than designed and may thus interfere with your good intentions in unexpected ways.
Yes, I tried to comment on this a little on the wiki page. I will also do an internship at MSR beginning in October. The topic isn't decided upon, yet, but working on the GHC API was among the things I proposed.
Also, there are very different user needs out there, some want just analysis or some information extraction, some want source transformation capabilities, some want a stable portable hs-plugins replacement, some want to work with backends, etc. . you can't please everyone, but until your focus is known, people can't easily complain about your priorities.
I will start with extracting semantic information from Haskell code. Things that are useful for Yi or HaRe. But if people give good arguments for other features I'd be willing to change priorities. Of course, Simon will have a word there, too. :)
IMHO, trying to support a semantics- and comment-preserving roundtrip in (pretty . parse) would be a good way to start (David says he's going to look at the extracting comments/pragmas part, but they still need to be put back in during pretty printing). It sounds simple, and if it is, it will enable a lot more usage of the GHC API; and if it turns out not to be simple, you'll have a first sign of what you're up against, and can adjust your expectations!-)
I agree. This looks like a good getting-up-to-speed topic.
Making yourself available as you've done here "I'm here; I'm going to work on this now; please cc me if you want to express your priorities" sounds like a good way to pull together the many strands of interests relating to the GHC API. Now we all have to dust off our old "wouldn't it be nice if the API could do this and that"s.
I'm looking forward to a lot of those responses, and I will try and dig around in the archives to find some of those myself.
/ Thomas
participants (7)
- Chaddaï Fouché
- Claus Reinke
- David Waern
- Max Bolingbroke
- Simon Marlow
- Simon Marlow
- Thomas Schilling