
As a side note, buildwrapper version 0.4.0 and above follows the
approach you outline. When a file is modified, we call GHC to build
it, and we store the GHC AST as a JSON object in a hidden file. All
subsequent calls that need that information (in EclipseFP, for
example, showing a tooltip with the type of the object you're
hovering over) then read the JSON data without calling GHC again, so
it's much faster, even though buildwrapper is a pure "one shot"
executable with no concept of a session. The JSON file could also be
read by a process other than buildwrapper itself, so maybe
Christopher could use this approach.
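To make that concrete, here is a minimal sketch of the idea using aeson. It is not BuildWrapper's actual format: the TypeInfo type, its field names and the ".meta.json" suffix are made up for illustration.

    {-# LANGUAGE OverloadedStrings #-}
    module CacheSketch where

    import Data.Aeson (ToJSON (..), object, (.=), encode)
    import qualified Data.ByteString.Lazy as BL

    -- One piece of meta information extracted from the typechecked AST:
    -- a source position and the type string shown in the tooltip.
    data TypeInfo = TypeInfo { tiLine :: Int, tiCol :: Int, tiType :: String }

    instance ToJSON TypeInfo where
      toJSON (TypeInfo l c t) = object ["line" .= l, "col" .= c, "type" .= t]

    -- After a successful GHC build, dump the extracted entries next to the
    -- source file. Later queries ("what is the type at line 3, col 7?")
    -- only read this file; GHC is not invoked again.
    writeMeta :: FilePath -> [TypeInfo] -> IO ()
    writeMeta src entries = BL.writeFile (src ++ ".meta.json") (encode entries)

Any process that can parse JSON (buildwrapper itself, or the Java side of EclipseFP) can then answer such queries from the file alone.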
JP
On Thu, Jan 26, 2012 at 7:00 PM, Thomas Schilling wrote:
On 26 January 2012 16:33, JP Moresmau wrote:
Thomas, thank you for that explanation about the different types of identifiers in the different phases of analysis. I've never seen that information laid out so clearly before; could it be added to the wikis (in http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/API or http://www.haskell.org/haskellwiki/GHC/As_a_library maybe)? I think it would be helpful to everyone who wants to dive into the GHC API.
Will do.
On a side note, I'm going to do something very similar in my BuildWrapper project (which is now the backend of the EclipseFP IDE plugins): instead of going back to the API every time the user asks for the type of "something" in the AST, I'm thinking of sending the whole typed AST to the Java code. Maybe that's something Christopher could use. Both the BuildWrapper code and Thomas's Scion code are available on GitHub, and they provide examples of how to use the GHC API.
I really don't think you want to do much work on the front-end as that will just need to be duplicated for each front-end. That was the whole point of building Scion in the first place. I understand, of course, that Scion is not useful enough at this time.
Well, I currently don't have much time to work on Scion, but the plan is as follows:
- Scion becomes a multi-process architecture. It has to be since it's not safe to run multiple GHC sessions inside the same process. Even if that were possible, you wouldn't be able to, say, have a profiling compiler and a release compiler in the same process due to how static flags work. Separate processes have the additional advantage that you can kill them if they use too much memory (e.g., because you can't unload loaded interfaces).
- Scion will be based on Shake, and GHC will mostly be used in one-shot mode (i.e., not --make). This makes it easier to handle preprocessed files. It also allows us to generate and update meta-information on demand: instead of parsing and typechecking a file and then caching the result for the current file, Scion will simply generate meta information whenever it (re-)compiles a source file and write that meta information to a file (a sketch of this idea follows after this list). Querying or caching that meta information is then completely orthogonal to generating it. The most basic meta information would be a type-annotated version of the compiled AST (possibly plus warnings and errors from the last time it was compiled). Any other meta information can then be generated from that.
- The GHCi debugger probably needs to be treated specially. There should also be automatic detection of files that aren't supported by the bytecode compiler (e.g., those using UnboxedTuples), forcing compilation to machine code for those.
- The front-end protocol should be specified somewhere. I'm thinking about using protobuf specifications and then generating custom formats from them (e.g., JSON, Lisp S-expressions, XML?). If the front end supports protocol buffers, it can use those directly and be fast. That also means that all serialisation code can be auto-generated.
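To illustrate the meta-information point above, here is a minimal sketch of what such a Shake rule could look like. The file layout, the .meta.json name and the plain ghc invocation are assumptions made for the sketch, not a committed design:

    import Development.Shake
    import Development.Shake.FilePath

    main :: IO ()
    main = shakeArgs shakeOptions $ do
      -- Whenever Foo.meta.json is requested (or out of date), recompile
      -- Foo.hs in one-shot mode and regenerate its meta information.
      "//*.meta.json" %> \out -> do
        let src = dropExtension (dropExtension out) <.> "hs"
        need [src]
        -- One-shot compilation (no --make); a real implementation would go
        -- through the GHC API here and dump the type-annotated AST plus the
        -- warnings and errors from this compile.
        cmd_ "ghc" ["-c", src]
        writeFile' out "{}"  -- placeholder for the real meta information

Because Shake tracks the dependency on the source file, querying stays orthogonal to generating: a query just needs the .meta.json file and Shake decides whether a recompile is required.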
I won't have time to work on this before the ICFP deadline (and only very little afterwards), but Scion is not dead (just hibernating).
JP
On Thu, Jan 26, 2012 at 2:31 PM, Thomas Schilling wrote:
On 26 January 2012 09:24, Christopher Brown wrote:
Hi Thomas,
By static semantics I mean use and bind locations for every name in the AST.
Right, that's what the renamer does in GHC. The GHC AST is parameterised over the type of identifiers used. The three different identifier types are:
- RdrName: the name as it occurred in the source code. This is the output of the parser.
- Name: basically a RdrName + a unique ID, so you can distinguish two "x"s bound at different locations (this is what you want). This is the output of the renamer.
- Id: a Name + type information, and consequently the output of the type checker.
Diagram:
String --parser--> HsModule RdrName --renamer--> HsModule Name --type-checker--> HsBinds Id
Since you can't hook in-between renamer and type checker, it's perhaps more accurately depicted as:
String --parser--> HsModule RdrName --renamer+type-checker--> (HsModule Name, HsBinds Id)
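For reference, a minimal sketch of that pipeline through the GHC API (roughly the GHC 7-era API this thread is about; Example.hs / Example are placeholders, and libdir comes from the ghc-paths package):

    import GHC
    import GHC.Paths (libdir)

    main :: IO ()
    main = runGhc (Just libdir) $ do
      -- Set up the package environment; without this the renamer cannot
      -- resolve imported names.
      dflags <- getSessionDynFlags
      _ <- setSessionDynFlags dflags
      target <- guessTarget "Example.hs" Nothing
      setTargets [target]
      _ <- load LoadAllTargets
      modSum <- getModSummary (mkModuleName "Example")

      parsed <- parseModule modSum      -- ParsedModule, wrapping HsModule RdrName
      tc     <- typecheckModule parsed  -- renamer + type checker in one step
      let _renamed = tm_renamed_source tc      -- renamed AST (Name), if kept
          _binds   = tm_typechecked_source tc  -- LHsBinds Id
      return ()

Note that typecheckModule is exactly the combined renamer+type-checker arrow above: the renamed source and the typechecked binds both come out of the same call.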
The main reasons why it's tricky to use the GHC API are:
The first is that you need to set up the environment of packages etc.; e.g., the renamer needs to look up imported modules to correctly resolve imported names (or give an error). The second is that the current API is not designed for external use. As I mentioned, you cannot run the renamer and type checker independently, there are dozens of invariants, there are environments being updated by the various phases, etc. For example, if you want to generate code it's probably best to either generate HsModule RdrName or perhaps use the Template Haskell API (I've never tried that path).
/ Thomas
-- Push the envelope. Watch it bend.
-- JP Moresmau http://jpmoresmau.blogspot.com/