ghci 7.4.1 no longer loading .o files?

Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:

% ghc -c T.hs
% ghci
GHCi, version 7.4.1: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package filepath-1.3.0.0 ... linking ... done.
Prelude> :l T
[1 of 1] Compiling T ( T.hs, interpreted )
Ok, modules loaded: T.
*T>

Versus how this works with 7.0.3:

% ghc-7.0.3 -c T.hs
% ghci-7.0.3
GHCi, version 7.0.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Loading package filepath-1.2.0.0 ... linking ... done.
Prelude> :l T
Ok, modules loaded: T.
Prelude T>

It makes a big difference when ghci decides to go interpret 100 files vs just sucking in the .o files. This is OS X, 10.6.8. Anyone else see this?

On Mon, Feb 20, 2012 at 01:46, Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
I thought this was deliberate because the debugger won't work with object files?

--
brandon s allbery                                      allbery.b@gmail.com
wandering unix systems administrator (available)     (412) 475-9364 vm/sms

On Sun, Feb 19, 2012 at 10:52 PM, Brandon Allbery wrote:
On Mon, Feb 20, 2012 at 01:46, Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
I thought this was deliberate because the debugger won't work with object files?
Oh I hope not. I almost never use the debugger, and it's so much slower to re-interpret all those modules. Actually, the few times I've tried the debugger it doesn't seem to like to break where I want it to, even in interpreted files, so I've given up whenever I tried it.

Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
Brandon Allbery wrote:
I thought this was deliberate because the debugger won't work with object files?
Oh I hope not. I almost never use the debugger, and it's so much slower to re-interpret all those modules.
I am surprised about this complaint. I have never noticed any significant delay for re-interpreting, even when working on projects that are quite large, with tens of thousands of LOC.

I have always found the behavior of using .o files by default surprising and annoying. The "i" in GHCi stands for "interactive". I expect work in GHCi to be as interactive as possible in every way, including having access to the debugger. I expect performance to be similar to the usual performance of interpreter shells in any language; I don't mind if it doesn't match the speed of compiled Haskell.

It's nice if there is a way for experts to load .o files in GHCi, e.g., for the rare case where the performance difference for some specific module is so great that you can't work effectively interactively in some other module that imports it. There could be something to set in .ghci for people who do like that behavior all the time, perhaps. But it should not be the default.

GHCi is headed in that direction in many ways, and I think that's great. I don't think more flags should be "excluded from the fingerprint" by default if that would detract in any way from the interactive experience. In particular, it is especially important for -XNoMonomorphismRestriction to be the default in GHCi.

See also these tickets:

http://hackage.haskell.org/trac/ghc/ticket/3217
http://hackage.haskell.org/trac/ghc/ticket/3202

Thanks,
Yitz
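(For reference, GHCi does already expose this choice per-session via the -fobject-code and -fbyte-code flags, so the "expert" escape hatch exists today. A minimal sketch, with a hypothetical module name:)

Prelude> :set -fobject-code
Prelude> :load BigModule
Prelude> :set -fbyte-code
Prelude> :reload

The first :load compiles and links object code for BigModule; switching back to -fbyte-code and reloading returns to the interpreter, where the debugger works.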

On Mon, Feb 27, 2012 at 9:56 AM, Yitzchak Gale wrote:
Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
Brandon Allbery wrote:
I thought this was deliberate because the debugger won't work with object files?
Oh I hope not. I almost never use the debugger, and it's so much slower to re-interpret all those modules.
I am surprised about this complaint. I have never noticed any significant delay for re-interpreting, even when working on projects that are quite large, with tens of thousands of LOC.
Really really? I'm even more surprised! Maybe you have a really fast computer? I just tested a module moderately high in the app, which drags in 134 modules (out of 300 or so). When it re-interprets them all, it takes around 16 seconds. When it can load them all from .o files, it's around 1 second.

Usually what happens is I'll be testing, and some test will fail. If I edit some module and reload, only the dependent modules get reloaded, which is usually ok. But if I switch to another module with :l instead of :r, or if there was a particularly bad syntax error, it will want to reload everything. There's a big difference between waiting under a second to try a change or experiment with a function and waiting 16 seconds! It doesn't feel interactive anymore. This is on a new MacBook Pro with an SSD, which is a relatively high-end laptop.

The other thing is that I use hint to insert a REPL into the app. This may be a result of me using hint wrong, or of how hint uses the GHC API, but every expression apparently does the equivalent of a complete reload. It's ok when each expression takes about 1s, but not when they all take 16s each! I need to do some research here to figure out if I can implement something more ghci-like, specifically :reload.
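(For illustration, here is roughly what such an embedded REPL looks like with hint's documented API; the module name App.Main is hypothetical, and the point is that each runInterpreter call re-does a full load, which is the cost described above:)

import Control.Monad (forever)
import Language.Haskell.Interpreter
  (runInterpreter, loadModules, setTopLevelModules, eval)

-- A naive embedded REPL: every expression pays for a complete load of
-- the module graph.  A ghci-style :reload would keep one interpreter
-- session alive and reload only what changed.
naiveRepl :: IO ()
naiveRepl = forever $ do
  expr <- getLine
  result <- runInterpreter $ do
    loadModules ["App.Main"]         -- hypothetical root module
    setTopLevelModules ["App.Main"]
    eval expr                        -- render the expression's value
  either print putStrLn result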
I have always found the behavior of using .o files by default surprising and annoying. The "i" in GHCi stands for "interactive". I expect work in GHCi to be as interactive as possible in every way, including having access to the debugger. I expect performance to be similar to the usual performance of interpreter shells in any language; I don't mind if it doesn't match the speed of compiled Haskell.
Oh, I don't care so much about the speed of the code being executed, it's how long it takes that code to load into ghci.
GHCi is headed in that direction in many ways, and I think that's great. I don't think more flags should be "excluded from the fingerprint" by default if that would detract in any way from the interactive experience. In particular, it is especially important for -XNoMonomorphismRestriction to be the default in GHCi.
Yeah, I used to have that on for ghci, but in 7.4.1 it makes it not load the .o files. I agree it's the right choice for ghci, but to me it's not worth the no-longer-interactive caveat it comes with.

One other point about the fingerprint thing is that it also affects --make. That can be the difference between a 1m build and a 15m build. I'm insulated from this because I don't use --make anymore, but it's still very convenient for pure Haskell projects. Not to mention cabal uses it. Of course, getting mysterious link errors due to inconsistent flags can waste much more than 15m of debugging time, so we should definitely avoid those. But I think the current settings are too conservative.
See also these tickets:
http://hackage.haskell.org/trac/ghc/ticket/3217 http://hackage.haskell.org/trac/ghc/ticket/3202
Ya, I agree with both of those tickets, but I think they're orthogonal.

On Monday 27 February 2012, 18:56:47, Yitzchak Gale wrote:
It's nice if there is a way for experts to load .o files in GHCi, e.g., for the rare case where the performance difference for some specific module is so great that you can't work effectively interactively in some other module that imports it.
Is that so rare? For me it's pretty standard that the core modules _have_ to be loaded as object files; interpreting them would make things orders of magnitude slower (100× to 1000×), unusably slow. So in my opinion it's absolutely essential that modules can be loaded as object files.
There could be something to set in .ghci for people who do like that behavior all the time, perhaps.
And that too, if it's no longer the default.
But it should not be the default.
But I could live well with it not being the default.

On 02/20/2012 10:46 AM, Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
I don't *know*, but could this have anything to do with this?

http://hackage.haskell.org/trac/ghc/ticket/5878

Eugene

On Mon, Feb 20, 2012 at 1:14 AM, Eugene Crosser wrote:
On 02/20/2012 10:46 AM, Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
I don't *know* but could this have anything to do with this?
Indeed it was. I initially thought it wasn't, because I wasn't using flags for either, but then I remembered that ghci also picks up flags from ~/.ghci. Turns out I was using -fno-monomorphism-restriction because that's convenient for ghci, but not compiling with that.

I guess in the case where an extension changes the meaning of existing code it should be included in the fingerprint and make the .o not load. But my impression is that most ExtensionFlags let you compile code that wouldn't compile without the flag. So shouldn't it be safe to exclude them from the fingerprint?

Either way, it's a bit confusing when .ghci is slipping in flags that are handy for testing, because there's nothing that tells you *why* ghci won't load a particular .o file.
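(Concretely, the mismatch comes from something like this sitting in ~/.ghci:)

:set -fno-monomorphism-restriction

The .o files were built by a plain ghc -c without that flag, so under the new flag fingerprint from ticket #5878 they no longer match what ghci was started with, and it silently falls back to interpreting.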

On Mon, Feb 20, 2012 at 8:33 PM, Evan Laforge wrote:
On Mon, Feb 20, 2012 at 1:14 AM, Eugene Crosser wrote:
On 02/20/2012 10:46 AM, Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
I don't *know* but could this have anything to do with this?
Indeed it was. I initially thought it wasn't, because I wasn't using flags for either, but then I remembered that ghci also picks up flags from ~/.ghci. Turns out I was using -fno-monomorphism-restriction because that's convenient for ghci, but not compiling with that.
I guess in the case where an extension changes the meaning of existing code it should be included in the fingerprint and make the .o not load. But my impression is that most ExtensionFlags let you compile code that wouldn't compile without the flag. So shouldn't it be safe to exclude them from the fingerprint?
Either way, it's a bit confusing when .ghci is slipping in flags that are handy for testing, because there's nothing that tells you *why* ghci won't load a particular .o file.
After some fiddling, I think that -osuf should probably be omitted from the fingerprint. I use ghc -c -o x/y/Z.hs.o. Since I set the output directly, I don't use -osuf. But since ghci needs to be able to find the .o files, I need to pass it -osuf. The result is that I need to pass ghc -osuf when compiling to get ghci to load the .o files, even though it doesn't make any difference to ghc -c, which is a somewhat confusing requirement.

In fact, since -osuf as well as the -outputdir flags affect the location of the output files, I'd think they wouldn't need to be in the fingerprint either. They affect the location of the files, not the contents. If you found the files, it means you already figured out what you needed to figure out; it shouldn't matter *how* you found the files.

And doesn't the same go for -i? Isn't it valid to start ghci from a different directory, where it should work as long as it's able to find the files to load?
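(A hypothetical session showing the requirement, with made-up paths:)

% ghc -c -o obj/A.hs.o A.hs
% ghci -odir obj -osuf .hs.o
Prelude> :l A
[1 of 1] Compiling A ( A.hs, interpreted )
% ghc -c -odir obj -osuf .hs.o A.hs
% ghci -odir obj -osuf .hs.o
Prelude> :l A
Ok, modules loaded: A.

Both compilations produce the same obj/A.hs.o, but only the second records -osuf in the fingerprint, so only then does ghci accept the object file.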

Indeed it was. I initially thought it wasn't, because I wasn't using flags for either, but then I remembered that ghci also picks up flags from ~/.ghci. Turns out I was using -fno-monomorphism-restriction because that's convenient for ghci, but not compiling with that.
I guess in the case where an extension changes the meaning of existing code it should be included in the fingerprint and make the .o not load. But my impression is that most ExtensionFlags let you compile code that wouldn't compile without the flag. So shouldn't it be safe to exclude them from the fingerprint?
Either way, it's a bit confusing when .ghci is slipping in flags that are handy for testing, because there's nothing that tells you *why* ghci won't load a particular .o file.
After some fiddling, I think that -osuf should probably be omitted from the fingerprint. I use ghc -c -o x/y/Z.hs.o. Since I set the output directly, I don't use -osuf. But since ghci needs to be able to find the .o files, I need to pass it -osuf. The result is that I need to pass ghc -osuf when compiling to get ghci to load the .o files, even though it doesn't make any difference to ghc -c, which is a somewhat confusing requirement.
In fact, since -osuf as well as the -outputdir flags affect the location of the output files, I'd think they wouldn't need to be in the fingerprint either. They affect the location of the files, not the contents. If you found the files it means you already figured out what you needed to figure out, it shouldn't matter *how* you found the files.
And doesn't the same go for -i? Isn't it valid to start ghci from a different directory and it should work as long as it's able to find the files to load?
Further updates: this has continued to cause problems for me, and now I'm wondering if the CPP flags such as -D shouldn't be omitted from the fingerprint too. Here's the rationale:

I use CPP in a few places to enable or disable some expensive features. My build system knows which files depend on which defines, and hence which files to rebuild. However, ghci now has no way of loading all the .o files, since the ones that don't depend on the -D flag were probably not compiled with it, and those that do were. This also plays havoc with the 'hint' library, which is a wrapper around the GHC API. I can't get it to load any .o files, and it's hard to debug because it doesn't tell you why it's not loading them.

In addition, ghc --make used to figure out which files depended on the changed CPP flags and recompile only those. Now it unconditionally recompiles everything. I always assumed that was because GHC ran CPP on the files before the recompilation checker.

If that's the case, do the CPP flags need to be included in the fingerprint at all? It seems like they're already taken into account by the time the fingerprints are calculated. I reviewed http://hackage.haskell.org/trac/ghc/ticket/437 and I noticed there was some question about which flags should be included. Including the language flags and -main-is since that was the original motivation (but only for the module it applies to, of course) makes sense, but I feel like the rest should be omitted.

On 27/02/2012 05:08, Evan Laforge wrote:
Indeed it was. I initially thought it wasn't, because I wasn't using flags for either, but then I remembered that ghci also picks up flags from ~/.ghci. Turns out I was using -fno-monomorphism-restriction because that's convenient for ghci, but not compiling with that.
I guess in the case where an extension changes the meaning of existing code it should be included in the fingerprint and make the .o not load. But my impression is that most ExtensionFlags let you compile code that wouldn't compile without the flag. So shouldn't it be safe to exclude them from the fingerprint?
Either way, it's a bit confusing when .ghci is slipping in flags that are handy for testing, because there's nothing that tells you *why* ghci won't load a particular .o file.
After some fiddling, I think that -osuf should probably be omitted from the fingerprint. I use ghc -c -o x/y/Z.hs.o. Since I set the output directly, I don't use -osuf. But since ghci needs to be able to find the .o files, I need to pass it -osuf. The result is that I need to pass ghc -osuf when compiling to get ghci to load the .o files, even though it doesn't make any difference to ghc -c, which is a somewhat confusing requirement.
In fact, since -osuf as well as the -outputdir flags affect the location of the output files, I'd think they wouldn't need to be in the fingerprint either. They affect the location of the files, not the contents. If you found the files it means you already figured out what you needed to figure out, it shouldn't matter *how* you found the files.
And doesn't the same go for -i? Isn't it valid to start ghci from a different directory and it should work as long as it's able to find the files to load?
Further updates: this has continued to cause problems for me, and now I'm wondering if the CPP flags such as -D shouldn't be omitted from the fingerprint too. Here's the rationale:
I use CPP in a few places to enable or disable some expensive features. My build system knows which files depend on which defines and hence which files to rebuild. However, ghci now has no way of loading all the .o files, since the ones that don't depend on the -D flag were probably not compiled with it and those that do were. This also plays havoc with the 'hint' library, which is a wrapper around the GHC API. I can't get it to load any .o files and it's hard to debug because it doesn't tell you why it's not loading them.
In addition, ghc --make used to figure out which files depended on the changed CPP flags and recompile only those. Now it unconditionally recompiles everything. I always assumed it was because GHC ran CPP on the files before the recompilation checker.
If that's the case, do the CPP flags need to be included in the fingerprint at all? It seems like they're already taken into account by the time the fingerprints are calculated. I reviewed http://hackage.haskell.org/trac/ghc/ticket/437 and I noticed there was some question about which flags should be included. Including the language flags and -main-is since that was the original motivation (but only for the module it applies to, of course) makes sense, but I feel like the rest should be omitted.
I don't see how we could avoid including -D, since it might really affect the source of the module that GHC eventually sees. We've never taken -D into account before, and that was incorrect. I can't explain the behaviour you say you saw with older GHCs, unless your CPP flags only affected the imports of the module.

Well, one solution would be to take the hash of the source file after preprocessing. That would be accurate and would automatically take into account -D and -I in a robust way. It could also cause too much recompilation, if for example a preprocessor injected some funny comments or strings containing the date/time or detailed version numbers of components (like the gcc version). So for now I'm going to continue to take -D into account.

Cheers, Simon

On Tue, Feb 28, 2012 at 1:53 AM, Simon Marlow wrote:
I don't see how we could avoid including -D, since it might really affect the source of the module that GHC eventually sees. We've never taken -D into account before, and that was incorrect. I can't explain the behaviour you say you saw with older GHCs, unless your CPP flags only affected the imports of the module.
In fact, that's what I do. I put system specific stuff or expensive stuff into a module and then do

#ifdef EXPENSIVE_FEATURE
import qualified ExpensiveFeature
#else
import qualified StubbedOutFeature as ExpensiveFeature
#endif

I think this is a pretty common strategy. I know it's common for os-specific stuff, e.g. filepath does this. Although obviously for OS stuff we're not interested in saving recompilation :)
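(For completeness, a sketch of the two sides of that pattern; the module names come from the example above, but the exported function is made up. The stub just has to export the same names with the same types:)

-- ExpensiveFeature.hs: the real implementation.
module ExpensiveFeature (process) where

process :: String -> String
process = expensiveTransform
  where expensiveTransform = reverse  -- stand-in for the costly code

-- StubbedOutFeature.hs: same interface, trivial behaviour, so that
-- importers compile unchanged when the feature is switched off.
module StubbedOutFeature (process) where

process :: String -> String
process = id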
Well, one solution would be to take the hash of the source file after preprocessing. That would be accurate and would automatically take into account -D and -I in a robust way. It could also cause too much recompilation, if for example a preprocessor injected some funny comments or strings containing the date/time or detailed version numbers of components (like the gcc version).
By "take the hash of the source file" do you mean the hash of the textual contents, or the usual hash of the interface etc? I assumed it was the latter, i.e. that the normal hash was taken after preprocessing. But suppose it's the former, I still think it's better than unconditional recompilation (which is what always including -D in the hash does, right?). Unconditionally including -D in the hash either makes it *always* compile too much--and likely drastically too much, if you have one module out of 300 that switches out depending on a compile time flag, you'll still recompile all 300 when you change the flag. And there's nothing you can really do about it if you're using --make. If you try to get around that by using a build system that knows which files it has to recompile, then you get in a situation where the files have been compiled with different flags, and now ghci can't cope since it can't switch flags while loading. If your preprocessor does something like put the date in... well, firstly I think that's much less common than switching out module imports, since for the latter as far as I know CPP is the only way to do it, while for dates or version numbers you'd be better off with a config file anyway. And it's still correct, right? You changed your gcc version or date or whatever, if you want a module to have the build date then of course you have to rebuild the module every time---you got exactly what you asked for. Even if for some reason you have a preprocessor that nondeterministically alters comments, taking the interface hash after preprocessing would handle that. And come to think of it, these are CPP flags not some arbitrary pgmF... can CPP even do something like insert the current date without also changing its -D flags?

On 02/03/2012 04:21, Evan Laforge wrote:
On Tue, Feb 28, 2012 at 1:53 AM, Simon Marlow wrote:
I don't see how we could avoid including -D, since it might really affect the source of the module that GHC eventually sees. We've never taken -D into account before, and that was incorrect. I can't explain the behaviour you say you saw with older GHCs, unless your CPP flags only affected the imports of the module.
In fact, that's what I do. I put system specific stuff or expensive stuff into a module and then do
#ifdef EXPENSIVE_FEATURE
import qualified ExpensiveFeature
#else
import qualified StubbedOutFeature as ExpensiveFeature
#endif
I think this is a pretty common strategy. I know it's common for os-specific stuff, e.g. filepath does this. Although obviously for OS stuff we're not interested in saving recompilation :)
Well, one solution would be to take the hash of the source file after preprocessing. That would be accurate and would automatically take into account -D and -I in a robust way. It could also cause too much recompilation, if for example a preprocessor injected some funny comments or strings containing the date/time or detailed version numbers of components (like the gcc version).
By "take the hash of the source file" do you mean the hash of the textual contents, or the usual hash of the interface etc? I assumed it was the latter, i.e. that the normal hash was taken after preprocessing.
But suppose it's the former; I still think it's better than unconditional recompilation (which is what always including -D in the hash does, right?). Unconditionally including -D in the hash makes it *always* compile too much, and likely drastically too much: if you have one module out of 300 that switches out depending on a compile time flag, you'll still recompile all 300 when you change the flag. And there's nothing you can really do about it if you're using --make.
There is a way around it: create a .h file containing "#define MY_SETTING", and have the Haskell code #include the .h file. The recompilation checker does track .h files:

http://hackage.haskell.org/trac/ghc/ticket/3589

When you want to change the setting, just modify the .h file. Make sure you don't #include the file in source code that doesn't depend on it.

Cheers, Simon
If you try to get around that by using a build system that knows which files it has to recompile, then you get into a situation where the files have been compiled with different flags, and now ghci can't cope, since it can't switch flags while loading.
If your preprocessor does something like put the date in... well, firstly I think that's much less common than switching out module imports, since for the latter, as far as I know, CPP is the only way to do it, while for dates or version numbers you'd be better off with a config file anyway. And it's still correct, right? You changed your gcc version or date or whatever; if you want a module to have the build date then of course you have to rebuild the module every time: you got exactly what you asked for. Even if for some reason you have a preprocessor that nondeterministically alters comments, taking the interface hash after preprocessing would handle that. And come to think of it, these are CPP flags, not some arbitrary pgmF... can CPP even do something like insert the current date without also changing its -D flags?
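(A minimal sketch of the .h-file workaround Simon describes above, following the module sketch earlier in the thread; file and module names are hypothetical, and config.h is assumed to contain either "#define EXPENSIVE_FEATURE 1" or nothing:)

{-# LANGUAGE CPP #-}
module Feature (run) where

#include "config.h"

-- The import switch now depends on the tracked config.h rather than on
-- a -D flag, so only modules that #include it are recompiled on a change.
#ifdef EXPENSIVE_FEATURE
import qualified ExpensiveFeature as Impl
#else
import qualified StubbedOutFeature as Impl
#endif

run :: String -> String
run = Impl.process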

There is a way around it: create a .h file containing "#define MY_SETTING", and have the Haskell code #include the .h file. The recompilation checker does track .h files:
http://hackage.haskell.org/trac/ghc/ticket/3589
When you want to change the setting, just modify the .h file. Make sure you don't #include the file in source code that doesn't depend on it.
Ahh, I do believe that would work. Actually, I'm not using --make, but the build system I am using (shake) can easily track those dependencies. It would fix the inconsistent-flags problem because now I'm not passing any -D flags at all.

It's more awkward, though: I'm using make flags or env vars to control the defines, so I would have to either change to editing a config.h file, or have the build system rewrite config.h on each run, making sure to preserve the timestamp if it hasn't changed. But that's not really all that bad, and you could argue config.h is more common practice than passing -D, probably because it already cooperates with file-based make systems. I'll give it a try, thanks!
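(The timestamp-preserving part is straightforward in shake. A sketch, assuming a recent shake and that the setting comes from an environment variable; it relies on writeFileChanged, which rewrites the file only when its contents actually differ:)

import Development.Shake

-- Regenerate config.h from an environment variable on every build, but
-- leave the file (and its timestamp) untouched when nothing changed,
-- so downstream recompilation checks see no difference.
main :: IO ()
main = shakeArgs shakeOptions $ do
  want ["config.h"]
  "config.h" %> \out -> do
    feature <- getEnv "EXPENSIVE_FEATURE"  -- tracked: a change reruns the rule
    writeFileChanged out $ case feature of
      Just _  -> "#define EXPENSIVE_FEATURE 1\n"
      Nothing -> "/* EXPENSIVE_FEATURE disabled */\n"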

On 21/02/2012 04:57, Evan Laforge wrote:
On Mon, Feb 20, 2012 at 8:33 PM, Evan Laforge wrote:
On Mon, Feb 20, 2012 at 1:14 AM, Eugene Crosser wrote:
On 02/20/2012 10:46 AM, Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
I don't *know* but could this have anything to do with this?
Indeed it was. I initially thought it wasn't, because I wasn't using flags for either, but then I remembered that ghci also picks up flags from ~/.ghci. Turns out I was using -fno-monomorphism-restriction because that's convenient for ghci, but not compiling with that.
I guess in the case where an extension changes the meaning of existing code it should be included in the fingerprint and make the .o not load. But my impression is that most ExtensionFlags let you compile code that wouldn't compile without the flag. So shouldn't it be safe to exclude them from the fingerprint?
Either way, it's a bit confusing when .ghci is slipping in flags that are handy for testing, because there's nothing that tells you *why* ghci won't load a particular .o file.
After some fiddling, I think that -osuf should probably be omitted from the fingerprint. I use ghc -c -o x/y/Z.hs.o. Since I set the output directly, I don't use -osuf. But since ghci needs to be able to find the .o files, I need to pass it -osuf. The result is that I need to pass ghc -osuf when compiling to get ghci to load the .o files, even though it doesn't make any difference to ghc -c, which is a somewhat confusing requirement.
In fact, since -osuf as well as the -outputdir flags affect the location of the output files, I'd think they wouldn't need to be in the fingerprint either. They affect the location of the files, not the contents. If you found the files it means you already figured out what you needed to figure out, it shouldn't matter *how* you found the files.
And doesn't the same go for -i? Isn't it valid to start ghci from a different directory and it should work as long as it's able to find the files to load?
I agree - I'll omit all these flags from the recompilation check in 7.4.2. Cheers, Simon

On 21/02/2012 04:33, Evan Laforge wrote:
On Mon, Feb 20, 2012 at 1:14 AM, Eugene Crosser wrote:
On 02/20/2012 10:46 AM, Evan Laforge wrote:
Is there something that changed in 7.4.1 that would cause it to decide to interpret .hs files instead of loading their .o files? E.g.:
I don't *know* but could this have anything to do with this?
Indeed it was. I initially thought it wasn't, because I wasn't using flags for either, but then I remembered that ghci also picks up flags from ~/.ghci. Turns out I was using -fno-monomorphism-restriction because that's convenient for ghci, but not compiling with that.
I guess in the case where an extension changes the meaning of existing code it should be included in the fingerprint and make the .o not load. But my impression is that most ExtensionFlags let you compile code that wouldn't compile without the flag. So shouldn't it be safe to exclude them from the fingerprint?
Either way, it's a bit confusing when .ghci is slipping in flags that are handy for testing, because there's nothing that tells you *why* ghci won't load a particular .o file.
I just committed a fix for this:

http://hackage.haskell.org/trac/ghc/ticket/3217#comment:28

What do people think about getting this into 7.4.2? Strictly speaking it's more than a bug fix, because it adds a new GHCi command (:seti) and some extra functions to the GHC API, although I believe it has no effect on existing usage of GHCi or the GHC API.

The docs explicitly mention -XNoMonomorphismRestriction. The way to work around the problem you had is to use

:seti -XNoMonomorphismRestriction

in your ~/.ghci, instead of :set. One disadvantage of this is that your .ghci won't work with older versions of GHC. (Does anyone have some .ghci magic for doing conditional compilation?)

Furthermore, I'm shortly going to push a patch that will add an indication of why modules are being recompiled. Here's the log message:

commit 27d7d930ff8741f980245da1b895ceaa5294e257 (HEAD, refs/heads/master)
Author: Simon Marlow

I just committed a fix for this:
http://hackage.haskell.org/trac/ghc/ticket/3217#comment:28
What do people think about getting this into 7.4.2? Strictly speaking it's more than a bug fix, because it adds a new GHCi command (:seti) and some extra functions to the GHC API, although I believe it has no effect on existing usage of GHCi or the GHC API.
Well, I'm all for it :) You could stretch it into calling it a bug fix for a regression (it's maybe not technically a regression, but it pushed me back to 7.0.3... well, that and the -D thing...).
[1 of 1] Compiling Test2 ( Test2.hs, Test2.o ) [flags changed]
Very cool, I love it!

On 1 Mar 2012 at 14:15, Simon Marlow wrote:
does anyone have some .ghci magic for doing conditional compilation?
Do you mean something like the attached?

HTH,
Iain.
--
ia@stryx.demon.co.uk

---- File information -----------
File: .ghci
Date: 5 Mar 2011, 1:32
Size: 358 bytes.
Type: Unknown
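(The attachment itself isn't preserved in the archive. For illustration only, one possible shape for such a .ghci trick, using :def and base's System.Info; the macro name is made up and this is not the attached file:)

-- define a macro that runs its argument only on GHC >= 7.4
:def ifGhc74 (\cmd -> return (if System.Info.compilerVersion >= Data.Version.Version [7,4] [] then cmd else ""))
-- :seti is then only issued where it exists
:ifGhc74 :seti -XNoMonomorphismRestriction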

On 02/03/12 23:03, Iain Alexander wrote:
On 1 Mar 2012 at 14:15, Simon Marlow wrote:
does anyone have some .ghci magic for doing conditional compilation?
Do you mean something like the attached?
Yes! Cheers, Simon
participants (7)

- Brandon Allbery
- Daniel Fischer
- Eugene Crosser
- Evan Laforge
- Iain Alexander
- Simon Marlow
- Yitzchak Gale