
I don't suppose this will surprise anybody greatly, but... Apparently if you write a Haskell module that is 400 KB in size and defines a single CAF consisting of a 45,000-element [String], GHCi panics when attempting to load it interpreted, and hits a stack overflow attempting to load it compiled. GHC also takes forever to compile it in the first place, eventually spitting out a 5 MB interface file followed by a 16 MB object file. And attempting to compile a trivial module against it again causes a stack overflow. Presumably the designers of GHC just didn't expect anybody to try anything this weird? ;-)

I was hoping that doing things this way round would be *more efficient*. But this is apparently not the case at all, so I'll just go back to reading the file at runtime instead... [Presumably if I was desperate I could convert the data into some kind of preinitialised C structure and manually link it in - if I was that determined.]
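For what it's worth, the runtime alternative is pretty painless anyway. A minimal sketch, assuming the data lives one entry per line in a file (the name "words.txt" here is just a stand-in for whatever the actual data file is):

```haskell
module Main where

-- Load the list at runtime instead of baking it into a compiled module.
-- readFile is lazy, so the file is only read as the list is consumed,
-- and GHC never has to compile a 45,000-element literal.
main :: IO ()
main = do
  contents <- readFile "words.txt"
  let entries = lines contents :: [String]
  print (length entries)
```

Since the whole thing is behind ordinary lazy I/O, the module that consumes the list stays tiny, and GHC only ever sees a few lines of code rather than a 400 KB literal.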