
On Sat, Jan 18, 2014 at 4:56 PM, adam vogt wrote:
Check out https://hackage.haskell.org/package/th-lift. Also, there is a version of zeroTH here https://github.com/mgsloan/zeroth which works with haskell-src-exts < 1.14.
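
For anyone who hasn't used it, here's roughly the shape of the th-lift approach (just a sketch; the Table module, Entry type, and expensiveTable are made-up names for illustration, only deriveLift and lift come from th-lift / template-haskell):

    {-# LANGUAGE TemplateHaskell #-}
    -- Table.hs: the type and the expensive value live in their own
    -- module, since the stage restriction forbids splicing values
    -- defined in the same module as the splice.
    module Table where

    import Language.Haskell.TH.Lift (deriveLift)

    data Entry = Entry { key :: Int, val :: String }

    -- Generate a Lift instance so Entry values can be turned into
    -- Template Haskell expressions.
    deriveLift ''Entry

    expensiveTable :: [Entry]
    expensiveTable = [Entry i (show (i * i)) | i <- [1 .. 1000]]

    {-# LANGUAGE TemplateHaskell #-}
    -- Main.hs: the list is built when this module is compiled and
    -- spliced in as a literal expression.
    module Main where

    import Language.Haskell.TH.Syntax (lift)
    import Table (Entry, expensiveTable)

    table :: [Entry]
    table = $(lift expensiveTable)

    main :: IO ()
    main = print (length table)

The splice forces expensiveTable while Main is being compiled and expands to an ordinary list literal, so the work happens once at build time.
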
Thanks, I'll take a look. Though since I have my faster-but-uglier solution, at this point I'm only theoretically interested, and hoping to learn something about compilers and optimization :)
I'm not sure what benefit you'd get from a new mechanism (besides TH) to calculate things at compile time. Won't it have to solve the same problems TH already solves? How can those problems (generating Haskell code, the stage restriction) be solved without ending up with the same kind of complexity ("TH dependency gunk")?
Well, TH is much more powerful in that it can generate any expression at compile time. But in exchange, it slows down compilation a lot, introduces an order dependency in the source file, and causes complications for the build system (I don't remember exactly, but it came down to needing to find the .o files at compile time).

I would think, in the handwaviest kind of way, that the compiler could compile a CAF and then just evaluate it on the spot by following all the thunk pointers (similar to a deepseq), and then emit the raw data structure that comes out. Of course, that assumes there is such a thing as "raw" data, which is why I got sidetracked wondering about compile-time optimization in general. I expect it's not like C, where you would wind up with a nested bunch of structs you could write directly into the binary and then mmap into place when the binary is run. Even in C you'd need to go fix up pointers, at which point it sounds like a dynamic loader :)
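
For comparison, the closest thing to that today is still a TH splice along these lines (again just a sketch; primesBelow and the Primes module are made up, and the deepseq is only there to make the "follow the thunk pointers" step explicit, since lift traverses the whole structure anyway):

    {-# LANGUAGE TemplateHaskell #-}
    module Main where

    import Control.DeepSeq (deepseq)
    import Language.Haskell.TH.Syntax (lift)
    -- Made-up helper; it has to live in a separate module because of
    -- the stage restriction.
    import Primes (primesBelow)

    -- Force the CAF to normal form at compile time, then lift the
    -- fully evaluated value into a literal expression. The difference
    -- from the idea above is that the result comes back as Haskell
    -- syntax that GHC compiles again, rather than raw data written
    -- straight into the binary.
    table :: [Int]
    table = $(let xs = primesBelow 100000 in xs `deepseq` lift xs)

    main :: IO ()
    main = print (length table)
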