Status of "Improved LLVM backend"

Hi all,

I was wondering what’s the current status of the “Improved LLVM backend” project ( https://ghc.haskell.org/trac/ghc/wiki/ImprovedLLVMBackend ). The page mentions a few main problems, but some seem to have been fixed already:

1) Using/supporting only one version of LLVM. This has been done, AFAIK.

2) Prebuilt binaries to be shipped together with GHC. I can't find anything about this. Is there a ticket? Has there been any progress on this?

3) Adding support for split-objs. I found a ticket about it: https://ghc.haskell.org/trac/ghc/ticket/8300 which was closed as WONTFIX in favor of split-sections. So I guess this can also be considered done.

4) Figuring out which LLVM optimizations are useful. Again, I can't seem to find anything here. Has anyone looked at this? I only found one issue about it: https://ghc.haskell.org/trac/ghc/ticket/11295

The page also mentions that the generated IR could be improved in many cases, but it doesn't link to any tickets or discussions. Is there something I could read to better understand what the main problems are? The only thing I can recall is that proc-point splitting is likely to hurt LLVM's ability to optimize the code, but I only found a couple of email threads about this and couldn't find any follow-ups.

Thanks,
Michal

Hi,

I’m trying to implement a bitcode-producing llvm backend[1], which would potentially allow using a range of llvm versions with ghc. However, this is only tangentially relevant to the improved llvm backend, as Austin correctly pointed out[2], since there are other complications besides the fragility of the textual representation.

So this is mostly relevant only to the improved ir you mentioned. The bitcode code gen plugin right now mostly follows the textual ir generation, but tries to avoid the ubiquitous casting of symbols to i8*. The llvm gen turns cmm into ir; at that point, however, the word size has already been embedded, which means that the current textual llvm gen, as well as the bitcode llvm gen, has to figure out whether a relative access is a multiple of the word size in order to use llvm's getelementptr.

I don’t know if generating llvm from stg instead of cmm would be a better approach, which is what ghcjs and eta do, as far as I know.

Cheers,
moritz

—
[1]: https://github.com/ghc-proposals/ghc-proposals/pull/25
[2]: https://github.com/ghc-proposals/ghc-proposals/pull/25#issuecomment-26169718...
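[Editorial sketch] The word-size check described above could look roughly like this. This is a hedged toy simplification, not GHC's actual code; `wordSize` and `gepIndex` are invented names for illustration only:

```haskell
-- Hedged sketch of the idea above, NOT GHC's implementation: a
-- symbol-relative byte offset can only be expressed as a
-- getelementptr index if it is an exact multiple of the element
-- width; otherwise the code gen falls back to casting the symbol
-- to i8* and doing raw byte arithmetic.

wordSize :: Int
wordSize = 8  -- assuming a 64-bit target

-- Just a GEP index if the byte offset divides evenly,
-- Nothing if the i8* cast fallback would be needed.
gepIndex :: Int -> Maybe Int
gepIndex byteOff
  | r == 0    = Just q
  | otherwise = Nothing
  where (q, r) = byteOff `divMod` wordSize
```

In this sketch, an access at byte offset 16 would become GEP index 2, while offset 12 would force the byte-cast fallback.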
On Nov 27, 2016, at 3:14 AM, Michal Terepeta wrote:
> [snip]

Hi,
> [snip]
>
> So this is mostly relevant only to the improved ir you mentioned. The bitcode code gen plugin right now mostly follows the textual ir generation, but tries to avoid the ubiquitous casting of symbols to i8*. The llvm gen turns cmm into ir; at that point, however, the word size has already been embedded, which means that the current textual llvm gen, as well as the bitcode llvm gen, has to figure out whether a relative access is a multiple of the word size in order to use llvm's getelementptr.
That sounds interesting, do you know where I could find out more about this? (Both the current LLVM codegen and yours.)
> I don’t know if generating llvm from stg instead of cmm would be a better approach, which is what ghcjs and eta do, as far as I know.
Wouldn't a step from STG to LLVM be much harder? (LLVM IR is a pretty low-level representation compared to STG.) There are also a few passes at the Cmm level that seem necessary, e.g., `cmmLayoutStack`.

Cheers,
Michal

On Nov 27, 2016, at 10:17 PM, Michal Terepeta wrote:
> [snip]
> That sounds interesting, do you know where I could find out more about this? (Both the current LLVM codegen and yours.)
For the llvm code gen in ghc it’s usually the functions with the `_fast` suffix; see [1] and `genStore_fast` about 30 lines further down. My bitcode llvm gen follows that file [1] almost identically, as can be seen in [2]. However, the `_fast` path is currently disabled. An example of the ir generated by the current llvm backend and by the bitcode backend (textual ir, via llvm-dis) can be found in [3] and [4] respectively.
> > I don’t know if generating llvm from stg instead of cmm would be a better approach, which is what ghcjs and eta do, as far as I know.
>
> Wouldn't a step from STG to LLVM be much harder? (LLVM IR is a pretty low-level representation compared to STG.) There are also a few passes at the Cmm level that seem necessary, e.g., `cmmLayoutStack`.
There is certainly a trade-off between retaining more high-level information and having to lower it oneself. If I remember Luite correctly, he said ghcjs uses an intermediate format similar to cmm, just not cmm but something richer, which allows it to better target javascript. The question basically boils down to asking whether cmm is already too low-level for llvm; the embedding of word sizes is an example where I think cmm might be too low-level for llvm.

—
[1]: https://github.com/ghc/ghc/blob/master/compiler/llvmGen/LlvmCodeGen/CodeGen....
[2]: https://github.com/angerman/data-bitcode-plugin/blob/master/src/Data/BitCode...
[3]: https://gist.github.com/angerman/32ce9395e73cfea3348fcc7da108cd0a
[4]: https://gist.github.com/angerman/d87db1657aac4e06a0886801aaf44329
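[Editorial sketch] The word-size point above can be made concrete with a hedged toy model. Everything here (the types `CmmLikeAccess`, `StgLikeAccess`, and both functions) is invented for illustration and does not appear in GHC: an STG-like field index still carries the information getelementptr wants, while a Cmm-like byte offset has the word size multiplied in and must be divided back out.

```haskell
-- Toy model of the trade-off described above; all names are
-- invented for illustration and do not appear in GHC.
data CmmLikeAccess = LoadBytes Int  -- like I64[base + n]: n in bytes
  deriving (Eq, Show)
data StgLikeAccess = LoadField Int  -- closure field k: k in words
  deriving (Eq, Show)

-- Lowering an STG-like access to a Cmm-like one bakes the word
-- size into the offset...
lower :: Int -> StgLikeAccess -> CmmLikeAccess
lower wordSize (LoadField k) = LoadBytes (k * wordSize)

-- ...so a code gen working from the Cmm-like form must divide it
-- back out to emit a getelementptr index; a non-multiple offset
-- forces the i8* byte-cast fallback.
recover :: Int -> CmmLikeAccess -> Maybe StgLikeAccess
recover wordSize (LoadBytes n)
  | r == 0    = Just (LoadField q)
  | otherwise = Nothing
  where (q, r) = n `divMod` wordSize
```

In this model, `recover 8 (lower 8 (LoadField 3))` round-trips cleanly, but a stray byte offset like 20 on a 64-bit target cannot be expressed as a field index at all.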

On Mon, Nov 28, 2016 at 2:43 AM Moritz Angermann wrote:
> For the llvm code gen in ghc it’s usually the functions with the `_fast` suffix; see [1] and `genStore_fast` about 30 lines further down. My bitcode llvm gen follows that file [1] almost identically, as can be seen in [2]. However, the `_fast` path is currently disabled.
>
> An example of the ir generated by the current llvm backend and by the bitcode backend (textual ir, via llvm-dis) can be found in [3] and [4] respectively.
Cool, thanks a lot for the links!
> > I don’t know if generating llvm from stg instead of cmm would be a better approach, which is what ghcjs and eta do, as far as I know.
> >
> > Wouldn't a step from STG to LLVM be much harder? (LLVM IR is a pretty low-level representation compared to STG.) There are also a few passes at the Cmm level that seem necessary, e.g., `cmmLayoutStack`.
>
> There is certainly a trade-off between retaining more high-level information and having to lower it oneself. If I remember Luite correctly, he said ghcjs uses an intermediate format similar to cmm, just not cmm but something richer, which allows it to better target javascript. The question basically boils down to asking whether cmm is already too low-level for llvm; the embedding of word sizes is an example where I think cmm might be too low-level for llvm.
Ok, I see. This is quite interesting - I'm wondering if it makes sense to collect thoughts/ideas like that somewhere (e.g., a wiki page listing the issues with using the current Cmm for the LLVM backend, or just adding some comments in the code).

Thanks,
Michal

> [snip]
> Ok, I see. This is quite interesting - I'm wondering if it makes sense to collect thoughts/ideas like that somewhere (e.g., a wiki page listing the issues with using the current Cmm for the LLVM backend, or just adding some comments in the code).
That indeed is an interesting question, to which I don’t have a satisfying answer yet. I’m trying to note these kinds of findings down in the code as I go along porting the textual ir gen over to the bitcode ir gen. I would consider this a second iteration, though: first get the bitcode pipeline to work nicely (this includes having an option to make ghc emit bitcode instead of the assembled object code) and see where that takes us, then try to improve on that incrementally.

cheers,
moritz
participants (2)
- Michal Terepeta
- Moritz Angermann