
Norman Ramsey writes:

> On x86, GHC can translate 8-bit and 16-bit operations directly into the 8-bit and 16-bit machine instructions that the hardware supports. But there are other platforms on which the smallest unit of arithmetic may be 32 or even 64 bits. Is there a central module in GHC that can take care of rewriting 8-bit and 16-bit operations into 32-bit or 64-bit operations? Or is each back end on its own for this?
As Carter indicated, this is currently done on a per-backend basis. It could probably be consolidated, although we would want to make sure that in doing so we do not leave easy money on the table: it seems plausible to me that a backend may be able to generate better code than a naive lowering to wide arithmetic would produce.
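For concreteness, here is a minimal sketch of what a naive, backend-independent lowering of narrow addition to wide arithmetic might look like. This is not GHC code; `Width`, `Expr`, `lowerAdd`, and `eval` are all invented for illustration. The idea is simply: widen the operands, add at the machine's narrowest supported width, then mask the result back down so every consumer sees well-defined high bits.

```haskell
-- Toy expression language; none of these names exist in GHC.
module NaiveLowering where

import Data.Bits (shiftL, (.&.))

data Width = W8 | W16 | W32
  deriving (Eq, Show)

data Expr
  = Lit Integer
  | Add Width Expr Expr        -- addition performed at the given width
  | ZeroExt Width Width Expr   -- zero-extend from the first width to the second
  | Mask Width Expr            -- keep only the low bits of the given width
  deriving Show

bits :: Width -> Int
bits W8  = 8
bits W16 = 16
bits W32 = 32

-- Naive lowering for a target whose narrowest ALU operation is 32 bits:
-- widen the operands, add at W32, then mask the result back down so the
-- high bits are well defined for every consumer.
lowerAdd :: Width -> Expr -> Expr -> Expr
lowerAdd w x y
  | w == W32  = Add W32 x y
  | otherwise = Mask w (Add W32 (ZeroExt w W32 x) (ZeroExt w W32 y))

-- Reference interpreter, used only to check the lowering; Mask truncates.
eval :: Expr -> Integer
eval (Lit n)         = n
eval (Add _ x y)     = eval x + eval y
eval (ZeroExt _ _ x) = eval x
eval (Mask w x)      = eval x .&. ((1 `shiftL` bits w) - 1)
```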
The main opportunity here is to leave garbage in the high bits of a machine register, that is, to avoid sign extension or zero extension, which on most relevant platforms costs two instructions. The more context you have, the more likely it is that you can allow intermediate results to hold garbage. We built that context using a little type system, which would probably be easiest to apply at a high level: STG or even Core.

There may also be opportunities in letting the back end decide how to implement "sign extend" and "zero extend," although in my experience these are less likely to pay off. (In a quick trawl through the source tree, I didn't find a definition of `PrimOp`, so I don't know whether "sign extend" and "zero extend" are already available as primitives.)

Norman
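To make the "context" idea concrete, here is a continuation of the toy sketch above (again, everything here is invented, not GHC code): a small rewrite that pushes the knowledge that a consumer reads only the low bits down through an expression, deleting the extensions and masks that the naive lowering inserted. It relies on nothing deeper than the fact that addition's low w bits depend only on its operands' low w bits.

```haskell
-- demandLow w e produces an expression whose low (bits w) bits agree with
-- e's; higher bits may hold garbage.  If only the low bits of a result are
-- demanded (say, because it feeds an 8-bit store), zero-extensions and
-- masks that merely clean higher bits can be dropped.
demandLow :: Width -> Expr -> Expr
demandLow w (Mask w' x)
  | bits w <= bits w'   = demandLow w x    -- mask cleans bits nobody reads
demandLow w (ZeroExt from _ x)
  | bits w <= bits from = demandLow w x    -- extension cleans bits nobody reads
demandLow w (Add w' x y) = Add w' (demandLow w x) (demandLow w y)
demandLow _ e            = e

-- Example: an 8-bit add whose result is consumed at 8 bits needs neither
-- the operand extensions nor the final mask:
--
--   demandLow W8 (lowerAdd W8 (Lit 200) (Lit 100))
--     ==>  Add W32 (Lit 200) (Lit 100)
```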