
I'm resurrecting this 3-month-old thread as I have some more questions about cardinality analysis.

1. I'm still a bit confused about terminology. Demand analysis, strictness analysis, cardinality analysis - do these three terms mean exactly the same thing? If not, what are the differences?

2. The first pass of full laziness is followed by floating in. At that stage we have not yet run the demand analyser, and yet the code that does the floating-in checks whether a binder is one-shot (FloatIn.okToFloatInside, called from the AnnLam case of FloatIn.fiExpr). This suggests that cardinality analysis is done earlier (but when?), and that demand analysis is not the same thing as cardinality analysis. (There is a small example of what I mean in the P.S. below.)

3. Does the demand analyser perform any transformations? Or does it only annotate Core with demand information that can be used by subsequent passes?

4. The BasicTypes module defines:

       data OneShotInfo
         = NoOneShotInfo -- ^ No information
         | ProbOneShot   -- ^ The lambda is probably applied at most once
         | OneShotLam    -- ^ The lambda is applied at most once.

   Do I understand correctly that `NoOneShotInfo` really means no information, i.e. a binding annotated with it might in fact be one-shot? If so, do we have a means of saying that a binding is certainly not one-shot?

5. What is the purpose of the SetLevels.lvlMFE function?

Janek
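P.S. To make question 2 concrete, here is the kind of situation I have in mind (a sketch of my own, not taken from the GHC sources; 'expensive' is just a placeholder for some costly computation):

    expensive :: Int -> Int
    expensive n = n * 1000          -- placeholder for real work

    f :: Int -> Int -> Int
    f x = let t = expensive x       -- floating 't' inside the \y would redo
          in \y -> t + y            -- 'expensive x' on every application of
                                    -- the lambda, unless the \y is one-shot

My understanding is that float-in only wants to push 't' under the \y when the lambda is applied at most once, which is presumably what the one-shot check is for; the question is where that information comes from at that point in the pipeline.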
The wiki page just went live:
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Core2CorePipeline
It's not yet perfect but it should be a good start.
Roughly, a complete run of the simplifier means "run the simplifier repeatedly until nothing further happens". The iterations are the successive iterations of this loop. Currently there's a (rather arbitrary) limit of four such iterations before we give up and declare victory.
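Something like this, in pseudocode (just a sketch of the idea, not the actual code in SimplCore; the names are invented):

    -- Apply 'step' repeatedly until nothing changes or the iteration budget
    -- runs out; the budget plays the role of the arbitrary cap of four.
    runToFixpoint :: Eq a => Int -> (a -> a) -> a -> a
    runToFixpoint limit step x
      | limit <= 0 = x                            -- give up and declare victory
      | x' == x    = x                            -- nothing further happened
      | otherwise  = runToFixpoint (limit - 1) step x'
      where
        x' = step x

The real simplifier does not literally compare programs for equality; it counts the transformations it performs on each iteration and stops when none fire, but the shape of the loop is the same.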
A limit or a default value for that limit?
To Ilya:
If you grep for the "late_dmd_anal" option variable in the compiler/simplCore/SimplCore.lhs module, you'll see that it triggers a phase close to the end of getCoreToDo's tasks, which contains, in particular, the "CoreDoStrictness" pass. This is the "late" phase.
The paper says that the late pass is run to detect single-entry thunks, and that the reason it is run late in the pipeline is that if it were run earlier this information could be invalidated by later transformations. But in the source code I see that this late pass is followed by the simplifier, which can invalidate that information. Also, the documentation for -flate-dmd-anal says: "We found some opportunities for discovering strictness that were not visible earlier; and optimisations like -fspec-constr can create functions with unused arguments which are eliminated by late demand analysis." This says nothing about single-entry thunks. So, is the single-entry thunk optimisation performed by GHC?
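For reference, by a single-entry thunk I mean something like this (my own example, not one from the paper):

    g :: Int -> Int
    g x = let t = x * x              -- 't' is a thunk entered at most once
          in if x > 0 then t else 0  -- (only in the 'then' branch, and only once)

The win I would expect from knowing this is that the thunk for 't' never needs to be updated (overwritten with its value) after evaluation, since it cannot be entered a second time.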
Janek