
I do not know Haskell. It looks to me as though there are several pieces of the mechanism:
1. There is, once the extensions are specified, a particular Type System, that is, a formal system with, on the syntactic side at least, assumptions, judgements, rules of inference, terms lying in some lambda calculus, etc.
That's right. The extensions get complex too, however, and can't necessarily be dismissed easily (not to imply you were doing so); consider RankNTypes, for example.
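To make the RankNTypes point concrete, here is a minimal sketch (the function name is mine, purely illustrative): the argument type has a `forall` to the left of an arrow, which plain Hindley-Milner inference cannot handle, so the extension genuinely enlarges the type system.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A rank-2 type: the caller must supply an argument that is
-- itself polymorphic, not a function at one fixed type.
applyToBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyToBoth f (n, b) = (f n, f b)

main :: IO ()
main = print (applyToBoth id (3, True))
```

Without the pragma and the explicit signature, GHC rejects this definition, which is one way the extension visibly changes inference.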
2. The Type Inference Subsystem, which, using some constraint solver, calculates the type to be assigned to the value of Haskell expressions.
Yes, that's right. For core Haskell this is typically Damas-Milner (let-bound) polymorphism.
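The "let-bound" qualifier is the crux of Damas-Milner: generalization happens at `let` bindings, so a let-bound identity function gets the scheme `forall a. a -> a` and may be used at several types, while a lambda-bound one stays monomorphic. A small illustration (names are mine):

```haskell
-- Damas-Milner generalizes at 'let': 'f' gets the scheme
-- forall a. a -> a and is used here at both Int and Bool.
letPoly :: (Int, Bool)
letPoly = let f x = x
          in (f 3, f True)

-- By contrast, a lambda-bound 'f' is monomorphic: without
-- RankNTypes, \f -> (f 3, f True) is rejected.

main :: IO ()
main = print letPoly
```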
3. The machine which does "reduction", perhaps "execution", on the value of Haskell expressions. This is, by your account, the STG machine.
Yes. Notably, graph reduction allows sharing, which is an important part of Haskell's semantics.
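Sharing can be observed with `Debug.Trace` (a standard debugging hack, not part of the semantics proper): a let-bound thunk is a single node in the graph, so it is evaluated at most once no matter how many times its value is demanded.

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  -- 'x' is one shared thunk; under graph reduction the trace
  -- message appears once even though 'x' is demanded twice.
  let x = trace "evaluating x" (2 + 3 :: Int)
  print (x + x)
```

Running this prints "evaluating x" once (to stderr), then `10`; a tree-reduction semantics without sharing would evaluate the thunk twice.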
There is a textual version of Haskell's Core. If it were executable, and the runtime were solid, simple, and clear in its design, I think we would have something close to a "formal semantics". We'd also require that the translation to STG code be very simple.
Yes, that's right. The translation to STG (or something like it, another core language) can be found in many books and articles, but others here have also given some good references for executable versions of Core. I'm still unsure whether the translation from Haskell to Core has been verified; I would suspect not, as I haven't heard of any such thing.

kris
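For a concrete sense of that translation, GHC will print the Core it produces for a module via its `-ddump-simpl` flag (with `-dsuppress-all` to reduce noise). A tiny example, with a rough sketch of the Core in comments; the exact output varies by GHC version and optimization level, and the file name is illustrative:

```haskell
-- Inspect the Core with:
--   ghc -ddump-simpl -dsuppress-all Double.hs
double :: Int -> Int
double x = x + x

-- Roughly, GHC prints Core along the lines of:
--   double = \ (x :: Int) -> + @Int $fNumInt x x
-- i.e. the (+) is made explicit about its type and the
-- Num Int dictionary it uses.

main :: IO ()
main = print (double 21)  -- prints 42
```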