Conceptually, the runtime evaluates (runIO# Main.main RealWorld#). Practically, GHC's implementation makes the sequencing machinery go away during code generation, so the runtime just pushes Main.main onto the stack and jumps into the STG machine to reduce it; there's your initial pattern match.
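A toy model of that startup, with the caveat that GHC's real machinery uses State#/unboxed tuples and primops; IO', runIO', and RealWorld here are illustrative stand-ins, not GHC's actual definitions:

```haskell
-- Conceptual sketch only: GHC's real IO is a newtype around
--   State# RealWorld -> (# State# RealWorld, a #)
-- Here a plain data type and an ordinary tuple stand in for the
-- unboxed machinery.

data RealWorld = RealWorld  -- zero-information stand-in for the world token

newtype IO' a = IO' (RealWorld -> (RealWorld, a))

-- What "runIO# Main.main RealWorld#" amounts to: feed the action the
-- initial token and pattern-match on the result pair -- that match is
-- the initial reduction that drives the whole program.
runIO' :: IO' a -> a
runIO' (IO' act) = case act RealWorld of
  (_, r) -> r
```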

I guess I wasn't clear enough about the state. Every IO action receives the "current state" and produces a "new state" (except that, in reality, there is no state to pass or update, since it has no runtime representation). A loop is a kind of fold: each iteration gets the "current state" and produces (thisResult, "new state"); that "new state" is passed into the next loop iteration, and the final result is the collection of thisResult-s together with the final "new state". Again, this is only conceptual, since the state vanishes during code generation, having served its purpose of ensuring everything happens in order.
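A minimal sketch of that fold, again with illustrative names (IO', loopIO') rather than anything GHC actually defines:

```haskell
-- The state token is threaded through each iteration in order; nothing
-- "in" the token is ever inspected -- its only job is to force sequencing.
data RealWorld = RealWorld

newtype IO' a = IO' (RealWorld -> (RealWorld, a))

-- Run an action for each element: each step gets the "current state",
-- yields (thisResult, "new state"), and that new state feeds the next step.
loopIO' :: (a -> IO' b) -> [a] -> IO' [b]
loopIO' f = go
  where
    go []       = IO' (\s -> (s, []))
    go (x : xs) = IO' (\s0 ->
      let IO' step = f x
          (s1, y)  = step s0   -- this iteration's "new state"
          IO' rest = go xs
          (s2, ys) = rest s1   -- threaded into the next iteration
      in (s2, y : ys))         -- collected thisResult-s + final state
```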

This is a bit hacky, since it relies on GHC never getting to see that nothing actually uses or updates the state; as long as it can't see that, it is forced to assume the state is updated and must be preserved. This is where bytestring's inlinePerformIO (better known as accursedUnutterable…) went wrong: because it inlined the whole thing, GHC could spot that the injected state (inlinePerformIO being an inlined unsafePerformIO) was fake and never used, and started lifting stuff out of loops, etc. — basically optimizing it as if it were pure code internally instead of IO, because it could see through IO's "purity mask".
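A conceptual sketch of that failure mode, using the same toy model; the names (IO', unsafeRun) are illustrative, not bytestring's actual code:

```haskell
-- Toy model of IO as a state-passing function.
data RealWorld = RealWorld

newtype IO' a = IO' (RealWorld -> (RealWorld, a))

-- Invent a world token out of thin air and throw the result token away.
-- While this stays opaque, GHC must assume the token matters; once it is
-- inlined, GHC can see the token is never really consumed, so two
-- syntactically equal calls look like the same pure expression and may be
-- shared (CSE) or floated out of loops -- exactly the bug when the action
-- actually touches mutable memory.
unsafeRun :: IO' a -> a
unsafeRun (IO' act) = case act RealWorld of
  (_, r) -> r
```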

On Tue, Nov 6, 2018 at 1:42 AM Joachim Durchholz <jo@durchholz.org> wrote:
Am 05.11.18 um 23:27 schrieb Brandon Allbery:
> No state is modified, at least in ghc's implementation of IO.

That's what I'd expect.

> IO does carry "state" around, but never modifies it; it exists solely
> to establish a data dependency (passed to and returned from all IO
> actions; think s -> (a, s),
In Haskell, a data dependency can impose constraints on evaluation
order, but it isn't always linear: which subexpression is evaluated
first depends on what a pattern match requests (at least in Haskell,
whose strict operation is the pattern match).

The ordering constraint becomes linear if each function calls just a
single other function. I'm not sure that that's what happens with IO;
input operations must allow choices and loops, making me wonder how
linearity is established. It also makes me wonder what an IO expression
would look like if fully evaluated; is it an infinite data structure,
made useful only through Haskell's laziness, or is it something that's
happening in the runtime?

The other thing that's confusing me is that I don't see anything that
starts the IO processing. There's no pattern match that triggers an
evaluation.
Not that this would explain much: If IO were constructed in a way that a
pattern match starts IO execution, there'd still be the question of what
starts that first pattern match.

Then there's the open question what happens if a program has two IO
expressions. How does the runtime know which one to execute?

Forgive me for my basic questions; I have tried to understand Haskell,
but I never got the opportunity to really use it, so I cannot easily test
my hypotheses.

Regards,
Jo
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.


--
brandon s allbery kf8nh
allbery.b@gmail.com