
My opinion, for several years, has been that the Haskell designers erred in making outermost patterns lazy by default in let and where, and at the top level. I believe they should have gone with something much simpler:

1. Patterns are strict by default.
2. Variables are lazy by default.
3. Patterns at the top level must be marked lazy.
4. (With bang patterns) Variables at the top level may not be marked strict.

That would harmonize the way patterns are handled in let and where with the way they're handled in function arguments and case expressions, as well as removing the outermost-pattern special case which makes the *strictness* of inner patterns surprising. Unfortunately, I don't think there's much chance of any of these changing in Haskell.
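To make the contrast concrete, here is a small sketch of my own (not part of the proposal itself), using BangPatterns to emulate what strict-by-default patterns in let would mean:

{-# LANGUAGE BangPatterns #-}

-- Status quo: the outermost pattern of a let binding is lazy by default,
-- so undefined is never matched against (_, _) and this evaluates to ().
lazyToday :: ()
lazyToday = let (_, _) = undefined in ()

-- Opting in to strictness with a bang: undefined is forced when
-- strictOptIn is evaluated, so this throws Prelude.undefined.
strictOptIn :: ()
strictOptIn = let !(_, _) = undefined in ()

Under the proposal the strict behaviour would be the default in let and where, and the lazy one would require an explicit ~ on the pattern.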
On Mon, Jan 2, 2023, 6:00 PM Tom Ellis <tom-lists-haskell-cafe-2017@jaguarpaw.co.uk> wrote:

Consider the following:
let x0 = (undefined, ()) ; ((_, _), _) = x0 in x0 `seq` ()

which evaluates to (),
but then combining the patterns into an @-pattern, which really feels like it shouldn't change semantics:
let x1@((_, _), _) = (undefined, ()) in x1 `seq` ()

which diverges with undefined.
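For anyone who wants to reproduce this in a compiled program rather than GHCi, here is a small standalone sketch of my own (not part of Tom's message):

import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- Separate bindings: the lazy pattern ((_, _), _) is never forced,
  -- so this prints ().
  print (let x0 = (undefined, ()) ; ((_, _), _) = x0 in x0 `seq` ())
  -- @-pattern version: forcing x1 forces the nested pattern match,
  -- which evaluates undefined; we catch the exception and print it.
  r <- try (evaluate (let x1@((_, _), _) = (undefined, ()) in x1 `seq` ()))
  print (r :: Either SomeException ())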
Does this strike anyone else as odd? The reason for this behaviour is that
let x@p = rhs in body
is translated to
case rhs of ~(x@p) -> body
according to section 3.12 of The Report[1], which is then translated to
(\x -> body) (case rhs of x@p -> x)
if p binds no variables, according to section 3.17.3 of The Report[2], and then to
(\x -> body) (case rhs of p -> (\x -> x) rhs)
which is equivalent to
(\x -> body) (case rhs of p -> rhs)
Putting it all together
let x1@((_, _), _) = (undefined, ()) in x1 `seq` ()
desugars as
(\x1 -> x1 `seq` ()) (let v = (undefined, ()) in case v of ((_, _), _) -> v)
which evaluates as
let v = (undefined, ()) in case v of ((_, _), _) -> ()

which diverges: matching v against ((_, _), _) forces the first component of the pair, namely undefined.
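As a sanity check (my own, not part of the Report's translation), the hand-desugared form can be written out and evaluated directly:

-- Hand-desugared form of the x1 expression, following the steps above.
desugared :: ()
desugared = (\x1 -> x1 `seq` ()) (let v = (undefined, ()) in case v of ((_, _), _) -> v)

Forcing desugared in GHCi throws Prelude.undefined, matching the behaviour of the x1 expression.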
This seems very odd to me. Why should combining two non-diverging patterns into an @-pattern cause divergence? Specifically, the translation from
let x@p = rhs in body
to
case rhs of ~(x@p) -> body
seems highly dubious. It comes from the more general rule translating
let p = rhs in body
to
case rhs of ~p -> body
but it seems to interact badly with @-patterns. It seems like the rule for @-patterns should be something like
let x@p = rhs in body
translates to
case rhs of ~(x@(~p)) -> body
Then the original expression containing x1 would not diverge.
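To check that claim, here is my own sketch, writing the proposed translation out by hand for the x1 example:

-- The x1 example under the proposed rule
--   let x@p = rhs in body  ~~>  case rhs of ~(x@(~p)) -> body
-- The inner ~ means forcing x1 only evaluates the right-hand side to
-- WHNF, without matching the nested pair pattern.
proposed :: ()
proposed = case (undefined, ()) of ~(x1@(~((_, _), _))) -> x1 `seq` ()

Evaluating proposed in GHCi should give (), agreeing with the two-binding x0 version above.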
Does anyone else have a view on this matter?
Tom
[1] https://www.haskell.org/onlinereport/exps.html#sect3.12
[2] https://www.haskell.org/onlinereport/exps.html#sect3.17.3