
Consider the following:

    let x0 = (undefined, ()) ; ((_, _), _) = x0 in x0 `seq` ()

which evaluates to (), but then combining the patterns into an @-pattern,
which really feels like it shouldn't change semantics:

    let x1@((_, _), _) = (undefined, ()) in x1 `seq` ()

which diverges with undefined. Does this strike anyone else as odd?

The reason for this behaviour is that

    let x@p = rhs in body

is translated to

    case rhs of ~(x@p) -> body

according to section 3.12 of The Report[1], which (when p binds no
variables) is then translated to

    (\x -> body) (case rhs of x@p -> x)

according to section 3.17.3 of The Report[2], and then to

    (\x -> body) (case rhs of p -> (\x -> x) rhs)

which is equivalent to

    (\x -> body) (case rhs of p -> rhs)

Putting it all together,

    let x1@((_, _), _) = (undefined, ()) in x1 `seq` ()

desugars as

    (\x1 -> x1 `seq` ()) (let v = (undefined, ()) in case v of ((_, _), _) -> v)

which evaluates as

    let v = (undefined, ()) in case v of ((_, _), _) -> ()

This seems very odd to me.  Why should combining two non-diverging
patterns into an @-pattern cause divergence?

Specifically, the translation from

    let x@p = rhs in body

to

    case rhs of ~(x@p) -> body

seems highly dubious.  It comes from the more general rule translating

    let p = rhs in body

to

    case rhs of ~p -> body

but it seems to interact badly with @-patterns.  It seems like the rule
for @-patterns should be something like: let x@p = rhs in body
translates to

    case rhs of ~(x@(~p)) -> body

Then the original expression containing x1 would not diverge.

Does anyone else have a view on this matter?

Tom

[1] https://www.haskell.org/onlinereport/exps.html#sect3.12
[2] https://www.haskell.org/onlinereport/exps.html#sect3.17.3
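P.S. As a sanity check I wrote out both translations by hand for the x1
example; this is only a sketch, and the names `current` and `proposed`
are mine, not anything from The Report.  Loading this into GHC,
`current` (the Report's current translation in its final form above)
throws Prelude.undefined, while `proposed` (the suggested
~(x@(~p)) translation) returns ():

    import Control.Exception (SomeException, evaluate, try)

    -- The Report's current translation of
    --     let x1@((_, _), _) = (undefined, ()) in x1 `seq` ()
    -- in the final form derived above: forcing x1 forces the match of
    -- ((_, _), _) against (undefined, ()), so this diverges.
    current :: ()
    current =
      let v = (undefined, ())
      in (\x1 -> x1 `seq` ()) (case v of ((_, _), _) -> v)

    -- The proposed translation, case rhs of ~(x@(~p)) -> body: the
    -- inner lazy pattern means forcing x1 only evaluates rhs to WHNF,
    -- so this returns ().
    proposed :: ()
    proposed = case (undefined, ()) of ~(x1@(~((_, _), _))) -> x1 `seq` ()

    main :: IO ()
    main = do
      r <- try (evaluate current) :: IO (Either SomeException ())
      print r          -- Left Prelude.undefined (plus a call stack)
      print proposed   -- ()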