
On Fri, 6 Nov 2009, Stefan Holdermans wrote:
> What wasn't understood back then is that making seq a type-class method is not enough to recover parametricity, which was the goal. This is explained in a recent paper by Daniel Seidel and Janis Voigtlaender:
>
> Daniel Seidel and Janis Voigtlaender. Taming selective strictness. In Stefan Fischer, Erik Maehle, and Ruediger Reischuk, editors, INFORMATIK 2009 – Im Focus das Leben, Beitraege der 39. Jahrestagung der Gesellschaft fuer Informatik e.V. (GI), 28. September – 2. Oktober, in Luebeck, volume 154 of Lecture Notes in Informatics, pages 2916–2930. GI, 2009.
I'm still trying to understand the initial example. As far as I can see, in all of these examples an Eval constraint would prevent incorrect applications of the free-theorem rules, but it might also block a rule in cases where it could still be applied correctly. Is my conclusion valid?
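To make the question concrete, here is a minimal sketch of the kind of free-theorem failure at issue (the functions `f` and `g` are my own illustration, not taken from the paper). Without seq, the free theorem for `f :: [a] -> [a]` gives `map g . f = f . map g` for arbitrary `g`; once `f` uses seq, the equation only holds for strict `g`. A class-method version of seq would give `f` the type `Eval a => [a] -> [a]`, so the constraint flags the danger in the type, conservatively blocking the rewrite even for those `g` (the strict ones) where it would still be sound:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Uses seq on the head element. With seq as a class method this
-- would get the type Eval a => [a] -> [a], exposing the strictness
-- in the type signature.
f :: [a] -> [a]
f []       = []
f (x : xs) = x `seq` (x : xs)

-- Non-strict: g undefined = 1. For such g the free theorem
-- map g . f = f . map g breaks down in the presence of seq.
g :: Int -> Int
g _ = 1

main :: IO ()
main = do
  -- Right-hand side is defined: map g replaces the bottom with 1
  -- before f's seq ever sees it.
  rhs <- evaluate (f (map g [undefined]))
  print rhs                                   -- prints [1]
  -- Left-hand side is bottom: f's seq forces the original bottom.
  lhs <- try (evaluate (map g (f [undefined])))
           :: IO (Either SomeException [Int])
  case lhs of
    Left _  -> putStrLn "lhs is bottom"       -- this branch is taken
    Right v -> print v
```

So an `Eval` constraint on `f` would correctly forbid rewriting `map g (f xs)` to `f (map g xs)` here; the open question is exactly the one above, namely that it also forbids the rewrite for strict `g`, where it would have been fine.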