
(see the FunctionalDependencies page for background omitted here)

One of the problems with the relaxed coverage condition implemented by GHC and Hugs is a loss of confluence. Here is a slightly cut-down version of Ex. 18 from the FD-CHR paper:

    class B a b | a -> b
    class C a b c | a -> b

    instance B a b => C [a] b Bool

Starting from a constraint set C [a] b Bool, C [a] c d, we have two possible reductions:

1) C [a] b Bool, C [a] c d
   => c = b, C [a] b Bool, C [a] b d   (use FD on C)
   => c = b, B a b, C [a] b d          (reduce instance)

2) C [a] b Bool, C [a] c d
   => B a b, C [a] c d                 (reduce instance)

The proposed solution was to tighten the restrictions on instances to forbid those like the above one for C. However there may be another way out.

The consistency condition implies that there cannot be another instance C [t1] t2 t3 such that a substitution unifying a and t1 need not unify b and t2. Thus we could either

1) consider the two constraint sets equivalent, since they describe the same set of ground instances, or

2) enhance the instance improvement rule: in the above example, we must have d = Bool in both cases, so both reduce to

    c = b, d = Bool, B a b

More precisely, given a dependency X -> Y and an instance C t, if tY is not covered by tX, then for any constraint C s with sX = S tX for some substitution S, we can unify s with S t. We would need a restriction on instances to guarantee termination: each argument of the instance must either be covered by tX or be a single variable. That is less restrictive (and simpler) than the previous proposal, however.

Underlying this is an imbalance between the two restrictions on instances. In the original version, neither took any account of the context of the instance declaration. The implementations change this for the coverage condition but not the consistency condition. Indeed the original form of the consistency condition is necessary for the instance improvement rule.
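
For readers who want to load the example into a compiler, a minimal GHC rendering of the declarations above might look like the following sketch; the module name and the LANGUAGE pragmas are additions made here (in GHC it is UndecidableInstances that enables the relaxed coverage condition under discussion), not part of the original message:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances, UndecidableInstances #-}
    -- Minimal sketch of the cut-down Ex. 18: the FD range argument b of the
    -- instance head is not determined by the domain argument [a] alone, only
    -- via the context (B a b, whose own FD is a -> b), which is exactly the
    -- relaxed coverage case.
    module FDConfluence where

    class B a b | a -> b
    class C a b c | a -> b

    instance B a b => C [a] b Bool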

Ross Paterson writes:
(see the FunctionalDependencies page for background omitted here)
One of the problems with the relaxed coverage condition implemented by GHC and Hugs is a loss of confluence. Here is a slightly cut-down version of Ex. 18 from the FD-CHR paper:
class B a b | a -> b
class C a b c | a -> b
instance B a b => C [a] b Bool
Starting from a constraint set C [a] b Bool, C [a] c d, we have two possible reductions:
1) C [a] b Bool, C [a] c d
   => c = b, C [a] b Bool, C [a] b d   (use FD on C)
   => c = b, B a b, C [a] b d          (reduce instance)

2) C [a] b Bool, C [a] c d
   => C a b, C [a] c d                 (reduce instance)
      ^^^^^ should be B a b
The proposed solution was to tighten the restrictions on instances to forbid those like the above one for C. However there may be another way out.
The consistency condition implies that there cannot be another instance C [t1] t2 t3: a substitution unifying a and t1 need not unify b and t2. Thus we could either
1) consider the two constraint sets equivalent, since they describe the same set of ground instances, or
That's troublesome for (complete) type inference. Two constraint stores are equivalent if they are equivalent for any satisfying ground instance? How to check that?
2) enhance the instance improvement rule: in the above example, we must have d = Bool in both cases, so both reduce to
c = b, d = Bool, B a b
More precisely, given a dependency X -> Y and an instance C t, if tY is not covered by tX, then for any constraint C s with sX = S tX for some substitution S, we can unify s with S t.
I'm not following you here, you're saying?

    rule C [a] b d ==> d=Bool

Are you sure that you're not introducing further "critical pairs"?
We would need a restriction on instances to guarantee termination: each argument of the instance must either be covered by tX or be a single variable. That is less restrictive (and simpler) than the previous proposal, however.
Underlying this is an imbalance between the two restrictions on instances. In the original version, neither took any account of the context of the instance declaration. The implementations change this for the coverage condition but not the consistency condition. Indeed the original form of the consistency condition is necessary for the instance improvement rule.
Maybe you found a "simple" solution (that would be great), but I'm not 100% convinced yet. The problem you're addressing only arises for non-full FDs. Aren't such cases very rare in practice?

Martin

On Mon, Apr 10, 2006 at 05:49:15PM +0800, Martin Sulzmann wrote:
Ross Paterson writes:
1) consider the two constraint sets equivalent, since they describe the same set of ground instances, or
That's troublesome for (complete) type inference. Two constraint stores are equivalent if they are equivalent for any satisfying ground instance? How to check that?
You can't check it; it has to be proven for the system.
I'm not following you here, you're saying?
rule C [a] b d ==> d=Bool
That's right.
Are you sure that you're not introducing further "critical pairs"?
I don't see how -- there are no new redexes. Instance improvement was already applicable, but now it has a different output.
Maybe you found a "simple" solution (that would be great), but I'm not 100% convinced yet.
The problem you're addressing only arises for non-full FDs. Aren't such cases very rare in practice?
My aim is not coverage but simplicity of description, and this version still isn't simple enough.
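
To make the "enhanced instance improvement" rule above concrete, here is a small, self-contained Haskell sketch of it for this one example; the type representation, all the names, and the improve function are invented purely for illustration (and fresh renaming of the instance variables is omitted), so this should not be read as how GHC or Hugs actually implement improvement:

    module ImproveSketch where

    import Data.Maybe (fromMaybe)

    -- toy types: variables, lists, and the two ground types used in the example
    data Ty = Var String | ListOf Ty | BoolT | CharT
      deriving (Eq, Show)

    -- a constraint  C t1 t2 t3  is modelled as a triple of types
    type CConstraint = (Ty, Ty, Ty)

    type Subst = [(String, Ty)]

    apply :: Subst -> Ty -> Ty
    apply s (Var v)    = fromMaybe (Var v) (lookup v s)
    apply s (ListOf t) = ListOf (apply s t)
    apply _ t          = t

    -- one-way matching of the instance's FD-domain argument against the
    -- constraint's FD-domain argument (good enough for this one-variable example)
    match :: Ty -> Ty -> Maybe Subst
    match (Var v)     t           = Just [(v, t)]
    match (ListOf t1) (ListOf t2) = match t1 t2
    match BoolT       BoolT       = Just []
    match CharT       CharT       = Just []
    match _           _           = Nothing

    -- instance head  C [a] b Bool  (class FD: argument 1 -> argument 2)
    instanceHead :: CConstraint
    instanceHead = (ListOf (Var "a"), Var "b", BoolT)

    -- Enhanced improvement: if the constraint's domain argument matches the
    -- instance's domain argument under S, equate the remaining arguments of
    -- the constraint with the correspondingly substituted instance arguments,
    -- not just the FD range position.
    improve :: CConstraint -> [(Ty, Ty)]
    improve (dom, rng, extra) =
      case match hDom dom of
        Nothing -> []
        Just s  -> [(rng, apply s hRng), (extra, apply s hExtra)]
      where (hDom, hRng, hExtra) = instanceHead

    -- ghci> improve (ListOf (Var "a"), Var "c", Var "d")
    -- [(Var "c",Var "b"),(Var "d",BoolT)]
    -- i.e. the improvements  c = b  and  d = Bool  from the example.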

(see the FunctionalDependencies page for background omitted here)
which seems to ignore much of the discussion of confluence we had here some weeks ago? I thought this list was for communication with the committee, and since the ticket exists, I had been hoping that the type class subcommittee would ensure that the wiki would be updated. but it seems I have been talking to myself..
One of the problems with the relaxed coverage condition implemented by GHC and Hugs is a loss of confluence. Here is a slightly cut-down version of Ex. 18 from the FD-CHR paper:
class B a b | a -> b
class C a b c | a -> b
instance B a b => C [a] b Bool
Starting from a constraint set C [a] b Bool, C [a] c d, we have two possible reductions:
1) C [a] b Bool, C [a] c d => c = b, C [a] b Bool, C [a] b d (use FD on C) => c = b, B a b, C [a] b d (reduce instance)
2) C [a] b Bool, C [a] c d => C a b, C [a] c d (reduce instance)
CHR-alt:

    B a b <=> infer_B a b, memo_B a b.
    memo_B a b1, memo_B a b2 ==> b1=b2.

    C a b c <=> infer_C a b c, memo_C a b c.
    memo_C a b1 c1, memo_C a b2 c2 ==> b1=b2.
    infer_C [a] b Bool <=> B a b.
    memo_C [a] b' c' ==> b'=b.

reduction:

3) C [a] b Bool, C [a] c d
   <=> infer_C [a] b Bool, memo_C [a] b Bool, infer_C [a] c d, memo_C [a] c d
       {reducing instances cannot disable use of FDs!}
   <=> B a b, memo_C [a] b Bool, infer_C [a] c d, memo_C [a] c d
       {use FD now or earlier..}
   ==> B a b, infer_C [a] c d, memo_C [a] b Bool, memo_C [a] c d, b=c

there is no loss of confluence. there is no implication that d=Bool (you could add: 'instance B a b => C [a] b Char' without violating FD consistency). just a useless instance improvement rule for C.

just blindly applying the alternative translation.. what am I missing here?

claus
The proposed solution was to tighten the restrictions on instances to forbid those like the above one for C. However there may be another way out.
The consistency condition implies that there cannot be another instance C [t1] t2 t3: a substitution unifying a and t1 need not unify b and t2. Thus we could either
1) consider the two constraint sets equivalent, since they describe the same set of ground instances, or
2) enhance the instance improvement rule: in the above example, we must have d = Bool in both cases, so both reduce to
c = b, d = Bool, B a b
More precisely, given a dependency X -> Y and an instance C t, if tY is not covered by tX, then for any constraint C s with sX = S tX for some substitution S, we can unify s with S t.
We would need a restriction on instances to guarantee termination: each argument of the instance must either be covered by tX or be a single variable. That is less restrictive (and simpler) than the previous proposal, however.
Underlying this is an imbalance between the two restrictions on instances. In the original version, neither took any account of the context of the instance declaration. The implementations change this for the coverage condition but not the consistency condition. Indeed the original form of the consistency condition is necessary for the instance improvement rule.

On Mon, Apr 10, 2006 at 02:39:18PM +0100, Claus Reinke wrote:
instance B a b => C [a] b Bool
Starting from a constraint set C [a] b Bool, C [a] c d,
there is no implication that d=Bool (you could add: 'instance B a b => C [a] b Char' without violating FD consistency).
These instances (alpha-renamed):

    instance B a1 b1 => C [a1] b1 Bool
    instance B a2 b2 => C [a2] b2 Char

violate the consistency condition: a substitution that unifies a1 and a2 need not unify b1 and b2, because the consistency condition makes no mention of the contexts. (If it did, the instance improvement rule would be invalid.)

there is no implication that d=Bool (you could add: 'instance B a b => C [a] b Char' without violating FD consistency).
These instances (alpha-renamed):

    instance B a1 b1 => C [a1] b1 Bool
    instance B a2 b2 => C [a2] b2 Char
violate the consistency condition: a substitution that unifies a1 and a2 need not unify b1 and b2, because the consistency condition makes no mention of the contexts. (If it did, the instance improvement rule would be invalid.)
?? the consistency condition does mention b1 and b2 as t_b and s_b, so any substitution that unifies a1 and a2 (t_a and s_a) needs to unify b1 and b2, independent of context.

note also that we are talking about different things here: I am talking about FD consistency, you are talking about the FD consistency condition. let's be concrete:

    class B a b | a -> b
    class C a b c | a -> b

    instance B a1 b1 => C [a1] b1 Bool
    instance B a2 b2 => C [a2] b2 Char

with these declarations, you are concerned about constraints C [a] b Bool, C [a] c Char, which you claim are substitution instances of these declaration heads even if b and c differ. but the class instances of class C are constrained by an FD, and if b and c differ, these two constraints are _not_ substitution instances under that FD. they would be substitution instances if that FD wasn't there (*).

the FD consistency condition is an approximation of FD consistency, and it is a strange approximation at that, because it is neither sufficient nor necessary (with the alternative CHR translation) for consistency:

- if all instance declarations fulfill the FD consistency condition, the constraint store may still be inconsistent, because it may represent an inconsistent query; so the condition is not sufficient

- one of the bonuses of a confluent CHR is that deductions on inconsistent stores will not terminate successfully (if they terminate, they are certain to fail), so the condition is not necessary, either [it was with the old translation, though]

as this example shows once again, there are instance declarations for which the FD consistency condition, as currently interpreted by Hugs, fails, even though no inconsistent constraints are implied. so I fail to see the point of continuing to require the FD consistency condition in unrevised form.

but my point was that there is no confluence problem. everything else, with the exception of termination, follows from there.

cheers,
claus

(*) this is similar to patterns and guards in the expression language, for instance:

    f a = ..  -- some function

    c [a1] b1 Bool | b1 == f [a1] = b a1 b1
    c [a2] b2 Char | b2 == f [a2] = b a2 b2

now, c [a] b1 Bool and c [a] b2 Char are defined if (b1==f [a]) and (b2==f [a]) and if b a b1 and b a b2 are defined as well. c [a] d Bool with (d/=f[a]) matches the pattern, but fails the guard, so it is not a substitution instance of c's left hand sides under the guard.
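
Claus's footnote analogy can also be written out as a compilable (if artificial) value-level program; the concrete types, the Ty stand-in for the result-position types Bool and Char, and the bodies chosen for f and b are arbitrary choices made here, not part of the original message:

    module GuardAnalogy where

    -- value-level stand-in for the result-position types Bool and Char
    data Ty = TBool | TChar deriving (Eq, Show)

    f :: [Int] -> Int
    f = sum              -- "some function", chosen arbitrarily

    b :: Int -> Int -> Int
    b x y = x + y        -- arbitrary placeholder

    -- the two "instances": defined only when the guard (the FD) holds
    c :: [Int] -> Int -> Ty -> Int
    c [a1] b1 TBool | b1 == f [a1] = b a1 b1
    c [a2] b2 TChar | b2 == f [a2] = b a2 b2
    c _    _  _     = error "not a substitution instance under the guard"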

On Mon, Apr 10, 2006 at 06:48:35PM +0100, Claus Reinke wrote:
note also that we are talking about different things here: I am talking about FD consistency, you are talking about the FD consistency condition.
That would explain a few things.
as this example shows once again, there are instance declarations for which the FD consistency condition, as currently interpreted by Hugs, fails, even though no inconsistent constraints are implied. so I fail to see the point of continuing to require the FD consistency condition in unrevised form.
Do you have a revised set of restrictions on the form of instances?

Do you have a revised set of restrictions on the form of instances?
sure, but you won't like them any better now than the last time I mentioned them:-)

- as for confluence, none of the restrictions seem necessary

  without FDs, type class inference proceeds by _matching_ of instance heads to constraints, and without overlapping instances, there cannot be more than one matching head. so the only divergence comes from reductions on separate parts of the constraint store, which are independent, so the divergence is temporary.

  with FDs, type class inference adds _unification_, but only for FD range parameters. instance heads that only differ in FD range positions are now overlapping, but only as far as improvement is concerned. note also that the resulting overlap in the left hand side of rules is over variables, so no variable instantiation can disable such rules. to be initially diverging, such overlapping improvement rules must have different unifications as their right hand sides. memoisation ensures that any improvement that was possible at any point in the reduction process remains possible throughout. so following either of two conflicting improvements will leave the other enabled, ensuring failure of the constraint store instead of non-deterministic results.

  consistency condition: neither necessary nor sufficient

  coverage/bound variable condition: as far as confluence is concerned, these were possibly motivated by the wish to have variable-restricted CHR, without which arbitrary rule systems might introduce non-confluence by following different instantiations for the fresh variables, through unification with different left hand sides. but due to the peculiarities of Haskell's type classes, such instantiations do not appear to be a problem, as sketched above.

  (as far as termination is concerned, I remain unconvinced that these two rules are the right way to exclude problematic cases, as they exclude many valid programs just because these might be invoked with problematic queries. in such cases, I'd prefer to exclude the problematic queries.)

  early checking for superclass instances: also does not seem to be needed any more; as the superclass constraints will be introduced as proof obligations in the same step that introduces infer_Class and memo_Class constraints, they cannot be "forgotten", and inference cannot terminate successfully if those instances are missing (they may, however, be provided later than now, accepting more valid programs).

  (sigh, I'm sure I've written this before, as so many other things here.. I may well be wrong in what I'm saying - if that is the case, please tell me so, but _please_ don't keep asking me to repeat myself..)

- as for termination, the checks not related to the conditions above are a useful start, but they are incomplete, and overly restrictive. in fact, we know that we are dealing with an undecidable problem here, so that no finite system of restrictions will be able to do the job statically. we've already discussed various improvements to the current set of checks here, as have others here and elsewhere.

  my preference here is unchanged: the variant without static termination checks needs to be the basis, and it needs to be defined clearly, so that all implementations of Haskell' accept the same programs and give the same results. the reason for this preference is my opinion that Haskell has become too complex a language already. heaping special cases on top of restrictions that are necessarily incomplete adds complexity to the language, while restricting its expressiveness.
on top of that basis, we should have termination conditions, and we should keep trying to push the envelope by making them encompass more valid programs, but that also means that these termination conditions need to keep evolving!

so we should have warnings if the current set of termination checks cannot statically determine whether or not some type class program will terminate, and by all means, have a flag to decide whether or not those warnings should become error messages (for teaching contexts, or for any other software development processes based on strict coding standard conformance, the former might be the right default option).

but in the near future, I do not expect to see either of the following two: a set of termination conditions that will cover and permit all practically relevant programs, or a correct and terminating, practically motivated type class program that will run much longer than it takes me to read and understand the warnings produced by the current set of termination checks.

oh, two more things: if type class inference fails to terminate, I never get to run my program, so that is perfectly type safe; and if, as will eventually happen, part of my terminating type class programs are beyond the abilities of the current set of termination checks, I do not want to have to resort to a module- or project-wide license to get my code through.

I'm no expert in this, but I believe that those folks who are concerned with termination proofs accept it as a matter of routine that tailoring a termination-ordering for each particular problem is an important part of their business; but if Haskell' does not support this, and still insists on doing mandatory termination checks by default, it should at least allow me to indicate which of the type classes/instances I want to take responsibility for (via a pragma, most likely), using its checks on all others.

cheers,
claus

On Mon, 2006-04-10 at 14:39 +0100, Claus Reinke wrote:
(see the FunctionalDependencies page for background omitted here)
which seems to ignore much of the discussion of confluence we had here some weeks ago? I thought this list was for communication with the committee, and since the ticket exists, I had been hoping that the type class subcommittee would ensure that the wiki would be updated. but it seems I have been talking to myself..
Each topic has a ticket that can be edited by anyone as described here:
http://haskell.galois.com/trac/haskell-prime/wiki/CreateProposal

Please take advantage of that to capture data (consensus, pros & cons) about your topic. It is not easy to boil down pages and pages of discussion into a coherent document. Please help to do that work!

Here's the ticket relating to functional dependencies:
http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/ticket/36

Here are some guidelines from the wiki:

The wiki and Tickets: No Discussion on the Wiki! Keep discussion on the mailing list and use the wiki to document consensus and pros & cons. It is just fine to have conflicting points of view in a ticket or on the wiki, but no back-and-forth discussion (use the mailing list for that). Create "pros" and "cons" sections in the ticket/wiki.

For Tickets: For tickets, use EDIT rather than adding "comments". Link this ticket with wiki pages or other tickets which are related, or for which this is a counter-proposal.

peace,
isaac

On Mon, Apr 10, 2006 at 02:39:18PM +0100, Claus Reinke wrote:
class B a b | a -> b
class C a b c | a -> b
instance B a b => C [a] b Bool
Starting from a constraint set C [a] b Bool, C [a] c d, [...]
CHR-alt:
B a b <=> infer_B a b, memo_B a b.
memo_B a b1, memo_B a b2 ==> b1=b2.
C a b c <=> infer_C a b c, memo_C a b c.
memo_C a b1 c1, memo_C a b2 c2 ==> b1=b2.
infer_C [a] b Bool <=> B a b.
memo_C [a] b' c' ==> b'=b.
OK, so you want to modify the usual instance reduction rule by recording functional dependencies from the head. That is, you get confluence by remembering all the choices you had along the way, so you can still take them later.

My problem with this scheme is that it makes FDs an essential part of the semantics, rather than just a shortcut, as they are in the original system. With the old system, one can understand instances using a Herbrand model, determined by the instance declarations alone. If the instances respect FDs, types can be made more specific using improvement rules justified by that semantics. With your system, there's only the operational semantics. To understand types, programmers must operate the rewriting system. And the constraint store they must work with will be quite cluttered with all these memoized FDs.

On Mon, Apr 10, 2006 at 02:39:18PM +0100, Claus Reinke wrote:
class B a b | a -> b
class C a b c | a -> b
instance B a b => C [a] b Bool
Starting from a constraint set C [a] b Bool, C [a] c d, [...]
CHR-alt:
B a b <=> infer_B a b, memo_B a b.
memo_B a b1, memo_B a b2 ==> b1=b2.
C a b c <=> infer_C a b c, memo_C a b c.
memo_C a b1 c1, memo_C a b2 c2 ==> b1=b2.
infer_C [a] b Bool <=> B a b.
memo_C [a] b' c' ==> b'=b.
OK, so you want to modify the usual instance reduction rule by recording functional dependencies from the head. That is, you get confluence by remembering all the choices you had along the way, so you can still take them later.
My problem with this scheme is that it makes FDs an essential part of the semantics, rather than just a shortcut, as they are in the original system. With the old system, one can understand instances using a Herbrand model, determined by the instance declarations alone. If the instances respect FDs, types can be made more specific using improvement rules justified by that semantics. With your system, there's only the operational semantics.
The logical semantics of the original CHR program and that of Claus with the memoization are really the same, I'm sure. The justifiable improvements are exactly the same. The main difference is that the memoization makes the rules confluent in more cases, i.e. the difference is in the operational semantics. For that matter the operational semantics of the confluent program would be considered to capture more completely the logical semantics and justifiable improvements.
To understand types, programmers must operate the rewriting system.
Not at all. The confluent program is really more intuitive, because more of the logically justifiable improvements are done.
And the constraint store they must work with will be quite cluttered with all these memoized FDs.
This is more of an implementation issue than a conceptual one, isn't it? I can think of a number of optimizations already to reduce the overhead.

In the above example, the third argument is not involved in the FD, so does not have to be remembered, yielding:

    memo_C a b1, memo_C a b2 ==> b1=b2.

Secondly, you only need to keep one of a number of identical copies. In CHR there is what's called a simpagation rule:

    memo_C a b1 \ memo_C a b2 <=> b1=b2.

that removes the constraint after the backslash. Here it's valid because that constraint becomes identical to the first one.

Moreover, very few people will actually have to look at the constraint store, I think.

--
Tom Schrijvers

Department of Computer Science
K.U. Leuven
Celestijnenlaan 200A
B-3001 Heverlee
Belgium

tel: +32 16 327544
e-mail: tom.schrijvers@cs.kuleuven.be

thanks, Tom!

indeed, for that old thread on an alternative translation, which forked from an even older thread, I put aside all my wishlist and simply focussed on removing what I considered anomalies in the operationalisation of type classes with FDs via CHR in the old translation. the CHR resulting from the alternative translation should not allow any new constraints to be proven, it should only make sure that any FD-based information is fully used in any results, completing any incomplete deductions in the old system.

I found it highly irregular that whether or not an FD was respected by the CHR system resulting from the original translation could depend on reduction strategies - the only way for real implementations to work around this shortcoming would be to iterate improvement before continuing reduction, which puts additional constraints on the implementer in an already complex system; and the only way for a formal system based on this incomplete translation to be safe was to try and cover all the possible holes with a set of restrictions that backfire by complicating the language design and making the whole idea of FDs look more tricky than it needs to be.

there are interesting problems in FDs, but it seems that the confluence problems were merely problems of the old translation, not anything inherent in FDs! I really had hoped we had put that phantom to rest.

(btw, in a real implementation, I wouldn't expect the memo constraints to enter the constraint store at all - they are just a CHR-based way to express a memo table, hence the name; as, in fact, I explained before even starting that alternative translation thread..).

cheers,
claus
OK, so you want to modify the usual instance reduction rule by recording functional dependencies from the head. That is, you get confluence by remembering all the choices you had along the way, so you can still take them later.
My problem with this scheme is that it makes FDs an essential part of the semantics, rather than just a shortcut, as they are in the original system. With the old system, one can understand instances using a Herbrand model, determined by the instance declarations alone. If the instances respect FDs, types can be made more specific using improvement rules justified by that semantics. With your system, there's only the operational semantics.
The logical semantics of the original CHR program and that of Claus with the memoization are really the same, I'm sure. The justifiable improvements are exactly the same. The main difference is that the memoization makes the rules confluent in more cases, i.e. the difference is in the operational semantics. For that matter the operational semantics of the confluent program would be considered to capture more completely the logical semantics and justifiable improvements.
To understand types, programmers must operate the rewriting system.
Not at all. The confluent program is really more intuitive, because more of the logically justifiable improvements are done.
And the constraint store they must work with will be quite cluttered with all these memoized FDs.
This is more of an implementation issue than a conceptual one, isn't it? I can think of a number of optimizations already to reduce the overhead.

| there are interesting problems in FDs, but it seems that the confluence
| problems were merely problems of the old translation, not anything
| inherent in FDs! I really had hoped we had put that phantom to rest.

Claus

You're doing a lot of work here, which is great. Why not write a paper? Even for people (like me) who are relatively familiar with FDs, it's hard to follow a long email thread. For others, who might well be interested, it's even harder. The phantom is not resting yet! (On the other hand, email can be a good way of developing the ideas, which is what you have been doing.)

A good way forward might be to write a paper building on our recent JFP submission, and proposing whatever changes and improvements you have developed. That would make your work accessible to a much wider audience.

Simon

On Thu, 13 Apr 2006, Claus Reinke wrote:
(btw, in a real implementation, I wouldn't expect the memo constraints to enter the constraint store at all - they are just a CHR-based way to express a memo table, hence the name; as, in fact, I explained before even starting that alternative translation thread..).
Sure, you can have any particular dedicated data structure to realize part of the constraint store. CHR implementations already use lists, trees, hashtables, global variables, ... If some Haskell' implementations want a dedicated CHR implementation for their type inference, there is already quite some expertise in efficient implementation and optimized compilation of CHR and I would be interested as a CHR implementor.

Cheers,
Tom

--
Tom Schrijvers

Department of Computer Science
K.U. Leuven
Celestijnenlaan 200A
B-3001 Heverlee
Belgium

tel: +32 16 327544
e-mail: tom.schrijvers@cs.kuleuven.be
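
Picking up Claus's remark that the memo constraints are just a CHR-style rendering of a memo table, and Tom's point about dedicated data structures, here is one very rough sketch of such a table for class C's dependency; the representation and all names are invented for illustration and are not taken from any actual implementation:

    module MemoTable where

    import qualified Data.Map as M

    -- toy type representation
    data Ty = TVar String | TCon String [Ty]
      deriving (Eq, Ord, Show)

    -- memo table for class C's FD (argument 1 determines argument 2)
    type MemoC = M.Map Ty Ty

    -- Record a constraint C t1 t2 _ : if t1 has been seen before with a
    -- different range t2', report the improvement t2 = t2' (the role of the
    -- CHR rule  memo_C a b1, memo_C a b2 ==> b1=b2); otherwise remember it.
    -- Only one copy per domain type is kept, which is essentially the
    -- simpagation-style optimisation mentioned earlier in the thread.
    recordC :: Ty -> Ty -> MemoC -> (Maybe (Ty, Ty), MemoC)
    recordC t1 t2 memo =
      case M.lookup t1 memo of
        Just t2' | t2' /= t2 -> (Just (t2, t2'), memo)
        _                    -> (Nothing, M.insert t1 t2 memo)

    -- ghci> let (_, m) = recordC (TCon "[]" [TVar "a"]) (TVar "b") M.empty
    -- ghci> fst (recordC (TCon "[]" [TVar "a"]) (TVar "c") m)
    -- Just (TVar "c",TVar "b")
    --   i.e. the improvement  c = b  for  C [a] b ..., C [a] c ...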

Hello,
I have been trying to follow this discussion, but I cannot quite understand the problem.
On 4/10/06, Ross Paterson
One of the problems with the relaxed coverage condition implemented by GHC and Hugs is a loss of confluence. Here is a slightly cut-down version of Ex. 18 from the FD-CHR paper:
class B a b | a -> b
class C a b c | a -> b
instance B a b => C [a] b Bool
Starting from a constraint set C [a] b Bool, C [a] c d, we have two possible reductions:
1) C [a] b Bool, C [a] c d => c = b, C [a] b Bool, C [a] b d (use FD on C) => c = b, B a b, C [a] b d (reduce instance)
2) C [a] b Bool, C [a] c d => B a b, C [a] c d (reduce instance) (changed C to B to fix a typo)
It seems to me that the constraint sets {B a b, C [a] b d} and {B a b, C [a] c d} are equivalent in the sense that if we assume the first set we can discharge the constraints in the second, and vice versa. So why are we saying that we have lost confluence? Is there perhaps a different example that illustrates the problem?

-Iavor

PS: To show that C [a] b d |- C [a] c d we can apply the improving substitution 'b=c' (using the FD on class C), and then solve the goal by assumption, the proof the other way is symmetric.

that's why Ross chose a fresh variable in FD range position: in the old translation, the class-based FD improvement rule no longer applies after reduction because there's only one C constraint left, and the instance-based FD improvement rule will only instantiate the 'b' or 'c' in the constraint with a fresh 'b_123', 'b_124', .., unrelated to 'b', 'c', or any previously generated variables in the constraint store.

hth,
claus

another way to interpret your message: to show equivalence of the two constraint sets, you need to show that one implies the other, or that both are equivalent to a common constraint set - you cannot use constraints from one set to help discharging constraints in the other.
class B a b | a -> b
class C a b c | a -> b
instance B a b => C [a] b Bool
Starting from a constraint set C [a] b Bool, C [a] c d, we have two possible reductions:
1) C [a] b Bool, C [a] c d => c = b, C [a] b Bool, C [a] b d (use FD on C) => c = b, B a b, C [a] b d (reduce instance)
2) C [a] b Bool, C [a] c d => B a b, C [a] c d (reduce instance)
(changed C to B to fix a typo)

It seems to me that the constraint sets {B a b, C [a] b d} and {B a b, C [a] c d} are equivalent in the sense that if we assume the first set we can discharge the constraints in the second, and vice versa. So why are we saying that we have lost confluence? Is there perhaps a different example that illustrates the problem?

-Iavor

PS: To show that C [a] b d |- C [a] c d we can apply the improving substitution 'b=c' (using the FD on class C), and then solve the goal by assumption, the proof the other way is symmetric.

Hello,
On 4/12/06, Claus Reinke
that's why Ross chose a fresh variable in FD range position: in the old translation, the class-based FD improvement rule no longer applies after reduction because there's only one C constraint left, and the instance-based FD improvement rule will only instantiate the 'b' or 'c' in the constraint with a fresh 'b_123', 'b_124', .., unrelated to 'b', 'c', or any previously generated variables in the constraint store.
I understand the reduction steps. Are you saying that the problem is that the two sets are not syntactically equal? To me this does not seem important: we just end up with two different ways to say the same thing (i.e., they are logically equivalent). I think there would really be a problem if we could do some reduction and end up with two non-equivalent constraint sets, then I think we would have lost confluence. But can this happen?
another way to interpret your message: to show equivalence of the two constraint sets, you need to show that one implies the other, or that both are equivalent to a common constraint set -

I just used this notion of equivalence, because it is what we usually use in logic. Do you think we should use something else?
you cannot use constraints from one set to help discharging constraints in the other.

I don't understand this, why not? If I want to prove that 'p iff q' I assume 'p' to prove 'q', and vice versa. Clearly I can use 'p' while proving 'q'. We must be talking about different things :-)
-Iavor

On Thu, Apr 13, 2006 at 12:07:53PM -0700, Iavor Diatchki wrote:
On 4/12/06, Claus Reinke wrote:
that's why Ross chose a fresh variable in FD range position: in the old translation, the class-based FD improvement rule no longer applies after reduction because there's only one C constraint left, and the instance-based FD improvement rule will only instantiate the 'b' or 'c' in the constraint with a fresh 'b_123', 'b_124', .., unrelated to 'b', 'c', or any previously generated variables in the constraint store.
I understand the reduction steps. Are you saying that the problem is that the two sets are not syntactically equal? To me this does not seem important: we just end up with two different ways to say the same thing (i.e., they are logically equivalent).
If c were mentioned in another constraint, they would not be equivalent.

Hello,
I understand the reduction steps. Are you saying that the problem is that the two sets are not syntactically equal? To me this does not seem important: we just end up with two different ways to say the same thing (i.e., they are logically equivalent).
If c were mentioned in another constraint, they would not be equivalent.
How so? A concrete example would really be useful. I think that the constraints 'C [a] b d' and 'C [a] c d' are equivalent and I don't see how the rest of the context can affect this (of course I have been wrong in the past :-). The one way to see that is (like I already said) --- assume the one and prove the other and vice versa.

Another way to see that is as follows: All the instances of 'C' that have '[a]' in their first argument must have the same type in the second argument otherwise the functional dependency of 'C' will be violated. Thus 'b' and 'c' above happen to be just different names for the same thing.

Yet another slightly different way to think of this is that a functional dependency is a function on types (e.g., associated type synonyms give us a way to name this function). So let's say that the functional dependency on 'C' is called 'F'. Then we can see that both 'C [a] b d' and 'C [a] c d' are really the same thing, namely 'C [a] (F [a]) d'.

-Iavor
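
Iavor's aside about naming the dependency as a function on types can be sketched with associated type synonyms; the class and family names below (B2, C2, FB, FC) and their exact shapes are illustrative guesses made here, since the thread only names C's function 'F':

    {-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FlexibleInstances #-}
    module FDAsTypeFunction where

    class B2 a where
      type FB a            -- names the range of B's dependency  a -> b

    class C2 a c where
      type FC a            -- names the range of C's dependency  a -> b

    -- the example instance  B a b => C [a] b Bool, with the FD range made
    -- explicit: "C [a] (F [a]) d" becomes  FC [a] = FB a
    instance B2 a => C2 [a] Bool where
      type FC [a] = FB a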

On Thu, Apr 13, 2006 at 05:10:36PM -0700, Iavor Diatchki wrote:
I understand the reduction steps. Are you saying that the problem is that the two sets are not syntactically equal? To me this does not seem important: we just end up with two different ways to say the same thing (i.e., they are logically equivalent).
If c were mentioned in another constraint, they would not be equivalent.
How so? A concrete example would really be useful. I think that the constraint 'C [a] b d' and 'C [a] c d' are equivalent and I don't see how the rest of the context can affect this (of course I have been wrong in the past :-).
They are equivalent, but C [a] b d, Num c and C [a] c d, Num c are not.

Hello,
On 4/13/06, Ross Paterson
They are equivalent, but C [a] b d, Num c and C [a] c d, Num c are not.
I agree that this is the case if you are thinking of

    forall a b c d. (C [a] b d, Num c) <=> (C [a] c d, Num c)

Here is a counter example (assume we also add an instance B Char Int to the example):

    a = Char, b = Char, c = Int, d = Bool
    LHS: (C [Char] Char Bool, Num Int) => B Char Char => False
    RHS: (C [Char] Int Bool, Num Int)  => B Char Int  => True

However, I don't think this is the case if you are thinking of

    forall a c d. exists b. (C [a] b d, Num c) <=> (C [a] c d, Num c)

because I can pick 'b' to be 'c'.

In any case I think this thread is drifting in the wrong direction. I was just looking for a concrete example of where we have a problem with the non-confluence that people have found, e.g. something like 'here is an expression for which we can infer these two schemas, and they have different sets of monomorphic instances'. I think such an example would be very useful for me (and perhaps others) to clarify exactly what is the problem, so that we can try to correct it.

-Iavor
participants (7):

- Claus Reinke
- Iavor Diatchki
- isaac jones
- Martin Sulzmann
- Ross Paterson
- Simon Peyton-Jones
- Tom Schrijvers