RE: relaxed instance rules spec (was: the MPTC Dilemma (please solve))

Claus, I urge you to read our paper "Understanding functional dependencies via
Constraint Handling Rules", which you can find here
http://research.microsoft.com/%7Esimonpj/papers/fd%2Dchr/. It will tell you more
than you want to know about why relaxing apparently-conservative rules is
entirely non-trivial. It's one of those areas in which it is easy to suggest a
plausible-sounding alternative, but much harder to either prove or disprove
whether the alternative is a sound one.

The paper describes rules we are reasonably sure of. It would be great to relax
those rules. But you do need to have a proof that the relaxation preserves the
properties we want. Go right ahead! The paper provides a good framework for
such work, I think.

Simon

| -----Original Message-----
| From: Claus Reinke [mailto:claus.reinke@talk21.com]
| Sent: 28 February 2006 19:54
| To: Simon Peyton-Jones; haskell-prime@haskell.org
| Subject: relaxed instance rules spec (was: the MPTC Dilemma (please solve))
|
| >The specification is here:
| http://www.haskell.org/ghc/dist/current/docs/users_guide/type-extensions.html#instance-decls
|
| two questions/suggestions about this:
|
| 1. there are other termination criteria one might think of, though
| many will be out because they are not easy to specify. but here
| is an annoyingly simple example that doesn't fit the current rules
| even though it is clearly terminating (it's not even recursive):
|
|    class Fail all -- no instances!
|
|    class TypeNotEq a b
|    instance Fail a => TypeNotEq a a
|    instance TypeNotEq a b
|
|    class Test a b where test :: a -> b -> Bool
|    instance TypeNotEq a b => Test a b where test _ _ = False
|    instance Test a a where test _ _ = True
|
|    main = print $ (test True 'c', test True False)
|
| never mind the overlap, the point here is that we redirect from
| Test a b to TypeNotEq a b, and ghc informs us that the
| "Constraint is no smaller than the instance head".
|
| it is true that the parameters don't get smaller (same number,
| same number of constructors, etc.), but they are passed to a
| "smaller" predicate (in terms of call-graph reachability: there
| are fewer predicates reachable from TypeNotEq than from
| Test - in particular, Test is not reachable from TypeNotEq).
|
| this is a non-local criterion, but a fairly simple one. and it seems
| very strange to invoke "undecidable instances" for this example
| (or is there anything undecidable about it?).
|
| 2. the coverage condition only refers to the instance head. this
| excludes programs such as good old record selection (which
| should terminate because it recurses over finite types, and is
| confluent -in fact deterministic- because of best-fit overlap
| resolution):
|
|    -- | field selection
|    infixl #?
|
|    class Select label val rec | label rec -> val where
|      (#?) :: rec -> label -> val
|
|    instance Select label val ((label,val),r) where
|      ((_,val),_) #? label = val
|
|    instance Select label val r => Select label val (l,r) where
|      (_,r) #? label = r #? label
|
| now, it is true that in the second instance declaration, "val" is
| not covered in {label,(l,r)}. however, it is covered in the recursive
| call, subject to the very same FD, if that recursive call complies
| with the (recursive) coverage criterion. in fact, for this particular
| task, that is the only place where it could be covered.
|
| would it be terribly difficult to take such indirect coverage (via
| the instance constraints) into account? there'd be no search
| involved (the usual argument against looking at the constraints),
| and it seems strange to exclude such straightforward "consume
| a type" recursions, doesn't it?
|
| cheers,
| claus

I urge you to read our paper "Understanding functional dependencies via Constraint Handling Rules", which you can find here http://research.microsoft.com/%7Esimonpj/papers/fd%2Dchr/.
Simon,

I had already read that paper briefly, but had moved on to scan some of the other publications in that area (ats, ..), because I was unhappy with its limitations, omissions, and what I perceived as overly quick conclusions. I have now returned to it, and although I still haven't read it in full, I see that my misgivings were mostly the result of impatience - it may not be the final word, but, together with:

   P. J. Stuckey and M. Sulzmann. A theory of overloading.
   ACM Transactions on Programming Languages and Systems (TOPLAS), 27(6):1-54, 2005.

it does provide a good, and necessary start, and a useful basis for discussions.

[may I suggest a reference section on the main Haskell' page, listing these and other important inputs to difficult Haskell' issues?]
It will tell you more than you want to know about why relaxing apparently-conservative rules is entirely non-trivial. It's one of those areas in which it is easy to suggest a plausible-sounding alternative, but much harder to either prove or disprove whether the alternative is a sound one.
what I've read so far tells me less than I want to know, exactly because I think more relaxations are necessary, but are left for future work. also, while it is true that plausible-sounding variations may lead to surprising results, it is unfortunately also true that the first explanation of such surprises isn't always the best one.

case in point: you replied to my suggestion to relax the coverage condition to a recursive coverage condition, and the paper you refer to contains an example which seems to demonstrate that this wouldn't be safe (example 6):

   class Mul a b c | a b -> c where ..
   instance Mul Int Int Int where ..
   instance Mul a b c => Mul a (Vec b) (Vec c) where ..

this looks terminating, because it recurses down a finite type in its second parameter, but it violates the coverage condition in the second instance, where c is not in {a,b}. as I've argued in my message, I'd like to think of recursive coverage here, because c is covered via the FD in the recursive call.

now, the paper presents an example expression that leads to the following constraint, with a non-terminating inference:

   Mul a (Vec b) b --{b=Vec c}--> Mul a (Vec c) c

and the paper claims that this is a direct consequence of the violation of the coverage condition!

so, one of us is wrong: either my "plausibility" argument for recursive coverage, or the paper's claim that recursive coverage (or, for that matter, any relaxation of coverage that would permit that instance declaration) leads to non-termination.

but note that the paper says "direct consequence", not "necessary consequence", and whether or not my informal argument can be proven, if you just read it side by side with the non-terminating inference in question, you'll see immediately that the paper is jumping to conclusions here (or at least leading the reader to such jumps..):

- Mul recurses down a type in its second parameter
- types in Haskell are finite
- there is a non-terminating Mul inference

the problem is that we have somehow conjured up an infinite type for Mul to recurse on without end! normally, such infinite types are ruled out by occurs-checks (unless you are working with Prolog III;-), so someone forgot to do that here. why? where? how?

well, there is that functional dependency that says that parameters a and b are inputs and parameter c is an output of Mul, and there is that initial constraint that says that b contains c! we have managed to write a (non-productive) cyclic program at the type level, and while each recursive step strips one layer off its second input, it also refines it to include an extra layer, via the output and the cycle!

imho, the instance is okay, but the constraint should be rejected, because if we write the FD as a function call with result (c = FD-Mul a b), then the constraint asks for:

   b = FD-Mul a (Vec b)

which should trigger a (generalised, conservative) occurs-check error (pending more refined tests).

for the purposes of termination in this kind of example, the non-recursive coverage condition appears to be an (overly conservative) approximation. in fact, even the generalised occurs-check may be overly conservative (a functional dependency is not a type constructor, so the result of FD-C a b may not even include both a and b).

the paper does discuss this as the (refined) weak coverage condition, with stronger constraints for confluence (fulfilled here), and under assumption of other termination proofs (theorem 2).
unfortunately, it still assumes that example 6 is non-terminating, whereas I think that generalising the standard occurs-check to account for FDs gives a simple way to dispel that myth.

so, I'll stick to my suggestion, on the grounds that - barring cyclic type-level programs - there are no expressions of infinite types in Haskell. but I'm open to arguments!-)

cheers,
claus

| 2. the coverage condition only refers to the instance head. this
| excludes programs such as good old record selection (which
| should terminate because it recurses over finite types, and is
| confluent -in fact deterministic- because of best-fit overlap
| resolution):
|
|    -- | field selection
|    infixl #?
|
|    class Select label val rec | label rec -> val where
|      (#?) :: rec -> label -> val
|
|    instance Select label val ((label,val),r) where
|      ((_,val),_) #? label = val
|
|    instance Select label val r => Select label val (l,r) where
|      (_,r) #? label = r #? label
|
| now, it is true that in the second instance declaration, "val" is
| not covered in {label,(l,r)}. however, it is covered in the recursive
| call, subject to the very same FD, if that recursive call complies
| with the (recursive) coverage criterion. in fact, for this particular
| task, that is the only place where it could be covered.
|
| would it be terribly difficult to take such indirect coverage (via
| the instance constraints) into account? there'd be no search
| involved (the usual argument against looking at the constraints),
| and it seems strange to exclude such straightforward "consume
| a type" recursions, doesn't it?
|
| cheers,
| claus
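For concreteness, a small usage sketch of the Select class quoted above; the labels Name and Age, the example record, and the flags are assumptions, and whether a given compiler of the day accepts it without extra flags (overlapping and undecidable instances were needed) is precisely the point under discussion:

{-# OPTIONS -fglasgow-exts -fallow-overlapping-instances -fallow-undecidable-instances #-}
-- labels are encoded as empty-ish data types, records as nested pairs
data Name = Name
data Age  = Age

someRecord :: ((Name,String),((Age,Int),()))
someRecord = ((Name,"someone"),((Age,42),()))

someName :: String
someName = someRecord #? Name   -- matches the first, more specific instance

someAge :: Int
someAge = someRecord #? Age     -- recurses via the second instance, then selects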

On Thu, Mar 02, 2006 at 03:53:45AM -0000, Claus Reinke wrote:
- Mul recurses down a type in its second parameter
- types in Haskell are finite
- there is a non-terminating Mul inference
the problem is that we have somehow conjured up an infinite type for Mul to recurse on without end! Normally, such infinite types are ruled out by occurs-checks (unless you are working with Prolog III;-), so someone forgot to do that here. why? where? how?
Polymorphic recursion allows the construction of infinite types, if I understand what you mean. If you are clever (or unlucky) you can get jhc's middle end to go into an infinite loop by using them.
-- an infinite binary tree
data Bin a = Bin a (Bin (a,a))
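A small sketch of the point (the size function below is an assumed example, not part of the original message): even consuming such a nested type needs polymorphic recursion, because each recursive call is at a larger element type:

-- the recursive call is at type Bin (a,a), so the type signature is
-- required: plain Hindley-Milner inference would try to unify a with (a,a)
-- and fail with an occurs-check. (size also never terminates at run time,
-- since Bin has no base case, but it typechecks.)
size :: Bin a -> Int
size (Bin _ rest) = 1 + size rest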
John -- John Meacham - ⑆repetae.net⑆john⑈

it has been a long time since I looked into nested types, so I may well be wrong, but: nested types allow for structures containing infinitely many types, but you can never write down such an infinity, unless by a finite representation or an infinite process.

when you want to work with such nested types, you either use an infinity of workers (finitely represented via polymorphic recursion) or you write a generator for a family of workers (Blampied/Jones, Hinze). so you have no infinite types to start with, but you may have infinitary producers of types (and as the Mul example shows, that may not only happen by adding stuff on the outside, but also by refining variables on the inside). and if you connect one of those to an otherwise terminating consumer, you might still end up with a non-terminating inference.

my argument was - the above being the case - that we should not blame the consumers because they may be connected with bad producers, but _only_ those bad producers themselves. in the Mul case, Mul is fine as a consumer, but using cyclic programming to turn it into its own producer is not.

cheers,
claus

John Meacham wrote:
On Thu, Mar 02, 2006 at 03:53:45AM -0000, Claus Reinke wrote:
the problem is that we have somehow conjured up an infinite type for Mul to recurse on without end! Normally, such infinite types are ruled out by occurs-checks (unless you are working with Prolog III;-), so someone forgot to do that here. why? where? how?
Polymorphic recursion allows the construction of infinite types if I understand what you mean.
No, that's different. An infinite type can't be written in (legal) Haskell. It's something like

   type T = [T]

-- Ben
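A minimal sketch of the distinction (MkT and unT are made-up names): the synonym is rejected as an infinite type, while tying the knot through a constructor is legal and keeps the type itself finite:

-- rejected: a cycle in a type synonym, i.e. an infinite type
-- type T = [T]

-- legal: the recursion goes through a constructor, so T itself is finite
newtype T = MkT [T]

unT :: T -> [T]
unT (MkT ts) = ts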

On 3/7/06, Ben Rudiak-Gould wrote:
{-# OPTIONS -fglasgow-exts #-}
data Zero = Zero
data Succ n = Succ n
-- Error: infinite type!
-- x = Succ x
data Inf = forall n . Inf n
y = Inf (Succ y)
Jim

Coverage is a sufficient condition to maintain termination. Though, Coverage is also sufficient to maintain confluence. Hence, we propose Weak Coverage, which we know maintains confluence under some conditions.

The main focus of the FD paper is how to restore confluence, which is important for complete inference. We don't say much about termination (once Coverage is broken). Therefore, we either assume CHRs are terminating or check whether CHRs are terminating for a particular goal (e.g. by imposing a limit on the number of CHR steps). Hence, the inference methods in the FD paper are complete but semi-decidable.

As you point out, Mul a (Vec b) b should be rejected, because there can never be any ground instance, unless we allow for "infinite" types. Here's a more complicated example:

   Mul a (Vec b) c, Mul a (Vec c) b

The cycle involves two constraints. Another one:

   Mul a (Vec b) c, Mul a (Vec c) d, Foo d b

where rule Foo d b <==> d=b

What I'm trying to demonstrate here is that only during inference (i.e. applying CHRs) may we come across cycles. Hence, the quest is to come up with *dynamic* termination methods. I think that's what you are interested in? FYI, in a technical report version of the FD paper, we already address such issues, briefly.

I believe there's a huge design space for dynamic termination methods. For example, in some email exchanges between Stefan Wehr, Oleg and myself on Haskell Cafe, we observed that some "simple" FD/AT programs were non-terminating. Non-termination wasn't due to "infinite types". No, types grew "bigger" due to improvement conditions which fired in between instance reduction steps. As in the case of the Mul example, the instances themselves are terminating. I suggested some conditions which guarantee *static* termination. Though, this may rule out some reasonable programs (recall: the Mul example is also ruled out once we enforce Coverage). Hence, we need yet another kind of dynamic termination check.

What's in there for Haskell'? The confluence story (i.e. complete inference) is fairly settled. Though, termination is still a very subtle problem, regardless of whether we use FDs, ATs or whatever type class improvement mechanism. In fact, I don't think there will ever be a good answer to the termination problem unless we consider specific cases.

Martin

Claus Reinke writes:
I urge you to read our paper "Understanding functional dependencies via Constraint Handling Rules", which you can find here http://research.microsoft.com/%7Esimonpj/papers/fd%2Dchr/.
Simon,
I had already read that paper briefly, but had moved on to scan some of the other publications in that area (ats, ..), because I was unhappy with its limitations, omissions, and what I perceived as overly quick conclusions. I have now returned to it, and although I still haven't read it in full, I see that my misgivings were mostly the result of impatience - it may not be the final word, but, together with:
P. J. Stuckey and M. Sulzmann. A theory of overloading. ACM Transactions on Programming Languages and Systems (TOPLAS), 27(6):1-54, 2005.
it does provide a good, and necessary start, and a useful basis for discussions.
[may I suggest a reference section on the main Haskell' page, listing these and other important inputs to difficult Haskell' issues?]
It will tell you more than you want to know about why relaxing apparently-conservative rules is entirely non-trivial. It's one of those areas in which it is easy to suggest a plausible-sounding alternative, but much harder to either prove or disprove whether the alternative is a sound one.
what I've read so far tells me less than I want to know, exactly because I think more relaxations are necessary, but are left for future work. also, while it is true that plausible-sounding variations may lead to surprising results, it is unfortunately also true that the first explanation of such surprises isn't always the best one.
case in point: you replied to my suggestion to relax the coverage condition to a recursive coverage condition, and the paper you refer to contains an example which seems to demonstrate that this wouldn't be safe (example 6):
class Mul a b c | a b -> c where ..
instance Mul Int Int Int where ..
instance Mul a b c => Mul a (Vec b) (Vec c) where ..
this looks terminating, because it recurses down a finite type in its second parameter, but it violates the coverage condition in the second instance, where c is not in {a,b}. as I've argued in my message, I'd like to think of recursive coverage here, because c is covered via the FD in the recursive call.
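For concreteness, here is a self-contained sketch of example 6; the Vec wrapper, the method name mul, the method bodies and the flags are assumptions, since the paper leaves these details open. The commented-out f is a hypothetical expression whose inferred constraint is exactly the looping Mul a (Vec b) b discussed next:

{-# OPTIONS -fglasgow-exts -fallow-undecidable-instances #-}
newtype Vec a = Vec [a]

class Mul a b c | a b -> c where
  mul :: a -> b -> c

instance Mul Int Int Int where
  mul = (*)

-- violates the (non-recursive) coverage condition: c does not occur in
-- the head {a, Vec b}; it is only covered via the FD in the context
instance Mul a b c => Mul a (Vec b) (Vec c) where
  mul x (Vec ys) = Vec (map (mul x) ys)

-- f x y = mul x (Vec [y]) `asTypeOf` y
--   ^ a hypothetical expression giving rise to the constraint
--     Mul a (Vec b) b: the result of mul is forced to have the same type
--     as the element fed into the Vec argument. with the instance above
--     in scope, improving and reducing that constraint never terminates
--     (a compiler would loop or report exceeding its context-reduction stack)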
now, the paper presents an example expression that leads to the following constraint, with a non-terminating inference:
Mul a (Vec b) b --{b=Vec c}--> Mul a (Vec c) c
and the paper claims that this is a direct consequence of the violation of the coverage condition!
so, one of us is wrong, either my "plausibility" argument for recursive coverage, or the paper's claim that recursive coverage (or, for that matter, any relaxation of coverage that would permit that instance declaration) leads to non-termination.
but note that the paper says "direct consequence", not "necessary consequence", and whether or not my informal argument can be proven, if you just read it side by side with the non-terminating inference in question, you'll see immediately that the paper is jumping to conclusions here (or at least leading the reader to such jumps..):
- Mul recurses down a type in its second parameter
- types in Haskell are finite
- there is a non-terminating Mul inference
the problem is that we have somehow conjured up an infinite type for Mul to recurse on without end! Normally, such infinite types are ruled out by occurs-checks (unless you are working with Prolog III;-), so someone forgot to do that here. why? where? how?
well, there is that functional dependency that says that parameters a and b are inputs and parameter c is an output of Mul, and there is that initial constraint that says that b contains c! we have managed to write a (non-productive) cyclic program at the type level, and while each recursive step strips one layer off its second input, it also refines it to include an extra layer, via the output and the cycle!
imho, the instance is okay, but the constraint should be rejected because if we write the FD as a function call with result (c=FD-Mul a b), then the constraint asks for:
b = FD-Mul a (Vec b)
which should trigger a (generalised, conservative) occurs-check error (pending more refined tests).
for the purposes of termination in this kind of example, the non-recursive coverage condition appears to be an (overly conservative) approximation. in fact, even the generalised occurs-check may be overly conservative (a functional dependency is not a type constructor, so the result of FD-C a b may not even include both a and b).
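To illustrate that last point with a made-up class (not from the paper): the FD's result need not mention the domain parameter at all, so an occurs-check against FD-Const a would be vacuous here:

-- a hypothetical class whose FD result is independent of its argument:
-- FD-Const a = Int for every a, so there is nothing to occurs-check against
-- (needs flexible instances because of the bare variable in the head)
class Const a b | a -> b
instance Const a Int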
the paper does discuss this as the (refined) weak coverage condition, with stronger constraints for confluence (fulfilled here), and under assumption of other termination proofs (theorem 2). unfortunately, it still assumes that example 6 is non-terminating, whereas I think that generalising the standard occurs-check to account for FDs gives a simple way to dispel that myth.
so, I'll stick to my suggestion, on the grounds that -barring cyclic type-level programs- there are no expressions of infinite types in Haskell. but I'm open to arguments!-)
cheers, claus
| 2. the coverage condition only refers to the instance head. this
| excludes programs such as good old record selection (which
| should terminate because it recurses over finite types, and is
| confluent -in fact deterministic- because of best-fit overlap
| resolution):
|
|    -- | field selection
|    infixl #?
|
|    class Select label val rec | label rec -> val where
|      (#?) :: rec -> label -> val
|
|    instance Select label val ((label,val),r) where
|      ((_,val),_) #? label = val
|
|    instance Select label val r => Select label val (l,r) where
|      (_,r) #? label = r #? label
|
| now, it is true that in the second instance declaration, "val" is
| not covered in {label,(l,r)}. however, it is covered in the recursive
| call, subject to the very same FD, if that recursive call complies
| with the (recursive) coverage criterion. in fact, for this particular
| task, that is the only place where it could be covered.
|
| would it be terribly difficult to take such indirect coverage (via
| the instance constraints) into account? there'd be no search
| involved (the usual argument against looking at the constraints),
| and it seems strange to exclude such straightforward "consume
| a type" recursions, doesn't it?
|
| cheers,
| claus

The main focus of the FD paper is how to restore confluence which is important for complete inference.
That is indeed my second major problem with the paper!-) There are examples in the paper that show how some conditions are sufficient to ensure confluence, but they hardly ever seem to be necessary for the original program, only for the current mapping.

The way I interpret FDs implies that they introduce a systematic space leak into the inference process: if any stage in the inference uses *any* constraint with a class which is subject to an FD, that results in an additional bit of information about that class' FD. and we *cannot* throw that bit of information away unless we know that we will never need to refer to it in the remainder of the inference chain.

As I mentioned before, capturing this is the one point where I can imagine an advantage of using CHRs, by putting the "memo-table" for the type function implied by an FD into the global constraint store. So I was rather surprised to see the way FDs are currently mapped into CHRs. And indeed, as far as I noticed, *nearly all* the confluence problems are due to that mapping, not due to FDs.

In a way, it is nice that losing confluence shows up cases where information about an FD may be thrown away while there are still opportunities to use it, but since confluence is hard to check without termination, that doesn't really help in practice unless we take it as an additional reason to want guaranteed termination..

So, why not get rid of that problem and clearly model the memo effect of FDs as an aspect separable from the instance constraint?

   class C a b | a -> b
   instance Ctxt => C t1 t2

in the current mapping, the class FD CHR needs two constraints for C to generate an equation for the two b parameters, but the instance simplification rules may take constraints away before that class FD CHR was applied, resulting in the danger of non-confluence.

the instance FD CHRs, instead, need only a single constraint to generate an equation, because they compare that constraint to one of the instances. still, the constraint might be simplified away before its instance FD CHR was applied. this is slightly less risky, because any new constraint that might profit from the information we threw away is itself subject to the instance FD CHR. but it still results in the danger of non-confluence, corresponding to ignoring some information about the class' FD contributed by independent constraints "communicating" via the same instance FD CHR!

the paper suggests not generating the class FD CHR in the first place, or restricting the order in which rules may be applied. but why not encode the memo aspect of FDs directly in the constraint store? we'd have to represent _separately_ the constraints that need to be proven, and the memo aspect of those constraints we _have used_ at any point in the deduction. for the class C above, something roughly like this (tweak as needed;):

   (instance FD) rule C t1 t2 <==> Ctxt, MEMO-C t1 t2
   (class FD)    rule C t1 t2, MEMO-C t1 t3 ==> t2==t3

note that the (instance FD) CHR could also be called the (constraint FD) CHR, because it triggers whenever any constraint is simplified using a specific instance declaration in the inference. unlike the current instance improvement CHR, which hard-codes part of the information gathered from the instance declaration, this mapping separates the generation and use of FD information, but ties the generation to simplification.
unlike ordinary constraints, MEMO constraints no longer imply proof obligations in terms of instance inference, they just record a type dependency that has been used in a simplification step. if my understanding of propagation rules in CHR is correct, applications of the class FD CHR will preserve that recorded information, so there will always be a second constraint to compare with when we want to use FDs to generate equations.

I'm not entirely sure how consistency of constraints is checked in CHR - we may need to add a rule that exposes inconsistent MEMO constraints:

   (inconsistent FD) rule MEMO-C t1 t2, MEMO-C t1 t3 ==> t2/=t3 | YELL!-)

am I on the right track here?

cheers,
claus

On Thu, Mar 02, 2006 at 12:03:59PM -0000, Claus Reinke wrote:
The way I interpret FDs implies that they introduce a systematic space leak into the inference process: if any stage in the inference uses *any* constraint with a class which is subject to an FD, that results in an additional bit of information about that class' FD. and we *cannot* throw that bit of information away unless we know that we will never need to refer to it in the remainder of the inference chain.
But with the current system you can throw it away, because it's implied by the FDs on the classes in the context. A system where that was not the case would be significantly more complex than the current FDs, which I think disqualifies it for Haskell'. The original FDs have the property that the set of valid ground predicates is determined by the instance declarations alone, and is unaffected by the addition of FDs. The FDs merely imply that some predicates can never be valid, even if other instances are added later, and thus justify type improvement and reduced ambiguity. I think this separation makes them much easier to think about, and we should be cautious about losing it.
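A small illustration of the separation Ross describes (F is a made-up class, not from the thread): the instances alone fix which ground predicates are valid, and the FD only rules some out, which is what licenses improvement:

class F a b | a -> b
instance F Int Bool
-- the valid ground predicates are fixed by the instance alone: F Int Bool.
-- the FD adds no new valid predicates; it only promises that, say,
-- F Int Char can never become valid (even if more instances are added
-- later), which is what justifies improving a constraint F Int b to b = Bool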

On Thu, Mar 02, 2006 at 12:03:59PM -0000, Claus Reinke wrote:
The way I interpret FDs implies that they introduce a systematic space leak into the inference process: if any stage in the inference uses *any* constraint with a class which is subject to an FD, that results in an additional bit of information about that class' FD. and we *cannot* throw that bit of information away unless we know that we will never need to refer to it in the remainder of the inference chain.
But with the current system you can throw it away, because it's implied by the FDs on the classes in the context. A system where that was not the case would be significantly more complex than the current FDs, which I think disqualifies it for Haskell'.
I'm not sure what FD system you are referring to here? in the CHR representation of FDs (which seems to be the only candidate for Haskell'), you _cannot_ throw this information away, and the potential of doing so accounts for the majority of confluence violations in the paper "understanding FDs via CHRs": if you simplify a constraint before using it for propagation, you end up missing some improvements and you may not be able to rejoin the diverging deductions.

that's what I was referring to: there shouldn't be any confluence problems with the original code (type classes with FDs), but there are confluence problems with the CHRs used to represent that code!

the worst example of that is example 14 (fragment of a type-directed evaluator), for which the paper's CHR translation is _not_ confluent, and for which the paper suggests that adding coverage would be sufficient to guarantee confluence, but coverage is violated.. (cf also the end of section 6, where a variant of example 14 without the class-FD-CHR is considered to avoid non-confluence, but the result doesn't support improvement as well as expected).

the two diverging deductions given on p21 correspond to doing both simplifications first, thus disabling propagation (throwing away the FD-related information), or first doing propagation (accounting for the FD-related info), then doing simplifications.

in the system I suggested, confluence would not depend on coverage, because the two roles of constraints (inference and FDs) would be treated separately, and the FD-related info would always be available for propagation (implied by the class FDs), even after doing the simplification steps first. so the translation I'd like to see would be _simpler_ (having fewer problems) than the current candidate.

the translation into CHRs would be something like:

   class C a b | a -> b
   instance ctxt => C t1 t2

   -->

   C a b <=> infer_C a b, memo_C a b.     (two roles)
   infer_C a b <=> ctxt.                  (instance inference)
   memo_C a b1, memo_C a b2 => b1=b2.     (class FD)

so there'd be one preparatory inference rule to split all constraints occurring in the process into their two roles, one inference rule for each instance declaration, and one propagation rule for each FD. the CHR succeeds if there are no non-memo constraints left.

the Chameleon implementation appears to be in a transitory state, but if we forgo Haskell syntax, there are lots of other free CHR implementations around, especially with Prolog systems:

   http://www.cs.kuleuven.ac.be/~dtai/projects/CHR/chr-impl.html

I append a translation of example 14 into CHR that works, eg, with SWI Prolog. it can't quite succeed for the example call from the paper

   eval(Env,expabs(X,Exp),T1), eval(Env,expabs(X,Exp),T2).

because the code fragment doesn't contain all instances; so one infer-constraint remains in the final store, but it should still be confluent, no matter whether you treat inference or FDs first.

cheers,
claus

:- module(tc,[eval/3]).
:- use_module(library(chr)).

:- chr_constraint eval/3, infer_eval/3, memo_eval/3.

/* accounting for the FD */
memo_eval(Env,Exp,T1), memo_eval(Env,Exp,T2) ==> T1=T2.

/* separating proof and FD aspects */
eval(Env,Exp,T) <=> infer_eval(Env,Exp,T), memo_eval(Env,Exp,T).

/* accounting for the instance */
infer_eval(Env,expabs(X,Exp),V) <=> V=to(V1,V2), eval(cons((X,V1),Env),Exp,V2).

/* removing duplicate constraints */
infer_eval(Env,Exp,T) \ infer_eval(Env,Exp,T) <=> true.
memo_eval(Env,Exp,T) \ memo_eval(Env,Exp,T) <=> true.

the translation into CHRs would be something like:
class C a b | a -> b instance ctxt => C t1 t2
-->
C a b <=> infer_C a b, memo_C a b. (two roles)
infer_C a b <=> ctxt. (instance inference)
memo_C a b1, memo_C a b2 => b1=b2. (class FD)
so there'd be one preparatory inference rule to split all constraints occurring in the process into their two roles, one inference rule for each instance declaration, and one propagation rule for each FD. the CHR succeeds if there are no non-memo constraints left.
to avoid misunderstandings: this is not yet complete. it is sufficient for the example from the paper, because the FD-based improvements there are triggered by the two goal constraints, but the translation also needs to account for FD-related information taken from the instance declarations. claus

.. the translation also needs to account for FD-related information taken from the instance declarations.
actually, since we've already accounted for the two roles of inference and FD by splitting the constraints, this might be as simple as merging the instance and instance-improvement CHRs in the original translation (variation A). alternatively, we can make sure that instances generate memo constraints as well (variation B).

this email is somewhat long, but might serve as a synchronisation point for those who feel a bit lost in the flood of emails and information fragments (at least as far as the new subject line is concerned:-). as some of you seem to be less than enthusiastic about my suggested modifications, I'd like to give you something concrete to attack,-) so here goes the first attempt (cf Fig. 2, p13, in the FD-CHR paper):

============= Tc2CHR alternative, with separated roles

class C => TC a1..an | fd1,..,fdm     where fdi is of the form: ai1..aik -> ai0

  -> TC a b <=> infer_TC a b, memo_TC a b.                   (two roles)

  -> memo_TC a1..an, memo_TC th(b1)..th(bn) => ai0=bi0.      (fdi)
       where th(bij) | j>0 = aij
             th(bl)  | not (exists j>0. l==ij) = bl

============= Variation A (merge instance inference/propagation):

instance C => TC t1..tn

  -> infer_TC th(b1)..th(bn) <=> c1,..,cn, C.      (instance inference/improvement)
       where th(bl) | exists i. l==i0 = bl
             th(bl) | otherwise       = tl
             cl     | exists i. l==i0 = bl=tl
             cl     | otherwise       = true

  -> [only needed for non-full FDs?]
     memo_TC th(b1)..th(bn) ==> c1,..,cn.          (non-full FDi instance improvement)
       where th(bl) | exists i,j>0. l==ij = tl
             th(bl) | otherwise           = bl
             cl     | exists i. l==i0     = (bl=tl)
             cl     | otherwise           = true

============= Variation B (separately record instance FD info):

instance C => TC t1..tn

  -> infer_TC t1..tn <=> C.                        (instance inference)

  -> fact ==> memo_TC th(b1)..th(bn).              (FDi instance info)
       where th(bl) | exists j. l==ij = tl
             th(bl) | otherwise       = bl

=============================================

in case I messed up the formalization, here are the ideas again:

- each constraint arising during instance inference or given as a goal needs to be proven (reduced to true). we account for that with the infer_X constraints and their rules.

- for each constraint used in the instance inference process, each class FD implies a functional dependency between the concrete types in that constraint, which has to be consistent across all constraints. we account for that with the memo_X constraints (gathering FD info) and their rules (using FD info).

- the CHR succeeds if there are no infer constraints left.

Variation A:

- for each instance declaration, each class FDi implies that the type in its range (ti0) is a function of the types in its domain (ti1..tik). we account for that by replacing the range types in the head of the instance inference CHR with variables, which are bound by unification in the body of the CHR.

- for non-full FDs, there may be instances with non-variable types in non-FD positions, in which case the combined inference/improvement rules won't fully express the FDs. we account for these cases by a separate instance improvement rule.
Variation B:

- the instance CHRs only account for instance inference

- for each instance/FD, we need to generate extra FD-related information (for technical reasons, those propagation rules are triggered by the trivial constraint fact, rather than true, as one might expect; so queries have to be "fact,query", see examples at the end)

the main differences to the original translation are:

- separating constraint roles should aid confluence, as the memory of constraints remains even after their inference has started (this should also make it easier to relax constraints later, and to focus on their effect on termination rather than confluence)

- separating constraint roles allows merging of instance inference and improvement rules, further avoiding conflicts/orderings between these two stages (variation A; this should make it easier to discuss extensions later, such as interaction of FDs with overlap resolution, where both features have to be taken into account for instance selection)

I'd like to get variation A working, but in case merging instance inference and propagation should turn out to be impossible for some reason, I include the less adventurous variation B, which still reflects the splitting of roles/improved confluence aspect.

for those who like to experiment, to get a better feeling for the translation: by-hand translation soon gets tedious, so I attach a TH-based naive implementation of variation B. it is naive, so expect problems, but it is also simple, so feel free to suggest patches;-) (*)

it is quite likely that I missed something, but I hope this is concrete enough so that you can express your concerns concretely as well (e.g., Ross: which properties do you fear would be lost by this?).

cheers,
claus

(*) load MainTc2CHR into ghci -fglasgow-exts, run: toFile; this creates a file i.chr, representing the translation of the declarations in Tc2CHR.i . then start up SWI Prolog, load the i.chr module, and explore the CHRs:

--------------------
i = [d|
  class C a b | a -> b
  instance C Int Bool

  class D a b c | a -> c where d :: a -> b -> c
  instance C a b => D a b c where d a b = undefined
  instance D Int Bool Bool  where d a b = undefined
  |]
--------------------

2 ?- consult('i.chr').
..

3 ?- fact, d(int,int,bool).
No

4 ?- fact, d(int,bool,bool).
fact
memo_c(int, bool)
memo_d(int, bool, bool)
memo_d(int, _G44169, bool)
memo_d(_G43789, _G43790, _G43791)
   <-- only fact and memo_ constraints left, indicating success
Yes

5 ?- fact, d(A,B,C).
fact
infer_c(_G43323, _G43324)     <---- unresolved infer constraint
memo_c(_G43323, _G43324)
memo_c(int, bool)
memo_d(_G43323, _G43324, _G43325)
memo_d(int, _G44166, bool)
memo_d(_G43786, _G43787, _G43788)
A = _G43323{i = ...}
B = _G43324{i = ...}
C = _G43325{i = ...} ;
   <--- conditional success, corresponding to ghc's error messages with
        suggested fix: "add instance for C _G43323 _G43324"
No

6 ?- fact, d(int,B,C).
fact
memo_c(int, bool)
memo_d(int, bool, bool)
memo_d(int, _G44163, bool)
memo_d(_G43783, _G43784, _G43785)
B = bool    <-- determined by superclass fd
C = bool ;  <-- determined by class fd
No

the first patch I found myself:-) I forgot to account for superclasses:

diff -rN old-chr/tc2chr.txt new-chr/tc2chr.txt
24c24
< -> TC a b <=> infer_TC a b, memo_TC a b.    (two roles)
---
> -> TC a b <=> infer_TC a b, memo_TC a b, C. (two roles +superclasses)
186c186
< B = bool <-- determined by superclass fd
---
> B = bool <-- determined by instance context fd
---------------------------------------------------

this was not entirely accidental, though. I'd been thinking about that old reverse-the-superclass-arrow issue, and I now find to my surprise that the authors of the FD-CHR papers agree with me on this issue, even though they claim not to!-)

recall the ancient folklore that a superclass constraint

   class C => TC

ought to be interpreted as the reverse implication between the type predicates (and so the superclass arrow ought to be reversed):

   TC |- C

we rehashed that argument only last month on this very list ("The worst piece of syntax in Haskell" and "superclass implications"), and, once again, I felt fairly lonely when I said that the implication could indeed be interpreted as an implication (either by considering two phases of checks, as in Haskell at the moment, or by accepting more programs as valid).

now, the FD-CHR authors appear to be firmly in the reverse-the-arrow camp:

   1) For each class C => TC t we generate a propagation CHR
      rule TC t ==> C
      Ex: class Eq a => Ord a translates to rule Ord a ==> Eq a
   ...
   BTW, 1) shows that the superclass arrow should indeed be the other way around ..

   http://www.haskell.org//pipermail/haskell-prime/2006-February/000798.html

however, while fixing the superclass omission in my alternative translation, I finally remembered that arrows are not arrows! in qualified types, we're talking about predicate entailment, i.e., when we have the class predicate, we can infer the superclass predicate. in CHR, we're talking about adding and removing _proof obligations_ to/from the constraint store, i.e., that propagation rule says: when we need to prove the class constraint, we also need to prove the superclass constraint.

in other words, the CHR translations treat the superclass constraints as a pre-condition of the class constraints, just as the superclass arrow in the Haskell syntax suggests, not as a consequence! how can that be right, you might ask? the way the class CHR works,

(a) it can never be used to infer the validity of the superclass constraints from the validity of the class constraint
(b) it generates a proof obligation for the superclass constraints whenever the class constraint is considered

the point to note here is the conditions that Haskell places on the validity of instance declarations. these imply that (a) is never needed, because the system has already checked that the superclass constraints can be inferred without adding that extra implication, which also means that (b) is always going to succeed if the class constraint can be discharged.

that alone is surprising enough - it seems to indicate that the reverse-the-arrow campaign was a red herring!-) why then, is that class CHR needed in the original translation to CHR, and why do I have to add something similar to my alternative translation? we know that the proof obligation has already been discharged by the validity check, but we need to connect the class constraint to the superclass constraints in order to inherit any FD information.

in fact, it is noted in the earlier "theory of overloading" paper that without those validity checks, the original translation into CHR would not give a confluent rule system, as class CHR and instance CHR form a critical pair (using the instance CHR before the class CHR disables the latter, preventing use of superclass FD info).

cheers,
claus
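A small, hypothetical illustration of the validity check on instance declarations referred to above (Ord' is a made-up name, to avoid the Prelude's Ord):

class Eq a => Ord' a where cmp :: a -> a -> Ordering

data T = T

instance Eq T where T == T = True
instance Ord' T where cmp _ _ = EQ
-- without the Eq T instance above, the Ord' T declaration is rejected:
-- the superclass context acts as a pre-condition checked when the
-- subclass instance is declared, which is why the propagation rule
-- (Ord' a ==> Eq a) can always be discharged for accepted programs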

a second oversight, in variation B: CHR rules are selected by matching, not by unification (which is quite essential to modelling the way type class inference works). this means that the idea of generating memo_ constraints for the instance fdis and relying on the class fdi rules to use that information is not going to work directly. however, we can look at the intended composition of those fdi instance rules with the fdi class rules, and specialize the latter when applied to the rhs of the former (assuming unification while doing so).

!! the nice thing about this is that variation B now looks very much like the original translation, differing only in the splitting of roles, without any other tricks merged in. that means it should now be more obvious why variation B is a modification of the original translation with better confluence properties.

all confluence problems in the FD-CHR paper, as far as they were not due to instances inconsistent with the FDs, seem to be due to conflicts between improvement and inference rules. we restore confluence by splitting these two constraint roles, letting inference and improvements work on constraints in separate roles, thus removing the conflicts.

============= Tc2CHR alternative, with separated roles

class C => TC a1..an | fd1,..,fdm     where fdi is of the form: ai1..aik -> ai0

  -> TC a b <=> infer_TC a b, memo_TC a b, C.               (two roles +superclasses)

  -> memo_TC a1..an, memo_TC th(b1)..th(bn) => ai0=bi0.     (fdi)
       where th(bij) | j>0 = aij
             th(bl)  | not (exists j>0. l==ij) = bl

============= Variation B (separate instance inference/FD improvement):

instance C => TC t1..tn

  -> infer_TC t1..tn <=> C.                                 (instance inference)

  -> memo_TC th(b1)..th(bn) => ti0=bi0.                     (fdi instance improvement)
       where th(bij) | j>0 = tij
             th(bl)  | not (exists j>0. l==ij) = bl

=============================================

in particular, the new CHRs for examples 14 and 18 (coverage violations, hence not variable-restricted, hence the confluence proof doesn't apply) should now be confluent, because even after simplification, we can still use the class FDs for improvement. here are the relevant rules for example 14:

/* one constraint, two roles + superclasses */
eval(Env,Exp,T) <=> infer_eval(Env,Exp,T), memo_eval(Env,Exp,T), true.

/* functional dependencies */
memo_eval(Env,Exp,T1), memo_eval(Env,Exp,T2) ==> T1=T2.

/* instance inference: */
infer_eval(Env,expAbs(X, Exp),to(V1, V2)) <=> eval(cons((X, V1), Env), Exp, V2).

/* instance improvements: */
memo_eval(Env_,Exp_,T_) ==> T_=to(V1, V2).

and the troublesome example constraints:

eval(Env,expAbs(X,Exp),T1), eval(Env,expAbs(X,Exp),T2).

->  infer_eval(Env,expAbs(X,Exp),T1), infer_eval(Env,expAbs(X,Exp),T2),
    memo_eval(Env,expAbs(X,Exp),T1), memo_eval(Env,expAbs(X,Exp),T2).

[ -> [class FD first]
    infer_eval(Env,expAbs(X,Exp),T2),
    memo_eval(Env,expAbs(X,Exp),T2),
    T1=T2.
| -> [instance improvement and simplification first]
    eval(cons((X,V11),Env),Exp,V12), eval(cons((X,V21),Env),Exp,V22),
    memo_eval(Env,expAbs(X,Exp),T1), memo_eval(Env,expAbs(X,Exp),T2),
    T1=to(V11,V12), T2=to(V21,V22).
]
-> [rejoin inferences]
    eval(cons((X,V21),Env),Exp,V22),
    memo_eval(Env,expAbs(X,Exp),T2),
    T1=T2, T2=to(V21,V22).
-> ..

cheers,
claus

ps I've only listed the updated variation B here, to limit confusion. if you want the updated code and full text, you should be able to use

   darcs get http://www.cs.kent.ac.uk/~cr3/chr/

[still talking to myself..?]
all confluence problems in the FD-CHR paper, as far as they were not due to instances inconsistent with the FDs, seem to be due to conflicts between improvement and inference rules. we restore confluence by splitting these two constraint roles, letting inference and improvements work on constraints in separate roles, thus removing the conflicts.
I should have mentioned that the improved confluence obtained by separating the dimensions of FD-based improvement and instance inference buys us a lot more than just permitting more valid programs (compared to the original, incomplete CHR):

- separating the two dimensions of inference and improvement leads to better confluence (implementations are no longer forced to iterate improvement before continuing inference; fewer conservative restrictions are needed in the static semantics of TC; more valid code can be accepted)

- better confluence guarantees that all improvement rules that apply will be run eventually, which means that the new CHR is self-checking wrt FD consistency! [if consistency is violated, there are at least two instances with different FD range types for the same FD domain types; that means there will be two instance improvement rules with the same lhs, but different equations on their rhs; if any constraint arises that would run into the FD inconsistency by using one of those improvement rules, the other will cause the derivation to fail]

we can see this in action by looking at the relevant example of the FD-CHR paper (last revised Feb 2006), section 5.1 Confluence, example 5:

   class Mul a b c | a b -> c
   instance Mul Int Float Float
   instance Mul Int Float Int

the old CHR for this example (which violates FD consistency) is not confluent, allowing derivation of both c=Float and c=Int for the constraint Mul Int Float c. the revised paper still claims that consistency is "intuitively necessary" to guarantee confluence (it also still claims that it isn't sufficient, referring to the example we dealt with in the previous email).

but if we apply the new Tc2CHR translation, we obtain a confluent CHR for the same example (there doesn't appear to be a way to switch off the consistency check in GHC, so I had to translate the two instances separately..):

mul(A,B,C) <=> infer_mul(A,B,C), memo_mul(A,B,C).

/* functional dependencies */
memo_mul(A,B,C1), memo_mul(A,B,C2) ==> C1=C2.

/* instance inference: */
infer_mul(int,float,float) <=> true.
infer_mul(int,float,int) <=> true.

/* instance improvements: */
memo_mul(int,float,C) ==> C=float.
memo_mul(int,float,C) ==> C=int.

now, if we consider the problematic constraint again, and its two initially diverging derivations, we see that the derivations can be rejoined, exposing the inconsistency:

mul(int,float,C)
<=> infer_mul(int,float,C), memo_mul(int,float,C)
[ ==> infer_mul(int,float,C), memo_mul(int,float,C), C=float
  <=> true, memo_mul(int,float,float), C=float
  ==> memo_mul(int,float,float), C=float, float=int
| ==> infer_mul(int,float,C), memo_mul(int,float,C), C=int
  <=> true, memo_mul(int,float,int), C=int
  ==> memo_mul(int,float,int), C=int, int=float
]
<=> fail

this dynamic safety does not mean that we should drop the consistency check in the static semantics completely! but whereas the old CHR translation _depends_ on the consistency check for safety, and is therefore stuck with it, the new translation gives us some manoeuvring space when we try to relax that check to account for the combination of overlap resolution and FDs.

cheers,
claus

On 3/13/06, Claus Reinke wrote:
[still talking to myself..?]
This is all wonderful stuff! Are you perhaps planning to put it all
together into a paper?
What effect do you think this can have on existing algorithms to resolve FDs?
--
Taral

Thanks, Taral,
it is good to know that I'm not just writing for the archives!-)
a paper, yes, at some point (unless someone shoots a hole in my
suggestions first;), but at the moment, I'm more concerned with
keeping my hopes for Haskell' alive, and completing my case.
when Haskell' was announced, most of us thought that the committee
would just collect all those old and proven extensions like MPTC,
FDs, overlapping instances, undecidable instances, more flexible
instances, etc., figure out the common story behind them and weave
all of that into a coherent new standard, leaving the newer extensions
like GADTs, ATS, etc. for future standards. unfortunately, the idea
that well-established popular extensions implied well-defined
behaviour turned out to be an illusion, so unless we're doing the
work now, we're not going to have the useful standard we wanted.
which makes it all the more important to have genuine discussions
here - there are so many extensions that have been proposed and
partially implemented over the years since Haskell 98, for which
no one is even bothering to speak up on this list (generics in their
various forms and implementations? better support for faking
dependent types? template meta-programming? a genuine type
Dynamic, as in Clean? ..). I am a bit worried that many
Haskellers appear to have given up listening here, let alone
arguing for their interests. and with the extreme timeline the
committee is insisting on, there just won't be time to wait for
the first draft and start complaining then.
I can't argue for all the features I'm missing in the discussions so
far, but I can try to help with a few of them, and hope that others
will wake up before the committee closes the doors.
you ask about effects on existing handling of FDs: I appreciate the
work that has gone into FD-CHR, and into the refined conditions
now implemented in GHC head, but I cannot accept them as the
last word (for reasons explained in previous emails, the restrictions
are too restrictive in practice, for real programs; eg. since the
change, I suddenly have to use "undecidable" instances for
instances that are obviously decidable, which kind of defeats the
purpose of that flag; as a minimum benchmark, I'd like to be
able to use the Data.Record.hs stuff, in its simple form, without
the hacks, in whatever Haskell' turns out to be - and currently,
we are far from passing that criterion).
I hope I have now explained what I meant when I said that most
of the confluence issues were due to the translation, not inherent
in FDs, and I intend to use this groundwork for tackling the
combination of FDs and overlap resolution, in the way explained
informally in my early emails here. I also hope that this simpler
basis might help implementors to simplify and gain more confidence
in their code bases (in which these features have grown over years,
in wild combinations with other experiments, often driven only by
examples and counter-examples).
unfortunately, tracking down the reasons for why these conditions
were considered necessary in this form has been a slow process,
as has been trying to show that they might not be. so it really helps
to know that I'm not the only one who expects more from Haskell'.
having the formal specification in the FD-CHR paper, and having
some of it implemented in GHC, is one of the best examples of the
Haskell' process actually producing useful deliverables, and could
set the example for the other aspects of Haskell'. so I can only
encourage Haskellers to read the paper, and to try GHC head,
and see whether they can live with the suggested limitations and
formalizations. if not, raise your voice here, now!
cheers,
claus

another interesting improvement in confluence, not related to FDs:

the old CHR translation generated proof obligations for superclass contexts via propagation rules, which (similar to the propagation rules for instance improvement) might be in conflict with the instance simplification rules, jeopardizing confluence unless existence of superclass instances is checked separately (as currently required in Haskell).

by putting superclass constraints as proof obligations on a par with instance inference and FD-based improvement in the new CHR translation, we have ensured that the CHR will be confluent even if the superclass check is omitted.

take example 28 from "theory of overloading":

   class E t
   instance E t => E [t]
   instance E Int

   class E t => O t
   instance O [t]

which generates these CHRs (I switched the implementation from TH to HSX as a frontend, to avoid GHC's built-in checks):

/* one constraint, two roles + superclasses */
e(T) <=> infer_e(T), memo_e(T), true.

/* instance inference: */
infer_e(list(T)) <=> e(T).

/* instance inference: */
infer_e(int) <=> true.

/* one constraint, two roles + superclasses */
o(T) <=> infer_o(T), memo_o(T), e(T).

/* instance inference: */
infer_o(list(T)) <=> true.

consider the problematic constraint, and we see that there is not even room for initial divergence, as the proof obligation for the superclass constraint is established before inference for o can even begin:

o(list(U))
<=> infer_o(list(U)), memo_o(list(U)), e(list(U))
<=> memo_o(list(U)), e(list(U))
<=> memo_o(list(U)), infer_e(list(U)), memo_e(list(U))

(note the outstanding proof obligation infer_e(list(U)) ).

this means that we could remove the early superclass constraint check from Haskell, and the resulting uglinesses:

- superclass instances have to be visible at subclass instance declaration time, not at subclass constraint usage time (this is really weird)

- lack of superclass instances causes compilation to fail, instead of just causing unusability of subclass instances

- superclass contexts cannot be used to abstract out common contexts from methods/instances/goals, because superclass contexts behave differently from other contexts

what do you think about this suggestion?

cheers,
claus

ps that example 28 is interesting for another reason, too: the missing instance is "instance E t", but GHC will _not_ accept the program if you add that instance, unless we specify allow-incoherent-instances! Hugs will accept it, but its behaviour is that of GHC's "bad" flag (always select the most general instance).. the error message given by GHC, without that flag, is the interesting bit: even though we have instances of E for any type, GHC can't determine that (it would require a differentiation by cases)!

On Sat, Mar 04, 2006 at 10:32:58PM -0000, Claus Reinke wrote:
On Thu, Mar 02, 2006 at 12:03:59PM -0000, Claus Reinke wrote:
The way I interpret FDs implies that they introduce a systematic space leak into the inference process: if any stage in the inference uses *any* constraint with a class which is subject to an FD, that results in an additional bit of information about that class' FD. and we *cannot* throw that bit of information away unless we know that we will never need to refer to it in the remainder of the inference chain.
But with the current system you can throw it away, because it's implied by the FDs on the classes in the context. A system where that was not the case would be significantly more complex than the current FDs, which I think disqualifies it for Haskell'.
I'm not sure what FD system you are referring to here? in the CHR representation of FDs (which seems to be the only candidate for Haskell'), you _cannot_ throw this information away, and the potential of doing so accounts for the majority of confluence violations in the paper "understanding FDs via CHRs": if you simplify a constraint before using it for propagation, you end up missing some improvements and you may not be able to rejoin the diverging deductions.
that's what I was referring to: there shouldn't be any confluence problems with the original code (type classes w FDs), but there are confluence problems with the CHRs used to represent that code!
I think we're applying different value judgements to the same data. The CHR implementation faithfully reflects Mark Jones's original rules (section 6.2 of his paper, plus context reduction as inherited from Haskell 98). Martin is trying to relax Mark's restrictions on the form of instances while keeping those rules working (confluent and terminating). You think the problem is the context reduction rule, which should be changed to retain FD information. Fair enough, but I disagree, for the reason I gave.

What I'm trying to demonstrate here is that only during inference (ie. applying CHRs) we may come across cycles. Hence, the quest is to come up with *dynamic* termination methods. I think that's what you are interested in?
first, let's clarify: all of this is happening before program runtime, so instead of talking about static-static and static-dynamic, I'd prefer to talk about decisions made

(a) on the basis of a single instance declaration for a class C
(b) on the basis of all declarations for a class C
(c) on the basis of each use of class C

your concern then seems to be that while you'd like to encourage exploration of termination proofs based on (c), you suggest restriction to those based on (a) for now.

but even if I accepted that, I would still not put the generalised occurs-check into the same category as other (c) conditions. the reason being that having occurs-checks whenever types are unified is a built-in assumption in all of Haskell's static type system. my point is that FDs introduce a new source of type unifications, and that this new source needs to be provided with an adequate version of occurs-checks for the rest of the type system to be safe from non-termination. consequently:

- we need to devise such an occurs-check anyway
- once we've done so, we might as well assume it in FD handling as well

in other words, even though using some form of occurs-check to guarantee termination of examples like Mul looks like it would fall under (c), it actually falls under (a), because that specific "dynamic" check is part of the environment.

that said, yes, I agree that we need to tackle termination with more than just (a) - the sooner the better. it seems strange to rule out definitions because of possible invalid uses; even with static typing, we only check under which conditions a definition is internally consistent, then reject any *uses* that would violate those conditions (we don't reject length because someone might try to call it with a non-list as parameter).
FYI, in a technical report version of the FD paper, we already address such issues, briefly.
where would I find that report? cheers, claus

FYI, in a technical report version of the FD paper, we already address such issues, briefly.
where would I find that report?
http://www.comp.nus.edu.sg/~sulzmann/chr/download/fd-chr.ps.gz See Section 5 and Appendix E. Martin
participants (8)
- Ben Rudiak-Gould
- Claus Reinke
- Jim Apple
- John Meacham
- Martin Sulzmann
- Ross Paterson
- Simon Peyton-Jones
- Taral