equations and patterns

Hi, everyone. I am a newbie to Haskell. Recently I wanted to implement a simple lattice in Haskell, but I ran into some difficulties. A scrap of the code is below, but I get a syntax error:
class Lattice e where
  join :: e -> e -> e
  meet :: e -> e -> e
  -- associative law
  join x (join y z) = join (join x y) z
  join (join x y) z = join x (join y z)
The main problem is that mathematical equations are bidirectional, but in Haskell pattern matching works in only one direction, from left to right. And only variables and constructors may occur on the left-hand side of a pattern match. It seems mathematical axioms and pattern matching are different things. So how could I rewrite the equations as pattern matches? Which technique should I learn? Thanks for your help.

Regards,
Mingli

Mingli,
class Lattice e where
  join :: e -> e -> e
  meet :: e -> e -> e
  -- associative law
  join x (join y z) = join (join x y) z
  join (join x y) z = join x (join y z)
If you are not willing to sell your soul to advanced and perhaps obscure type hacking, you cannot express laws like this *in* Haskell.

More concretely, one usually does not provide such laws as default implementations of a class's methods. Instead, they are stated in, for instance, comments and the documentation that goes with your library. These then form an informal obligation for programmers who provide instances of your class to make these instances obey the laws.

If you provide an instance of the class, you could use a testing framework, e.g., QuickCheck [1], to assert that the required properties hold.

Cheers,

Stefan

[1] www.cs.chalmers.se/~rjmh/QuickCheck/
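As a small illustration of that convention, the laws can simply accompany the class as comments; here is a minimal sketch (the Bool instance below is only an example, not taken from any particular library):

-- The laws live in comments; the compiler does not check them.
class Lattice e where
  -- Laws every instance is expected to satisfy:
  --   join x (join y z) == join (join x y) z          (associativity)
  --   meet x (meet y z) == meet (meet x y) z
  --   join x y == join y x,  meet x y == meet y x     (commutativity)
  --   join x (meet x y) == x, meet x (join x y) == x  (absorption)
  join :: e -> e -> e
  meet :: e -> e -> e

-- An example instance: Booleans, with disjunction as join and
-- conjunction as meet.
instance Lattice Bool where
  join = (||)
  meet = (&&)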

If you want to enforce associativity, just create your own Eq instance and make it a pattern there.

Initially, when I started doing Haskell, it seemed that you could just type an equation of constructors and have it enforced as a rule. This actually isn't the case (someone correct me if I'm wrong), but it is being researched at the moment.
Dan

Dan,
If you want to enforce associativity just create your own Eq instance and make it a pattern there.
Could you elaborate on that? It's still early here and I've had only one cup of coffee so far.
Cheers,
Stefan
QuickCheck allows you to approximately verify properties by testing them on randomly generated input. The stated properties thus cannot formally be proved, but they act as a pretty good formal specification. (Full automatic theorem proving for a language as expressive as Haskell is impossible or infeasible, so we have to approximate.)

prop_assocJoin x y z = join x (join y z) == join (join x y) z

-- or, more generally
associative :: (Eq a) => (a -> a -> a) -> a -> a -> a -> Bool
associative f x y z = f x (f y z) == f (f x y) z

prop_assocJoin = associative join

To check this for a given implementation of "join", you need to:

import Test.QuickCheck

Main> quickCheck prop_assocJoin

This also requires that QC can generate arbitrary values of type "e". See the QuickCheck documentation for more info on that: http://www.cs.chalmers.se/~rjmh/QuickCheck/manual.html

/ Thomas

--
"Remember! Everytime you say 'Web 2.0' God kills a startup!"
- userfriendly.org, Jul 31, 2006
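As a concrete, self-contained sketch of running such a check (using Bool with (||) as join, so QuickCheck's existing Arbitrary instance for Bool suffices; the property name is my own):

import Test.QuickCheck

-- associativity of join for the Bool lattice, where join is (||)
prop_assocJoinBool :: Bool -> Bool -> Bool -> Bool
prop_assocJoinBool x y z = (x || (y || z)) == ((x || y) || z)

main :: IO ()
main = quickCheck prop_assocJoinBool
-- prints something like: +++ OK, passed 100 tests.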

On Thu, 31 May 2007, Stefan Holdermans wrote:
Mingli,
class Lattice e where
  join :: e -> e -> e
  meet :: e -> e -> e
  -- associative law
  join x (join y z) = join (join x y) z
  join (join x y) z = join x (join y z)
If you are not willing to sell your soul to advanced and perhaps obscure type hacking, you cannot express laws like this *in* Haskell.
More concretely, one usually does not provide such laws as default implementations of a class's methods. Instead, they are stated in, for instance, comments and the documentation that goes with your library. These then form an informal obligation for programmers who provide instances of your class to make these instances obey the laws.
Like here: http://darcs.haskell.org/numericprelude/src/Algebra/Lattice.hs
If you provide an instance of the class, you could use a testing framework, e.g., QuickCheck [1], to assert that the required properties hold.
I assume they can in some way also be used by GHC's optimizer: http://www.haskell.org/haskellwiki/Playing_by_the_rules
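For reference, such oriented rules are written with GHC's RULES pragma. A minimal sketch (joinInt is a hypothetical monomorphic function of my own; whether a rule actually fires depends on inlining and optimization flags, and rules over overloaded class methods are more delicate):

module JoinRules where

-- a hypothetical monomorphic join, just for illustration
joinInt :: Int -> Int -> Int
joinInt = max
{-# NOINLINE joinInt #-}  -- keep calls to joinInt visible so the rule can match

-- tell GHC it may reassociate joinInt during simplification
{-# RULES
"joinInt/assoc"  forall x y z.
  joinInt x (joinInt y z) = joinInt (joinInt x y) z
  #-}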

mingli yuan wrote:
It seems mathematical axioms and pattern matching are different things. So how could I rewrite the equations as pattern matches? Which technique should I learn?
Haskell is more suitable for rewriting systems, which are based on oriented equations. The question of orientation is well described, for example, in the book

Term Rewriting and All That
by Franz Baader, Tobias Nipkow
http://www4.in.tum.de/~nipkow/TRaAT/

I should also point out Clean, where equations like associativity can be stated -- and even proven by Clean's built-in proof assistant.

But something can be done in Haskell too. It is quite a general technique: free term algebra. It is explained in two excellent papers,

John Hughes: The Design of a Pretty-printing Library
http://citeseer.ist.psu.edu/hughes95design.html

and

Ralf Hinze: Deriving Backtracking Monad Transformers (ICFP 2000)
http://citeseer.ist.psu.edu/hinze00deriving.html

The following Haskell98 code demonstrates the technique for your problem. Well, actually we take quite a similar problem: instead of lattices, we take rings -- or, to be precise, Num. They have two operations, (+) and (*), which are also (usually taken to be) associative. This property is not stated in the Num class. Rather, it is described informally in comments. But we can deal with that property formally and directly.

We introduce a free term algebra as follows
data FN = N Int | Add FN FN | Mul FN FN
          deriving (Eq,Show)
and define an embedding of it in Haskell:
instance Num FN where
    fromInteger = N . fromInteger
    (+) = Add
    (*) = Mul
So, we can write
test1 :: FN
test1 = (1 + 2) * 3 * 4 + 5
and see the result

Add (Mul (Mul (Add (N 1) (N 2)) (N 3)) (N 4)) (N 5)

Well, this doesn't actually do anything. It is a free term algebra after all. We need to give it a meaning, that is, define an observation function, or an interpreter
-- Interpreter (the observation function): big-step semantics
interp :: FN -> Int
interp (N x) = x
interp (Add x y) = interp x + interp y
interp (Mul x y) = interp x * interp y
Now, evaluating "interp test1" gives 41.

One may question the wisdom of this round-about way of adding and multiplying integers. Indeed, first we `compile' an arithmetic expression to the `byte-code'. Next, we write a byte-code interpreter, 'interp'. The benefit is that we can insert a `byte-code optimizer' before the interpretation. And here we may associate to the right
mul_right :: FN -> Maybe FN
mul_right (Mul (Mul a b) c) = Just (Mul a (Mul b c))
mul_right x = congruence mul_right x
test3 = closure mul_right test1
*FreeNum> test3
Add (Mul (Add (N 1) (N 2)) (Mul (N 3) (N 4))) (N 5)

or to the left, or distribute multiplication over addition
mul_add_distr (Mul a (Add b c)) = Just (Add (Mul a b) (Mul a c))
mul_add_distr (Mul (Add b c) a) = Just (Add (Mul b a) (Mul c a))
mul_add_distr x = congruence mul_add_distr x
test4 = closure mul_add_distr test1

*FreeNum> test4
Add (Add (Mul (Mul (N 1) (N 3)) (N 4)) (Mul (Mul (N 2) (N 3)) (N 4))) (N 5)
or combine an arbitrary number of steps
test5 = closure (compose mul_add_distr mul_right) test1
*FreeNum> test5
Add (Add (Mul (N 1) (Mul (N 3) (N 4))) (Mul (N 2) (Mul (N 3) (N 4)))) (N 5)

Here's the complete code

-- Haskell98!
-- Free Term algebra for Nums
module FreeNum where

-- we'll use Ints as the base for simplicity
data FN = N Int | Add FN FN | Mul FN FN
          deriving (Eq,Show)

instance Num FN where
    fromInteger = N . fromInteger
    (+) = Add
    (*) = Mul

test1 :: FN
test1 = (1 + 2) * 3 * 4 + 5

-- Interpreter (the observation function): big-step semantics
interp :: FN -> Int
interp (N x) = x
interp (Add x y) = interp x + interp y
interp (Mul x y) = interp x * interp y

test2 = interp test1

-- optimizer

-- the driver: it applies an optimization step. If it succeeded, it applies
-- again. Otherwise, we are finished.
-- In other words, compute the transitive closure of the optimization step
closure step term = maybe term (closure step) (step term)

congruence step (N _) = Nothing
congruence step (Add x y) = choose Add (x,step x) (y,step y)
congruence step (Mul x y) = choose Mul (x,step x) (y,step y)

-- Here's the benefit of laziness: we don't actually do step y
-- if step x succeeded
choose fn (_,Just x) (y,_) = Just $ fn x y
choose fn (x,Nothing) (_,Just y) = Just $ fn x y
choose fn (_,Nothing) (_,Nothing) = Nothing

-- One step: associate Mul to the right
-- (a `Mul` b) `Mul` c ==> a `Mul` (b `Mul` c)
-- or: (Mul (Mul a b) c) ==> (Mul a (Mul b c))
mul_right :: FN -> Maybe FN
mul_right (Mul (Mul a b) c) = Just (Mul a (Mul b c))
mul_right x = congruence mul_right x

-- do the opposite
mul_left :: FN -> Maybe FN
mul_left (Mul a (Mul b c)) = Just (Mul (Mul a b) c)
mul_left x = congruence mul_left x

test3 = closure mul_right test1

-- Let us distribute multiplication over addition
-- a * (b+c) => a*b + a*c
-- (b+c) * a => b*a + c*a
mul_add_distr (Mul a (Add b c)) = Just (Add (Mul a b) (Mul a c))
mul_add_distr (Mul (Add b c) a) = Just (Add (Mul b a) (Mul c a))
mul_add_distr x = congruence mul_add_distr x

test4 = closure mul_add_distr test1

-- combine steps (it is mplus)
compose step1 step2 x = maybe (step2 x) Just (step1 x)

test5 = closure (compose mul_add_distr mul_right) test1
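The same idea carries over to the original Lattice question. Here is a minimal sketch (the FL type and the function names are my own, not from the post; the generic traversal that Oleg factors out into 'congruence' is inlined here for brevity):

-- Free term algebra for lattice expressions over variables of type a
data FL a = Var a | Join (FL a) (FL a) | Meet (FL a) (FL a)
  deriving (Eq, Show)

-- one rewrite step: reassociate Join to the right, searching top-down
joinRight :: FL a -> Maybe (FL a)
joinRight (Join (Join a b) c) = Just (Join a (Join b c))
joinRight (Join a b) =
  case joinRight a of
    Just a' -> Just (Join a' b)
    Nothing -> fmap (Join a) (joinRight b)
joinRight (Meet a b) =
  case joinRight a of
    Just a' -> Just (Meet a' b)
    Nothing -> fmap (Meet a) (joinRight b)
joinRight (Var _) = Nothing

-- apply the step until no more rewrites are possible (same driver idea as 'closure')
normalise :: FL a -> FL a
normalise t = maybe t normalise (joinRight t)

-- e.g. normalise (Join (Join (Var 'x') (Var 'y')) (Var 'z'))
--        == Join (Var 'x') (Join (Var 'y') (Var 'z'))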

Thanks all. I just found that this list is very nice. Everybody is so friendly.

Regards,
Mingli
participants (6)
- Dan Mead
- Henning Thielemann
- mingli yuan
- oleg@pobox.com
- Stefan Holdermans
- Thomas Schilling