
All these smart math guys hanging around and nobody's given a really decent answer to this? It's actually not arbitrary: there's a strong connection between predicates (functions that return boolean values) and sets. Any predicate uniquely determines a set - the set of values for which it returns true. Similarly, any set determines a predicate - the one that returns true exactly when its argument is a member of the set.

It's natural to define a partial order on sets from inclusion: A ≤ B iff A ⊆ B. Viewing the sets as predicates, the corresponding relationship between the predicates is implication: A ⊆ B iff, for every x, (x ∊ A) ⇒ (x ∊ B) - so predicates are naturally ordered by implication.

Viewed as sets, the predicate that always returns False is equivalent to ∅ - the empty set - while the predicate that always returns True is equivalent to U - the universal set that contains everything (in naive set theory, anyway - in axiomatic theories it gets a little more complicated).

So, since "subset" gives a "natural" order to sets, "implication" gives a natural order to predicates, and thus it's "natural" to say that the Haskell expressions (const False) and (const True), which are predicates, are ordered: (const False) < (const True). And therefore, it's natural to say that False < True.

Anyway, that's my explanation for it. :)

Derek Elkins wrote:
Henning Thielemann wrote:
On Thu, 31 May 2007, Paul Hudak wrote:
PR Stanley wrote:
I think so, too. In Boolean algebra (which predates computers, much less C), FALSE has traditionally been associated with 0, and TRUE with 1. And since 1 > 0, TRUE > FALSE.
The question, however, still remains: why False = 0 and True = 1? I appreciate that it's so in boolean algebra, but why? Why not True = 0 and False = 1?
Because if you take (&&) to be (*), and (||) to be (+), you get a homomorphism between the two resulting algebras (assuming 1+1 = 1).
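Paul's homomorphism can be checked exhaustively. A small sketch of my own (the helper names are made up), reading "1 + 1 = 1" as saturating addition on {0,1}:

```haskell
-- Check that h = fromEnum is a homomorphism from (Bool, (&&), (||))
-- to ({0,1}, (*), saturating (+)). Helper names are mine, not standard.
h :: Bool -> Int
h = fromEnum   -- False -> 0, True -> 1

-- Saturating addition on {0,1}: this is the "1 + 1 = 1" assumption.
satPlus :: Int -> Int -> Int
satPlus a b = min 1 (a + b)

main :: IO ()
main = do
  let bools = [False, True]
  -- Both checks cover all four input combinations; both print True.
  print (and [ h (a && b) == h a * h b         | a <- bools, b <- bools ])
  print (and [ h (a || b) == h a `satPlus` h b | a <- bools, b <- bools ])
```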
It seems, however, that if the number representations of False and True are flipped, then we only need to flip the interpretations of (&&) and (||) as well. For me the choice fromEnum False == 0 && fromEnum True == 1 seems rather arbitrary.
It -is- arbitrary, in boolean algebra as well; not is an automorphism between the two. However, we tend to think of 0 as associated with nothing/the empty set, and that maps naturally to {x | False}. There are intuitive reasons why this choice was made, but no formal ones. Obviously, there are also pragmatic reasons to use this choice in programming languages, e.g. many languages use 0 to represent false and non-zero or 1 to represent true, and this is consistent with that.

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
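Henning's and Derek's point - that the choice is arbitrary up to swapping the operations - can also be checked directly. With the flipped encoding True -> 0, False -> 1, it is (||) that becomes (*) and (&&) that becomes saturating (+). A sketch of my own (names made up for illustration):

```haskell
-- Flipped encoding: True -> 0, False -> 1 (the "not" automorphism
-- composed with fromEnum).
g :: Bool -> Int
g b = 1 - fromEnum b

-- Saturating addition on {0,1}, as before.
satPlus :: Int -> Int -> Int
satPlus a b = min 1 (a + b)

main :: IO ()
main = do
  let bools = [False, True]
  -- The roles of the two operations are swapped; both checks print True.
  print (and [ g (a || b) == g a * g b         | a <- bools, b <- bools ])
  print (and [ g (a && b) == g a `satPlus` g b | a <- bools, b <- bools ])
```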
-- ----- What part of "ph'nglui bglw'nafh Cthulhu R'lyeh wagn'nagl fhtagn" don't you understand?
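Coming back to the set/predicate correspondence at the top of the thread: over a finite universe the whole argument can be written down directly. A sketch of my own (the universe and helper names are made up for illustration):

```haskell
-- A predicate determines a set, and "subset" on sets is pointwise
-- implication on predicates. Illustrated over a small finite universe.
universe :: [Int]
universe = [0 .. 9]

-- The set a predicate carves out of the universe.
toSet :: (Int -> Bool) -> [Int]
toSet p = filter p universe

-- p `implies` q pointwise, i.e. toSet p is a subset of toSet q.
implies :: (Int -> Bool) -> (Int -> Bool) -> Bool
implies p q = all (\x -> not (p x) || q x) universe

main :: IO ()
main = do
  print (implies (\x -> x `mod` 4 == 0) even)  -- True: multiples of 4 are even
  print (implies even odd)                     -- False
  print (toSet (const False))                  -- []: the empty set
  print (toSet (const True) == universe)       -- True: the universal set
  -- const False is the bottom of this order, const True the top:
  print (implies (const False) (const True))   -- True
```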