Re: [Haskell-cafe] Bool is not...safe?!

Indeed, why use Bool when you can have your own algebraic datatype? Why not

data Equality = Equal | NotEqual
(==) :: Eq a => a -> a -> Equality

when we have essentially the same for 'compare' already? Why not use

data Result = Success | Failure

as the return type of effectful code? Equality and Result are Booleans with some provenance. But wait! Equality does not come with all the nice functions like (||) and (&&), and Failure still does not tell us what exactly went wrong. Maybe a derivable type class IsomorphicToBool could remedy the former problem. For the latter, Haskell has exception monads. One of the things I love about Haskell is that I do not have to use Int as a return type and remember that negative numbers mean failure of some sort, or worse, the return type is Void but some global variable may now contain the error code of the last function call. In some post in haskell-cafe which I am unable to find right now it was mentioned that the GHC source contains many types that are isomorphic but not equal to Bool. Does anyone know whether the C and C++ folks are now using more informative structs as return types? Cheers, Olaf
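A minimal sketch of the datatypes Olaf describes, together with a possible shape for the IsomorphicToBool class he mentions (the class methods and the combinator names are illustrative assumptions, not from any existing library):

```haskell
-- Sketch of Olaf's Bool-with-provenance types. The IsomorphicToBool
-- methods and andB are illustrative names, not from a real package.
data Equality = Equal | NotEqual deriving (Show, Eq)
data Result   = Success | Failure deriving (Show, Eq)

-- A class for types that are isomorphic to Bool.
class IsomorphicToBool a where
  toBool   :: a -> Bool
  fromBool :: Bool -> a

instance IsomorphicToBool Equality where
  toBool Equal    = True
  toBool NotEqual = False
  fromBool True   = Equal
  fromBool False  = NotEqual

instance IsomorphicToBool Result where
  toBool Success = True
  toBool Failure = False
  fromBool True  = Success
  fromBool False = Failure

-- An (&&)-like combinator recovered generically via the isomorphism,
-- remedying the "no nice functions like (&&)" problem.
andB :: IsomorphicToBool a => a -> a -> a
andB x y = fromBool (toBool x && toBool y)

-- An Equality-returning comparison, as in the message (named eq here
-- to avoid clashing with the Prelude's (==)).
eq :: Eq a => a -> a -> Equality
eq x y = if x == y then Equal else NotEqual
```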

On 07/05/2018 10:26 PM, Olaf Klinke wrote:
Indeed, why use Bool when you can have your own algebraic datatype? Why not
data Equality = Equal | NotEqual
(==) :: Eq a => a -> a -> Equality
when we have essentially the same for 'compare' already? Why not use
data Result = Success | Failure
as the return type of effectful code? Equality and Result are Booleans with some provenance. But wait! Equality does not come with all the nice functions like (||) and (&&), and Failure still does not tell us what exactly went wrong. Maybe a derivable type class IsomorphicToBool could remedy the former problem. For the latter, Haskell has exception monads. One of the things I love about Haskell is that I do not have to use Int as a return type and remember that negative numbers mean failure of some sort, or worse, the return type is Void but some global variable may now contain the error code of the last function call. In some post in haskell-cafe which I am unable to find right now it was mentioned that the GHC source contains many types that are isomorphic but not equal to Bool. Does anyone know whether the C and C++ folks are now using more informative structs as return types?
Herb Sutter recently made a proposal to turn exceptions into ADTs (internally). C++ has std::optional, and the preferred way to return errors is via discriminated unions. I think the main reason for this is that exceptions are non-deterministic. Greets, Branimir.

On 05.07.2018 at 22:39, Olga Ershova wrote:
On Thu, Jul 5, 2018 at 4:27 PM Olaf Klinke wrote:
Does anyone know whether the C and C++ folks are now using more informative structs as return types?
When returning errors, they call them "exceptions".
Quite so. One might say that the equivalent of try...catch blocks in Haskell is the pattern match

case (action :: Either err a) of
  Right result -> ...
  Left err     -> deal_with err

But what keeps bugging me about exceptions is this: When working with libraries having nested dependencies, is there a way of knowing the complete set of exceptions that might be thrown by a block of code? It is generally considered bad style to write a catch-all. Instead, only catch the exceptions you know how to deal with. But if I don't know what can be thrown, then I cannot make my code bullet-proof. The explicit return type seems the clearer approach to this problem. Olaf
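Olaf's Either pattern can be connected to real exceptions with Control.Exception.try, which reifies a thrown exception into exactly this explicit Left/Right form (a small sketch, assuming division as the failing operation):

```haskell
import Control.Exception (ArithException, evaluate, try)

-- try turns a thrown exception into a Left, giving the explicit
-- Either-style return type that the message argues for. evaluate
-- forces the pure division inside IO so the exception is catchable.
safeDiv :: Int -> Int -> IO (Either ArithException Int)
safeDiv x y = try (evaluate (x `div` y))
```

Calling `safeDiv 1 0` yields a `Left` carrying the divide-by-zero exception, while `safeDiv 10 2` yields `Right 5`.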

On 06.07.2018 at 22:04, Olaf Klinke wrote:
On 05.07.2018 at 22:39, Olga Ershova wrote:
On Thu, Jul 5, 2018 at 4:27 PM Olaf Klinke wrote:
Does anyone know whether the C and C++ folks are now using more informative structs as return types?
When returning errors, they call them "exceptions".
True but not really. C does not have exceptions, so C APIs have always been returning discriminated unions. C++ exceptions tend to be used in different ways, sometimes like Olga says, sometimes differently.
Quite so. One might say that the equivalent of try...catch blocks in Haskell is the pattern match
case (action :: Either err a) of
  Right result -> ...
  Left err     -> deal_with err
That's kind of how "checked exceptions" in Java work: You declare them in the function signature.
But what keeps bugging me about exceptions is this: When working with libraries having nested dependencies, is there a way of knowing the complete set of exceptions that might be thrown by a block of code?
So for checked exceptions you know what may go up. I.e. it's really equivalent to pattern matching. "Unchecked" exceptions are not declared. I.e. you can use them as untyped pattern match.
It is generally considered bad style to write a catch-all. Instead, only catch the exceptions you know how to deal with. But if I don't know what can be thrown, then I can not make my code bullet-proof. The explicit return type seems the clearer approach to this problem.
Not generally, actually. At the top level, you do indeed catch all exceptions, as a signal that the called computation failed. Typical reactions to that are:
- Abort the program.
- Log the error and wait for the next request (background programs).
- Report an error message to the user (interactive programs).
- Retry (a useful strategy if nondeterminism is involved).
- Fall back to a simpler strategy (for really large software systems).
Some design philosophies do this not just at the top level, but at each intermediate level. Some systems are built using "monitor layers", modules that do this kind of subtask monitoring and report an exception only if retrying or falling back didn't work. Regards, Jo
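A top-level catch-all of the kind Jo describes might look like this in Haskell (a sketch; the function name is illustrative, and the fallback stands in for any of the listed strategies):

```haskell
import Control.Exception (SomeException, catch)

-- Sketch of a top-level catch-all: run an action, and on any
-- exception log it and fall back to a default result. This models
-- the "log the error and carry on" strategy from the list above.
withTopLevelHandler :: IO a -> a -> IO a
withTopLevelHandler act fallback =
  act `catch` \e -> do
    putStrLn ("computation failed: " ++ show (e :: SomeException))
    pure fallback
```

Retry and fall-back-to-simpler-strategy layers can be built by nesting such handlers, which is essentially the "monitor layers" idea.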

By the way, there are a lot of articles on why checked exceptions are bad design (more problems than benefits).
More interesting for me is this snippet:
case (action :: Either err a) of
Right result -> ...
Left err -> deal_with err
Somebody told me that using Bool in such a way is “not safe” because there is no way to know what “True” returned from “action” means. Success or failure? But with some ADT (Success|Failure) the code is more Haskellish and safe.
But is it true? We are talking about semantics, not type-safety, because the type-checker passes both cases (positive branch under False/Failure, negative branch under True/Success). But the same holds for Right/Left: nothing forces you to use Right for the positive and Left for the error case. Is that true?
Bool is algebra <1, 0, +, *> or more familiar

On 07.07.2018 at 07:15, Paul wrote:
By the way, there are a lot of articles why checked exceptions are bad design (more problems than benefit).
That's because exceptions disrupt control flow. If they're used as an alternative to pattern matching, i.e. thrown only in tail position and always either fully handled or rethrown unchanged, then they're fine. I.e. checked exceptions are bad because they make it easy to construct bad design. Unchecked exceptions can be abused in the same way but there seem to be psychological reasons why this isn't done so often.
More interesting for me is this snippet:
case (action :: Either err a) of
  Right result -> ...
  Left err     -> deal_with err
Somebody told me that using Bool in such a way is “not safe” because there is no way to know what “True” returned from “action” means. Success or failure? But with some ADT (Success|Failure) the code is more Haskellish and safe.
Terminology N.B.: (Success|Failure) is not an abstract data type (ADT). It's pretty much the opposite of abstract.
But is it true? We are talking about semantics, not type-safety, because the type-checker passes both cases (positive branch under False/Failure, negative branch under True/Success). But the same holds for Right/Left: nothing forces you to use Right for the positive and Left for the error case. Is that true?
There's no good way to use Bool as the type of action, so the point is moot. Of course you can reconstruct action's type to be a struct consisting of a Bool, a SuccessResult, and a FailureResult, but that's just awkward so nobody is doing this.
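For illustration, the awkward Bool-plus-payload record Jo alludes to might look like this (all names are hypothetical), next to the Either it clumsily emulates:

```haskell
-- Hypothetical reconstruction of the awkward encoding: a Bool flag
-- plus separate success and failure payloads. Invalid combinations
-- (ok but no success value, etc.) are representable, which is the flaw.
data Outcome = Outcome
  { ok         :: Bool
  , successVal :: Maybe String  -- meaningful only when ok is True
  , failureVal :: Maybe String  -- meaningful only when ok is False
  }

-- Either makes the invalid combinations unrepresentable, which is
-- why nobody writes the record version.
toEither :: Outcome -> Either String String
toEither (Outcome True  (Just s) _)        = Right s
toEither (Outcome False _        (Just f)) = Left f
toEither _                                 = Left "inconsistent Outcome"
```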
Bool is algebra <1, 0, +, *>
Let me repeat: the algebraic and mathematical properties of Bool are irrelevant.
The problem with Haskell is that it is an imperative and not a declarative language.
Haskell isn't imperative. I.e. it matches almost none of the definitions for "imperative" that I have ever seen. It matches a substantial percentage of the definitions for "declarative" actually, but not enough of them that I'd instinctively say it is really declarative - others with a different set of definitions encountered may feel differently.
The root of the problem is that the imperative nature of Haskell leads to the missing notion of “evaluation to something”.
The most compact definition of "imperative" that I have seen is "computation progresses by applying side effects". By that definition, the ability to distinguish evaluated and unevaluated expressions is irrelevant.
A Haskell program is not an evaluated expression; it is, just as in Java or C, an IO action. But in Prolog every piece of code is evaluated to TRUTH or to FALSE. Each sentence is a goal which can be achieved and becomes a fact, or cannot be, and becomes a lie.
Wrong terminology. Nothing in a computer is a lie. Prolog works by turning stuff into either tautologies or contradictions. A lie is something entirely different: a statement that intentionally contradicts reality. Programming language semantics does not have intention; programs may have state that matches or contradicts reality, but take this message, which isn't a lie but still does not match any reality, because no sheet of paper with these glyphs ever existed (there may be one in the future if somebody prints this message, but that doesn't turn this message from a lie into a truth).

Please get your terminology right. If you keep working with "almost right" terminology in your head, you will continuously "almost misunderstand" whatever you read, leading you to conclusions which are almost right but take a huge amount of effort to clean up. People who know the concepts and the terminology will find it too tiresome to talk with you, because it is all about educating you (which is not the right level to talk with a person anyway), and will eventually drop out of discussions with you; the only people who talk to you will be those who happen to have similarly vague ideas, placing you in an echo chamber of ideas that lead nowhere because everything is subtly wrong.

Just saying. I'm not going to force anything on you, but I find that this message is entirely about terminology correction and almost nothing about the topics that really interest you, which means it's been mostly a waste of time for both of us. Regards, Jo

@Olaf Hello, you are right. Prolog allows side effects, which means you have predicates which are not “pure”. Today, after the “pure” functional programming fashion, there is a fashion of pure declarative programming too 😊 You define the nature of predicates with determinism categories, and the nature of “variables” with +/- notation (if a variable is “output” only, then it is not a pure predicate; for example, “X is Y + 5” is not pure because it is functional but not declarative, i.e. it is an evaluation, not a relation). Mercury has the “!” annotation, which is close to linear types, I suppose (am I right?). The idea is that side effects are coded as an input-output variable which is changing (the state of the external world). Unfortunately, I learned Prolog when I was a student... When we are talking about truth, probable truth, etc., I think we mean https://www.mercurylang.org/information/doc-latest/mercury_ref/Determinism-c.... In Prolog we apply restrictions/relations to the answer to get a more strict answer. It is linked with the declarative nature: no evaluation, but some kind of DSL to narrow down answers. Sure, it is not a strict definition or anything like that 😊 My intuition is that Prolog evaluation can be imagined in Haskell terms as some logic monad at the top level instead of the IO monad (main :: SomeLogic (), for example). Btw, how does Curry work? And another intuition of mine is that it is possible in Haskell with some (new?) monad. I will look at your examples, thanks a lot!

@Joachim. Hello again. Let's talk about Haskell and not persons. I suppose that my poor English may lead to misunderstanding. Excuse me, in this case. If you allow me, I would like to show the reasons why I said “imperative” and not declarative. In Haskell I can write:

factorial 0 = 1
factorial n = n * factorial (n - 1)

I can call it: factorial 5 => 120. And often I meet on the Web similar examples with the note: “You see, Haskell is a declarative language, it really looks declarative.”
IMHO it only looks so (you can check Erlang syntax, which is even similar to Prolog, but Erlang is not a declarative language). Let's try the same in Prolog:

factorial(0, 1).
factorial(N, F) :-
    N #> 0,
    N1 #= N - 1,
    F #= N * F1,
    factorial(N1, F1).

I can call it: factorial(5, X) => X=120. But also: factorial(X, 120) => X=5. We see the principal difference:
1. In Haskell we map inputs (arguments) to a result. In Prolog we define not a mapping but a relation, which is bi-directional, while Haskell is one-directional, like C, Java, etc. In Prolog we can also find the arguments from the result.
2. In Haskell, like in any imperative language, we describe the exact algorithm/solution. We define HOW. In a declarative language we define WHAT: we restrict the result and don't actually know how it will be evaluated.
3. In Haskell everything is evaluated as you wrote the code. Exactly. No magic. But in Prolog we restrict some solver to find the answer (or answers). In the example this solver is called CLP(FD). Sure, there are other solvers. Modern Prolog systems contain a lot of solvers, expert system engines, etc. So, Haskell has no solvers/engines. Prolog has. Prolog is a DSL for solvers. An interesting analogue is SQL as a DSL for an RDBMS 😊 If we want to achieve the same magic in Haskell, we must write such a solver explicitly (i.e. “how”).

Another interesting speculation about the real nature of Haskell is the List monad. We often see “Haskell is declarative because it has backtracking” with some example of List monad/do/guard attached. But we can say that Python is declarative because it has permutations in its itertools module, which allows solving similar tasks. We understand that the List monad is not backtracking, and “guard” is similar to Smalltalk's “ifTrue”: no magic, no real backtracking. But like Python's itertools, it can be used to solve some logic tasks in a brute-force/permutations style (as can many other modern languages with sequences, F#, for example).

You said that the term “imperative” is based on the notion of side effects. Maybe I am seriously wrong, but please correct me in that case. IMHO side effects are totally unrelated to the imperative/declarative “dichotomy”. For example,

int sum(int a, int b) { return (a + b); }

I don't see side effects here. I have no problem writing all my C code in such a style, without any side effects. Also I find “pure” functions in the D language, and the “function” keyword in Pascal and Basic. But also I see

main :: IO ()

in Haskell. So, I don't think side effects are related to the declarative/imperative dichotomy at all. When I was a student, the criterion of declarative/imperative was: functional, procedural, and OOP languages are imperative, due to one-directional evaluation/execution, where you code HOW it is to be evaluated, but declarative is bi-directional: you code relations (WHAT) between predicates and apply some restrictions. I am surprised that there is another classification, based on side effects. Please correct me where I'm wrong, and if you can do it without such emotion, I will be very glad to get new knowledge.

PS. Out of scope are declarative DSLs written in imperative languages, like the Scons Python build system. Sure, there are a lot of similar Haskell libraries/tools. IMHO it is more correct to say “Scons has a declarative Python-based DSL” than “Python is declarative itself”.

On 08.07.2018 at 09:43, Paul wrote:
@Joachim.
Hello again. Let's talk about Haskell and not persons. I suppose that my poor English may lead to misunderstanding. Excuse me, in this case.
Nah, you are mixing up terminology that is easily looked up.
If you allow me, I would like to show the reasons why I said “imperative” and not declarative.
In Haskell I can write:
factorial 0 = 1
factorial n = n * factorial (n - 1)
I can call it: factorial 5 => 120.
And often I meet on the Web similar examples with the note: “You see, Haskell is a declarative language, it really looks declarative.”
"It's declarative because it looks declarative" is a circular argument, and misses whatever definition of "declarative" was actually used. BTW you don't define it either. Arguing around an undefined term is pointless.
We see principal difference:
1. In Haskell we map inputs (arguments) to result. In Prolog we define not mapping but *relation*, which is *bi-directional* while Haskell is *one-directional*, like C, Java, etc. We can find also arguments from result in Prolog.
Yeah, that's the defining feature of Prolog-style languages. And where the pure subset of Prolog is conceptually different from the pure subset of Haskell.
2. In Haskell like in any imperative language we describe exact algorithm/solution. We define *HOW*. In declarative language we define *WHAT*: we *restrict result* and don’t know actually how it will be evaluated. 3. In Haskell all is evaluated as you wrote code. Exactly. No magic.
Just as much or little magic as in Prolog. Hey, FORTRAN's expression sublanguage was considered magic at the time FORTRAN was written in all upper-case letters: You just write down the expression, and the compiler magically determines what data to move where to make it all happen. So... "no magic" is not suitable as a criterion, because what's magic and what's not is in the eye of the beholder. Having written a Prolog interpreter, Prolog is as non-magical as C or C++ to me; in fact I find Haskell's implementation slightly "more magic" simply because I have not yet written a run-time for a lazy language, but not *much* "more magic" because I happen to know the essential algorithms. Still, Haskell's type system and the possibilities it offers are pretty baffling.
But in Prolog we restrict some *solver* to find answer (or answers).
Except in cases where the solver is too slow or goes into an endless loop. At which point we start to mentally trace the solver's mechanisms so whe know where to place the cut, and suddenly that magic thing turns into a complex-but-mundane machinery.
In the example this solver is called CLP(FD). Sure, there are other solvers. Modern Prolog systems contain a lot of solvers, expert system engines, etc, etc. So, Haskell has not solvers/engines. Prolog has. Prolog is DSL for solvers. Interesting analogue is SQL as DSL for RDBMS😊
Agreed.
If we want to achieve the same magic in Haskell, we must write such solver explicitly (ie. “how”).
I got pretty disillusioned about Prolog when I tried to use it for real-world problems. So I don't consider it much of a loss if you have to code a solver if you want a solver.
Another interesting speculation about real nature of Haskell is List monad. We often see “Haskell is declarative because it has backtracking”
Where did you see that claim? Because at face value, this claim is hogwash. Haskell's semantics does not have backtracking at all, and I doubt you'll find any runtime that uses backtracking even as a mere implementation strategy.
with attached some example of List monad/do/guard.
Which is unrelated to backtracking.
But we can say that Python is declarative because it has permutations in its itertools module which allows to solve similar tasks.
Python is a language where even class and function "declarations" are executable statements (making it really hard to work with mutually-referencing declarations so this isn't a mere theoretical quibble but a hard, real-life problem), and that's as non-declarative as you can get without becoming totally useless. I'm awfully sorry, but this is the most ridiculous claim I heard in a long time. I am even more sorry but I do have to talk about humans: Please get your definitions right. It's easy, even if English isn't your native language (it isn't mine either).
We understand that the List monad is not backtracking, and “guard” is similar to Smalltalk's “ifTrue”: no magic, no real backtracking.
Well if that example was hogwash, why did you bring it up?
But like Python's itertools, it can be used to solve some logic tasks in a brute-force/permutations style (as can many other modern languages with sequences, F#, for example).
Sure. You can do functional, pure, declarative, context-free etc. stuff in imperative languages, no problem. You can even do that in assembler; large systems have been built using such approaches. That doesn't make the languages themselves functional/pure/etc.; the definition for that is that the language does not allow doing anything that is not functional/pure/whatever. Very few languages really have one of these properties, it's almost always just a subset of a language; still we say that Haskell "is pure" because the pure subset is very useful and in fact most programmers avoid going outside that subset. (People rarely use unsafePerformIO. Whether _|_, or equality under lazy evaluation, are compatible with purity depends on details of your definition of pure; some people would say yes, some would say no.)
You said that “imperative” term is based on term of side-effects. May be I’m seriously wrong, but please, correct me in this case. IMHO side-effects are totally unrelated to imperative/declarative “dichotomy”.
That depends on what you consider declarative. I am not aware of any widely-agreed-upon definition, so you have to provide one before we can talk.
But also I see

main :: IO ()

in Haskell.
As far as I know, the IO monad does not execute any IO. It merely constructs descriptions of what effects to invoke given what descriptions. The actual execution happens outside the program, in the runtime. That's why unsafePerformIO is unsafe: it triggers IO execution inside the pure Haskell world, possibly creating impurities where the language semantics assumes purity (IOW you may find that compiler optimizations change the behaviour of the program).
When I was a student, the criterion of declarative/imperative was: functional, procedural, and OOP languages are imperative, due to one-directional evaluation/execution, where you code HOW it is to be evaluated, but declarative is bi-directional: you code relations (WHAT) between predicates and apply some restrictions.
That's Prolog's idea of "declarative". HTML, CSS, and most Turing-incomplete languages are considered declarative, too. Some people define "declarative" to be exactly the opposite of "imperative". It's really a matter of definition.

Hello, Olaf! It makes sense absolutely and totally! Even more, I'm using BoolValue and Boolean (from the AC-Boolean package) instances already. And IMHO it's the right way to proceed with this. This approach even allows treating Unix shell exit codes as "success"/"failure" (but in their case it will be a homomorphism, which is an interesting case, because in logic we have "base" values {0, 1} for classical logic and [0...1] for fuzzy logic; this is some kind of logical base too. How is it called?)
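PY's shell-exit-code idea can be sketched as a homomorphism onto Bool using the ExitCode type from base (a small illustration, not taken from the AC-Boolean package):

```haskell
import System.Exit (ExitCode (..))

-- A homomorphism from shell exit codes onto Bool: ExitSuccess maps
-- to True, and every non-zero exit code maps to False, collapsing
-- all the distinct failure codes onto a single truth value.
exitToBool :: ExitCode -> Bool
exitToBool ExitSuccess     = True
exitToBool (ExitFailure _) = False
```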

On 06.07.2018 at 07:39, PY wrote:
Hello, Olaf! It makes sense absolutely and totally! Even more, I'm using BoolValue and Boolean (from the AC-Boolean package) instances already. And IMHO it's the right way to proceed with this.
AC-Boolean is precisely my intention when I thought about IsomorphicToBool. Of course, by defining an instance you force upon everyone else which value you consider to be 'True'.
This approach even allows treating Unix shell exit codes as "success"/"failure" (but in their case it will be a homomorphism, which is an interesting case, because in logic we have "base" values {0, 1} for classical logic and [0...1] for fuzzy logic; this is some kind of logical base too. How is it called?)
You might be interested in another kind of Boolean-like types: commutative monads. A monad is called commutative if the order of actions does not matter, i.e. if the following always holds:

do { x <- mx; y <- my; return (f x y) } == do { y <- my; x <- mx; return (f x y) }

For example, Maybe is commutative, and [] is up to permutation. For such a monad m, consider the type m (). Observe that Maybe () is isomorphic to Bool. Can we derive some operations generically? Indeed,

true  = return ()
(&&)  = liftM2 const
(||)  = mplus
false = mzero

Notice that this is different from AC-Boolean, where false = return false. The monad being commutative makes the binary operations commutative, as required for proper logic. For 'not', we need to work a little harder. A monadic implementation of 'not' means that you can pattern match on mzero, which is not baked into any monad type class. But having a monadic 'not' makes the monad really interesting. If you have a probability monad, for instance, then 'not' (or rather implication, of which 'not' is a special case) provides conditional probabilities. Olaf
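Olaf's generic operations can be tried out concretely at the type Maybe (), where Just () plays True and Nothing plays False (a runnable specialisation of his definitions, shadowing the Prelude operators for the demonstration):

```haskell
import Control.Monad (liftM2, mplus, mzero)
import Prelude hiding ((&&), (||))

-- Olaf's commutative-monad Booleans, specialised to Maybe ():
-- Just () is True, Nothing is False.
true, false :: Maybe ()
true  = return ()
false = mzero

(&&), (||) :: Maybe () -> Maybe () -> Maybe ()
(&&) = liftM2 const  -- both actions must succeed
(||) = mplus         -- the first succeeding action wins
```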

You might be interested in another kind of Boolean-like types: commutative monads. A monad is called commutative if the order of actions does not matter, i.e. if the following always holds:

do { x <- mx; y <- my; return (f x y) } == do { y <- my; x <- mx; return (f x y) }

For example, Maybe is commutative, and [] is up to permutation. For such a monad m, consider the type m (). Observe that Maybe () is isomorphic to Bool. Can we derive some operations generically? Indeed,

true  = return ()
(&&)  = liftM2 const
(||)  = mplus
false = mzero
I don't think it's by definition yet, but surely by convention, that this is a restricted form of

true  = pure ()
(&&)  = (<*)
(||)  = (<|>)
false = empty

All of these functions require only an Applicative or an Alternative, and except for "true", they're all in the library. Which shows another reason to think twice before using a Bool: Applicatives can be a better fit. They can often carry the question behind the bool and the value this question is about in one convenient structure. Cheers, MarLinn
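MarLinn's Applicative/Alternative formulation can be checked at Maybe () as well; at that type it agrees with the monadic version, since (<*) behaves like liftM2 const and (<|>) like mplus (again shadowing the Prelude operators for the demonstration):

```haskell
import Control.Applicative (empty, (<|>))
import Prelude hiding ((&&), (||))

-- MarLinn's formulation, needing only Applicative and Alternative,
-- specialised to Maybe (): Just () is True, Nothing is False.
true, false :: Maybe ()
true  = pure ()
false = empty

(&&), (||) :: Maybe () -> Maybe () -> Maybe ()
(&&) = (<*)   -- keep the first result, but require both to succeed
(||) = (<|>)  -- first success wins
```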
participants (7)
- Branimir Maksimovic
- Joachim Durchholz
- MarLinn
- Olaf Klinke
- Olga Ershova
- Paul
- PY