Imperative vs. declarative (was: Bool is not...safe?!)

Although there is little to add to Joachim's patient explanations, and there are certainly better resources, here is my own attempt at a definition of imperative vs. declarative.

The word "imperative" stems from Latin and means to command, to rule. Imperative programs tell the computer what to do, and when: now allocate some memory, now assign this value to this variable, now check this condition and branch on it, and so on. The word "declarative" also stems from Latin, meaning to make clear. Declarative languages tell the computer what data is, not when and how exactly to compute it. Haskell, as a lazy language, takes this even further - some code you have declared may never get executed at all. These decisions are abstracted away and made, in our case, by the Haskell runtime system.

Your own example of factorial is very much declarative in the above sense, because it only declares what the factorial function is, in terms of the relationship between factorial(n) and factorial(n-1). Of course the functional programmer must have a mental model of the runtime's behaviour in mind (recursively calling the function, in this case), but what happens on the lower, imperative level when computing factorial(n) is not relevant to the definition of the function.

Whether the principal programming constructs are functions or relations is not relevant to being declarative. In fact, the Prolog beginner can easily run into traps by declaring rules that make the solver (the low-level, imperative part of the execution) go into infinite loops, even though all the declarations are logically correct. Hence in Haskell as in Prolog, although both are declarative, the programmer must have some knowledge of the imperative nature of the runtime system.

Olaf
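For concreteness, here is the kind of declarative definition being discussed, as a minimal Haskell sketch (the name and type are illustrative, not code quoted from the thread):

    -- Declares what factorial *is*, in terms of the relationship between
    -- factorial n and factorial (n - 1).  How and when the recursion is
    -- actually carried out is left to the runtime system.
    factorial :: Integer -> Integer
    factorial 0 = 1
    factorial n = n * factorial (n - 1)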

Hello, Olaf!

On 08.07.2018 23:56 you wrote that my factorial example is very much declarative, because it only declares what the factorial function is, in terms of the relationship between factorial(n) and factorial(n-1).

My point was that in Haskell we define how to calculate the result from the arguments, exactly as in C# and with the same pattern matching. But in Prolog I coded a relation, so Prolog knows how to compute not only the factorial from the argument but also the argument from the result; it is as if two different evaluations were encoded in one Prolog definition (a small Haskell sketch of this contrast follows below). Of course there are different classifications; I only showed the one I myself studied as a student :)

Another interesting question: are XML, HTML and CSS declarative languages? When I was a student they were called formats, not languages. Haskell's execution/evaluation is based on the lambda calculus, and classical Prolog on first-order predicate logic, but on what computation model is XML based? There are XML and CSS parsers in every language :) So they do not prescribe an evaluation model, only data. But here is a counter-example: XML -> DocBook -> PostScript. Is that a format or a language? :) I think there are now many hybrid languages: OOP+FP (F#, C#, OCaml, Common Lisp...), FP+LP (Mercury, Curry...). There are also many logic-programming libraries, for example yieldProlog for Python :) So there are many cases where it is difficult to make the right classification. I understand that the classification becomes more and more unclear and difficult, and that is true; there may be different ways to classify them.
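To make the Haskell-vs-Prolog contrast above concrete, here is a small sketch (my own illustration, not code from the original mails; the name invFactorial is made up). In Haskell the reverse direction has to be programmed explicitly, for example as a search, whereas a single Prolog relation can be queried in both directions.

    -- Forward direction: an ordinary function from argument to result.
    factorial :: Integer -> Integer
    factorial 0 = 1
    factorial n = n * factorial (n - 1)

    -- Reverse direction: recover n from factorial n, if possible.  This is a
    -- separate, hand-written search; Haskell does not derive it from the
    -- forward definition the way a Prolog solver can run a relation backwards.
    invFactorial :: Integer -> Maybe Integer
    invFactorial k =
      lookup k (takeWhile ((<= k) . fst) [ (factorial n, n) | n <- [0 ..] ])

For example, invFactorial 120 evaluates to Just 5 and invFactorial 7 to Nothing.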
Olaf, I have another question. You were talking about commutative monads. I looked it up; the idea is something like this:

    do a <- ma
       b <- mb
       f a b

is equal to

    do b <- mb
       a <- ma
       f a b

(&&) and (||) are surely commutative in the mathematical sense. But my question is: why does order matter for them in C/C++, Bash and others, to the point that the order is even fixed in the standard? `e1 && e2` is equivalent to `if e1 then e2`, and a lot of code relies on this. Why do they implement the boolean operations that way? Order does not matter for +, -, *, etc. in the same languages (they are commutative), so why do so many languages have non-commutative boolean operations?

When I think about it, I find the following example. A Haskell function is pure, but is that really true? :) In the practical world we can have two functions: one like `f a b = a + b`, and another, `g`, that performs some wavelet transformation or computes some huge fractal. Neither has side effects (effects on the external world), but when you evaluate `f` you cannot observe any effect, whereas when you compute `g` you can even touch the effect on the CPU case with your fingers (it will be hot!) :-) So there is a difference between writing `f && g` and `g && f`. If some code relies on the order of execution and uses `&&` instead of `if`, the order matters.

Maybe boolean operations were not implemented as commutative in those languages because that allows writing "multi-ifs" (a && b && c && d ...) in a short-circuiting way? I never thought about this before :) I remember there were orelse and andalso in Basic and OCaml... So it seems there is a tradition in CS of having mandatory non-commutative and/or and optionally commutative and/or?

===
Best regards, Paul
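A small Haskell sketch of the point about order above (my own illustration; the names cheap and expensive are made up):

    -- (&&) is lazy in its second argument, so evaluation order is observable
    -- even though both operands are pure:
    --
    --   False && undefined   evaluates to False (the right side is never forced)
    --   undefined && False   raises an exception (the left side is forced first)
    --
    -- This is the same short-circuiting that C fixes in its standard, which is
    -- why chains like  a && b && c && d  work as "multi-ifs".
    cheap :: Bool
    cheap = False

    expensive :: Bool              -- stands in for the "hot CPU" computation g
    expensive = even (product [1 .. 100000 :: Integer])

    main :: IO ()
    main = print (cheap && expensive)   -- prints False; expensive is never evaluated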

Hi Paul,

1. What exactly is the difference between "what to do" and "how to do it" for you? For example, in Prolog we write _how to_ build a relation between some variables.

2. Of course, function purity is an abstraction. Now imagine a scale of impurity: a pure Haskell function is purer than a typical Bash function.

I'm genuinely interested in your reasoning, but I will join the small chorus of people asking you for definitions. Perhaps you could write up your ideas in a blog post somewhere that we can then discuss?

- Sergiu
participants (3)
- Olaf Klinke
- PY
- Sergiu Ivanov