If you'd design a Haskell-like language, what would you do differently?

Imagine you would create your own language with a paradigm similar to Haskell, or had the chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove, and what problems would you solve completely differently? Thanks in advance for all answers, yours Robert Clausecker

A mascot :)
On Mon, Dec 19, 2011 at 8:20 PM, Robert Clausecker wrote:
Imagine you would create your own language with a paradigm similar to Haskell, or had the chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove, and what problems would you solve completely differently?
Thanks in advance for all answers, yours
Robert Clausecker

After eight years I'm still discovering why various decisions made in
Haskell are right.
On Mon, Dec 19, 2011 at 11:20 AM, Robert Clausecker wrote:
Imagine you would create your own language with a paradigm similar to Haskell, or had the chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove, and what problems would you solve completely differently?
Thanks in advance for all answers, yours
Robert Clausecker

On Mon, Dec 19, 2011 at 11:20 AM, Robert Clausecker wrote:
Imagine you would create your own language with a paradigm similar to Haskell, or had the chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove, and what problems would you solve completely differently?
Thanks in advance for all answers, yours
* Lenses as the default record infrastructure. (Maybe...)
* Better organization of numeric (and algebraic/categorical) type classes in the Prelude.
* Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it.
* Getting rid of the Functor/Monad nonsense. (Every monad is in fact a functor, but we can't use fmap on arbitrary monads in Haskell; see the sketch below.)
* The inclusion of something like Djinn to automatically generate free theorems from types. It would be nice if GHCi included an interactive Djinn-like interface to generate alternative non-free functions for a type.
* An API to make automating REPL and text editor interactions straightforward. (For example, if we were to use the hypothetical Djinn-like feature, we could select the implementation we want from a list and have it pasted into our text editor of choice automatically.)
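A minimal sketch of the Functor/Monad point (fmapFromMonad is a made-up name; it is essentially Control.Monad.liftM):

fmapFromMonad :: Monad m => (a -> b) -> m a -> m b
fmapFromMonad f m = m >>= (return . f)

-- Every Monad determines a Functor this way, yet (in Haskell as of 2011)
-- Functor is not a superclass of Monad, so a "Monad m" constraint alone
-- does not let you call fmap.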

* Alexander Solla
* Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it.
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"? -- Roman I. Cheplyaka :: http://ro-che.info/
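For concreteness, the term in question, with fix as in Data.Function:

import Data.Function (fix)

noValue :: a
noValue = fix id
-- fix id unfolds to let x = id x in x, i.e. let x = x in x:
-- forcing it never produces anything.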

On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:
* Alexander Solla
[2011-12-19 19:10:32-0800] * Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it.
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"?
There isn't one! Bottoms will be the null pointers of the 2010's, you watch. Ben.

How would you represent it then?
Would it cause a compiler error?
Thiago.
2011/12/20 Ben Lippmeier
On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:
* Alexander Solla
[2011-12-19 19:10:32-0800] * Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it.
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"?
There isn't one!
Bottoms will be the null pointers of the 2010's, you watch.
Ben.

On 20/12/2011, at 9:06 PM, Thiago Negri wrote:
There isn't one!
Bottoms will be the null pointers of the 2010's, you watch.
How would you represent it then?
Types probably. In C, the badness of null pointers is that when you inspect an int* you don't always find an int. Of course the superior Haskell solution is to use algebraic data types, and represent a possibly exceptional integer by "Maybe Int". But then when you inspect a "Maybe Int" you don't always get an .. ah.
Would it cause a compiler error?
Depends whether you really wanted an Int or not. Ben.
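The ".. ah" above, spelled out as a sketch (hidden is a made-up name):

hidden :: Maybe Int
hidden = Just undefined   -- a perfectly well-typed "Maybe Int"

-- case hidden of { Nothing -> 0; Just n -> n }
-- The match on Just succeeds, but demanding n diverges: moving to
-- "Maybe Int" narrows the shape of the value, not its definedness.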

On Tue, Dec 20, 2011 at 8:46 PM, Ben Lippmeier wrote:
On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:
* Alexander Solla
[2011-12-19 19:10:32-0800] * Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it.
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"?
There isn't one!
Bottoms will be the null pointers of the 2010's, you watch.
This ×1000. Errors go in an error monad.
Ben.

On Dec 20, 2011, at 8:30 PM, Jesse Schalken wrote:
On Tue, Dec 20, 2011 at 8:46 PM, Ben Lippmeier wrote:
On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"?
There isn't one!
Bottoms will be the null pointers of the 2010's, you watch.
This ×1000. Errors go in an error monad.
Including all possible manifestations of infinite loops? Cheers, Greg

In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"?
There isn't one!
Bottoms will be the null pointers of the 2010's, you watch.
This ×1000. Errors go in an error monad.
Including all possible manifestations of infinite loops?
Some would say that non-termination is a computational effect, and I can argue either way depending on the day of the week. Of course, the history books show that monads were invented *after* it was decided that Haskell would be a lazy language. Talk about selection bias. Ben.

On Dec 20, 2011, at 8:38 PM, Ben Lippmeier wrote:
Some would say that non-termination is a computational effect, and I can argue either way depending on the day of the week.
*shrug* I figure that whether you call _|_ a value is like whether you accept the Axiom of Choice: it is a situational decision that depends on what you are trying to learn more about.
Of course, the history books show that monads were invented *after* it was decided that Haskell would be a lazy language. Talk about selection bias.
True, but I am not quite sure how that is relevant to _|_... Cheers, Greg

On 20/12/2011, at 21:52 , Gregory Crosswhite wrote:
Some would say that non-termination is a computational effect, and I can argue either way depending on the day of the week.
*shrug* I figure that whether you call _|_ a value is like whether you accept the Axiom of Choice: it is a situational decision that depends on what you are trying to learn more about.
I agree, but I'd like to have more control over my situation. Right now we have boxed and lifted Int, and unboxed and unlifted Int#, but not the boxed and unlifted version, which IMO is usually what you want.
Of course, the history books show that monads were invented *after* it was decided that Haskell would be a lazy language. Talk about selection bias.
True, but I am not quite sure how that is relevant to _|_...
I meant to address the implicit question "why doesn't Haskell use monads to describe non-termination already". The answer isn't necessarily "because it's not a good idea", it's because "that wasn't an option at the time".
Dec 20, 2011, at 14:40, Jesse Schalken wrote:
Including all possible manifestations of infinite loops?
So... this imaginary language of yours would be able to solve the halting problem?
All type systems are incomplete. The idea is to do a termination analysis, and if the program cannot be proved to terminate, then it is marked as "possibly non-terminating". This isn't the same as deciding something is "*definitely* non-terminating", which is what the halting problem is about. This "possibly non-terminating" approach is already used by Coq, Agda and other languages. Ben.
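A rough Haskell rendering of the distinction such an analysis draws (these functions are just illustrative):

-- Structural recursion: the argument shrinks at every call, so a
-- termination checker can accept this.
len :: [a] -> Int
len []     = 0
len (_:xs) = 1 + len xs

-- General recursion: nothing structurally decreases, so a checker would
-- conservatively mark this "possibly non-terminating" (whether it always
-- terminates is famously open).
collatz :: Integer -> Integer
collatz 1 = 0
collatz n = 1 + collatz (if even n then n `div` 2 else 3 * n + 1)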

On Tue, Dec 20, 2011 at 9:34 PM, Gregory Crosswhite wrote:
On Dec 20, 2011, at 8:30 PM, Jesse Schalken wrote:
On Tue, Dec 20, 2011 at 8:46 PM, Ben Lippmeier wrote:
On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"?
There isn't one!
Bottoms will be the null pointers of the 2010's, you watch.
This ×1000. Errors go in an error monad.
Including all possible manifestations of infinite loops?
Definitely. If you think a value might not reduce, return an error in an error monad. Then the caller is forced to handle the case of an error, or propagate the error upwards. The error can also be handled in pure code this way, whereas bottom can only be handled within the IO monad.
Cheers, Greg

On Dec 20, 2011, at 8:40 PM, Jesse Schalken wrote:
If you think a value might not reduce, return an error in an error monad.
Okay, I'm completely convinced! Now all that we have to do is to solve the halting problem to make your solution work... :-) Cheers, Greg

On Tue, Dec 20, 2011 at 9:47 PM, Gregory Crosswhite wrote:
On Dec 20, 2011, at 8:40 PM, Jesse Schalken wrote:
If you think a value might not reduce, return an error in an error monad.
Okay, I'm completely convinced! Now all that we have to do is to solve the halting problem to make your solution work... :-)
Why do you have to solve the halting problem?

Consider integer division by 0:

intDiv x y = if y > x then 0 else 1 + (intDiv (x - y) y)

This is correct, but does not reduce with y = 0. The Prelude version returns bottom in this case. Here is a version that returns an error in an Either String:

intDiv :: (Ord a, Num a) => a -> a -> Either String a
intDiv _ 0 = Left "Division by 0!"
intDiv x y = if y > x then Right 0 else intDiv (x - y) y >>= (Right . (1 +))

This is all I was talking about.

Cheers,
Greg

On Dec 20, 2011, at 9:18 PM, Jesse Schalken wrote:
Why do you have to solve the halting problem?
You have to solve the halting problem if you want to replace every place where _|_ could occur with an Error monad (or something similar), because _|_ includes occasions when functions will never terminate.
Consider integer division by 0. [...] This is all I was talking about.
But imagine there was an occasion where you *knew* that the divisor was never zero --- say, because the divisor was constructed to be a natural number. Now there is no point in running in the Error monad because there will never be such a runtime error; in fact, it is not clear what you would even *do* with a Left value anyway, short of terminating the program and printing an error, which is what would have happened anyway.

Furthermore, it is easy to imagine circumstances where you have now forced your entire program to run in the Error monad, which makes everything incredibly inconvenient with no benefit at all. This is the problem with arguments against partial functions; they don't solve any problems at all except in the case where you have untrusted data, in which case you should be using a different function or manually checking it anyway, and they add a lot of wasted overhead.

Cheers, Greg

On Tue, Dec 20, 2011 at 10:43 PM, Gregory Crosswhite wrote:
On Dec 20, 2011, at 9:18 PM, Jesse Schalken wrote:
Why do you have to solve the halting problem?
You have to solve the halting problem if you want to replace every place where _|_ could occur with an Error monad (or something similar), because _|_ includes occasions when functions will never terminate.
I think we're talking about different things. By "bottom" I mean the function explicitly returns "error ..." or "undefined". In those cases, it should go in an error monad instead. In cases where there is an infinite loop, the function doesn't return anything because it never finishes, and indeed this separate problem will never be solved while remaining Turing complete because it is the halting problem.
Consider integer division by 0. [...] This is all I was talking about.
But imagine there was an occasion where you *knew* that the divisor was never zero --- say, because the divisor was constructed to be a natural number.
Then use a separate type for natural numbers excluding 0. Then you can define a total integer division function on it (although the return value may be zero and so needs a different type).
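For instance (a sketch; Positive, mkPositive and divBy are made-up names, not a standard API):

newtype Positive = Positive Integer   -- invariant: the wrapped Integer is > 0

mkPositive :: Integer -> Maybe Positive
mkPositive n
  | n > 0     = Just (Positive n)
  | otherwise = Nothing

-- Division by a Positive is total: the type rules out a zero divisor.
divBy :: Integer -> Positive -> Integer
divBy x (Positive y) = x `div` y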
Now there is no point in running in the Error monad because there will never be such a runtime error; in fact, it is not clear what you would even *do* with a Left value anyway, short of terminating the program and printing an error, which is what would have happened anyway.
What you do with a Left value is up to you - that's the point, you now have a choice. In fact, the value might not even be being handled by you, in which case someone else now has a choice. Handling of the error is done in the same place as handling of the result, no IO needed.
Furthermore, it is easy to imagine circumstances where you have now forced your entire program to run in the Error monad, which makes everything incredibly inconvenient with no benefit at all.
This "inconvenience" I imagine is the extra code required to compose functions which return values in a monad as opposed to straight values. To me this is a small price to pay for knowing my code won't randomly crash, and I would rather this be handled syntactically to make composing monadic values more concise. The point is your program shouldn't be able to make assumptions about values without proving them with types. It's often easier not to make the assumption and propagate some error in an error monad instead, but that's better than getting away with the assumption and having the program crash or behave erratically because the assumption turned out false. This is the problem with arguments against partial functions; they don't
solve any problems at all except in the case where you have untrusted data in which case you should be using a different function or manually checking it anyway, and they add a lot of wasted overhead.
The whole term "untrusted data" baffles me. How often can you actually "trust" your data? When you send your software out into the wild, what assumptions can you make about its input? What assumptions can you make about the input to a small part of a larger program which is millions of lines? You can often deduce that certain values do/do not occur in small parts of code, but the difficulty of such deductions increases exponentially with the size of the codebase, and is a job done much better by a type system.

Also I would like to think this "wasted overhead" can be optimised away at some stage of compilation, or somehow removed without the programmer needing to think about it. Maybe I'm just dreaming on those fronts, however.

Cheers,
Greg

What I think is the hard part is to put this in the type system, e.g.:
intDiv x y = if y > x then 0 else 1 + (intDiv (x - y) y)
Should not compile. Otherwise you will need the bottom value.
Am I missing something?
Thiago.
2011/12/20 Jesse Schalken
On Tue, Dec 20, 2011 at 10:43 PM, Gregory Crosswhite wrote:
On Dec 20, 2011, at 9:18 PM, Jesse Schalken wrote:
Why do you have to solve the halting problem?
You have to solve the halting problem if you want to replace every place where _|_ could occur with an Error monad (or something similar), because _|_ includes occasions when functions will never terminate.
I think we're talking about different things. By "bottom" I mean the function explicitly returns "error ..." or "undefined". In those cases, it should go in an error monad instead. In cases where there is an infinite loop, the function doesn't return anything because it never finishes, and indeed this separate problem will never be solved while remaining Turing complete because it is the halting problem.
Consider integer division by 0. [...] This is all I was talking about.
But imagine there was an occasion where you *knew* that the divisor was never zero --- say, because the divisor was constructed to be a natural number.
Then use a separate type for natural numbers excluding 0. Then you can define a total integer division function on it (although the return value may be zero and so needs a different type).
Now there is no point in running in the Error monad because there will never be such a runtime error; in fact, it is not clear what you would even *do* with a Left value anyway, short of terminating the program and printing an error, which is what would have happened anyway.
What you do with a Left value is up to you - that's the point, you now have a choice. In fact, the value might not even be being handled by you, in which case someone else now has a choice. Handling of the error is done in the same place as handling of the result, no IO needed.
Furthermore, it is easy to imagine circumstances where you have now forced your entire program to run in the Error monad, which makes everything incredibly inconvenient with no benefit at all.
This "inconvenience" I imagine is the extra code required to compose functions which return values in a monad as opposed to straight values. To me this is a small price to pay for knowing my code won't randomly crash, and I would rather this be handled syntactically to make composing monadic values more concise.
The point is your program shouldn't be able to make assumptions about values without proving them with types. It's often easier not to make the assumption and propagate some error in an error monad instead, but that's better than getting away with the assumption and having the program crash or behave erratically because the assumption turned out false.
This is the problem with arguments against partial functions; they don't solve any problems at all except in the case where you have untrusted data in which case you should be using a different function or manually checking it anyway, and they add a lot of wasted overhead.
The whole term "untrusted data" baffles me. How often can you actually "trust" your data? When you send your software out into the wild, what assumptions can you make about its input? What assumptions can you make about the input to a small part of a larger program which is millions of lines? You can often deduce that certain values do/do not occur in small parts of code, but the difficulty of such deductions increases exponentially with the size of the codebase, and is a job done much better by a type system.
Also I would like to think this "wasted overhead" can be optimised away at some stage of compilation, or somehow removed without the programmer needing to think about it. Maybe I'm just dreaming on those fronts, however.
Cheers, Greg

On Dec 20, 2011, at 11:21 PM, Jesse Schalken wrote:
On Tue, Dec 20, 2011 at 10:43 PM, Gregory Crosswhite wrote:
On Dec 20, 2011, at 9:18 PM, Jesse Schalken wrote:
Why do you have to solve the halting problem?
You have to solve the halting problem if you want to replace every place where _|_ could occur with an Error monad (or something similar), because _|_ includes occasions when functions will never terminate.
I think we're talking about different things. By "bottom" I mean the function explicitly returns "error ..." or "undefined". In those cases, it should go in an error monad instead. In cases where there is an infinite loop, the function doesn't return anything because it never finishes, and indeed this separate problem will never be solved while remaining Turing complete because it is the halting problem.
Then honestly you should choose a different term because I am pretty certain that my use of the term "bottom" is the commonly accepted one which (among other places) appears in denotational semantics.
Consider integer division by 0. [...] This is all I was talking about.
But imagine there was an occasion where you *knew* that the divisor was never zero --- say, because the divisor was constructed to be a natural number.
Then use a separate type for natural numbers excluding 0. Then you can define a total integer division function on it (although the return value may be zero and so needs a different type).
That would certainly be a lovely idea *if* we were programming in Agda, but I was under the assumption that this conversation was about Haskell. :-)
Now there is no point in running in the Error monad because there will never be such a runtime error; in fact, it is not clear what you would even *do* with a Left value anyway, short of terminating the program and printing an error, which is what would have happened anyway.
What you do with a Left value is up to you - that's the point, you now have a choice.
Yes, but it is a pointless choice because if you had any reason to believe that your value was an invalid input to a function you would have checked it by now or used an alternative non-partial function that did run in an Error monad for that specific purpose.
In fact, the value might not even be being handled by you, in which case someone else now has a choice. Handling of the error is done in the same place as handling of the result, no IO needed.
Yes, but all that the user of your library knows at this point is that there is a bug somewhere in your library that violated an invariant. Nearly all of the time there is no way to recover from this in a useful way and so all the user will end up doing in response to your Left value is to abort anyway.
The point is your program shouldn't be able to make assumptions about values without proving them with types.
I agree but, again, we aren't talking about Agda here, we are talking about Haskell. :-)
The whole term "untrusted data" baffles me. How often can you actually "trust" your data?
All the time! For example, if I create a counter that starts at 1, only increase it, and give nobody else access to it, then I can be as certain as it is possible to be that it is not 0. Also, there are occasionally times when I essentially check that a Maybe value is Just in one part of the code, and then in another part of the code need to extract the value from the Just; in such cases there is no point in using any method *other* than simply fromJust to extract the value. (Of course, it would be better to have condensed all of this into a single case statement in the first place, but sometimes --- say, when interfacing with others' libraries --- this ends up not being an available option.)
When you send your software out into the wild, what assumptions can you make about its input?
None, which is why you need to test your input in that case.
Also I would like to think this "wasted overhead" can be optimised away at some stage of compilation, or somehow removed without the programmer needing to think about it.
I was thinking in terms of overhead from the coder's point of view, not from the compiler's point of view. Cheers, Greg

On Wed, Dec 21, 2011 at 1:09 AM, Gregory Crosswhite wrote:
On Dec 20, 2011, at 11:21 PM, Jesse Schalken wrote:
On Tue, Dec 20, 2011 at 10:43 PM, Gregory Crosswhite <gcrosswhite@gmail.com> wrote:
On Dec 20, 2011, at 9:18 PM, Jesse Schalken wrote:
Why do you have to solve the halting problem?
You have to solve the halting problem if you want to replace every place where _|_ could occur with an Error monad (or something similar), because _|_ includes occasions when functions will never terminate.
I think we're talking about different things. By "bottom" I mean the function explicitly returns "error ..." or "undefined". In those cases, it should go in an error monad instead. In cases where there is an infinite loop, the function doesn't return anything because it never finishes, and indeed this separate problem will never be solved while remaining Turing complete because it is the halting problem.
Then honestly you should choose a different term because I am pretty certain that my use of the term "bottom" is the commonly accepted one which (among other places) appears in denotational semantics.
I apologize if I was using the wrong terminology. I've seen "error ..." shown as, and understood it to be, _|_, but it seems _|_ refers to either a value that does not reduce or Haskell's "error" function depending on the context.
Consider integer division by 0. [...] This is all I was talking about.
But imagine there was an occasion where you *knew* that the divisor was never zero --- say, because the divisor was constructed to be a natural number.
Then use a separate type for natural numbers excluding 0. Then you can define a total integer division function on it (although the return value may be zero and so needs a different type).
That would certainly be a lovely idea *if* we were programming in Agda, but I was under the assumption that this conversation was about Haskell. :-)
I don't have experience with proof assistants, but maybe my answer to this thread can be summed up as giving Haskell that kind of capability. ;)
Now there is no point in running in the Error monad because there will never be such a runtime error; in fact, it is not clear what you would even *do* with a Left value anyway, short of terminating the program and printing an error, which is what would have happened anyway.
What you do with a Left value is up to you - that's the point, you now have a choice.
Yes, but it is a pointless choice because if you had any reason to believe that your value was an invalid input to a function you would have checked it by now or used an alternative non-partial function that did run in an Error monad for that specific purpose.
In fact, the value might not even be being handled by you, in which case someone else now has a choice. Handling of the error is done in the same place as handling of the result, no IO needed.
Yes, but all that the user of your library knows at this point is that there is a bug somewhere in your library that violated an invariant. Nearly all of the time there is no way to recover from this in a useful way and so all the user will end up doing in response to your Left value is to abort anyway.
What if for example the program is a server which is always running. If you use "error ...", the server will crash, and someone has to go start it up again. The person who wrote the server has to remember to wrap each request in a try...catch block in the IO monad in the outer layer to make sure the server doesn't crash due to errors in pure code. What if they forget? If you use an error monad, they can't forget, because the type system forces them to handle the error. Maybe they will choose to "exit" in the case of an error, but at least then the program is crashing because they've explicitly told it to rather than because they forgot something. More likely they will respond to the request with an "error" response, allowing the server to continue running, but either way the type system has forced them to make a decision.
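A minimal sketch of that server shape (Request, Response and the handler are stand-ins, not a real framework):

type Request  = String
type Response = String

handle :: Request -> Either String Response
handle "ping" = Right "pong"
handle req    = Left ("unknown request: " ++ req)

-- The Either forces a decision at the call site; the server keeps
-- running no matter what the handler returns.
serve :: Request -> Response
serve req = case handle req of
  Left err -> "error: " ++ err
  Right ok -> ok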
The point is your program shouldn't be able to make assumptions about values without proving them with types.
I agree but, again, we aren't talking about Agda here, we are talking about Haskell. :-)
Now I really want to look at proof assistants!
The whole term "untrusted data" baffles me. How often can you actually "trust" your data?
All the time! For example, if I create a counter that starts at 1, only increase it, and give nobody else access to it, then I can be as certain as it is possible to be that it is not 0.
Start your counter at 0 then. Then if you really don't want to deal with 0, treat 0 as 1 and 1 as 2 etc - hence a type for non-zero naturals. I think structuring code such that the types you use contain no useless or impossible values is often easy with a little thought, and worth it because the compiler is now verifying the assumptions you made. In cases where that isn't possible (or you can't be bothered), I'd rather just propagate an error in a monad than resort to Haskell's "error ..." for above reasons.
Also, there are occasionally times when I essentially check that a Maybe value is Just in one part of the code, and then in another part of the code need to extract the value from the Just; in such cases there is no point in using any method *other* than simply fromJust to extract the value.
If you checked that a Maybe is a Just, then continued to pass on the Maybe to a function which assumed it is Just, then you should have just passed on the value in the Just instead. If it's Maybe, you must handle Nothing. If you've already handled Nothing, it doesn't make sense to keep dealing with a Maybe.
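Concretely (step2 and run are made-up names, just to show the shape):

-- Instead of passing the Maybe on (forcing a re-check or a fromJust
-- later), unwrap once and pass the plain value:
step2 :: Int -> Int
step2 n = n + 1

run :: Maybe Int -> Int
run m = case m of
  Nothing -> 0         -- Nothing is handled exactly once, here
  Just n  -> step2 n   -- from here on, no Maybe in sight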
(Of course, it would be better to have condensed all of this into a single case statement in the first place, but sometimes --- say, when interfacing with others' libraries --- this ends up not being an available option.)
When you send your software out into the wild, what assumptions can you make about its input?
None, which is why you need to test your input in that case.
Also I would like to think this "wasted overhead" can be optimised away at some stage of compilation, or somehow removed without the programmer needing to think about it.
I was thinking in terms of overhead from the coder's point of view, not from the compiler's point of view.
Cheers, Greg

On Tue, Dec 20, 2011 at 17:52, Jesse Schalken wrote:
On Wed, Dec 21, 2011 at 1:09 AM, Gregory Crosswhite wrote:
That would certainly be a lovely idea *if* we were programming in Agda, but I was under the assumption that this conversation was about Haskell. :-)
I don't have experience with proof assistants, but maybe my answer to this thread can be summed up as giving Haskell that kind of capability. ;)
You want to look at Agda: a proof assistant language whose syntax is similar to Haskell, and which supports dependent types and the like.

--
brandon s allbery  allbery.b@gmail.com
wandering unix systems administrator (available)  (412) 475-9364 vm/sms

On Dec 21, 2011, at 8:52 AM, Jesse Schalken wrote:
I don't have experience with proof assistants, but maybe my answer to this thread can be summed up as giving Haskell that kind of capability. ;)
Okay, then suffice it to say that most of what you said *is* implemented in real languages like Coq and Agda, so you should check them out. :-)

Unfortunately there are two hurdles one faces when using these languages: first, it is often very cumbersome to actually prove all of the invariants you want to prove to make your code total, and second, there is not yet a nice way of doing I/O actions and/or interfacing with external libraries written in other languages. However, until we have solved the problems that make working entirely in dependently-typed languages a pain, we will need to stick with languages with weaker type systems like Haskell, which means that we can't get around _|_ since we cannot encode all of our invariants into the type system.

Incidentally, I would highly recommend learning more about Coq and Agda. There is a great textbook to start from available here:

http://adam.chlipala.net/cpdt/

It is valuable because it is not Coq-specific but rather is a good general introduction to, exactly as the title promises, certified programming with dependent types.

You should also really check out Agda; the main difference between it and Coq is the language in which proofs are written. To make a long story short, in Coq proofs are often easier to write because they use a specialized language for that purpose, but this often makes them a bit of a black box, whereas in Agda the proofs are fully explicit, which makes them much more transparent but also more verbose and often harder to write. Unfortunately the documentation for Agda tends to be sparse and split over several kinds of documents (such as theses, papers, etc.), so there isn't an obvious guide for me to recommend to you, but if you look around you should find enough information to get you started. The home page is

http://wiki.portal.chalmers.se/agda/pmwiki.php

For me the most exciting part was reading through the Agda standard libraries, which you can download from

http://wiki.portal.chalmers.se/agda/pmwiki.php?n=Libraries.StandardLibrary

Just reading through their code is enlightening because it shows you just how much power is available from dependent types; the standard library also covers a lot more than Coq's does. Unfortunately it also showcases one of the limitations of Agda, which is that things can often get cryptic and arcane very quickly, especially since Agda makes you work with "universes" explicitly (Coq handles them implicitly), which adds another layer of complexity. :-)

(FYI, the universes are a hierarchy such that values are at the bottom, types of values one rung up, types of types two rungs up, types of types of types three rungs up, etc. Haskell is limited in only having a three-level hierarchy, with values, types, and kinds.)

Cheers, Greg

Sent from my iPhone
Dec 20, 2011, at 14:40, Jesse Schalken wrote:
On Tue, Dec 20, 2011 at 9:34 PM, Gregory Crosswhite wrote:
On Dec 20, 2011, at 8:30 PM, Jesse Schalken wrote:
On Tue, Dec 20, 2011 at 8:46 PM, Ben Lippmeier wrote:
On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"?
There isn't one!
Bottoms will be the null pointers of the 2010's, you watch.
This ×1000. Errors go in an error monad.
Including all possible manifestations of infinite loops?
Definitely.
If you think a value might not reduce, return an error in an error monad. Then the caller is forced to handle the case of an error, or propagate the error upwards. The error can also be handled in pure code this way, whereas bottom can only be handled within the IO monad.
So... this imaginary language of yours would be able to solve the halting problem?
Cheers, Greg

MigMit wrote:
Dec 20, 2011, at 14:40, Jesse Schalken wrote:
If you think a value might not reduce, return an error in an error monad. Then the caller is forced to handle the case of an error, or propagate the error upwards. The error can also be handled in pure code this way, whereas bottom can only be handled within the IO monad.
So... this imaginary language of yours would be able to solve the halting problem?
Depends on what you mean by "solve" the halting problem. Agda and Coq are two languages that will only compile programs that can be proven to terminate. Non-terminating programs are rejected.

Erik
--
Erik de Castro Lopo
http://www.mega-nerd.com/

On Tue, Dec 20, 2011 at 1:46 AM, Ben Lippmeier wrote:
On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:
* Alexander Solla
[2011-12-19 19:10:32-0800] * Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it.
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of "fix id"?
There isn't one!
Indeed, that is my point. "Bottom" is the representation of a computation which cannot be computed. It does not have a semantic in anything less than the first infinite ordinal. It should be treated as essentially unique. It exists as a syntactic construct, but it cannot be given an interpretation in Haskell.

In particular, we cannot compare "distinct" (or even indistinct) bottoms, because of the halting problem. There is no consistent logic in which (forall x, x = x) does not hold. Treating bottom the same way we treat 1, "abc", (Just 10), and the other (in principle) comparable values introduces contradiction to Haskell, as a logic. (That is why bottom has the syntactic symbol _|_, the syntax for /the/ unique contradiction.)

Denotational semantics is nice, but it goes way out of the realm of realistic computation to deal with infinite ordinals just to have a complete semantic. (We can't run computations even up to the first infinite ordinal in real life.) I would rather have an incomplete semantic, and have all the incomplete parts collapsed into something we call "bottom". We can then be smart and stay within a total fragment of the language (where bottom is guaranteed to not occur). Or we may have to move into the non-total fragment of the language for performance reasons.

That's fine, but we shouldn't treat things like 'seq' as if they produce values. 'seq', 'unsafeCoerce', and many others are literally compiler magic, and as such should be used with extreme care not warranted when dealing with "regular" values. I have used the phrase "proto-value" for describing these magical objects. I just wish the documentation made the distinction between a "proto-value" and a real value more clear.

On 21 Dec 2011, at 08:24, Alexander Solla wrote:
I would rather have an incomplete semantic, and have all the incomplete parts collapsed into something we call "bottom".
I don't see the reason to limit ourselves to that. Of course, in total languages like Agda there is no need for (_|_). But in a Turing-complete lazy language like Haskell we really need it. Of course, it makes little sense to write "fix id" anywhere in your program; but, for example, lists like "1:2:3:4:5:_|_" can be really useful. And denotational semantics is not just nice. It is useful. It's the best way to understand why the program we just wrote doesn't terminate.
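For instance, the defined prefix of such a partial list is perfectly usable (using undefined for _|_):

usable :: [Int]
usable = take 5 (1:2:3:4:5:undefined)   -- evaluates to [1,2,3,4,5]
-- take never forces the bottom tail; by contrast,
-- length (1:2:3:4:5:undefined) diverges, since length walks the whole spine.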

On Tue, Dec 20, 2011 at 9:16 PM, MigMit wrote:
On 21 Dec 2011, at 08:24, Alexander Solla wrote:
I would rather have an incomplete semantic, and have all the incomplete parts collapsed into something we call "bottom".
I don't see the reason to limit ourselves to that. Of course, in total languages like Agda there is no need for (_|_). But in a Turing-complete lazy language like Haskell we really need it. Of course, it makes little sense to write "fix id" anywhere in your program; but, for example, lists like "1:2:3:4:5:_|_" can be really useful.
It is not "limiting" to make distinctions that capture real differences. An overly broad generalization limits what can be proved. Can we prove that every vehicle with wheels has a motor? Of course not -- bicycles exist. Can we prove every car has a motor? Yes we can. Throwing bottoms into the collection of values is like throwing bicycles into the collection of cars. We can say /less/ about the collection than we could before, /because/ the collection is more general.
And denotational semantics is not just nice. It is useful. It's the best way to understand why the program we just wrote doesn't terminate.
Denotational semantics is unrealistic. It is a Platonic model of constructive computation. Alan Turing introduced the notion of an "oracle" to deal with what we are calling bottom. An oracle is a "thing" that (magically) "knows" what a bottom value denotes, without having to wait for an infinite number of steps. Does Haskell offer oracles? If not, we should abandon the use of distinct bottoms. The /defining/ feature of a bottom is that it doesn't have an interpretation.

Note that I am not suggesting abandoning the notion of /a/ bottom. They should all be treated alike, and be treated differently from every other Haskell value. Every other Haskell value /does/ have an interpretation. Bottom is different from every "other value". We should exclude it from the collection of values. Treating things that are not alike as if they are introduces a loss of information. We can prove useful things about the collection of "real" values that we cannot prove about bottom, and so, about the collection of real values and bottoms.

I happen to only write Haskell programs that terminate. It is not that hard. We must merely restrict ourselves to the total fragment of the language, and there are straightforward methods to do so. In particular, I suggest the paper "Fast and Loose Reasoning is Morally Correct": http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.59.8232

On Dec 22, 2011, at 12:25 PM, Alexander Solla wrote:
It is not "limiting" to make distinctions that capture real differences. An overly broad generalization limits what can be proved. Can we prove that every vehicle with wheels has a motor? Of course not -- bicycles exist. Can we prove every car has a motor? Yes we can. Throwing bottoms into the collection of values is like throwing bicycles into the collection of cars. We can say /less/ about the collection than we could before, /because/ the collection is more general.
Sure, throwing bottom into the set of values means that we can no longer prove as many nice properties about them. However, since bottom *does* exist in this set (because functions cannot be guaranteed to terminate), the properties that we do prove will have more relevance.

Cheers, Greg

On 22 Dec 2011, at 06:25, Alexander Solla wrote:
Denotational semantics is unrealistic.
And so are imaginary numbers. But they are damn useful for electrical circuit calculations, so who cares?
The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
They should all be treated alike, and be treated differently from every other Haskell value.
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, _|_)" is a good and meaningful calculation.
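Concretely (using undefined for _|_):

ok :: Integer
ok = fst (1, undefined)   -- evaluates to 1: fst never touches the bottom
-- snd (1, undefined) diverges, because it demands the bottom component.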
Every other Haskell value /does/ have an interpretation.
So, (_|_) is bad, but (1, _|_) is good? You know, my scientific advisor used to say "math is about calling different things by the same name; philosophy is about calling the same thing by different names". It seems to me that philosophy is something you're doing now, whereas programming is all about math.
I happen to only write Haskell programs that terminate.
Sure, but if you've ever used recursion, then you do have bottoms in your program.

On Wed, Dec 21, 2011 at 8:39 PM, MigMit wrote:
On 22 Dec 2011, at 06:25, Alexander Solla wrote:
Denotational semantics is unrealistic.
And so are imaginary numbers. But they are damn useful for electrical circuit calculations, so who cares?
Not a fair comparison. Quaternions are not particularly useful for electrical circuits, because it is unrealistic to apply a four-dimensional construct to two-dimensional phase spaces. In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values. http://en.wikipedia.org/wiki/Model_theory
They should all be treated alike, and be treated differently from every other Haskell value.
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, _|_)" is a good and meaningful calculation.
Mere syntax. What is fst doing? It computes "forall (x,y) |- x". Using the language of model theory, we can say that Haskell computes an interpretation "forall (x,y) |= x". It is _|_ that lacks an interpretation, not fst. We can see that by trying "fst (_|_, 1)", which reduces to _|_ by the proof rule for fst. _|_ lacks an interpretation, so the run-time either loops forever or throws an error (notice that error throwing is only done with compiler magic -- if you define your own undefined = undefined, using it will loop. GHC is specially trained to look for Prelude.undefined to throw the "undefined" error.)
Every other Haskell value /does/ have an interpretation.
So, (_|_) is bad, but (1, _|_) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into. Using examples from denotational semantics is not going to change my mind about the applicability of one or the other. It is clear that denotational semantics is a Platonic model of constructive computation.
You know, my scientific advisor used to say "math is about calling different things by the same name; philosophy is about calling the same thing by different names". It seems to me that philosophy is something you're doing now, whereas programming is all about math.
Then you are mistaken. I am talking about choosing the appropriate mathematical model of computation to accurately, clearly, and simply describe the language's semantics.

On Sat, Dec 24, 2011 at 8:20 PM, Alexander Solla wrote:
On Wed, Dec 21, 2011 at 8:39 PM, MigMit wrote:
On 22 Dec 2011, at 06:25, Alexander Solla wrote:
Denotational semantics is unrealistic.
And so are imaginary numbers. But they are damn useful for electrical circuit calculations, so who cares?
Not a fair comparison. Quaternions are not particularly useful for electrical circuits, because it is unrealistic to apply a four-dimensional construct to two-dimensional phase spaces. In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values.
http://en.wikipedia.org/wiki/Model_theory
They should all be treated alike, and be treated differently from every other Haskell value.
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, _|_)" is a good and meaningful calculation.
Mere syntax. What is fst doing? It computes "forall (x,y) |- x". Using the language of model theory, we can say that Haskell computes an interpretation "forall (x,y) |= x". It is _|_ that lacks an interpretation, not fst. We can see that by trying "fst (_|_, 1)", which reduces to _|_ by the proof rule for fst. _|_ lacks an interpretation, so the run-time either loops forever or throws an error (notice that error throwing is only done with compiler magic -- if you define your own undefined = undefined, using it will loop. GHC is specially trained to look for Prelude.undefined to throw the "undefined" error.)
Every other Haskell value /does/ have an interpretation.
So, (_|_) is bad, but (1, _|_) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into.
Using examples from denotational semantics is not going to change my mind about the applicability of one or the other. It is clear that denotational semantics is a Platonic model of constructive computation.
You make it sound as though platonism is somehow bad. Before we get into that, let's look at a capsule of our history:

Kronecker and Cantor disagree on whether sets exist.
K: God created the natural numbers; all the rest is the work of man.
C: All manner of infinite sets exist (presumably in the mind of God?)

A generation later and continuing, Hilbert and Brouwer disagree on what constitutes a proof. A little later Goedel sets out to demolish Hilbert's formalist agenda. The (interpretations of) Hilbert's philosophical position is particularly curious:

For Brouwer, Hilbert is wrong because he is a platonist -- believes in completed infinities.
For Goedel, Hilbert is wrong because he is not a platonist -- he is looking for mechanizable proofs.

Talk of two stools!!

Turing tries to demolish Goedel. His real intention is AI, not the Turing machine. He does not succeed -- simple questions turn out to be undecidable/non-computable. However a 'side-effect' (ahem) of his attempts via the TM is ... the computer.

That Goedel was a (mostly closet) platonist: http://guidetoreality.blogspot.com/2006/12/gdels-platonism.html

Am I arguing that platonism is good? Dunno... I am only pointing out that the tension between platonism and its opponents (formalist, empiricist, constructivist etc.) is evidently a strong driver for progress.

BTW a more detailed look at the historical arguments would show that the disagreements were more violent than anything on this list :-) Hopefully on this list we will be cordial and wish each other a merry Christmas!

Sent from my iPad
24.12.2011, at 18:50, Alexander Solla wrote:
In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
And why exactly should we limit ourselves to some theory you happen to like?
The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values.
But (_|_) IS a concrete value.
But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, _|_)" is a good and meaningful calculation.
Mere syntax.
So what?
Every other Haskell value /does/ have an interpretation.
So, (_|_) is bad, but (1, _|_) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into.
Well, that's a different story. But it seems to me that the term "Haskell-like" won't apply to that kind of language. Also, it seems to me (though I don't have any kind of proof) that denotational semantics is something that is much simpler.
It is clear that denotational semantics is a Platonic model of constructive computation.
Could you please stop offending abstract notions?
Then you are mistaken. I am talking about choosing the appropriate mathematical model of computation to accurately, clearly, and simply describe the language's semantics.
Well, domain theory does exactly that for Haskell.

2011/12/24 MigMit
Sent from my iPad
24.12.2011, at 18:50, Alexander Solla wrote:
In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
And why exactly should we limit ourselves to some theory you happen to like?
Because the question was about MY IDEAL. I have spoken at length why my ideal is preferable to the current state of affairs. You still continue to misunderstand my point, and respond with red herrings.
The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values.
http://en.wikipedia.org/wiki/Model_theory
But (_|_) IS a concrete value.
Um, perhaps in denotational semantics. But even in that case, it is not a HASKELL value. You seem to be mixing up syntax and semantics.
But they ARE very similar to other values. They can be members of
otherwise meaningful structures, and you can do calculations with these structures. "fst (1, _|_)" is a good and meaningful calculation.
Mere syntax.
So what?
So we give meaning to syntax through our semantics. That is what this whole conversation is all about. I am proposing we give Haskell bottoms semantics that bring it in line with the bottoms from various theories, including lattice theory, the theory of sets, and the theory of logic, as opposed to using denotational semantics' bottom semantic, which is unrealistic for a variety of reasons. Haskell bottoms can't be compared, due to Rice's theorem. Haskell bottoms cannot be given an interpretation as a Haskell value.

What happens to referential transparency when distinct things are all defined by the same equation?

... = let x = x in x

undefined, seq, unsafeCoerce, and many other "primitives" are defined using that equation. (See GHC.Prim.) The Haskell definition for these distinct things /does nothing/. It loops. The semantics we get for them (an error message if we use undefined, a causal side-effect if we use seq, type coercion if we use unsafeCoerce) is done /magically/ by the compiler. As far as Haskell, as a language, is concerned, all of these are bottom, and they are all /equal/, because of referential transparency/substitutionality. Oops. So Haskell, as a logic, is telling us that all of these "distinct" bottoms are not so distinct, and that any interpretation function providing semantics should map them all to the same value in the model.
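A tiny check of the "same equation" point (myBottom is a made-up name):

-- A hand-rolled bottom, defined by exactly that equation:
myBottom :: a
myBottom = let x = x in x
-- Forcing myBottom never produces a value (GHC's runtime may report it
-- as <<loop>>); the "undefined" error message that Prelude.undefined
-- gives comes from the implementation, not from the defining equation.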
Every other Haskell value /does/ have an interpretation.
So, (_|_) is bad, but (1, _|_) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into.
Well, that's a different story.
No, it's the same story that I've been telling.
But it seems to me that the term "Haskell-like" won't apply to that kind of language. Also, it seems to me (though I don't have any kind of proof) that denotational semantics is something that is much simpler.
Haskell is already a paraconsistent logic. How is giving Haskell operational and interpretive semantics not "Haskell-like"? Its denotational semantics is a platonic completion of the logical semantics.
It is clear that denotational semantics is a Platonic model of constructive computation.
Could you please stop offending abstract notions?
What? Platonic does not mean "bad". But it does mean that the theory is "too big" to be appropriate in this case.


Alexander Solla wrote:
And denotational semantics is not just nice. It is useful. It's the best way to understand why the program we just wrote doesn't terminate.
Denotational semantics is unrealistic. It is a Platonic model of constructive computation. Alan Turing introduced the notion of an "oracle" to deal with what we are calling bottom. An oracle is a "thing" that (magically) "knows" what a bottom value denotes, without having to wait for an infinite number of steps. Does Haskell offer oracles? If not, we should abandon the use of distinct bottoms. The /defining/ feature of a bottom is that it doesn't have an interpretation.
Huh? I don't see the problem.

Introducing bottom as a value is a very practical way to assign a well-defined mathematical object to each expression that you can write down in Haskell. See

http://en.wikibooks.org/wiki/Haskell/Denotational_semantics

It's irrelevant whether _|_ is "unrealistic", it's just a mathematical model anyway, and a very useful one at that. For instance, we can use it to reason about strictness, which gives us information about lazy evaluation and operational semantics.

Best regards,
Heinrich Apfelmus
--
http://apfelmus.nfshost.com

On Thu, Dec 22, 2011 at 4:19 AM, Heinrich Apfelmus wrote:
Alexander Solla wrote:
And denotational semantics is not just nice. It is useful. It's the best way to understand why the program we just wrote doesn't terminate.
Denotational semantics is unrealistic. It is a Platonic model of constructive computation. Alan Turing introduced the notion of an "oracle" to deal with what we are calling bottom. An oracle is a "thing" that (magically) "knows" what a bottom value denotes, without having to wait for an infinite number of steps. Does Haskell offer oracles? If not, we should abandon the use of distinct bottoms. The /defining/ feature of a bottom is that it doesn't have an interpretation.
Huh? I don't see the problem.
Introducing bottom as a value is a very practical way to assign a well-defined mathematical object to each expression that you can write down in Haskell. See
http://en.wikibooks.org/wiki/Haskell/Denotational_semantics
It's irrelevant whether _|_ is "unrealistic", it's just a mathematical model anyway, and a very useful one at that. For instance, we can use it to reason about strictness, which gives us information about lazy evaluation and operational semantics.
As another example.... Not that long ago, Bob Harper was complaining on his blog about how you can't use induction to reason about Haskell functions. But, that's incorrect. What you can't use is induction based on a model where all data types are the expected inductively defined sets, and non-termination is modeled as an effect. This isn't surprising, of course, because Haskell's models (i.e. denotational semantics) aren't like that.

But, once you know what Haskell's models are---they model types as domains, and data types are inductively defined _domains_, not sets---then you in fact get induction principles based on those models (see for instance, Fibrational Induction Rules for Initial Algebras*). You need to prove two or three additional cases, but it works roughly the same as the plain ol' induction you seem to lose for having non-strict evaluation.

And then you have one less excuse for not using a language with lazy evaluation. :)

-- Dan

* http://personal.cis.strath.ac.uk/~patricia/csl2010.pdf

Alexander Solla wrote:
I happen to only write Haskell programs that terminate. It is not that hard. We must merely restrict ourselves to the total fragment of the language, and there are straight-forward methods to do so.
Do (web/XML-RPC/whatever) server type programs terminate?

On 22 Dec 2011, at 17:49, Bardur Arantsson wrote:
Alexander Solla wrote:
I happen to only write Haskell programs that terminate. It is not that hard. We must merely restrict ourselves to the total fragment of the language, and there are straight-forward methods to do so.
Do (web/XML-RPC/whatever) server type programs terminate?
No, but "total" and "terminating" are not the same. Those *co*programs will, if they're any good, be total by virtue of their productivity rather than their termination. What's slightly controversial is the claim that we "must merely restrict ourselves to the total fragment of the language". It would be more controversial to claim that some new Haskell-like language should restrict us to total programs. I'd be glad if "pure" meant "total", but partiality were an effect supported by the run-time system. Then we could choose to restrict ourselves, but we wouldn't be restricted by the language. This is not the first time the issue has surfaced, nor will it be the last. It's customary at this point to object that one wouldn't want to program with the monadic notation, just for the effect of partiality. I'd counter that I don't want to program with the monadic notation, for any effects: I'd like to program with an applicative notion, but in monadic types. That's what I'd do different, and for me, the subject is not a hypothetical question. All the best Conor

Sent from iPad
On 22.12.2011, at 23:56, Conor McBride wrote:
I'd be glad if "pure" meant "total", but partiality were an effect supported by the run-time system. Then we could choose to restrict ourselves, but we wouldn't be restricted by the language.
I second that. Having a special "partiality" monad would be nice. However, I'm not certain as to how it would interact with recursion — if f is a total function, fix f could be (and almost certainly would be) a possibly undefined value. So, fix should have type "(a -> a) -> Partial a"; that's OK, but implicit uses of fix (I mean let statements) would be quite different.
I'd like to program with an applicative notation, but in monadic types. That's what I'd do different, and for me, the subject is not a hypothetical question.
So... you are developing a programming language with all calculations being automatically lifted to a monad? What if we want to do calculations with monadic values themselves, like, for example, store a few monadic calculations in a list (without joining all their effects as the sequence function does)?
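For concreteness, the kind of "partiality" monad under discussion can be written in ordinary Haskell as Capretta's delay monad. The following is only an editor's sketch; the names Partial, never and runFor are illustrative, not from the thread:

data Partial a = Now a | Later (Partial a)

instance Functor Partial where
  fmap f (Now a)   = Now (f a)
  fmap f (Later p) = Later (fmap f p)

instance Applicative Partial where
  pure = Now
  Now f   <*> x = fmap f x
  Later f <*> x = Later (f <*> x)

instance Monad Partial where
  Now a   >>= f = f a
  Later p >>= f = Later (p >>= f)

-- The inhabitant that "refuses to answer"; this productive definition
-- would be acceptable even in a total language.
never :: Partial a
never = Later never

-- A handler for the effect: run with a step budget.
runFor :: Int -> Partial a -> Maybe a
runFor _ (Now a)           = Just a
runFor n (Later p) | n > 0 = runFor (n - 1) p
runFor _ _                 = Nothing

With such a type, nontermination becomes visible in types, and "fix :: (a -> a) -> Partial a" is exactly the signature MigMit asks for.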

On 22 Dec 2011, at 21:29, MigMit wrote:
Sent from iPad
On 22.12.2011, at 23:56, Conor McBride wrote:
I'd be glad if "pure" meant "total", but partiality were an effect supported by the run-time system. Then we could choose to restrict ourselves, but we wouldn't be restricted by the language.
I second that. Having a special "partiality" monad would be nice. However, I'm not certain as to how it would interact with recursion — if f is a total function, fix f could be (and almost certainly would be) a possibly undefined value. So, fix should have type "(a -> a) -> Partial a"; that's OK, but implicit uses of fix (I mean let statements) would be quite different.
Indeed they would, at least to the extent of making clear in the type on what basis recursive calls should be checked.
I'd like to program with an applicative notion, but in monadic types. That's what I'd do different, and for me, the subject is not a hypothetical question.
So... you are developing a programming language with all calculations being automatically lifted to a monad? What if we want to do calculations with monadic values themselves, like, for example, store a few monadic calculations in a list (without joining all their effects as the sequence function does)?
The plan is to make a clearer distinction between "being" and "doing" by splitting types clearly into an effect part and a value part, in a sort of a Levy-style call-by-push-value way. The notation

[<list of effects>]<value type>

is a computation type whose inhabitants might *do* some of the effects in order to produce a value which *is* of the given value type. But it is always possible to make a value type from a computation type by suspending it (with braces), so that

{[<list of effects>]<value type>}

is a value type for suspended computations, which *are* things which one could potentially *do* at another time. But we can manipulate suspended computations purely.

Haskell doesn't draw a clear line in types between the effect part and the value part, or support easy fluidity of shifting roles between the two. Rather we have two modes: (1) the implicit partiality mode, where the value part is the whole of the type and the notation is applicative; (2) the explicit side-effect mode, where the type is an effect operator applied to the value type and the notation is imperative. It's an awkward split, and it makes it tricky to make clean local renegotiations of effectful capabilities by hiding or handling them. Also, I don't see why partiality deserves an applicative notation where other, perhaps more benign effects (like *handled* exceptions) are forced into an imperative (or clunky Applicative) mode.

Maybe this strays too far to classify as "Haskell-like", but it's an attempt to re-rationalise techniques that emerged from Haskell programming.

Cheers
Conor
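A rough approximation of this "[effects]value" split is possible in today's Haskell with a free monad over an effect signature. This is only an editor's sketch under that analogy; Free, Teletype and handle are illustrative names, not part of Conor's language:

{-# LANGUAGE DeriveFunctor #-}

-- A computation that may *do* f-effects on the way to a value of type a.
data Free f a = Pure a | Op (f (Free f a))
  deriving Functor

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Op k   <*> x = Op (fmap (<*> x) k)

instance Functor f => Monad (Free f) where
  Pure a >>= g = g a
  Op k   >>= g = Op (fmap (>>= g) k)

-- One entry in a "list of effects".
data Teletype k = Put String k | Get (String -> k)
  deriving Functor

-- A handler localizes the effect purely: feed canned input, collect output.
handle :: [String] -> Free Teletype a -> ([String], a)
handle _        (Pure a)       = ([], a)
handle ins      (Op (Put s k)) = let (out, a) = handle ins k
                                 in  (s : out, a)
handle (i : is) (Op (Get f))   = handle is (f i)
handle []       (Op (Get f))   = handle [] (f "")

A value of type Free Teletype a then plays the role of the suspended {[Teletype]a}: a first-class thing we can store in lists and manipulate purely until some handler runs it.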

On 23 Dec 2011, at 02:11, Conor McBride wrote:
So... you are developing a programming language with all calculations being automatically lifted to a monad? What if we want to do calculations with monadic values themselves, like, for example, store a few monadic calculations in a list (without joining all their effects as the sequence function does)?
The plan is to make a clearer distinction between "being" and "doing" by splitting types clearly into an effect part and a value part, in a sort of a Levy-style call-by-push-value way. The notation
[<list of effects>]<value type>
is a computation type whose inhabitants might *do* some of the effects in order to produce a value which *is* of the given value type.
Oh, so it's not an arbitrary monad, but a single one. That's a bit disappointing.

On 23 Dec 2011, at 16:16, MigMit wrote:
On 23 Dec 2011, at 02:11, Conor McBride wrote:
So... you are developing a programming language with all calculations being automatically lifted to a monad? What if we want to do calculations with monadic values themselves, like, for example, store a few monadic calculations in a list (without joining all their effects as the sequence function does)?
The plan is to make a clearer distinction between "being" and "doing" by splitting types clearly into an effect part and a value part, in a sort of a Levy-style call-by-push-value way. The notation
[<list of effects>]<value type>
is a computation type whose inhabitants might *do* some of the effects in order to produce a value which *is* of the given value type.
Oh, so it's not an arbitrary monad, but a single one. That's a bit disappointing.
The list of effects is arbitrary, and localizable, by means of defining handlers. So it's not a single monad. It's probably still disappointing.

Cheers
Conor

On 2011-12-23 13:46, Conor McBride wrote:
The plan is to make a clearer distinction between "being" and "doing" by splitting types clearly into an effect part and a value part, in a sort of a Levy-style call-by-push-value way. The notation
[<list of effects>]<value type>
is a computation type whose inhabitants might *do* some of the effects in order to produce a value which *is* of the given value type.
The list of effects is arbitrary, and localizable, by means of defining handlers. So it's not a single monad.
It's probably still disappointing.
On the contrary!
Haskell doesn't draw a clear line in types between the effect part and the value part, or support easy fluidity of shifting roles between the two. Rather we have two modes: (1) the implicit partiality mode, where the value part is the whole of the type and the notation is applicative; (2) the explicit side-effect mode, where the type is an effect operator applied to the value type and the notation is imperative.
I was drawn to call-by-push-value a few years ago while attempting to create a language which would support both call-by-value and call-by-name. I haven't had the skill to express what I have felt to be the shortcoming of Haskell, but I believe you've put your finger right on it.
it's an attempt to re-rationalise techniques that emerged from Haskell programming.

Exactly.
Haskell has grown a wealth of features/libraries/techniques for combining monads, yet the fundamental monad, evaluation, has a separate place in the language. -- Scott Turner

On Fri, 2011-12-23 at 01:29 +0400, MigMit wrote:
Sent from iPad
On 22.12.2011, at 23:56, Conor McBride wrote:
I'd be glad if "pure" meant "total", but partiality were an effect supported by the run-time system. Then we could choose to restrict ourselves, but we wouldn't be restricted by the language.
I second that. Having a special "partiality" monad would be nice. However, I'm not certain as to how it would interact with recursion — if f is a total function, fix f could be (and almost certainly would be) a possibly undefined value. So, fix should have type "(a -> a) -> Partial a"; that's OK, but implicit uses of fix (I mean let statements) would be quite different.
IIRC in ML-derived languages there is a difference between let and let rec. All implicit uses of fix can be changed into explicit ones, so I imagine that:

let rec f x = x  -- a -> Partial a
let g x = x      -- a -> a

Regards

On Dec 21, 2011, at 2:24 PM, Alexander Solla wrote:
I would rather have an incomplete semantics, and have all the incomplete parts collapsed into something we call "bottom". We can then be smart and stay within a total fragment of the language (where bottom is guaranteed to not occur).
But part of the whole point of including bottom in our semantics in the first place is *exactly* to *enable* us to be smart enough to know when we are staying within a total fragment of the language. For example, including bottom in our semantics allows us to make and prove statements like

fst (42,_|_) = 42

and

fst _|_ = _|_

That is, as long as you know that the pair is total and that the first element is total, calling fst on it is also total regardless of whether the second value is total. Refusing to use bottom in our semantics doesn't make life better by forcing us to stay within a total fragment of the language, it actually makes life harder by removing from us a useful tool for knowing *how* to stay within a total fragment of the language.

Cheers,
Greg
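As a concrete illustration of those two equations (an editor's sketch, using Prelude's undefined for _|_), a GHCi session behaves accordingly:

ghci> fst (42, undefined)   -- the second component is never forced
42
ghci> fst undefined         -- here the pair itself is bottom
*** Exception: Prelude.undefined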

On Tue, Dec 20, 2011 at 10:30 PM, Gregory Crosswhite
On Dec 21, 2011, at 2:24 PM, Alexander Solla wrote:
I would rather have an incomplete semantics, and have all the incomplete parts collapsed into something we call "bottom". We can then be smart and stay within a total fragment of the language (where bottom is guaranteed to not occur).
But part of the whole point of including bottom in our semantics in the first place is *exactly* to *enable* us to be smart enough to know when we are staying within a total fragment of the language. For example, including bottom in our semantics allows us to make and prove statements like
fst (42,_|_) = 42
A proof:

fst :: (a,b) -> a
fst (a,b) = a

and
fst _|_ = _|_
This expression is basically nonsense. Should we accept straightforwardly ill-typed expressions like:

data Strict a = Strict !a
fst (Strict [1..])

just because the argument is "strictly" a bottom? Bottom inhabits every type, but only vacuously.

To be generous in my interpretation, I assume you mean something like:

fst (_|_, 10) = _|_

That is still proved by fst (x,y) = x.

Things like seq, unsafeCoerce, and the like, are defined as (functions into) bottom in GHC.Prim, and the real semantic-changing magic they perform is done behind the scenes. It cannot be expressed as Haskell in the same Haskell context it is used. So assuming you mean something like:

fst (seq [1..] (1,2))

I must respond that you are using one of these magical keywords which change Haskell's semantics. They should be avoided.
Refusing to use bottom in our semantics doesn't make life better by forcing us to stay within a total fragment of the language, it actually makes life harder by removing from us a useful tool for knowing *how* to stay within a total fragment of the language.
I am collapsing the semantics for "distinct" bottoms into a single bottom and noting that it has no interpretation as a Haskell value.

Notice that this brings Haskell's type theory in line with ZF and typed set theories. In those theories, bottom merely exists as a syntactic expression with no interpretation. It is the proto-value which is not equal to itself. We can say that it inhabits every type. But that is only vacuously. Bottom does not exist.

We can stay in the total fragment of Haskell very easily, essentially by using my quotient construction for bottom:

http://www.cse.chalmers.se/~nad/publications/danielsson-et-al-popl2006.html

It requires a shift of point of view, from denotational semantics and "computational effects" (the things we're trying to avoid by going functional and lazy!) to the language of logic, proof, and interpretation. It is possible, consistent, and much simpler conceptually and in use.

I would make the 'type' symbol a single character à la Agda. For example,
a : Int
If your users are writing a lot of types, make it easy!
On Dec 22, 2011 10:42 AM, "Alexander Solla"
On Tue, Dec 20, 2011 at 10:30 PM, Gregory Crosswhite < gcrosswhite@gmail.com> wrote:
On Dec 21, 2011, at 2:24 PM, Alexander Solla wrote:
I would rather have an incomplete semantics, and have all the incomplete parts collapsed into something we call "bottom". We can then be smart and stay within a total fragment of the language (where bottom is guaranteed to not occur).
But part of the whole point of including bottom in our semantics in the first place is *exactly* to *enable* us to be smart enough to know when we are staying within a total fragment of the language. For example, including bottom in our semantics allows us to make and prove statements like
fst (42,_|_) = 42
A proof:
fst :: (a,b) -> a
fst (a,b) = a
and
fst _|_ = _|_
This expression is basically nonsense. Should we accept straightforwardly ill-typed expressions like:
data Strict a = Strict !a
fst (Strict [1..])
just because the argument is "strictly" a bottom? Bottom inhabits every type, but only vacuously.
To be generous in my interpretation, I assume you mean something like:
fst (_|_, 10) = _|_.
That is still proved by fst (x,y) = x
Things like seq, unsafeCoerce, and the like, are defined as (functions into) bottom in GHC.Prim, and the real semantic-changing magic they perform is done behind the scenes. It cannot be expressed as Haskell in the same Haskell context it is used. So assuming you mean something like:
fst (seq [1..] (1,2))
I must respond that you are using one of these magical keywords which change Haskell's semantics. They should be avoided.
Refusing to use bottom in our semantics doesn't make life better by forcing us to stay within a total fragment of the language, it actually makes life harder by removing from us a useful tool for knowing *how* to stay within a total fragment of the language.
I am collapsing the semantics for "distinct" bottoms into a single bottom and noting that it has no interpretation as a Haskell value.
Notice that this brings Haskell's type theory in line with ZF and typed set theories. In those theories, bottom merely exists as a syntactic expression with no interpretation. It is the proto-value which is not equal to itself. We can say that it inhabits every type. But that is only vacuously. Bottom does not exist.
We can stay in the total fragment of Haskell very easily, essentially by using my quotient construction for bottom:
http://www.cse.chalmers.se/~nad/publications/danielsson-et-al-popl2006.html
It requires a shift of point of view, from denotational semantics and "computational effects" (the things we're trying to avoid by going functional and lazy!) to the language of logic, proof, and interpretation. It is possible, consistent, and much simpler conceptually and in use.

On Dec 22, 2011, at 12:40 PM, Alexander Solla wrote:
fst _|_ = _|_
This expression is basically non-sense.
This is only "nonsense" because you refuse to accept that there are valid formalisms other than your own that contain _|_ as a perfectly valid entity. :-)
Should we accept straight-forwardly ill-typed expressions like:
data Strict a = Strict !a
fst (Strict [1..])
just because the argument is "strictly" a bottom? Bottom inhabits every type, but only vacuously.
No, each type has its own value for _|_, and your example demonstrates why this makes more sense than making all _|_'s be equivalent.
Things like seq, unsafeCoerce, and the like, are defined as (functions into) bottom in GHC.Prim, and the real semantic-changing magic they perform is done behind the scenes. It cannot be expressed as Haskell in the same Haskell context it is used. So assuming you mean something like:
fst (seq [1..] (1,2))
I must respond that you are using one of these magical keywords which change Haskell's semantics. They should be avoided.
So... now you want to throw out seq so that we no longer have a way to force the evaluation of values, and the motivation for this is because when we throw out _|_ then we no longer have a formal way to describe the semantics of seq?
Refusing to use bottom in our semantics doesn't make life better by forcing us to stay within a total fragment of the language, it actually makes life harder by removing from us a useful tool for knowing *how* to stay within a total fragment of the language.
I am collapsing the semantics for "distinct" bottoms into a single bottom and noting that it has no interpretation as a Haskell value.
I agree that if you collapse all of the distinct bottoms then you get a mess, but since whenever we are talking about semantics in the Haskell community we give each type its own _|_, it is an incredibly moot point; it's like saying that the problem with cars is that if you remove all of their wheels then they have a lot of trouble getting anywhere at all. :-)

Cheers,
Greg

Sent from iPhone
On Dec 20, 2011, at 7:10, Alexander Solla wrote:
* Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it.
It's definitely a value.

On Mon, Dec 19, 2011 at 7:10 PM, Alexander Solla
* Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it.
The fact that bottom is a value in Haskell is the fundamental thing that differentiates Haskell from other languages and the source of its power. The fact that f _|_ /= _|_ potentially _is_ what it means to be a lazy language. It is what allows us to implement loops and conditionals as normal functions rather than requiring special and limited language contexts.

But more importantly, it is required to think about _|_ as a value in order to prove the correctness of many algorithms; it is the base case of your inductive reasoning. A Haskell programmer who cannot think of _|_ in that way will plateau, as very useful idioms such as tying the knot, infinite data structures, etc. are just complicated to think about in operational semantics when they get more interesting than an infinite list.

Not treating _|_ as a value would be a huge disservice to anyone learning the language. Sure, it may seem a little strange coming from the imperative world to think of it as a value, but it is far from the oddest concept in Haskell; after all, _functions_ are values in Haskell and people seem to eventually figure that out.

John
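To make the point about conditionals concrete, here is a minimal editor's sketch (myIf and ones are illustrative names):

-- A user-defined conditional works precisely because f _|_ need not be _|_:
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

-- myIf True 1 undefined evaluates to 1: the untaken branch, a bottom,
-- is never forced.

-- Likewise an infinite ("tied") structure is a perfectly good value:
ones :: [Integer]
ones = 1 : ones

-- take 3 ones evaluates to [1,1,1].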

John Meacham wrote:
The fact that bottom is a value in Haskell is the fundamental thing that differentiates Haskell from other languages and the source of its power. The fact that f _|_ /= _|_ potentially _is_ what it means to be a lazy language. Not treating _|_ as a value would be a huge disservice to anyone learning the language. Sure, it may seem a little strange coming from the imperative world to think of it as a value, but it is far from the oddest concept in Haskell; after all, _functions_ are values in Haskell and people seem to eventually figure that out.

Well... Personally I hate thinking about bottom as a "value". I don't do this. I NEVER teach that. And, I am a "lazy guy"; almost all my Haskell programs are strongly based on laziness.
I'll tell you what I teach, and you might throw some tomatoes...

"The fundamental thing that differentiates Haskell from other languages and the source of its power" - if I might cite you - is that we don't see the difference between an object and the process which creates it. (The difference demands that we speak about call-by-need, etc...) The bottom, as sin (2*pi), or "Text", may be seen as processes. Anyway, a lazy list IS a process /par excellence/. The _|_ entity is a process which refuses to give you a value (or does it in a deadly way). Your program manipulates processes. A process in some computational context must do something - or not. The bottom never does anything useful.

All this is probably a question of language, of terminology, of preconceptions (of all that which, for God knows what reasons, Americans call "just semantics"), but I will not forget the day when I thought as you, and I had to explain to 2nd-year students what it means: a value which doesn't have a value...

Thank you.
Jerzy Karczmarczuk

Jerzy Karczmarczuk
and the source of its power" - if I might cite you - is that we don't see the difference between an object and the process which creates it.
Interestingly, according to Wikipedia's article on "type system":

A type system associates a type with each computed value.

but it later cites Pierce:

a tractable syntactic framework for classifying phrases according to the kinds of values they compute

While the former might be said to avoid _|_ by defining it to not be a "value" that is computed, the latter clearly must include it, as the computation of a "phrase" might not terminate (as long as the language is Turing-complete, of course).

Anyway, I think non-lazy languages also have bottom inhabiting their types; it's just that since it leads more immediately to failure, it's not usually taken into account.

-k
-- If I haven't seen further, it is by standing in the footprints of giants

Jerzy Karczmarczuk wrote:
Well... Personally I hate thinking about bottom as "value". I don't do this. I NEVER teach that. And, I am a "lazy guy", almost all my Haskell programs are strongly based on laziness.
I'll tell you what I teach, and you might throw some tomatoes... "The fundamental thing that differentiates Haskell from other languages and the source of it power" - if I might cite you - is that we don't see the difference between an object and the process which creates it. (The difference demands that we speak about the call-by-need, etc...) The bottom, as sin (2*pi), or "Text" may be seen as processes. Anyway, a lazy list IS a process /par excellence/. The _|_ entity is a process which refuses to give you a value (or does it in a deadly way). Your program manipulates processes. A process in some computational context must do something - or not. The bottom never does anything useful.
While it's ultimately an issue of nomenclature, applying the term "value" to _|_ is a good idea, because it allows us to answer questions like the following:

Question: What is (the denotation of, the value of)

x = and $ take 5 $ cycle [True,False]
    where cycle xs = fix (xs++)

Answer: x = False

If you treat _|_ as a value, this answer can be obtained by straightforward algebraic manipulation. If you treat _|_ as an operational construct, you will have to perform graph reduction to see the answer, but then you have to worry about the *order* in which you perform your reduction steps.

It's not wrong to perform graph reduction, and any student should do it at one point in their lives, but the restriction to operational semantics would miss an important abstraction that is part of the Haskell spirit.

Best regards,
Heinrich Apfelmus -- http://apfelmus.nfshost.com
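As an editor's sketch, the algebraic manipulation goes through like school algebra, each step an equality between values with no evaluation order to track:

-- cycle [True,False]
--   = fix ([True,False] ++)
--   = [True,False] ++ cycle [True,False]      -- unfold fix once
--   = True : False : cycle [True,False]
--
-- take 5 (cycle [True,False])
--   = [True, False, True, False, True]
--
-- and [True, False, True, False, True]
--   = True && and [False, True, False, True]
--   = and [False, True, False, True]
--   = False && and [True, False, True]
--   = False
--
-- The same steps remain valid when _|_ shows up: False && _|_ = False,
-- which is exactly the kind of equation that treating _|_ as a value
-- licenses us to state and use.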

Hi, Robert Clausecker wrote:
Image you would create your own language with a paradigm similar to Haskell or have to chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove and what problems would you solve completely different?
I would try to improve the language's support for the embedding of domain-specific embedded languages (aka. combinator libraries). Such embedding requires the integration of a domain-specific language's syntax, static semantics and dynamic semantics. Some (more or less far fetched) ideas about these three areas follow.

To support better syntax for embedded languages, provide more rebindable syntax à la do-notation. For example, (if c then t else e) currently desugars to (case c of False -> e; True -> t). But it could also desugar to (if' c t e) where if' is a method of a type class. For (c :: Bool), the standard library would provide an instance of this type class, but for other condition types, third-party libraries could provide it. Alternatively, if-then-else could even desugar to whatever if' is in scope. A similar idea is currently applied to Scala in the scala-virtualized project. A large part of the language should be virtualized this way, including pattern matching, lambda expressions, maybe even type or class declarations.

To support better static semantics for embedded languages, provide better type-level computation, including some form of closed-world reasoning (for example, backtracking or closed pattern matching) and a reification of names at the type level, so that type-level computations can reason about the binding structures of expression-level code. Note that I am interested in the static structure of terms, not their dynamic behavior, so this is different from dependent types.

With Haskell being a fine general-purpose programming language, and even having a good foreign language interface, there is already plenty of support for the specification of dynamic semantics. Nevertheless, for domain-specific embedded compilers, it would possibly be nice to access a Haskell compiler at runtime, to compile snippets of Haskell code and dynamically link them into the currently running program.

Tillmann
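The if' idea can be prototyped with GHC's RebindableSyntax extension, which desugars if-then-else to whatever ifThenElse is in scope. An editor's sketch; the class Boolish and the type Fuzzy are illustrative:

{-# LANGUAGE RebindableSyntax #-}

import Prelude

class Boolish b where
  ifThenElse :: b -> a -> a -> a

instance Boolish Bool where
  ifThenElse True  t _ = t
  ifThenElse False _ e = e

-- A "third-party" condition type: three-valued logic.
data Fuzzy = Yes | No | Unknown

instance Boolish Fuzzy where
  ifThenElse Yes t _ = t
  ifThenElse _   _ e = e   -- treat No and Unknown as false: a policy choice

example :: String
example = if Unknown then "taken" else "not taken"   -- picks "not taken"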

On Dec 20, 2011, at 8:05 PM, Tillmann Rendel wrote:
Hi,
Robert Clausecker wrote:
Image you would create your own language with a paradigm similar to Haskell or have to chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove and what problems would you solve completely different?
I would try to improve the language's support for the embedding of domain-specific embedded languages (aka. combinator libraries). Such embedding requires the integration of a domain-specific language's syntax, static semantics and dynamic semantics. Some (more or less far fetched) ideas about these three areas follow.
To support better syntax for embedded languages, provide more rebindable syntax à la do-notation. For example, (if c then t else e) currently desugars to (case c of False -> e; True -> t). But it could also desugar to (if' c t e) where if' is a method of a type class. For (c :: Bool), the standard library would provide an instance of this type class, but for other condition types, third-party libraries could provide it. Alternatively, if-then-else could even desugar to whatever if' is in scope. A similar idea is currently applied to Scala in the scala-virtualized project. A large part of the language should be virtualized this way, including pattern matching, lambda expressions, maybe even type or class declarations.
To support better static semantics for embedded languages, provide better type-level computation, including some form of closed-world reasoning (for example, backtracking or closed pattern matching) and a reification of names at the type level, so that type-level computations can reason about the binding structures of expression-level code. Note that I am interested in the static structure of terms, not their dynamic behavior, so this is different from dependent types.
With Haskell being a fine general-purpose programming language, and even having a good foreign language interface, there is already plenty of support for the specification of dynamic semantics. Nevertheless, for domain-specific embedded compilers, it would possibly be nice to access a Haskell compiler at runtime, to compile snippets of Haskell code and dynamically link them into the currently running program.
So in other words, you would like Haskell to simultaneously become more like Lisp and more like Agda? :-) Cheers, Greg

Tillmann Rendel wrote:
Hi,
Robert Clausecker wrote:
Image you would create your own language with a paradigm similar to Haskell or have to chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove and what problems would you solve completely different?
I would try to improve the language's support for the embedding of domain-specific embedded languages (aka. combinator libraries). Such embedding requires the integration of a domain-specific language's syntax, static semantics and dynamic semantics. Some (more or less far fetched) ideas about these three areas follow.
I think this is a very good point. The things I would like to see:

* Better syntax for observable sharing. Doaitse Swierstra proposed a "grammar" construct that is basically a let statement where the binder names can be observed. I'm not entirely sure whether that is the most general or sufficient syntax, but something along these lines.

* Meta-programming / partial evaluation. When designing a DSL, it is often the case that you know how to write an optimizing compiler for your DSL because it's usually a first-order language. However, trying to squeeze that into GHC rules is hopeless. Having some way of compiling code at run-time would solve that. Examples:
** Conal Elliott's image description language Pan
** Henning Thielemann's synthesizer-llvm

Best regards,
Heinrich Apfelmus -- http://apfelmus.nfshost.com
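For readers who haven't hit it, the observable sharing problem in one screen (an editor's sketch; Expr, shared and eval are illustrative names):

-- A deep-embedded DSL for arithmetic expressions.
data Expr = Lit Int | Add Expr Expr

-- Host-language 'let' shares the node e in the heap...
shared :: Expr
shared = let e = Add (Lit 1) (Lit 2)
         in  Add e e

-- ...but a pure traversal cannot observe that sharing: it sees a tree,
-- so the shared subterm is walked twice (exponentially often when nested).
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b

Recovering the sharing today takes something impure like Andy Gill's data-reify; a "grammar"-style binding construct would make it part of the language.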

On Dec 20, 2011, at 11:18 PM, Heinrich Apfelmus wrote:
Tillmann Rendel wrote:
Hi, Robert Clausecker wrote:
Image you would create your own language with a paradigm similar to Haskell or have to chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove and what problems would you solve completely different? I would try to improve the language's support for the embedding of domain-specific embedded languages (aka. combinator libraries). Such embedding requires the integration of a domain-specific language's syntax, static semantics and dynamic semantics. Some (more or less far fetched) ideas about these three areas follow.
I think this is a very good point. The things I would like to see:
* Better syntax for observable sharing. Doaitse Swierstra proposed a "grammer" construct that is basically a let statement where the binder names can be observed. I'm not entirely sure whether that is the most general or sufficient syntax, but something along these lines.
* Meta-programming / partial evaluation. When designing a DSL, it is often the case that you know how to write an optimizing compiler for your DSL because it's usually a first-order language. However, trying to squeeze that into GHC rules is hopeless. Having some way of compiling code at run-time would solve that. Examples: ** Conal Elliott's image description language Pan ** Henning Thielemann's synthesizer-llvm
I am not disagreeing with anything that you have said here, but in a way it seems like the problem is more fundamental than all of these things, since metaprogramming and type-programming in Haskell is not first-class, so it really isn't a language that is designed for DSLs even though people get a surprisingly long way abusing it for this purpose. :-)

Really what we need is a language built from the ground up for this purpose, such as Lisp, but without the parts of Lisp that cause us to use Haskell instead. :-)

Cheers,
Greg

On Mon, Dec 19, 2011 at 11:20 AM, Robert Clausecker
Image you would create your own language with a paradigm similar to Haskell or have to chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove and what problems would you solve completely different?
Thanks in advance for all answers, yours
One thing that concerns me is the use of capital letters to distinguish type and class names and constructors from values. If I was doing it over I would use a typographical distinction like italics for types, bold for classes. That way we could have a constructor named ∅, a function named ∈, a class named ℝ.

One thing that concerns me is the use of capital letters to distinguish type and class names and constructors from values. If I was doing it over I would use a typographical distinction like italics for types, bold for classes. That way we could have a constructor named ∅, a function named ∈, a class named ℝ.
It's Algol all over again! Will we have to typeset our keywords in bold too?

On 20 Dec 2011, at 22:51, Chris Wong wrote:
One thing that concerns me is the use of capital letters to distinguish type and class names and constructors from values. If I was doing it over I would use a typographical distinction like italics for types, bold for classes. That way we could have a constructor named ∅, a function named ∈, a class named ℝ.
It's Algol all over again! Will we have to typeset our keywords in bold too?
There are in fact Unicode symbols for that, see

http://unicode.org/charts/PDF/U1D400.pdf

The monospace characters U+1D670-1D6A3 might be used for keywords. Font: http://www.stixfonts.org/

Not hard to add.

Hans

On Tue, Dec 20, 2011 at 11:17:32PM +0100, Hans Aberg wrote:
The monospace characters U+1D670-1D6A3 might be used for keywords. Font: http://www.stixfonts.org/
I feel that monospace fonts should be used for all of programming. A language could use Unicode symbols, but if it enforces typography, it is destined to win an award for being really unusable.

On 21 Dec 2011, at 04:27, Ashok Gautham wrote:
On Tue, Dec 20, 2011 at 11:17:32PM +0100, Hans Aberg wrote:
The monospace characters U+1D670-1D6A3 might be used for keywords. Font: http://www.stixfonts.org/
I feel that monospace fonts should be used for all of programming. A language could use Unicode symbols, but if it enforces typography, it is destined to win an award for being really unusable
I have some books from the 1980s which do not use monospace; it seems to be a later movement. Using it for all symbols would be awkward. Hans

IIRC, SciTE's default configuration is with a non-monospace font. I actually
found it quite appealing, and in fact forgot about it entirely after some
usage. It is much easier on the eyes to read. The difference is really
whether you care about aligning things mid-line or not, not to mention
editor support (i.e. not Vim or other terminal-based editor).
On Wed, Dec 21, 2011 at 8:58 PM, Hans Aberg
On 21 Dec 2011, at 04:27, Ashok Gautham wrote:
On Tue, Dec 20, 2011 at 11:17:32PM +0100, Hans Aberg wrote:
The monospace characters U+1D670-1D6A3 might be used for keywords. Font: http://www.stixfonts.org/
I feel that monospace fonts should be used for all of programming. A language could use Unicode symbols, but if it enforces typography, it is destined to win an award for being really unusable
I have some books from the 1980s which does not use monospace; it seems to be a later movement. Using it for all symbols would be awkward.
Hans

On 21/12/2011 10:09 AM, Jesse Schalken wrote:
IIRC, Scite's default configuration is with non-monospace font. I actually found it quite appealing, and in fact forgot about it entirely after some usage. It is much easier on the eyes to read. The difference is really whether you care about aligning things mid-line or not, not to mention editor support (i.e. not Vim or other terminal-based editor).
SciTE's default binding /is/ proportional, not mono-space. I find it ever so irritating. Then again, I'm sure this one comes squarely down to personal preference, so it seems pointless to debate it.

LaTeX offers the possibility of mid-line alignment even if you don't use monospace - a feature which I haven't seen in any other text processing system. It's a pity this isn't more widespread...

On 21 Dec 2011, at 11:22, Andrew Coppin wrote:
On 21/12/2011 10:09 AM, Jesse Schalken wrote:
IIRC, Scite's default configuration is with non-monospace font. I actually found it quite appealing, and in fact forgot about it entirely after some usage. It is much easier on the eyes to read. The difference is really whether you care about aligning things mid-line or not, not to mention editor support (i.e. not Vim or other terminal-based editor).
SciTE's default binding /is/ proportional, not mono-space. I find it ever so irritating. Then again, I'm sure this one comes squarely down to personal preference, so it seems pointless to debate it.
LaTeX offers the possibility of mid-line alignment even if you don't use monospace - a feature which I haven't seen in any other text processing system. It's a pity this isn't more widespread...
Monospace isn't used in math (except in the days when one used typewriters), but alignments are of course used in matrices and multiline formulas, for example, which may correspond to uses in computer code.

A problem is that LaTeX is used for rendering typeset pages, and is not really suitable for writing direct computer code, though Unicode characters can improve the readability of LaTeX source code. The link below shows an example.

Hans

http://www.charlietanksley.net/philtex/the-unicode-math-package-for-xelatex-...

On 21 Dec 2011, at 11:09, Jesse Schalken wrote:
IIRC, Scite's default configuration is with non-monospace font. I actually found it quite appealing, and in fact forgot about it entirely after some usage. It is much easier on the eyes to read. The difference is really whether you care about aligning things mid-line or not, not to mention editor support (i.e. not Vim or other terminal-based editor).
Right, ASCII code that relies on monospace for such midline alignments can be quite unreadable, but otherwise monospace has nothing really to contribute. Non-monospace might in fact be more readable, setting aside the fact that it may take some time to get used to, since monospace has been used so much. For such alignments, one would either write the code so it does not depend on them, or find some other means to do it. Hans
On Wed, Dec 21, 2011 at 8:58 PM, Hans Aberg
wrote: On 21 Dec 2011, at 04:27, Ashok Gautham wrote: On Tue, Dec 20, 2011 at 11:17:32PM +0100, Hans Aberg wrote:
The monospace characters U+1D670-1D6A3 might be used for keywords. Font: http://www.stixfonts.org/
I feel that monospace fonts should be used for all of programming. A language could use Unicode symbols, but if it enforces typography, it is destined to win an award for being really unusable
I have some books from the 1980s which does not use monospace; it seems to be a later movement. Using it for all symbols would be awkward.
Hans

With GHC 7.0.3:
$ cat test.hs
class ℝ a where {
test :: a;
};
(∈) :: Eq a => a -> [a] -> Bool;
x ∈ (y:ys) = x == y || x ∈ ys;
main = putStrLn "Two of three ain't bad (^_~)";
$ runhaskell test.hs
Two of three ain't bad (^_~)
$
On 20/12/2011, David Fox
On Mon, Dec 19, 2011 at 11:20 AM, Robert Clausecker
wrote: Image you would create your own language with a paradigm similar to Haskell or have to chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove and what problems would you solve completely different?
Thanks in advance for all answers, yours
One thing that concerns me is the use of capital letters to distinguish type and class names and constructors from values. If I was doing it over I would use a typographical distinction like italics for types, bold for classes. That way we could have a constructor named ∅, a function named ∈, a class named ℝ.
Cheers, Matthew Farkas-Dyck

On Wed, Dec 21, 2011 at 10:53 AM, Matthew Farkas-Dyck
With GHC 7.0.3:
$ cat test.hs class ℝ a where { test :: a; };
(∈) :: Eq a => a -> [a] -> Bool; x ∈ (y:ys) = x == y || x ∈ ys;
main = putStrLn "Two of three ain't bad (^_~)"; $ runhaskell test.hs Two of three ain't bad (^_~) $
Why not expand it even further?

class Monoid m where
  (•) :: m -> m -> m
  (∅) :: m

(∈) :: (Foldable t, Eq a) => a -> t a -> Bool

(∘) :: (b -> c) -> (a -> b) -> (a -> c)

(∧) :: Bool -> Bool -> Bool

etc.

We can write a whole Haskell library full of these aliases -- "syntactic-heroin" perhaps? ;) (http://www.haskell.org/haskellwiki/Syntactic_sugar/Cons#Syntactic_heroin)

On Tue, Dec 20, 2011 at 3:10 PM, Chris Wong
Why not expand it even further?
class Monoid m where
  (•) :: m -> m -> m
  (∅) :: m
(∈) :: (Foldable t, Eq a) => a -> t a -> Bool
(∘) :: (b -> c) -> (a -> b) -> (a -> c)
(∧) :: Bool -> Bool -> Bool
etc.
We can write a whole Haskell library full of these aliases -- "syntactic-heroin" perhaps? ;)
Why would you go that far and still not have → ? (: martin

On Tue, Dec 20, 2011 at 3:10 PM, Chris Wong
On Wed, Dec 21, 2011 at 10:53 AM, Matthew Farkas-Dyck
wrote: With GHC 7.0.3:
$ cat test.hs class ℝ a where { test :: a; };
(∈) :: Eq a => a -> [a] -> Bool; x ∈ (y:ys) = x == y || x ∈ ys;
main = putStrLn "Two of three ain't bad (^_~)"; $ runhaskell test.hs Two of three ain't bad (^_~) $
Why not expand it even further?
My experience with Agda makes me think that extending it further can make it painful to program in. Initially I thought that using unicode symbols would just make input a bit slower and that editor support could address that. You know, like writing about math using LaTeX.

My actual experience with Agda was different than that. I was using Emacs and I found that I needed to make my font size very large to see all the detail of the unicode symbols clearly enough to distinguish between them fully. The alternative was using the support in emacs for displaying the codepoint, as a number, for any glyph I wanted to distinguish. Perhaps it's still "just an issue of editor support", but it left a sour taste in my mouth.

Jason

On Tue, 2011-12-20 at 16:53 -0500, Matthew Farkas-Dyck wrote:
Two of three ain't bad (^_~)
Now we just need λ to replace \, → to replace ->, and ≠ to replace /= (which still looks like division assignment no matter how hard I squint my eyes. 25 years of C and C derived languages is hard to forget). Hey, forget replacing, wouldn't it be wonderful if the compiler would just accept them as synonyms? AfC Sydney

On Tue, Dec 20, 2011 at 21:05, Andrew Cowie
Now we just need λ to replace \, → to replace ->, and ≠ to replace /= (which still looks like division assignment no matter how hard I squint my eyes. 25 years of C and C derived languages is hard to forget).
Some of it is already supported. http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#un...

(≠ isn't in that list, but it doesn't need to be; just stick it in the Prelude if it's not there already, since it's not special syntax, it's just another operator. The ★ *is* a special case, I suspect, though: I bet it's only supported in kinds, not as the multiplication operator.)

The problem with λ is that it's a perfectly valid Unicode lowercase letter. There is 𝛌 MATHEMATICAL BOLD SMALL LAMDA U+1D6CC but font support is often lacking, and it looks like Unicode categorizes *that* also as a lowercase letter. If you can convince the Unicode Consortium to add a lambda that is of class symbol, Haskell can then make use of it — once there are fonts that support it. (And then we'll have to deal with folks trying to use the letter, because everyone knows the Roman alphabet is the only one that matters and of *course* Greek letters are symbol characters.... Pfeh.)

-- brandon s allbery allbery.b@gmail.com
wandering unix systems administrator (available) (412) 475-9364 vm/sms
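Concretely, with UnicodeSyntax enabled and ≠ defined as an ordinary operator, the following already compiles (an editor's sketch):

{-# LANGUAGE UnicodeSyntax #-}

-- ≠ is not special syntax; it is just another operator definition.
(≠) :: Eq a => a -> a -> Bool
(≠) = (/=)

halve :: Int → Int        -- UnicodeSyntax lets → stand for ->
halve n = n `div` 2

check :: Bool
check = halve 4 ≠ 3       -- True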

On 21 Dec 2011, at 04:15, Brandon Allbery wrote:
On Tue, Dec 20, 2011 at 21:05, Andrew Cowie
wrote: Now we just need λ to replace \, → to replace ->, and ≠ to replace /= (which still looks like division assignment no matter how hard I squint my eyes. 25 years of C and C derived languages is hard to forget). Some of it is already supported. http://www.haskell.org/ghc/docs/latest/html/users_guide/syntax-extns.html#un...
(≠ isn't in that list, but it doesn't need to be; just stick it in the Prelude if it's not there already, since it's not special syntax, it's just another operator. The * *is* a special case, I suspect, though: I bet it's only supported in kinds, not as the multiplication operator.)
The one on the list is not a mathematical symbol. It should be ⋆ STAR OPERATOR U+22C6 or ∗ ASTERISK OPERATOR U+2217.
The problem with λ is that it's a perfectly valid Unicode lowercase letter. There is 𝛌 MATHEMATICAL BOLD SMALL LAMDA U+1D6CC but font support is often lacking, and it looks like Unicode categorizes *that* also as a lowercase letter. If you can convince the Unicode Consortium to add a lambda that is of class symbol, Haskell can then make use of it — once there are fonts that support it.
There is http://www.stixfonts.org/ For typesetting with Xe[La]TeX or Lua[La]TeX, use XITS (in the TeXLive package).
(And then we'll have to deal with folks trying to use the letter, because everyone knows the Roman alphabet is the only one that matters and of *course* Greek letters are symbol characters.... Pfeh.)
This is the big problem right now: how to enter these symbols efficiently. Hans

On Wed, Dec 21, 2011 at 04:51, Hans Aberg
The one on the list is not a mathematical symbol. It should be ⋆ STAR OPERATOR U+22C6 or ∗ ASTERISK OPERATOR U+2217.
...except, at least in my current font, the former is microscopic and the latter not a whole lot better. The advantage of the one they chose is it's big and readable. (That's part of why I assume it's intended for kinds, actually.) -- brandon s allbery allbery.b@gmail.com wandering unix systems administrator (available) (412) 475-9364 vm/sms

On 21 Dec 2011, at 11:03, Brandon Allbery wrote:
On Wed, Dec 21, 2011 at 04:51, Hans Aberg
wrote: The one on the list is not a mathematical symbol. It should be ⋆ STAR OPERATOR U+22C6 or ∗ ASTERISK OPERATOR U+2217.
...except, at least in my current font, the former is microscopic and the latter not a whole lot better. The advantage of the one they chose is it's big and readable. (That's part of why I assume it's intended for kinds, actually.)
The problem is in my editor for unstyled text: if a symbol is not available in the selected font, it just chooses a glyph from the next available font, in alphabetical order. Hopefully such problems will be resolved in the future. Hans

I'd suggest, in addition to the symbols, renaming some of the fundamental types and concepts, like Monad. I would violently agree that Monad is the correct term, but try to communicate with a commodity software developer sometime (or a government acquisition professional). RWH goes a long way to explaining the concepts, as do the countless Web pages dedicated to explaining the monad concept.
Better examples for SYB and arrows would also help.
Haskell is a great language with solid mathematical underpinnings. I'm a big fan of it. But adoption is the key to success; we need to make the ordinary easy to understand unless the community wants to be relegated to Scala status.
-----Original message-----
From: Andrew Cowie
Two of three ain't bad (^_~)
Now we just need λ to replace \, → to replace ->, and ≠ to replace /= (which still looks like division assignment no matter how hard I squint my eyes. 25 years of C and C derived languages is hard to forget). Hey, forget replacing, wouldn't it be wonderful if the compiler would just accept them as synonyms?

AfC
Sydney

On Dec 21, 2011, at 2:14 PM, scooter.phd@gmail.com wrote:
I'd suggest, in addition to the symbols, renaming some of the fundamental types and concepts, like Monad. I would violently agree that Monad is the correct term, but try to communicate with a commodity software developer sometime (or a government acquisition professional). RWH goes a long way to explaining the concepts, as do the countless Web pages dedicated to explaining the monad concept.
I super-agree with you on this point; just because something is modeled by a particular mathematical structure doesn't mean that it should be named *after* that structure unless said structure is a commonly known one. In a programming language names should be chosen to maximize clarity, and the term "Monad" conveys absolutely no sense of what it is to anyone who isn't already well-versed in category theory.

I would go further and say that there is a problem in a lot of the documentation surrounding such concepts as Monads, which is that they start with a mathematical definition --- i.e., a type with a return and bind method --- and then proceed from there, which obfuscates what Monads are all about. It would be much better if a typical tutorial instead started by describing what problem exists that they solve.
Haskell is a great language with solid mathematical underpinnings. I'm a big fan of it. But, adoption is the key to success; need to make the ordinary easy to understand unless the community wants to be relegated to Scala status.
Honest question here: what exactly do you mean by being "relegated to Scala status"? Scala seems pretty alive and kicking to me, with the added bonus that it runs on the JVM, which gives it an advantage in many respects over Haskell. Is there something I am missing? :-)

Cheers,
Greg

Support for long binary data sections would be nice. -- Jason Dusek () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments

On 19/12/2011 07:20 PM, Robert Clausecker wrote:
Image you would create your own language with a paradigm similar to Haskell or have to chance to change Haskell without the need to keep any compatibility. What stuff would you add to your language, what stuff would you remove and what problems would you solve completely different?
1. Remove special treatment for lists. That is, remove the standard list type from the Prelude and make it into just another container library. The type is trivially definable; it doesn't need to be hard-wired into the language. I would name it "List" rather than "[]" too, and give its constructors sensible names rather than ":" and "[]". I would also make the current literal list syntax overloaded, so it can be used to construct any type of container, much the way numeric literals are overloaded. (A sketch of this follows below.)

2. Add special treatment for arrays. That is, have strict, unboxed arrays hard-wired into the language. For every user type, not just ones which are members of a special class. Fully parametric. You know, like how they are in almost every other programming language known to Man, except Haskell. Also, the base array library would have zero-based integer indices, and if you want arbitrary index values, that would be a library on top. Also, primitives for fast copying of array chunks.

3. Fix the Typeclassopedia. By which I mean things like removing "fail" from Monad, making Applicative a superclass of Monad, etc. You know, the usual stuff everybody complains about for being horribly broken.

4. Do something sane with the numerical classes. Unfortunately, this one is a Hard Problem. I doubt you can "solve" this without rampant language changes. But it should be possible to /improve/ on what exists currently.

5. Miscellaneous consistency changes. Haskell has a number of minor glitches which are irritating though not catastrophic. Things like type signatures being allowed in expressions but not patterns. Value constructors can be operators, but type constructors cannot. Negation is the only unary operator. And so forth. Nothing radical here.

6. Experimental stuff. I have a number of other, more radical ideas, but I probably wouldn't put those in, because the resulting language would be so drastically different that it wouldn't be "Haskell" any more. (That said, most of the stuff above would pretty much be hard breaking changes...)
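An editor's sketch of what overloaded list literals could look like, by analogy with fromInteger (the class name Container is illustrative; GHC later grew roughly this feature as OverloadedLists, using an Item type family so that constrained containers can participate):

-- A literal like [1,2,3] would desugar to fromListLiteral (1:2:3:[]),
-- just as the numeric literal 3 desugars to fromInteger 3.
class Container f where
  fromListLiteral :: [a] -> f a

instance Container [] where
  fromListLiteral = id

-- Any other sequence type can opt in:
newtype Deque a = Deque ([a], [a])   -- illustrative two-stack deque

instance Container Deque where
  fromListLiteral xs = Deque (xs, [])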

In general, I like Haskell the way it is, but there are a few things
that I would like to see:
(I am no language designer, so I don't know about the theoretical
implications that these
might have. Also, let me know if there exists a clean way to do any of
the following):
- Using subranges of instances of Ord in type definitions, à la Ada.
For instance, saying:
type MyNumber = [2..6]
(maybe "data MyNumber" would be more appropriate, I don't know).
I really don't want to have to write
data MyNumber = Two | Three | Four | Five | Six
and implement all the necessary operations for them.
- Guards wherever I want, for instance:
myFunction = do
    monadicRes <- monadicOp
    | monadicRes == 1 || monadicRes == 7 = doA
    | monadicRes == 6 || monadicRes == 8 = doB
    | otherwise = doC
This is easily avoidable with an aux function, but sometimes there's just
too many variables in the conditions and you don't want to carry them
all in the aux function's signature. I could (painfully) live with
myFunction = do
    monadicRes <- monadicOp
    if monadicRes == 1 || monadicRes == 7
      then doA
      else if monadicRes == 6 || monadicRes == 8
        then doB
        else doC
even though it's way too verbose and reminds me of java.
I often end up finding that the following construction is cleaner than
the possible alternatives:
myFunction = do
monadicRes <- monadicOp
case () of
_ | monadicRes == 1 || monadicRes == 7 = doA
| monadicRes == 6 || monadicRes == 8 = doB
| otherwise = doC
even though it's very ugly to have those useless () _ there.
Unfortunately, my proposal could give way to slightly ugly nested guards
(which are cleaner than nested ifs, but not very nice anyway).
myFunction arg
| arg == 1
| arg == 7 = doA
| arg == 8 = doB
| otherwise = doC
- Function overloading without classes. If it's not done, there must be a good reason for it (many good reasons, probably), but I really miss it.
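For the subrange point, the closest thing I know how to write today is a newtype with a smart constructor. It is only a runtime check, not an Ada-style static one, and the names below are made up for illustration:

    newtype MyNumber = MyNumber Int
      deriving (Eq, Ord, Show)

    -- Smart constructor: the only sanctioned way to build a MyNumber.
    -- If the real constructor is kept out of the module's export list,
    -- the invariant 2 <= n <= 6 cannot be broken from outside.
    mkMyNumber :: Int -> Maybe MyNumber
    mkMyNumber n
      | n >= 2 && n <= 6 = Just (MyNumber n)
      | otherwise        = Nothing

    -- Usage:
    --   mkMyNumber 4  ==>  Just (MyNumber 4)
    --   mkMyNumber 9  ==>  Nothing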
Most other features I need, such as easy-to-use nested-record access methods, my own deriving definitions, compile-time expansion of expressions, etc., are covered by existing libraries and extensions.

I really have no problems with the monad/functor separation that somebody mentioned. Maybe Monad is not the best name for that class if it's not true that every Monad is a Functor, but it's not very confusing anyway.
Cheers,
Ivan

On 21.12.2011 14:10, "Ivan Perez" wrote:
[...]
- Function overloading without classes. If it's not done, there must be a good reason for it (many good reasons, probably), but I really miss it.
That does not play well with type inference. Also, see type-directed name resolution (TDNR).

- Function overloading without classes. If it's not done, there must be a good reason for it (many good reasons, probably), but I really miss it.
That does not play well with type inference.
I understand that. But it may be OK in many simple situations, which is actually where I tend to "need" it. For instance, the following wouldn't pose a problem, would it?

    f :: Int -> Int -> Int
    f = (+)

    f :: String -> String -> String
    f = (++)

(Not that I would use it for exactly that code, but anyway.) I know that one can write that with classes and instances, but I just want to save myself the extra coding when possible. In complex situations (where it doesn't play well with type inference), that's when I'd rather create classes and instances. Not for the type-checker, but for myself.
Also, see type-directed name resolution (TDNR).

Thanks for that pointer. I found that proposal a few months ago, but had completely forgotten about it.
I don't really like the proposed syntax; it's very OO-like. I agree with gabrielrf, who said "[...] I wonder if adding an obj.method style will be a hindrance to beginners, as it obscures what type inference can do."

On Wednesday, 21.12.2011, 20:05 +0100, Ivan Perez wrote:
- Function overloading without classes. If it's not done, there must be a good reason for it (many good reasons, probably), but I really miss it.
That does not play well with type inference.
I understand that. But it may be ok in many simple situations, which is actually where I tend to "need" it. For instance, the following wouldn't pose a problem, would it?
    f :: Int -> Int -> Int
    f = (+)

    f :: String -> String -> String
    f = (++)
(Not that I would use it for exactly that code, but anyway). I know that one can write that with classes and instances, but I just want to save myself the extra-coding when possible. In complex situations (where it doesn't play well with type inference), that's when I'd rather create classes and instances. Not for the type-checker, but for myself.
Of course it would! Consider the following code:

    g = f

What is the type of g? Without a typeclass, the compiler is unable to give a proper type to g, since it can't decide between the two matching instances of f. Of course this example is ridiculous, but similar, more convoluted specimens of this problem arise often in production code.

You may now think: "Hey, why can't the compiler just automatically generate a typeclass for me in such a case?" This is impossible, since the compiler can't see your intentions. It can't decide what the type variable should be, etc. Consider again your f example. You expect the compiler to generate a typeclass like this:

    class Foo a where
      f :: a -> a -> a

But how can the compiler see that what you want is actually the above, and not the equally sensible typeclass

    class Bar a where
      f :: a

You see, it's not so easy with type inference.

AFAIK, TDNR solves this problem by requiring the first argument to have a monomorphic type, which is then used to pick an appropriate function. I dislike this approach; it is too limited in my opinion. The problem is that you have much more information about a function in traditional languages than in Haskell. Consider this slightly trivial example:

Java:

    x = something(a, obj.toString(), 2);

Haskell:

    x = something a (show obj) 2

In Java, the compiler knows a lot about the method something from that single line:

* something() has three arguments
* The type of the first argument is known
* The second argument is a String
* The third one is an int

In Haskell, most of these assumptions are invalid:

* something may be curried or a member of a strange typeclass (like printf). No assumptions about the number of arguments can be made.
* It may be that we do not yet know the type of a, because we can't infer its type without knowing the type of x.
* show obj is definitely a String.
* 2 is of type Num a => a. What if there are two versions of something, one with a parameter of type Int and one with a Float?

You see, it's not so easy.
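And to be fair to the typeclass route: once you do write the class, the ambiguity disappears, because g picks up a constrained type instead of an impossible choice. A minimal sketch — Combine is a made-up name, and the String instance needs FlexibleInstances:

    {-# LANGUAGE FlexibleInstances #-}

    class Combine a where
      f :: a -> a -> a

    instance Combine Int where
      f = (+)

    instance Combine String where
      f = (++)

    -- Now g = f is fine: it gets the type Combine a => a -> a -> a,
    -- and the instance is chosen at each use site.
    g :: Combine a => a -> a -> a
    g = f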
I don't really like the proposed syntax; it's very OO-like. I agree with gabrielrf, who said "[...] I wonder if adding an obj.method style will be a hinderance to beginners, as it obscures what type inference can do."
Fully agreed.

In Haskell, most of these assumptions are invalid:
* something may be curried or a member of a strange typeclass (like printf). No assumptions about the number of arguments can be made.
* It may be that we do not yet know the type of a, because we can't infer its type without knowing the type of x.
* show obj is definitely a String.
* 2 is of type Num a => a. What if there are two versions of something, one with a parameter of type Int and one with a Float?

You see, it's not so easy.

I see. Thanks for such a clear explanation.

A whole lot (a surprisingly very large amount) of things don't require breaking compatibility. Also, a lot of things are really amazing but need smarter people than myself to invent them (the Constraint kind is a good example). And many things are trivial and superficial.

I agree with everyone who mentioned it about giving things user-friendly names and leaving the mathematical connections to the documentation. I'm partial to Mappable/Sequenceable/Runnable for Functor/Applicative/Monad, but doubtless better ones are possible. I would define Monad in terms of join :p, and use (=<<) as the default bind (a sketch follows at the end of this message). I also agree with name: Type instead of name :: Type. I would make : bind tighter. I would rename the * kind to Type, because (Type -> Constraint) looks less weird than (* -> Constraint).

I would change some things to be just a little bit more C/Unix-like: != for inequality, allow (not require!) opening and closing braces on the same line, * instead of _ as the wildcard.

Many things are in the realm of "this could definitely be done better, but I'm not sure how, either": tuples, records, and modules, in ascending order. Records would be lens-based because composability is nice, but that's about as far as I know. The operative principle with modules would be that after 'import Module' you should be good to go: manual intervention to avoid name clashes is a very nice feature, but you should only have to use it rarely. (In other words, much more control would be given to module authors over how things are exported.) Modules would be parametrizable on types - for example, for FRP libraries where every signature includes the Time type. (If I knew more about ML-style modules I might be advocating them.)

I would make the whitespace around infix operators (and other punctuation like list literals) mandatory and significant. It's how you write it in most cases anyway, and how you should have in most of the rest. This frees up a lot of "syntax space" which could be used for various things: special built-in syntax, prefix/postfix operators, and you could have normal-looking array[and] record.access like every other language. (To be clear, list literals would still look [like, this]; it's just the presence or absence of whitespace between them and the preceding identifier which would be significant in this case.)

Strictness types could be added as a language extension, but either way I would add them. I would put greater emphasis on unboxed polymorphism by multi-instantiation over polymorphism by boxing and dictionary passing (it's not nice that abstract code is slower than monotyped code), but that's an implementation issue.

I would add language support for mutating values without explicitly using an IORef, provided you're doing it in the right monad and the effects don't "leak", like what Disciple has but with better syntax. I would distinguish things which read from memory in an impure way from things which write to memory from things which Do Things In The Outside World, like writing a file. (Maybe by lifting Disciple's effect typing wholesale, but I'm attached to monads.)
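Concretely, the join-based Monad might look like the following minimal sketch. The primed names are made up to avoid clashing with the Prelude; the point is only that bind becomes derived rather than primitive:

    -- A join-based Monad class; bind falls out as a combinator.
    class Functor m => Monad' m where
      unit  :: a -> m a          -- what the Prelude calls return
      join' :: m (m a) -> m a

    -- (=<<) as the default, derived bind:
    bind :: Monad' m => (a -> m b) -> m a -> m b
    bind f = join' . fmap f

    -- Example instance:
    instance Monad' Maybe where
      unit = Just
      join' (Just (Just x)) = Just x
      join' _               = Nothing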

I would have compose (probably not called '.') read the same way we read this sentence (and Unix pipes), i.e. left to right. (A sketch follows below.) I would not overload '.' to mean both composition and module access. GADT syntax could probably be the norm, and classic 'data' declarations could be removed.
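Left-to-right composition is already definable, of course - Control.Category even exports (>>>) - so this is only about the default. A one-line sketch, with a made-up operator name:

    import Control.Category ((>>>))  -- the existing generalized version

    -- A hypothetical left-to-right composition operator for plain functions:
    (|>) :: (a -> b) -> (b -> c) -> (a -> c)
    f |> g = g . f

    -- Reads in pipeline order, like 'cat file | grep x | wc -l':
    summarize :: [Int] -> String
    summarize = filter even |> map show |> unwords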

I'd like to have special syntax for folds, so that the fold is built into the type definition. Maybe it can be some special braces, or just fold(..). Then we can write the same function in place of foldr, maybe, either and so on, and don't have to define them by hand. Inside the special fold-braces one can write functions (separated with | as inside a type declaration) that have appropriate types. The constructors define the order of the functions inside the fold-braces. A handful of examples:

    data List a = Nil | Cons a (List a)

    length :: List a -> Int
    length = fold( 0 | const (+1) )

    (++) :: List a -> List a -> List a
    a ++ b = fold( b | Cons ) a

    head :: List a -> a
    head = fold( undefined | const )

    data Maybe a = Nothing | Just a

    fromJust :: Maybe a -> a
    fromJust = fold( undefined | id )

    data Nat = Zero | Succ Nat

    add :: Nat -> Nat -> Nat
    add a = fold( a | Succ )

    mul :: Nat -> Nat -> Nat
    mul a = fold( Zero | add a )

Maybe something similar for unfolds, but I have no syntax here.

----------

I'd like to invent some type system that allows me to say that (.), (>>>) and (>=>) are the same thing, just as id and pure are. In place of the Monad class I'd like to have a special case of the Category class. We can express return and (>>=) with id and (.) in Monad's typing:

    return = id
    ma >>= mf = (mf . const ma) ()

where id and (.) come from

    class Kleisli m where
      id  :: a -> m a
      (.) :: (b -> m c) -> (a -> m b) -> (a -> m c)

I'd like to parametrise it over m, so Kleisli can become a special case of Category. And we ?can? define ($) in terms of id, (.), const and ():

    ($) :: Category cat => cat a b -> ?dom cat a? -> ?cod cat b?
    f $ a = (f . const a) ()

so (=<<) becomes just ($).

Anton
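P.S. The fold(..) braces above are hypothetical syntax, but the function they would generate is just the data type's catamorphism, which we can already write by hand today (foldList is a made-up name):

    data List a = Nil | Cons a (List a)

    -- One argument per constructor, in declaration order:
    foldList :: b -> (a -> b -> b) -> List a -> b
    foldList nil _    Nil         = nil
    foldList nil cons (Cons x xs) = cons x (foldList nil cons xs)

    -- The examples above, spelled out with the hand-written fold:
    len :: List a -> Int
    len = foldList 0 (const (+1))

    append :: List a -> List a -> List a
    append a b = foldList b Cons a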
participants (39)
- Alexander Solla
- Andrew Coppin
- Andrew Cowie
- Anton Kholomiov
- Ashok Gautham
- Bardur Arantsson
- Ben Lippmeier
- Brandon Allbery
- Chris Wong
- Conor McBride
- Dan Doel
- David Fox
- Erik de Castro Lopo
- Erik Hesselink
- Gregory Crosswhite
- Gábor Lehel
- Hans Aberg
- Heinrich Apfelmus
- Ivan Perez
- Jason Dagit
- Jason Dusek
- Jerzy Karczmarczuk
- Jesse Schalken
- John Meacham
- Ketil Malde
- Lyndon Maydwell
- Maciej Marcin Piechotka
- Martin DeMello
- Mathijs Kwik
- Matthew Farkas-Dyck
- MigMit
- Miguel Mitrofanov
- Robert Clausecker
- Roman Cheplyaka
- Rustom Mody
- scooter.phd@gmail.com
- Scott Turner
- Thiago Negri
- Tillmann Rendel