
On Wed, Dec 21, 2011 at 1:09 AM, Gregory Crosswhite wrote:
On Dec 20, 2011, at 11:21 PM, Jesse Schalken wrote:
On Tue, Dec 20, 2011 at 10:43 PM, Gregory Crosswhite <gcrosswhite@gmail.com> wrote:
On Dec 20, 2011, at 9:18 PM, Jesse Schalken wrote:
Why do you have to solve the halting problem?
You have to solve the halting problem if you want to replace every place where _|_ could occur with an Error monad (or something similar), because _|_ includes occasions when functions will never terminate.
I think we're talking about different things. By "bottom" I mean the function explicitly returns "error ..." or "undefined". In those cases, it should go in an error monad instead. In cases where there is an infinite loop, the function never returns anything because it never finishes, and that separate problem can never be solved in a Turing-complete language, because it is the halting problem.
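[For concreteness, a minimal sketch of the distinction being drawn; the names headPartial and headChecked are invented for illustration, not taken from the thread:]

    -- headPartial hits _|_ (via "error") on the empty list;
    -- headChecked makes that failure visible in its type instead.
    headPartial :: [a] -> a
    headPartial []    = error "empty list"   -- explicit bottom
    headPartial (x:_) = x

    headChecked :: [a] -> Either String a
    headChecked []    = Left "empty list"    -- the caller must now handle this case
    headChecked (x:_) = Right x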
Then honestly you should choose a different term, because I am pretty certain that my use of the term "bottom" is the commonly accepted one, which (among other places) appears in denotational semantics.
I apologize if I was using the wrong terminology. I've seen "error ..." shown as, and understood it to be, _|_, but it seems _|_ refers to either a value that does not reduce or Haskell's "error" function depending on the context.
Consider integer division by 0. [...] This is all I was talking about.
But imagine there was an occasion where you *knew* that the divisor was never zero --- say, because the divisor was constructed to be a natural number.
Then use a separate type for natural numbers excluding 0. Then you can define a total integer division function on it (although the return value may be zero and so needs a different type).
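[A sketch of what such a type might look like; the module and names here are illustrative only, not something given in the thread:]

    -- A divisor type containing only the naturals excluding 0. The data
    -- constructor is not exported, so the only way to obtain a NonZeroNat
    -- is through the smart constructor below.
    module NonZeroNat (NonZeroNat, nonZeroNat, divideBy) where

    newtype NonZeroNat = NonZeroNat Integer

    -- | The single checked entry point: fails unless the input is >= 1.
    nonZeroNat :: Integer -> Maybe NonZeroNat
    nonZeroNat n
      | n >= 1    = Just (NonZeroNat n)
      | otherwise = Nothing

    -- | Total division: no zero case exists, so no call to 'error' is needed.
    divideBy :: Integer -> NonZeroNat -> Integer
    divideBy n (NonZeroNat d) = n `div` d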
That would certainly be a lovely idea *if* we were programming in Agda, but I was under the assumption that this conversation was about Haskell. :-)
I don't have experience with proof assistants, but maybe my answer to this thread can be summed up as giving Haskell that kind of capability. ;)
Now there is no point in running in the Error monad because there will never be such a runtime error; in fact, it is not clear what you would even *do* with a Left value anyway, short of terminating the program and printing an error, which is what would have happened anyway.
What you do with a Left value is up to you - that's the point, you now have a choice.
Yes, but it is a pointless choice because if you had any reason to believe that your value was an invalid input to a function you would have checked it by now or used an alternative non-partial function that did run in an Error monad for that specific purpose.
In fact, the value might not even be being handled by you, in which case someone else now has a choice. Handling of the error is done in the same place as handling of the result, no IO needed.
Yes, but all that the user of your library knows at this point is that there is a bug somewhere in your library that violated an invariant. Nearly all of the time there is no way to recover from this in a useful way and so all the user will end up doing in response to your Left value is to abort anyway.
What if for example the program is a server which is always running. If you use "error ...", the server will crash, and someone has to go start it up again. The person who wrote the server has to remember to wrap each request in a try...catch block in the IO monad in the outer layer to make sure the server doesn't crash due to errors in pure code. What if they forget? If you use an error monad, they can't forget, because the type system forces them to handle the error. Maybe they will choose to "exit" in the case of an error, but at least then the program is crashing because they've explicitly told it to rather than because they forgot something. More likely they will respond to the request with an "error" response, allowing the server to continue running, but either way the type system has forced them to make a decision.
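[A rough sketch of the server point under stated assumptions; Request, Response, handleRequest and serve are invented names standing in for whatever a real server would use:]

    -- If the handler's failures live in Either, the outer loop cannot
    -- forget to deal with them: the case expression must cover Left.
    type Request  = String
    type Response = String

    handleRequest :: Request -> Either String Response
    handleRequest req
      | null req  = Left "empty request"
      | otherwise = Right ("echo: " ++ req)

    serve :: Request -> IO ()
    serve req =
      case handleRequest req of
        Left err   -> putStrLn ("error response: " ++ err)  -- server keeps running
        Right resp -> putStrLn resp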
The point is your program shouldn't be able to make assumptions about values without proving them with types.
I agree but, again, we aren't talking about Agda here, we are talking about Haskell. :-)
Now I really want to look at proof assistants!
The whole term "untrusted data" baffles me. How often can you actually "trust" your data?
All the time! For example, if I create a counter that starts at 1, only increase it, and give nobody else access to it, then I can be as certain as it is possible to be that it is not 0.
Start your counter at 0 then. Then if you really don't want to deal with 0, treat 0 as 1, 1 as 2, etc. - hence a type for non-zero naturals. I think structuring code such that the types you use contain no useless or impossible values is often easy with a little thought, and worth it because the compiler is now verifying the assumptions you made. In cases where that isn't possible (or you can't be bothered), I'd rather just propagate an error in a monad than resort to Haskell's "error ..." for the above reasons.
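[One possible reading of the "treat 0 as 1" suggestion, sketched with invented names; the internal Integer starts at 0, but the value it denotes is offset by one, so the counter can never denote 0:]

    -- The data constructor is kept private, so every Counter in scope
    -- was built by newCounter or increment and its value is always >= 1.
    module Counter (Counter, newCounter, increment, counterValue) where

    newtype Counter = Counter Integer   -- internally 0-based

    newCounter :: Counter
    newCounter = Counter 0              -- denotes 1

    increment :: Counter -> Counter
    increment (Counter n) = Counter (n + 1)

    counterValue :: Counter -> Integer
    counterValue (Counter n) = n + 1    -- never returns 0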
Also, there are occasionally times when I essentially check that a Maybe value is Just in one part of the code, and then in another part of the code need to extract the value from the Just; in such cases there is no point in using any method *other* than simply fromJust to extract the value.
If you checked that a Maybe is a Just, then continued to pass on the Maybe to a function which assumed it is Just, then you should have just passed on the value in the Just instead. If it's Maybe, you must handle Nothing. If you've already handled Nothing, it doesn't make sense to keep dealing with a Maybe.
(Of course, it would be better to have condensed all of this into a single case statement in the first place, but sometimes --- say, when interfacing with others' libraries --- this ends up not being an available option.)
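[A small sketch of the pattern being criticised and the fix being suggested; the function names are invented for illustration:]

    import Data.Maybe (fromJust)

    -- The shape being criticised: check for Just in one place...
    processAll :: Maybe Int -> String
    processAll mx =
      case mx of
        Nothing -> "no value"
        Just _  -> useIt mx                 -- ...and re-extract it elsewhere
      where
        useIt my = show (fromJust my + 1)   -- partial: relies on the earlier check

    -- The suggested fix: once Nothing has been handled, pass the bare value on.
    processAll' :: Maybe Int -> String
    processAll' mx =
      case mx of
        Nothing -> "no value"
        Just x  -> useIt x
      where
        useIt y = show (y + 1)              -- total: no fromJust needed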
When you send your software out into the wild, what assumptions can you make about its input?
None, which is why in that case you need to test your input.
Also I would like to think this "wasted overhead" can be optimised away at some stage of compilation, or somehow removed without the programmer needing to think about it.
I was thinking in terms of overhead from the coder's point of view, not from the compiler's point of view.
Cheers, Greg