Use unsafePerformIO to catch Exception?

Hi, I just feel it is uncomfortable to deal with exceptions only within the IO monad, so I defined
tryArith :: a -> Either ArithException a
tryArith = unsafePerformIO . try . evaluate
and it works quite well:
map (tryArith . (div 5)) [2,1,0,5]
evaluates to
[Right 2,Right 5,Left divide by zero,Right 1]
However, I guess unsafePerformIO definitely has a reason for its name. As I read through the documentation in System.IO.Unsafe, I can't convince myself whether this use in 'tryArith' is indeed safe or unsafe. I know there has been a lot of discussion around unsafePerformIO, but I still can't figure it out by myself. Can someone share some thoughts on this particular use of unsafePerformIO? Is it safe or not? And why?

Thanks,
Xiao-Yong

You should ensure that the result of "evaluate" is in normal form, not just weak head normal form. You can do this with the Control.Parallel.Strategies module:
import Control.Exception (ArithException(..), throw, try, evaluate)
import Control.Parallel.Strategies (NFData, using, rnf)
import System.IO.Unsafe (unsafePerformIO)
tryArith :: NFData a => a -> Either ArithException a
tryArith = unsafePerformIO . try . evaluate . flip using rnf

-- tryArith' is the original version from the question, which only forces
-- the result to weak head normal form:
tryArith' :: a -> Either ArithException a
tryArith' = unsafePerformIO . try . evaluate
test :: [Either ArithException Integer]
test = map (tryArith . (div 5)) [2,1,0,5]
testResult = [Right 2,Right 5,Left DivideByZero,Right 1]
withPair :: Integer -> (Integer,Integer)
withPair x = (x, throw Overflow)
main = do
  print (test == testResult)
  print (tryArith (withPair 7))
  print (tryArith' (withPair 7))
in ghci
*Main> main
True
Left arithmetic overflow
Right (7,*** Exception: arithmetic overflow
This "rnf :: Strategy a" ensures that the result of evaluate is in normal form. This means it should not have any embedded lazy thunks, so any errors from such thunks will be forced while in the scope of the "try". Otherwise a complex type like the result of withPair can hide an error. Xiao-Yong Jin wrote:

ChrisK
[...]
Thanks a lot. I found it is much easier to deal with exceptions this way than to convert all my code to monadic style.

On Mon, 23 Mar 2009, Xiao-Yong Jin wrote:
Hi,
I just feel it is not comfortable to deal with exceptions only within IO monad, so I defined
tryArith :: a -> Either ArithException a
tryArith = unsafePerformIO . try . evaluate
and it works quite good as
map (tryArith . (div 5)) [2,1,0,5]
evaluates to
[Right 2,Right 5,Left divide by zero,Right 1]
However, I guess unsafePerformIO definitely has a reason for its name. As I read through the document in System.IO.Unsafe, I can't convince myself whether the use of 'tryArith' is indeed safe or unsafe.
Try never to use exception handling for catching programming errors! Division by zero is undefined, thus a programming error when it occurs.
http://www.haskell.org/haskellwiki/Error
http://www.haskell.org/haskellwiki/Exception
I'm afraid a Maybe or Either or Exceptional (see explicit-exception package) return value is the only way to handle exceptional return values properly. Maybe in the larger context of your problem zero denominators can be avoided? Then go that way.
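For illustration, a minimal sketch of that style, using Either with a made-up error type (the DivError and safeDiv names are not from any package):

import Data.Either (partitionEithers)

data DivError = DivideByZero
  deriving (Show, Eq)

-- A total division that makes the exceptional case explicit in its type.
safeDiv :: Integer -> Integer -> Either DivError Integer
safeDiv _ 0 = Left DivideByZero
safeDiv x y = Right (x `div` y)

main :: IO ()
main = do
  -- The example from the original post, with no unsafePerformIO involved.
  print (map (safeDiv 5) [2, 1, 0, 5])
  -- Callers that only care about the successes can simply discard failures.
  print (snd (partitionEithers (map (safeDiv 5) [2, 1, 0, 5])))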

Henning Thielemann
Try to never use exception handling for catching programming errors! Division by zero is undefined, thus a programming error when it occurs. http://www.haskell.org/haskellwiki/Error http://www.haskell.org/haskellwiki/Exception I'm afraid, a Maybe or Either or Exceptional (see explicit-exception package) return value is the only way to handle exceptional return values properly. Maybe in the larger context of your problem zero denominators can be avoided? Then go this way.
Using div is just an example I was playing with based on what I read in the book Real World Haskell. The real thing I'm trying to do is inverting a matrix. Say, I want to write
invMat :: Matrix -> Matrix
You won't be able to invert every matrix, mathematically. And computationally, an even larger set of matrices might fail to be inverted because of the finite precision. It is relatively easier and more efficient to spot such a problem within this 'invMat' function, because testing the singularity of a matrix is as hard as inverting it. So all I can do when 'invMat' spots a singular matrix is:

a) Return Either/Maybe to signal an error.
b) Wrap it in a monad.
c) Define a dynamic exception and throw it.

The problem is that there will be many functions using such a function to invert a matrix, so making this inversion function return Either/Maybe or packing it in a monad is just a big headache. It is impractical to use method (a), because not every function that uses 'invMat' knows how to deal with 'invMat' not giving an answer. So we need to use method (b), to use a monad to pass our matrix around.
invMat :: Matrix -> NumericCancerMonad Matrix
It hides the exceptional nature of numerical computations very well, but it is cancer in the code. Whenever any function wants to use invMat, it is mutated. This is just madness. You don't want to make all the code monadic just because of singularities in numeric calculation. Therefore, in my opinion, method (c) is my only option. And because I don't always want to deal with such problems in the IO monad, I create this beast 'unsafePerformIO . try . evaluate' to convert some potentially disastrous result to 'Either Disaster Result'.

You might argue that Haskell actually deals with such numerical problems with 'NaN' and 'Infinity'. But some numerical operations behave weirdly with these special values, and using 'isNaN', 'isInfinite' and the like is just another big mess. They are going to be all over the place and not actually better than 'case ... of' and 'fromMaybe'.

I can't really think of another option for this kind of situation, apart from letting my code be infected by NumericCancerMonad. If anyone on this list has some thoughts on this matter, please share them. Many thanks.
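To make the three options concrete, here is a minimal sketch with a toy 2x2 matrix type (Matrix, MatrixError and the function names are made up for illustration, not taken from any library):

import Control.Exception (Exception, throw)

data Matrix = Matrix Double Double Double Double
  deriving Show

data MatrixError = SingularMatrix
  deriving Show
instance Exception MatrixError

-- Options (a)/(b): signal failure in the return type.  Either is itself a
-- monad, so returning it already gives callers a monadic interface.
invMat :: Matrix -> Either MatrixError Matrix
invMat (Matrix a b c d)
  | det == 0  = Left SingularMatrix   -- a real test would use a tolerance
  | otherwise = Right (Matrix (d / det) (-b / det) (-c / det) (a / det))
  where det = a * d - b * c

-- Option (c): throw a dynamic exception from pure code; it can only be
-- caught back in IO (or via the unsafePerformIO trick discussed here).
invMatThrow :: Matrix -> Matrix
invMatThrow = either throw id . invMat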

Xiao-Yong Jin wrote:
| The problem is that there will be many functions using such a function to invert a matrix, making this inversion function return Either/Maybe or packing it in a monad is just a big headache.

I disagree. If you try to take the inverse of a noninvertable matrix, this is an *error* in your code. Catching an error you created in pure code and patching it with chewing gum is just a hack. A monadic approach (I'm putting Either/Maybe under the same umbrella for brevity) is the only solution that makes any sense to me, and I don't think it's as ugly as you are making it out to be.

| It is impractical to use method (a), because not every function that uses 'invMat' knows how to deal with 'invMat' not giving an answer. So we need to use method (b), to use a monad to pass our matrix around.
|
|> invMat :: Matrix -> NumericCancerMonad Matrix
|
| It hides the exceptional nature of numerical computations very well, but it is cancer in the code. Whenever any function wants to use invMat, it is mutated. This is just madness. You don't want to make all the code monadic just because of singularities in numeric calculation.

For functions that don't know or don't care about failure, just use fmap or one of its synonyms.

scalarMult 2 <$> invMat x

See? The scalarMult function is pure, as it should be. There is no madness here.

- Jake

Jake McArthur
Xiao-Yong Jin wrote: | The problem is that there will be many functions using such | a function to invert a matrix, making this inversion | function return Either/Maybe or packing it in a monad is | just a big headache.
I disagree. If you try to take the inverse of a noninvertable matrix, this is an *error* in your code. Catching an error you created in pure code and patching it with chewing gum it is just a hack. A monadic approach (I'm putting Either/Maybe under the same umbrella for brevity) is the only solution that makes any sense to me, and I don't think it's ugly as you are making it out to be.
Then, why is 'div' not of type 'a -> a -> ArithExceptionMonad a'? Why does it throw this /ugly/ /error/ when it is applied to 0? Why is it not using some beautiful 'ArithExceptionMonad'? Is 'Control.Exception' just plain /ugly/ and doesn't make any sense?
| It is impractical to use method (a), | because not every function that uses 'invMat' knows how to | deal with 'invMat' not giving an answer. So we need to use | method (b), to use monad to parse our matrix around. | |> > invMat :: Matrix -> NumericCancerMonad Matrix | | It hides the exceptional nature of numerical computations | very well, but it is cancer in the code. Whenever any | function wants to use invMat, it is mutated. This is just | madness. You don't want to make all the code to be monadic | just because of singularities in numeric calculation.
For functions that don't know or don't care about failure, just use fmap or one of its synonyms.
~ scalarMult 2 <$> invMat x
See? The scalarMult function is pure, as it should be. There is no madness here.
Of course, 'scalarMult' is invulnerable and free of monad. But take a look at the following functions,
f1 = scalarMult 2 . invMat
f2 l r = l `multMat` invMat r

ff :: Matrix -> Matrix -> YetAnotherBiggerMonad Matrix
ff x y = do
  let ff' = f1 x + f2 y
  put . (addMat ff') . f1 =<< get
  tell $ f2 ff'
  when (matrixWeDontLike (f1 ff')) $
    throwError MatrixWeDontLike
  return $ scalarMult (1/2) ff'
Yes, I know, it's not really complicated to rewrite the above code. But, what do I really gain from this rewrite?

On Tue, Mar 24, 2009 at 3:14 PM, Xiao-Yong Jin
Jake McArthur
writes: Xiao-Yong Jin wrote: | The problem is that there will be many functions using such | a function to invert a matrix, making this inversion | function return Either/Maybe or packing it in a monad is | just a big headache.
I disagree. If you try to take the inverse of a noninvertable matrix, this is an *error* in your code. Catching an error you created in pure code and patching it with chewing gum it is just a hack. A monadic approach (I'm putting Either/Maybe under the same umbrella for brevity) is the only solution that makes any sense to me, and I don't think it's ugly as you are making it out to be.
Then, why is 'div' not of type 'a -> a -> ArithExceptionMonad a' ? Why does it throws this /ugly/ /error/ when it is applied to 0? Why is it not using some beautiful 'ArithExceptinoMonad'? Is 'Control.Exception' just pure /ugly/ and doesn't make any sense?
It's a proof obligation, like using unsafePerformIO. It is "okay" to use unsafePerformIO when it exhibits purely functional semantics, but it's possible to use it incorrectly, and there is no ImpureSemanticsException. If you are being rigorous, you simply have to prove that the denominator will not be zero, rather than relying on it to be caught at runtime. You can move the check to runtime easily:

safeDiv x 0 = Nothing
safeDiv x y = Just (x `div` y)

Going the other way, from a runtime check to an obligation, is impossible.

On Tue, Mar 24, 2009 at 3:28 PM, Luke Palmer
[...]
safeDiv x 0 = Nothing
safeDiv x y = Just (x `div` y)
Going the other way, from a runtime check to an obligation, is impossible.
(well, except for div x y = fromJust (safeDiv x y).. but the runtime check is still there in terms of operation)

On Tue, 24 Mar 2009, Xiao-Yong Jin wrote:
Jake McArthur
writes: Xiao-Yong Jin wrote: | The problem is that there will be many functions using such | a function to invert a matrix, making this inversion | function return Either/Maybe or packing it in a monad is | just a big headache.
I disagree. If you try to take the inverse of a noninvertable matrix, this is an *error* in your code. Catching an error you created in pure code and patching it with chewing gum it is just a hack. A monadic approach (I'm putting Either/Maybe under the same umbrella for brevity) is the only solution that makes any sense to me, and I don't think it's ugly as you are making it out to be.
Then, why is 'div' not of type 'a -> a -> ArithExceptionMonad a' ? Why does it throws this /ugly/ /error/ when it is applied to 0?
I think "throw" should be reserved to exceptions (although it is still strange English). Actually 'div x 0' is 'undefined', just like in mathematics. This is justified by the fact, that you can easily check whether the denominator is zero or not and it is expected that either you check the denominator before calling 'div' or that you proof that in your application the denominator is non-zero anyway and thus save repeated checks for zero at run-time. The deficiency is not in 'div' but in the type system, which does not allow to declare an Int to be non-zero. In contrast to that, it is not easily checked, whether a matrix is regular. Thus you may prefer a Maybe result.

Xiao-Yong Jin wrote:
| Then, why is 'div' not of type 'a -> a -> ArithExceptionMonad a'? Why does it throw this /ugly/ /error/ when it is applied to 0? Why is it not using some beautiful 'ArithExceptionMonad'? Is 'Control.Exception' just plain /ugly/ and doesn't make any sense?

'div' throws an error because dividing by zero is *programmer error*. *You* are supposed to make sure that you aren't dividing by zero.

I differ from this decision in your case because, as you said, it is easier to check for the error condition in the function itself than to check it externally. This is fine, but because it's so hard to check externally, you have to tell the outside world whether there was an error or not. A functor/applicative/monad is the pure way to do this. An error is not.

| Of course, 'scalarMult' is invulnerable and free of monad. But take a look at the following functions,
|
|> f1 = scalarMult 2 . invMat
|> f2 l r = l `multMat` invMat r
|> ff :: Matrix -> Matrix -> YetAnotherBiggerMonad Matrix
|> ff x y = do
|>   let ff' = f1 x + f2 y
|>   put . (addMat ff') . f1 =<< get
|>   tell $ f2 ff'
|>   when (matrixWeDontLike (f1 ff')) $
|>     throwError MatrixWeDontLike
|>   return $ scalarMult (1/2) ff'
|
| Yes, I know, it's not really complicated to rewrite the above code. But, what do I really gain from this rewrite?

Code that is fully documented by its type, no harder to compose, more pure, and does what the programmer expects it to do.

- Jake

Jake McArthur
[...]
Thanks for all the replies. Now I understand more about exceptions and errors. I guess all I need is to compose a larger monad, after all. I need to learn how to make two different stacks of monad transformers cooperate seamlessly, though.

Thanks,
Xiao-Yong

On Tue, 24 Mar 2009, Xiao-Yong Jin wrote:
Thanks for all the replies. Now I understand more about Exceptions and Errors. I guess all I need is to compose a larger monad, after all. I need to learn how to make two different stacks of monad transformers cooperate seamlessly, though.
Until now it seems you only need an Applicative functor. They can be combined in a more general way:
http://hackage.haskell.org/packages/archive/TypeCompose/0.6.4/doc/html/Contr...
See the instances: (Applicative g, Applicative f) => Applicative (g :. f)
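As a small illustration of composing applicatives (using Data.Functor.Compose from transformers/base as a stand-in for TypeCompose's (:.); the CalcIO and readNumber names are made up):

import Data.Functor.Compose (Compose(..))

-- An outer IO layer composed with an inner Either layer for the
-- exceptional result, combined without writing a transformer stack.
type CalcIO a = Compose IO (Either String) a

readNumber :: String -> CalcIO Double
readNumber prompt = Compose $ do
  putStrLn prompt
  line <- getLine
  return $ case reads line of
    [(x, "")] -> Right x
    _         -> Left ("not a number: " ++ line)

-- Both reads are performed; if either fails we get a Left, otherwise the sum.
example :: IO (Either String Double)
example = getCompose ((+) <$> readNumber "first:" <*> readNumber "second:")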

On Tue, Mar 24, 2009 at 1:27 PM, Xiao-Yong Jin
Henning Thielemann
writes: Try to never use exception handling for catching programming errors! Division by zero is undefined, thus a programming error when it occurs. http://www.haskell.org/haskellwiki/Error http://www.haskell.org/haskellwiki/Exception I'm afraid, a Maybe or Either or Exceptional (see explicit-exception package) return value is the only way to handle exceptional return values properly. Maybe in the larger context of your problem zero denominators can be avoided? Then go this way.
Using div is just an example I'm testing with what I read in the book Real World Haskell. The real thing I'm trying to do is inverting a matrix. Say, I want to write
invMat :: Matrix -> Matrix
You won't be able to invert all the matrix, mathematically. And computationally, even a larger set of matrix might fail to be inverted because of the finite precision. It is relatively easier and more efficient to spot such a problem within this 'invMat' function. Because testing the singularity of a matrix is equally hard as invert it. So all I can do when 'invMat' spot a singular matrix are
a) Return Either/Maybe to signal an error.
b) Wrap it in a monad.
c) Define a dynamic exception and throw it.
In general, if a function is partial we can either make it total by extending its range or by restricting its domain. Also we can signal it using runtime or compile-time mechanisms. Options a & b are equivalent (i.e. extend the range, compile-time notification) and option c is also another way of extending the range, but using runtime notification.

If we try the other approach, we need to express the totality of invMat by restricting its domain, so we can add, for example, a phantom type to Matrix to signal that it is invertible. As you need to construct the Matrix before trying to invert it, you can always make the constructors smart enough to bundle the Matrix with such properties. Of course there's still a need to do some runtime verification earlier, but the clients of invMat are required to do the verification earlier or pass it to their clients (up to the level that can deal with this issue):

data Invertible
tryInvertible :: Matrix a -> Maybe (Matrix Invertible)
invMat :: Matrix Invertible -> Matrix Invertible

You could use different forms of evidence (e.g. phantom types, type classes) but the idea is the same.
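A slightly fuller sketch of this phantom-type idea, using a toy 2x2 matrix type made up purely for illustration (the real Matrix type and the actual singularity test would of course be different):

newtype Mat2 prop = Mat2 (Double, Double, Double, Double)

data Unchecked
data Invertible

-- Smart constructor: the only way to obtain a 'Mat2 Invertible' is to pass
-- this runtime check, so the property travels with the type from then on.
-- (A real test would use a tolerance rather than comparing with 0.)
tryInvertible :: Mat2 Unchecked -> Maybe (Mat2 Invertible)
tryInvertible (Mat2 m@(a, b, c, d))
  | a * d - b * c == 0 = Nothing
  | otherwise          = Just (Mat2 m)

-- With the evidence in the type, inversion itself can be total.
invMat :: Mat2 Invertible -> Mat2 Invertible
invMat (Mat2 (a, b, c, d)) = Mat2 (d / det, -b / det, -c / det, a / det)
  where det = a * d - b * c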
Best regards, Daniel Yokomizo

On Tue, 24 Mar 2009, Daniel Yokomizo wrote:
If we try the other approach, we need to express the totality of invMat by restricting its domain, so we can add, for example, a phantom type to Matrix to signal it is invertible. As you need to construct the Matrix before trying to invert it you can always make the constructors smart enough to bundle the Matrix with such properties. Of course there's need to do some runtime verifications earlier, but the clients of invMat are required to do the verification earlier or pass it to their clients (up to the level that can handle with this issue):
data Invertible
tryInvertible :: Matrix a -> Maybe (Matrix Invertible)
invMat :: Matrix Invertible -> Matrix Invertible
This would be a very elegant solution. However, when it comes to floating point numbers, I'm afraid there is not much else to do other than invert the matrix if you want to know whether it is invertible. You may however use representations of a matrix internally (like an LU decomposition or a QR decomposition) that are halfway to an inversion.

Daniel Yokomizo
[...]
In general if a function is partial we can either make it total by extending its range or restricting its domain. Also we can signal it using runtime or compile-time mechanisms. Options a & b are equivalent (i.e. extend the range, compile-time notification) and option c is also another way of extending the range, but using runtime notification.
If we try the other approach, we need to express the totality of invMat by restricting its domain, so we can add, for example, a phantom type to Matrix to signal it is invertible. As you need to construct the Matrix before trying to invert it you can always make the constructors smart enough to bundle the Matrix with such properties. Of course there's need to do some runtime verifications earlier, but the clients of invMat are required to do the verification earlier or pass it to their clients (up to the level that can handle with this issue):
data Invertible
tryInvertible :: Matrix a -> Maybe (Matrix Invertible)
invMat :: Matrix Invertible -> Matrix Invertible
You could use different forms of evidence (e.g. phantom types, type classes) but the idea is the same.
This is theoretically sound: we could make a type 'Integer Invertible' and a 'safeDiv' to get rid of one of the ArithExceptions. But as I said above, "testing the singularity of a matrix is equally hard as inverting it", so doing this with matrices is more impractical than making 'div' safe. I don't mind my Haskell code running 6 times slower than C++ code, but I'm not ready to make it 10 times slower just because I want to do something twice when it might well be useless most of the time.

On Tue, 24 Mar 2009, Xiao-Yong Jin wrote:
invMat :: Matrix -> Matrix
You won't be able to invert all the matrix, mathematically. And computationally, even a larger set of matrix might fail to be inverted because of the finite precision. It is relatively easier and more efficient to spot such a problem within this 'invMat' function. Because testing the singularity of a matrix is equally hard as invert it. So all I can do when 'invMat' spot a singular matrix are
a) Return Either/Maybe to signal an error.
This is the way to go.
b) Wrap it in a monad.
Either and Maybe are monads. These monads behave like exceptions in other languages. I like to call these exceptions.
c) Define a dynamic exception and throw it.
You cannot throw an exception in code that does not return Maybe, Either, IO or such things. You can only abuse 'undefined' and turn it into a defined value later by a hack. Think of 'undefined' as an infinite loop that cannot be detected in general. GHC is kind enough to detect special cases, in order to simplify debugging. But this should not be abused for exceptional return values.
The problem is that there will be many functions using such a function to invert a matrix, making this inversion function return Either/Maybe or packing it in a monad is just a big headache. It is impractical to use method (a), because not every function that uses 'invMat' knows how to deal with 'invMat' not giving an answer.
How shall it deal with 'undefined' then? 'undefined' can only be handled by a hack, so Maybe or Either are clearly better.
invMat :: Matrix -> NumericCancerMonad Matrix
It hides the exceptional nature of numerical computations very well, but it is cancer in the code. Whenever any function wants to use invMat, it is mutated. This is just madness.
No, it makes explicit what's going on. This is the idea of functional programming. You have nice Applicative infix operators in order to write everything in a functional look anyway. In contrast, I think it is mad that there is no function of type

mulInt :: Int -> Int -> Maybe Int

which allows us to catch overflows without using hacks. This function could easily be integrated in a compiler, since the CPU signals an overflow with a flag. In most high-level languages this is not possible, and thus programming in an overflow-proof way needs additional computations.
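Such a function can at least be simulated in library code; a minimal sketch (written portably via Integer rather than the CPU overflow flag mentioned above, and assuming GHC's wrapping Int arithmetic):

mulInt :: Int -> Int -> Maybe Int
mulInt x y
  | toInteger wrapped == exact = Just wrapped
  | otherwise                  = Nothing
  where
    exact   = toInteger x * toInteger y  -- arbitrary precision, never overflows
    wrapped = x * y                      -- wraps around on overflow with GHC's Int

-- ghci> mulInt 100000 100000   ==> Just 10000000000   (on a 64-bit Int)
-- ghci> mulInt maxBound 2      ==> Nothing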
You don't want to make all the code to be monadic just because of singularities in numeric calculation. Therefore, in my opinion, method (c) is my only option.
Then better stick to MatLab. :-(

All this is certainly true. But there is still a valid concern: numerical tasks often allow one to predict whether an error might occur or not. Let's say you know that you have a normal matrix N and want to calculate (1+N*N)^-1. Of course the matrix is invertible and therefore it is reasonable to provide two distinct implementations:

invMat :: Mat -> Maybe Mat
invMat = ...

-- | Like 'invMat' but gives an error in case the matrix is not invertible.
invMat' :: Mat -> Mat
invMat' = fromJust . invMat

Kind regards
Torsten

Xiao-Yong Jin wrote:
Hi,
I just feel it is not comfortable to deal with exceptions only within IO monad, so I defined
tryArith :: a -> Either ArithException a
tryArith = unsafePerformIO . try . evaluate
[...]
However, I guess unsafePerformIO definitely has a reason for its name. As I read through the document in System.IO.Unsafe, I can't convince myself whether the use of 'tryArith' is indeed safe or unsafe.
I know there have been a lot of discussion around unsafePerformIO, but I still can't figure it out by myself. Can someone share some thoughts on this particular use of unsafePerformIO? Is it safe or not? And why?
This use of unsafePerformIO is safe, because the original expression you're given is pure[1]. The evaluate lifts the pure value into IO in order to give evaluation-ordering guarantees, though otherwise has no effects. The unsafePerformIO voids those effects, since it makes the value pure again and thus it does not need to grab the RealWorld baton.

Note that the correctness argument assumes the value is indeed pure. Some idiot could have passed in (unsafePerformIO launchTheMissiles), which is not safe and the impurity will taint anything that uses it (tryArith, (+1), whatever). But it's the unsafePerformIO in this expression which is bad, not the one in tryArith.

tryArith is basically the same as a function I have in my personal utility code:

http://community.haskell.org/~wren/wren-extras/Control/Exception/Extras.hs

The safely function is somewhat different in that it's a combinator for making *functions* safe, rather than making *expressions* safe as tryArith does. This is necessary because exceptional control flow (by definition) does not honor the boundaries of expressions, but rather attaches semantics to the evaluation of functions. Thus safely is more safe because it ensures you can't force the exception prematurely via sharing:

> let x = 5 `div` 0 in
> ... seq x ... tryArith x -- too late to catch it! oops.

Whereas with safely we'd have:

> let f y = safely (div y) in
> let x = 5 `f` 0 in
> ... seq x ... x -- doesn't matter where f or x are used.

[1] Ha! If it were _pure_ then it wouldn't be throwing exceptions, now would it :)

-- Live well,
~wren
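The linked file isn't reproduced in the thread, but from the description above, 'safely' presumably looks something like the following sketch (the exact signature and the choice of SomeException are guesses, not wren's actual code):

import Control.Exception (SomeException, try, evaluate)
import System.IO.Unsafe (unsafePerformIO)

-- The application 'f x' happens inside the try, so the value the caller
-- shares and forces later is already an Either, never a hidden exception.
safely :: (a -> b) -> a -> Either SomeException b
safely f x = unsafePerformIO (try (evaluate (f x)))

-- Usage in the spirit of the example above:
divBy :: Integer -> Integer -> Either SomeException Integer
divBy = safely . div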

On Mon, 2009-03-23 at 08:11 -0400, Xiao-Yong Jin wrote:
Hi,
I just feel it is not comfortable to deal with exceptions only within IO monad, so I defined
tryArith :: a -> Either ArithException a
tryArith = unsafePerformIO . try . evaluate
You must not do this. It breaks the semantics of the language.

Other people have given practical reasons why you should not, but a theoretical reason is that you've defined a non-continuous function. That is impossible in the normal semantics of pure functional languages. So you're breaking a promise which we rely on.

It is not "safe". It's almost as bad as a function isBottom, which is the canonical non-continuous function. It's defined by:

isBottom _|_ = True
isBottom _ = False

Of course your tryArith only tests for certain kinds of _|_ value, but in principle the problem is the same.

It is not safe because it distinguishes values that are not supposed to be distinguishable. This invalidates many properties and transformations.

Duncan

Hi Duncan and all,
On Wed, Mar 25, 2009 at 3:52 AM, Duncan Coutts
On Mon, 2009-03-23 at 08:11 -0400, Xiao-Yong Jin wrote:
tryArith :: a -> Either ArithException a
tryArith = unsafePerformIO . try . evaluate
You must not do this. It breaks the semantics of the language.
Other people have given practical reasons why you should not but a theoretical reason is that you've defined a non-continuous function. That is impossible in the normal semantics of pure functional languages. So you're breaking a promise which we rely on. ... Of course your tryArith only tests for certain kinds of _|_ value, but in principle the problem is the same.
That's not *quite* how the semantics of Haskell exceptions are defined, actually, unless I'm misunderstanding something or the thinking about it has changed since the original paper on the topic:

Simon Peyton Jones, Alastair Reid, Tony Hoare, Simon Marlow, Fergus Henderson: "A semantics for imprecise exceptions." SIGPLAN Conference on Programming Language Design and Implementation, 1999. http://www.haskell.org/~simonmar/papers/except.ps.gz

In the semantics given by that paper, the value of an expression of type Int is (a) an actual number, or (b) a set of exceptions that the expression might raise when evaluated, or (c) bottom, which means: when evaluated, the expression may not terminate, or may terminate and raise some arbitrary exception. (The semantic ordering is: everything is larger than bottom; a set of exceptions A is larger than a set of exceptions B iff A is a proper subset of B; numbers are not comparable to sets.)

tryArith is still noncontinuous, though, and nondeterministic, too. Consider:

divzero = 1/0
overflow = 2**10000000
loop = loop - 1

* What is (tryArith (divzero + overflow))? The denotational value of (divzero + overflow) is the set {DivideByZero, Overflow}, so tryArith can return either (Left DivideByZero) or (Left Overflow) -- nondeterminism.

* What is (tryArith (overflow + loop))? The denotational value of (overflow + loop) is bottom, so tryArith can theoretically return any arithmetic exception, or propagate any non-arithmetic exception, or loop forever. In practice, of course, it will either return (Left Overflow), or loop forever, or error out if the compiler notices the loop. However, this still means that it *may* return (Left Overflow) even though the semantical value of (overflow + loop) is bottom, which means that the function is not monotone and thus not continuous.

All the best,
- Benja

On Wed, 2009-03-25 at 18:14 +0100, Benja Fallenstein wrote:
On Wed, Mar 25, 2009 at 3:52 AM, Duncan Coutts
Of course your tryArith only tests for certain kinds of _|_ value, but in principle the problem is the same.
That's not *quite* how the semantics of Haskell exceptions are defined, actually, unless I'm misunderstanding something or the thinking about it has changed since the original paper on the topic:
Yep, that's the semantics I've been working from too. I was not being precise when I said "tests for _|_". As you point out, the semantics of imprecise exceptions distinguishes exceptions from bottom, however pure code cannot make that distinction and so that's why I was lumping them together and saying that tryArith tests for certain kinds of _|_ value.
tryArith is still noncontinuous, though, and nondeterministic, too. Consider:
Those are nice examples. Thanks for that. I was too tired to come up with any :-) Anyway, I hope this is enough to dissuade people from using unsafePerformIO to catch exceptions. Duncan

On Thu, Mar 26, 2009 at 2:40 AM, Duncan Coutts
I was not being precise when I said "tests for _|_". As you point out, the semantics of imprecise exceptions distinguishes exceptions from bottom, however pure code cannot make that distinction and so that's why I was lumping them together and saying that tryArith tests for certain kinds of _|_ value.
Right... I had this mental picture of exceptions being values denotationally smaller than any "real" value of the domain before I read the imprecise exceptions paper, so I interpreted your imprecise description that way =)
Anyway, I hope this is enough to dissuade people from using unsafePerformIO to catch exceptions.
Yes, unfortunately you can't even use it to return Nothing on errors (ie without returning the specific exception) because of the (overflow + loop) issue where you have nondeterminism between returning Nothing and looping forever. I have my own history of wondering "why isn't there in Control.Exception a function that..." for a while before figuring out why there isn't :-) - Benja

Quoth Duncan Coutts
You must not do this. It breaks the semantics of the language.
Other people have given practical reasons why you should not but a theoretical reason is that you've defined a non-continuous function. That is impossible in the normal semantics of pure functional languages. So you're breaking a promise which we rely on.
Could you elaborate a little, in what sense are we (?) relying on it? I actually can't find any responses that make a case against it on a really practical level - I mean, it seems to be taken for granted that it will work as intended, and we're down to whether we ought to have such intentions, as a matter of principle. If you've identified a problem here with semantics that would break normal evaluation, from the perspective of the programmer's intention, then this would be the first practical reason? Donn
It is not "safe". It's almost as bad as a function isBottom, which is the canonical non-continuous function. It's defined by:
isBottom _|_ = True isBottom _ = False
Of course your tryArith only tests for certain kinds of _|_ value, but in principle the problem is the same.
It is not safe because it distinguishes values that are not supposed to be distinguishable. This invalidates many properties and transformations.
Duncan

Donn Cave wrote:
Could you elaborate a little, in what sense are we (?) relying on it?
I actually can't find any responses that make a case against it on a really practical level - I mean, it seems to be taken for granted that it will work as intended, and we're down to whether we ought to have such intentions, as a matter of principle. If you've identified a problem here with semantics that would break normal evaluation, from the perspective of the programmer's intention, then this would be the first practical reason?
Off the top of my head, here is a possible case:

foo :: Int -> Int
foo x = ... -- something that might throw an exception

bar :: Int -> Blah
bar x = ... -- internally use foo and catch the exception

baz :: Int -> Blah
baz = bar . foo

In this case, if the foo in baz throws an exception, I think bar may catch it and attempt to handle it as if the foo in bar had thrown it, but we probably would have expected this exception to go all the way to the top level and halt the program since exceptions are usually due to programmer error. But I didn't test this, and since this isn't something I've ever done before I can't be 100% sure of its behavior.

- Jake
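A concrete (if contrived) version of this scenario, with made-up bodies for foo and bar (the names and the use of div are only for illustration), does behave as suspected:

import Control.Exception (ArithException, try, evaluate)
import System.IO.Unsafe (unsafePerformIO)

foo :: Int -> Int
foo x = 100 `div` x        -- throws DivideByZero for x == 0

-- bar internally applies foo and "handles" the exception with the
-- unsafePerformIO trick under discussion.
bar :: Int -> Int
bar x =
  case unsafePerformIO (try (evaluate (foo x))) of
    Left e  -> handler e
    Right r -> r
  where
    handler :: ArithException -> Int
    handler _ = 0          -- patched-over result

baz :: Int -> Int
baz = bar . foo

-- baz 0: the exception comes from baz's own use of foo, but because bar's
-- argument is only forced inside bar's handler, bar silently turns that
-- programmer error into the value 0 instead of letting it halt the program.
main :: IO ()
main = print (baz 0)       -- prints 0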

On Tue, 2009-03-24 at 23:13 -0700, Donn Cave wrote:
Quoth Duncan Coutts
: You must not do this. It breaks the semantics of the language.
Other people have given practical reasons why you should not but a theoretical reason is that you've defined a non-continuous function. That is impossible in the normal semantics of pure functional languages. So you're breaking a promise which we rely on.
Could you elaborate a little, in what sense are we (?) relying on it?
I actually can't find any responses that make a case against it on a really practical level - I mean, it seems to be taken for granted that it will work as intended,
It shouldn't be.

Consider:

loop = loop
blam = error "blam"
notReallyTry = unsafePerformIO . try . evaluate

Now, normally, we have, for all x, y,

x `seq` y `seq` x = y `seq` x

But we clearly do *not* have this for x = blam, y = loop, since the equality isn't preserved by notReallyTry:

notReallyTry $ blam `seq` loop `seq` blam = Left (ErrorCall "blam")
notReallyTry $ loop `seq` blam = loop

Now, say a compiler sees the definition

foo x y = x `seq` y `seq` x

in one module, and then in a later one

expectToBeTotal = notReallyTry $ foo blam loop

? What happens if the compiler, while compiling foo, notices that x is going to be evaluated eventually anyway, and decides against forcing it before y? What if foo was written as

foo (!x) (!y) = x

? Which order are the evaluations performed in? In a purely functional language, it doesn't matter; but all of a sudden with impure operations like notReallyTry popping up, it does.

jcc

Jonathan Cast
[...]
Could you elaborate more about why this kind of breakage wouldn't happen if 'try' is used in the IO monad as intended? Thanks.

Some examples of what might happen:

If you have more than one possible exception in your code, you don't know which one you will get. It can vary between compilers, optimization levels, program runs, or even evaluating the same expression within one program.

If you have code that has both an infinite loop and an exception you don't know if you'll get the loop or the exception.

Breaking the deterministic behaviour can lead to strange consequences, because the compiler relies on it. For instance, the following code

let x = someExpression
print x
print x

could print different values for x the two times you print it. (This is somewhat unlikely, but could happen when evaluating in parallel with ghc, because there is a small window where two threads might start evaluating x and later update x with two different values.)

-- Lennart

On Wed, 2009-03-25 at 07:39 -0400, Xiao-Yong Jin wrote:
[...]
Could you elaborate more about why this kind of breakage wouldn't happen if 'try' is used in an IO monad as intended?
It would. But it would happen in IO, which is allowed to be non-deterministic. Pure Haskell is not allowed to be non-deterministic. jcc

On Wed, 25 Mar 2009, Jonathan Cast wrote:
On Wed, 2009-03-25 at 07:39 -0400, Xiao-Yong Jin wrote:
Could you elaborate more about why this kind of breakage wouldn't happen if 'try' is used in an IO monad as intended?
It would. But it would happen in IO, which is allowed to be non-deterministic. Pure Haskell is not allowed to be non-deterministic.
In my opinion, 'try' catching 'error's is still a hack, since 'error's (aka bottom) mean programming errors. Thus catching them is debugging, bug hiding or something worse, but not exception handling. 'try' and friends should be limited to exceptional outcomes of IO actions, e.g. "disk full", "file read protected" etc. There might be a variant 'unsafeTry' which can also catch 'error's.

On Wed, 2009-03-25 at 22:32 +0100, Henning Thielemann wrote:
On Wed, 25 Mar 2009, Jonathan Cast wrote:
On Wed, 2009-03-25 at 07:39 -0400, Xiao-Yong Jin wrote:
Could you elaborate more about why this kind of breakage wouldn't happen if 'try' is used in an IO monad as intended?
It would. But it would happen in IO, which is allowed to be non-deterministic. Pure Haskell is not allowed to be non-deterministic.
In my opinion, 'try' catching 'error's is still a hack, since 'error's aka bottom mean programming error. Thus catching them is debugging, bug hiding or something worse, but not exception handling.
100% agreed. The nice thing about the extensible exceptions is that you *can* decline to handle ErrorCall `exceptions'; but errors caught by try should be viewed analogously to signals or asynchronous exceptions. The RTS sometimes detects a bug and (sometimes!) stops execution with an exception; the user sometimes detects the bug and (sometimes!) stops execution with SIGINT. The most you can do is try to limit the amount of secondary damage and give the programmer the best clue where to start hunting. jcc

Henning Thielemann wrote:
Jonathan Cast wrote:
Xiao-Yong Jin wrote:
Could you elaborate more about why this kind of breakage wouldn't happen if 'try' is used in an IO monad as intended?
It would. But it would happen in IO, which is allowed to be non-deterministic. Pure Haskell is not allowed to be non-deterministic.
In my opinion, 'try' catching 'error's is still a hack, since 'error's aka bottom mean programming error. Thus catching them is debugging, bug hiding or something worse, but not exception handling. 'try' and friends should be limited to exceptional outcomes of IO actions, e.g. "disk full", "file read protected" etc. There might be a variant 'unsafeTry' which can also block 'error's.
+1. I have long been disappointed by a number of `error`s which shouldn't be. For example, the fact that `head` and `div` are not total strikes me as a (solvable) weakness of type checking, rather than things that should occur as programmer errors/exceptions at runtime. The use of `error` in these cases muddies the waters and leads to a laissez-faire attitude about when it's acceptable to throw one's hands up in despair and use `error` rather than writing a better function. As nice as exceptional control flow is, I can't help but think there was a reason the H98 committee said that `error` should be uncatchable. That we'd like to be able to catch errors in an interpreter or debugger is the exception that proves the rule. Extensible exceptions are impressive, but the existence of exceptions outside of type annotations says something about purity. Let us not err in the direction of Java, but we really should plug that hole. -- Live well, ~wren

wren ng thornton wrote:
I have long been disappointed by a number of `error`s which shouldn't be. For example, the fact that `head` and `div` are not total strikes me as a (solvable) weakness of type checking, rather than things that should occur as programmer errors/exceptions at runtime. The use of `error` in these cases muddies the waters and leads to a laissez-faire attitude about when it's acceptable to throw one's hands up in despair and use `error` rather than writing a better function.
head uses "error" in precisely the correct, intended fashion. head has a precondition (only call on non-empty lists) and the "error" is just there to give you some hint that you made a mistake - got the precondition wrong. It's certainly not intended to be catchable. And that is a correct use of error. There are programming styles which avoid using 'head'. You are free to use those if you don't like it. I myself am content to use 'head' on lists which I know are guaranteed to be non-empty. Jules

On Thu, 26 Mar 2009, Jules Bean wrote:
There are programming styles which avoid using 'head'. You are free to use those if you don't like it. I myself am content to use 'head' on lists which I know are guaranteed to be non-empty.
Since I became aware that viewl (Data.Sequence) and uncons (ByteString) are appropriate in most cases where I used 'head' (and 'tail' and 'null') before, I use 'viewL' from my utility-ht package. Data.List.HT.viewL is total. I also often use Data.List.HT.switchL.
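For readers without the package at hand, a local analogue of such total deconstructors might look like this (signatures inferred from the description above, not checked against utility-ht):

viewL :: [a] -> Maybe (a, [a])
viewL (x:xs) = Just (x, xs)
viewL []     = Nothing

switchL :: b -> (a -> [a] -> b) -> [a] -> b
switchL nil _    []     = nil
switchL _   cons (x:xs) = cons x xs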

Jules Bean wrote:
wren ng thornton wrote:
I have long been disappointed by a number of `error`s which shouldn't be. For example, the fact that `head` and `div` are not total strikes me as a (solvable) weakness of type checking, rather than things that should occur as programmer errors/exceptions at runtime. The use of `error` in these cases muddies the waters and leads to a laissez-faire attitude about when it's acceptable to throw one's hands up in despair and use `error` rather than writing a better function.
head uses "error" in precisely the correct, intended fashion.
head has a precondition (only call on non-empty lists)
And that is *exactly* my complaint: the precondition is not verified by the compiler. Therefore it does not exist in the semantic system, which is why the error screws up semantic analysis. The type of head should not be [a] -> a + Error, it should be (a:[a]) -> a. With the latter type the compiler can ensure the precondition will be proved before calling head, thus eliminating erroneous calls. It's a static error, detectable statically, and yet it's deferred to the runtime. I'd much rather the compiler catch my errors than needing to create an extensive debugging suite and running it after compilation. Is this not the promise of purity?
There are programming styles which avoid using 'head'. You are free to use those if you don't like it. I myself am content to use 'head' on lists which I know are guaranteed to be non-empty.
Avoiding head is not a tenable solution. The syntax for case matching is insufficiently first-class, so avoiding the function often leads to contortions, illegible code, or reinventing the problem. Moreover, this isn't an isolated case; fromJust is another canonical example, as is any other partial algebra-homomorphism. Many mathematical functions like div are also subject to trivially-verified invariants on their arguments. Functions like uncons and viewL are nicer (because they're safe), but they can have overhead because they're unnecessarily complete (e.g. the Maybe wrapper can be avoided if we know a-priori that Just will be the constructor used). -- Live well, ~wren
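A hedged sketch of how the "(a:[a]) -> a" idea can be approximated without new type-system features: a dedicated non-empty list type whose head is total, so the proof obligation moves to whoever constructs the value (NonEmpty and the operators here are illustrative, not a library type):

data NonEmpty a = a :| [a]

neHead :: NonEmpty a -> a
neHead (x :| _) = x

-- Constructing a NonEmpty from an ordinary list is where the empty case
-- must be faced, once, instead of at every call to head.
nonEmpty :: [a] -> Maybe (NonEmpty a)
nonEmpty (x:xs) = Just (x :| xs)
nonEmpty []     = Nothing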

On Thu, Mar 26, 2009 at 5:23 PM, wren ng thornton
Jules Bean wrote:
wren ng thornton wrote:
I have long been disappointed by a number of `error`s which shouldn't be. For example, the fact that `head` and `div` are not total strikes me as a (solvable) weakness of type checking, rather than things that should occur as programmer errors/exceptions at runtime. The use of `error` in these cases muddies the waters and leads to a laissez-faire attitude about when it's acceptable to throw one's hands up in despair and use `error` rather than writing a better function.
head uses "error" in precisely the correct, intended fashion.
head has a precondition (only call on non-empty lists)
And that is *exactly* my complaint: the precondition is not verified by the compiler. Therefore it does not exist in the semantic system, which is why the error screws up semantic analysis.
The type of head should not be [a] -> a + Error, it should be (a:[a]) -> a. With the latter type the compiler can ensure the precondition will be proved before calling head, thus eliminating erroneous calls.
It's a static error, detectable statically, and yet it's deferred to the runtime. I'd much rather the compiler catch my errors than needing to create an extensive debugging suite and running it after compilation. Is this not the promise of purity?
Ultimately, it's not detectable statically, is it? Consider

import Control.Applicative

main = do
  f <- lines <$> readFile "foobar"
  print (head (head f))

You can't know whether or not head will crash until runtime. Alex

On Thu, Mar 26, 2009 at 6:42 PM, Alexander Dunlap <alexander.dunlap@gmail.com> wrote:
On Thu, Mar 26, 2009 at 5:23 PM, wren ng thornton wrote:
It's a static error, detectable statically, and yet it's deferred to the runtime. I'd much rather the compiler catch my errors than needing to create an extensive debugging suite and running it after compilation. Is this not the promise of purity?
Ultimately, it's not detectable statically, is it? Consider
import Control.Applicative
main = do
  f <- lines <$> readFile "foobar"
  print (head (head f))
You can't know whether or not head will crash until runtime.
Static checkers are usually conservative, so this would be rejected. In fact, it's not always essential to depend on external information; eg. this program:

(\x y -> y) (\x -> x x) (\x -> x)

Behaves just like the identity function, so surely it should have type a -> a, but it is rejected because type checking is (and must be) conservative. Keeping constraints around that check that head is well-formed is a pretty hard thing to do. Case expressions are easier to check for totality, but have the disadvantages that wren mentions. Much as we would like to pressure the language to support static constraints, I don't think we are yet in a position to. It's not the kind of thing you can just throw on and be done with it; see Conor McBride's current project for an example of the issues involved. Luke

Luke Palmer wrote:
Alexander Dunlap wrote:
Ultimately, it's not detectable statically, is it? Consider
import Control.Applicative
main = do
  f <- lines <$> readFile "foobar"
  print (head (head f))
You can't know whether or not head will crash until runtime.
Static checkers are usually conservative, so this would be rejected. In fact, it's not always essential to depend on external information; eg. this program:
(\x y -> y) (\x -> x x) (\x -> x)
Behaves just like the identity function, so surely it should have type a -> a, but it is rejected because type checking is (and must be) conservative.
Keeping constraints around that check that head is well-formed is a pretty hard thing to do. Case expressions are easier to check for totality, but have the disadvantages that wren mentions.
My idea amounts to trying to make case expressions more first-class than they are now. As Luke says, we'd have to be conservative about it (until the dependent-types and total-functional-programming folks do the impossible), but I think there's still plenty of room for biting off a useful chunk of the domain without falling off that cliff. -- Live well, ~wren

Alexander Dunlap wrote:
wren ng thornton wrote:
Jules Bean wrote:
head uses "error" in precisely the correct, intended fashion.
head has a precondition (only call on non-empty lists)
And that is *exactly* my complaint: the precondition is not verified by the compiler. Therefore it does not exist in the semantic system, which is why the error screws up semantic analysis.
The type of head should not be [a] -> a + Error, it should be (a:[a]) -> a. With the latter type the compiler can ensure the precondition will be proved before calling head, thus eliminating erroneous calls.
It's a static error, detectable statically, and yet it's deferred to the runtime. I'd much rather the compiler catch my errors than needing to create an extensive debugging suite and running it after compilation. Is this not the promise of purity?
Ultimately, it's not detectable statically, is it? Consider
import Control.Applicative
main = do
  f <- lines <$> readFile "foobar"
  print (head (head f))
You can't know whether or not head will crash until runtime.
The issue is the obligation of proof, not what the actual runtime values are. Types give a coarsening of the actual runtime values, but they're still fine-grained enough to catch many errors (e.g. we know readFile will return some String and not an Int, even if we don't know which particular string it'll return). The proposal is to enhance the vocabulary of types such that we can require certain new proofs (e.g. having head require the proof that its argument is non-empty).

The only way to discharge the obligation is to provide a proof. In this case, the way we provide a proof is by using a case expression: it knows the actual runtime value because it evaluates things enough to match the patterns. For example, consider

let x = ...
in case x of
     x1@p1 -> f x1 ...
     x2@p2 -> g x2 ...

Under the current system x, x1, and x2 all have the same type, by definition. We can relax this, however, since it is guaranteed that by the time x1 enters scope it will be bound to a value that adheres to the pattern p1; and we similarly know that x2 must be bound to a value that adheres to p2 (removing things that also match p1). With that in mind, if the function f requires a proof of p1 about x1 (or g requires a proof of p2 about x2), those obligations are discharged because the calls to f and g are guarded by the case expression.

Given such a type system, your example would fail to type check because lines does not offer the guarantee that the return value is of type ((a:[a]):[[a]]). That is, the compiler will tell you that you need to provide proofs, i.e. handle all cases. As always, the devil is in the details; but that's what research is for :) -- Live well, ~wren
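A small sketch of that "proof by case expression" applied to Alexander's readFile example: every empty case is written out, so no partial head is needed and nothing is deferred to a runtime error (firstChar is an illustrative name):

import Control.Applicative ((<$>))

firstChar :: String -> Maybe Char
firstChar contents = case lines contents of
  (l:_) -> case l of
    (c:_) -> Just c
    []    -> Nothing
  []    -> Nothing

main :: IO ()
main = do
  f <- firstChar <$> readFile "foobar"
  print f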

On Thu, 26 Mar 2009, wren ng thornton wrote:
Functions like uncons and viewL are nicer (because they're safe), but they can have overhead because they're unnecessarily complete (e.g. the Maybe wrapper can be avoided if we know a-priori that Just will be the constructor used).
If you know it's always Just, then don't use Maybe. There must be some point in your program from which it is certain that it is always Just, and that is the point at which to leave Maybe. When I searched my old code for fromJust and head and reviewed it carefully, I could eliminate them in most cases. In another thread (Grouping - Map/Reduce) there was a zipWithInf function which needed a lazy pattern match on (a:as). This indicates to me that lists are the wrong data structure there, and that it would be better to replace them with an always-infinite list type with only one 'cons' constructor.
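A sketch of the single-constructor infinite list Henning alludes to; with no nil case, zipping needs no lazy pattern and no partial match (Stream and the function names are illustrative):

data Stream a = Cons a (Stream a)

zipWithS :: (a -> b -> c) -> Stream a -> Stream b -> Stream c
zipWithS f (Cons x xs) (Cons y ys) = Cons (f x y) (zipWithS f xs ys)

-- One way to build such a stream without ever mentioning an empty case.
iterateS :: (a -> a) -> a -> Stream a
iterateS f x = Cons x (iterateS f (f x))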

On Fri, 2009-03-27 at 12:26 +0100, Henning Thielemann wrote:
On Thu, 26 Mar 2009, wren ng thornton wrote:
Functions like uncons and viewL are nicer (because they're safe), but they can have overhead because they're unnecessarily complete (e.g. the Maybe wrapper can be avoided if we know a-priori that Just will be the constructor used).
If you know it's always Just, then don't use Maybe. There must be some point in your program from which it is certain that it is always Just, and that is the point at which to leave Maybe. When I searched my old code for fromJust and head and reviewed it carefully, I could eliminate them in most cases.
In another thread (Grouping - Map/Reduce)
Yes, grouping is the one where I most often find the need for head or partial patterns. The function group produces a list of non-empty lists but that is not reflected in the type. On the other hand, actually reflecting it in the type would make it more awkward:

group :: Eq a => [a] -> [(a,[a])]

Duncan
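A hedged sketch of how a group with that signature might be written: each run is returned as its first element paired with the remaining equal elements, so non-emptiness is encoded in the pair rather than asserted (groupPairs is an illustrative name):

groupPairs :: Eq a => [a] -> [(a, [a])]
groupPairs []     = []
groupPairs (x:xs) = (x, same) : groupPairs rest
  where (same, rest) = span (== x) xs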

Duncan Coutts
Yes, grouping is the one where I most often find the need for head or partial patterns. The function group produces a list of non-empty lists but that is not reflected in the type. On the other hand, actually reflecting it in the type would make it more awkward:
group :: Eq a => [a] -> [(a,[a])]
group :: Eq a => [a] -> [(a,Int)] ? This is often what I want anyway :-) Of course, this only works for sane Eq instances. -k -- If I haven't seen further, it is by standing in the footprints of giants
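Ketil's counting variant, sketched the same way as a run-length encoding (and, as he says, only sensible for well-behaved Eq instances); groupCounts is an illustrative name:

groupCounts :: Eq a => [a] -> [(a, Int)]
groupCounts []     = []
groupCounts (x:xs) = (x, 1 + length same) : groupCounts rest
  where (same, rest) = span (== x) xs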

wren ng thornton wrote:
The type of head should not be [a] -> a + Error, it should be (a:[a]) -> a. With the latter type the compiler can ensure the precondition will be proved before calling head, thus eliminating erroneous calls.
Yes, but you know and I know that's not haskell. I'm talking about haskell. In haskell - a language which does not fully support dependent types - head is both necessary and useful.
It's a static error, detectable statically, and yet it's deferred to the runtime. I'd much rather the compiler catch my errors than needing to create an extensive debugging suite and running it after compilation.
It is not detectable statically. It is only detectable statically for a class of programs. Admittedly, for that class of programs, ndm's fine tool "Catch" is a very clever thing.
Is this not the promise of purity?
No. Purity and partiality are orthogonal. Nobody promised pure languages would be total.
Functions like uncons and viewL are nicer (because they're safe), but they can have overhead because they're unnecessarily complete (e.g. the Maybe wrapper can be avoided if we know a-priori that Just will be the constructor used).
uncons and viewL are totally irrelevant. They're just a convenient syntax around case matching. Jules

Quoth Lennart Augustsson
Some examples of what might happen:
OK, these are interesting phenomena. From a practical point of view, though, I could see someone weighing the potential costs and benefits of an exception handler outside IO like this, and these effects might not even carry all that much weight. Does that make sense, or is the problem really more dire than I understand? I mean, it isn't the first time I've seen this explained in terms of predictability in a case where there are two possible exceptions, so I have to take it for granted that this is a compelling argument to some, but it is evidently a matter of perspective. Donn
If you have more than one possible exception in your code, you don't know which one you will get. It can vary between compilers, optimization levels, program runs, or even evaluating the same expression within one program.
If you have code that has both an infinite loop and an exception, you don't know whether you'll get the loop or the exception.
Breaking the deterministic behaviour can lead to strange consequences, because the compiler relies on it. For instance, the following code

let x = someExpression
print x
print x

could print different values for x the two times you print it. (This is somewhat unlikely, but could happen when evaluating in parallel with ghc, because there is a small window where two threads might start evaluating x and later update x with two different values.)
-- Lennart
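A hedged illustration of Lennart's first point just above, inlining the unsafePerformIO/try/evaluate combination under discussion: when two exceptions are reachable from one pure expression, which Left you observe is up to the evaluation order the generated code happens to pick (the name ambiguous is illustrative):

import Control.Exception (ArithException (..), evaluate, throw, try)
import System.IO.Unsafe (unsafePerformIO)

ambiguous :: Either ArithException Integer
ambiguous =
  unsafePerformIO . try . evaluate $ throw Overflow + throw DivideByZero
  -- may be Left Overflow or Left DivideByZero, depending on which operand
  -- is forced first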

On Wed, 2009-03-25 at 09:15 -0700, Donn Cave wrote:
Quoth Lennart Augustsson
: Some examples of what might happen:
OK, these are interesting phenomena. From a practical point of view, though, I could see someone weighing the potential costs and benefits of an exception handler outside IO like this, and these effects might not even carry all that much weight.
Well, sure. From a purely `practical' point of view, I don't know why you would even use a purely functional language (as opposed to trying to minimize side effects in an impure language). But if you're not concerned about purity, or ease of equational reasoning, or accuracy of a wide range of compiler transformations/optimizations/because it makes the generated code pretty to sort the formal parameters by name before forcing them-implementation decisions, then please do not use Haskell. There are many other languages that are suitable for what you want to do, and it would be a courtesy to those of us who *do* use Haskell because it is purely functional, not to have to explicitly exclude your library from our picture of the language's capabilities. jcc

If you think non-deterministic programs are acceptable, then you can use unsafePerformIO this way. But future versions of compilers may make them even more unpredictable, so beware.

I think that if the exception handler does not look at exactly what exception was thrown, then the program will just vary in definedness (i.e., comparable in the domain approximation ordering). Many people consider this acceptable.

-- Lennart
On Wed, Mar 25, 2009 at 4:15 PM, Donn Cave
Quoth Lennart Augustsson
: Some examples of what might happen:
OK, these are interesting phenomena. From a practical point of view, though, I could see someone weighing the potential costs and benefits of an exception handler outside IO like this, and these effects might not even carry all that much weight. Does that make sense, or is the problem really more dire than I understand?
I mean, it isn't the first time I've seen this explained in terms of predictability in a case where there are two possible exceptions, so I have to take for granted that this is a compelling argument to some, but it is evidently a matter of perspective.
Donn
If you have more than one possible exception in your code, you don't know which one you will get. It can vary between compilers, optimization levels, program runs, or even evaluating the same expression within one program.
If you have code that has both an infinite loop and an exception, you don't know whether you'll get the loop or the exception.
Breaking the deterministic behaviour can lead to strange consequences, because the compiler relies on it. For instance, the following code

let x = someExpression
print x
print x

could print different values for x the two times you print it. (This is somewhat unlikely, but could happen when evaluating in parallel with ghc, because there is a small window where two threads might start evaluating x and later update x with two different values.)
-- Lennart
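A sketch of the kind of handler Lennart's "many people consider this acceptable" remark refers to: one that never inspects which exception arrived (tryAny is an illustrative name, not an existing library function):

import Control.Exception (SomeException (..), evaluate, handle)
import System.IO.Unsafe (unsafePerformIO)

-- Every exception collapses to Nothing; since the handler cannot
-- distinguish them, it cannot turn the choice of exception into
-- different observable values.
tryAny :: a -> Maybe a
tryAny x =
  unsafePerformIO $
    handle (\(SomeException _) -> return Nothing) (fmap Just (evaluate x))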

Quoth Jonathan Cast
On Wed, 2009-03-25 at 09:15 -0700, Donn Cave wrote:
OK, these are interesting phenomena. From a practical point of view, though, I could see someone weighing the potential costs and benefits of an exception handler outside IO like this, and these effects might not even carry all that much weight.
Well, sure. From a purely `practical' point of view, I don't know why you would even use a purely functional language (as opposed to trying to minimize side effects in an impure language). But if you're not concerned about purity, or ease of equational reasoning, or accuracy of a wide range of compiler transformations/optimizations/because it makes the generated code pretty to sort the formal parameters by name before forcing them-implementation decisions, then please do not use Haskell. There are many other languages that are suitable for what you want to do, and it would be a courtesy to those of us who *do* use Haskell because it is purely functional, not to have to explicitly exclude your library from our picture of the language's capabilities.
Concerned about purity, ease of equational reasoning, etc.? Sure ... but I guess hoping we can agree on practical reasons for interest in these things, as opposed to, or at least in addition to, their esthetic or religious appeal. I'm guessing you would likewise, if only because a solely esthetic appeal is a difficult angle to pursue because people's esthetic sensibilities aren't guaranteed to line up very well. And in fact the way I read the responses so far in this thread, the range of attitudes towards the matter seems pretty wide to me, among people whose views I respect. So I thought it would be interesting to explore statements like "you must not do this", and "pure Haskell is not allowed to be non-deterministic", in terms of practical effects. No one would make a statement like that and not hope to be challenged on it? Donn

On Wed, 2009-03-25 at 10:00 -0700, Donn Cave wrote:
Quoth Jonathan Cast
: On Wed, 2009-03-25 at 09:15 -0700, Donn Cave wrote:
OK, these are interesting phenomena. From a practical point of view, though, I could see someone weighing the potential costs and benefits of an exception handler outside IO like this, and these effects might not even carry all that much weight.
Well, sure. From a purely `practical' point of view, I don't know why you would even use a purely functional language (as opposed to trying to minimize side effects in an impure language). But if you're not concerned about purity, or ease of equational reasoning, or accuracy of a wide range of compiler transformations/optimizations/because it makes the generated code pretty to sort the formal parameters by name before forcing them-implementation decisions, then please do not use Haskell. There are many other languages that are suitable for what you want to do, and it would be a courtesy to those of us who *do* use Haskell because it is purely functional, not to have to explicitly exclude your library from our picture of the language's capabilities.
Concerned about purity, ease of equational reasoning, etc.? Sure ... but I guess hoping we can agree on practical reasons for interest in these things, as opposed to, or at least in addition to, their esthetic or religious appeal. I'm guessing you would likewise,
Nope. You must not have been following my positions in previous discussions. I am committed to functional purity for its own sake (just as I am committed to software development for its own sake; don't you *dare* suggest using Global Script!)
if only because a solely esthetic appeal is a difficult angle to pursue because people's esthetic sensibilities aren't guaranteed to line up very well. And in fact the way I read the responses so far in this thread, the range of attitudes towards the matter seems pretty wide to me, among people whose views I respect.
So I thought it would be interesting to explore statements like "you must not do this", and "pure Haskell is not allowed to be non-deterministic", in terms of practical effects. No one would make a statement like that and not hope to be challenged on it?
What? Challenged by people who think Haskell should not be a purely functional language? I mean, that's kind of what it is. Again, if you don't want to use a purely functional language, there are *lots* of impure languages out there. There's no need to turn Haskell into one of them. jcc

Quoth Jonathan Cast
On Wed, 2009-03-25 at 10:00 -0700, Donn Cave wrote:
So I thought it would be interesting to explore statements like "you must not do this", and "pure Haskell is not allowed to be non-deterministic", in terms of practical effects. No one would make a statement like that and not hope to be challenged on it?
What? Challenged by people who think Haskell should not be a purely functional language? I mean, that's kind of what it is. Again, if you don't want to use a purely functional language, there are *lots* of impure languages out there. There's no need to turn Haskell into one of them.
OK. You may note that I didn't actually pose any questions to you until engaged directly, having already surmised that answers were not forthcoming. But for the record, I should note that I haven't proposed any changes to Haskell, recently or that I recall even in the distant past except for one idea I had about pattern matching in a lambda expression. Donn
participants (17):
- Alexander Dunlap
- Benja Fallenstein
- ChrisK
- Daniel Yokomizo
- Donn Cave
- Duncan Coutts
- Henning Thielemann
- Jake McArthur
- Jake McArthur
- Jonathan Cast
- Jules Bean
- Kemps-Benedix Torsten
- Ketil Malde
- Lennart Augustsson
- Luke Palmer
- wren ng thornton
- Xiao-Yong Jin