Re: [Haskell-cafe] Are bottoms ever natural?

Siddharth, how would you deal with functions that terminate for some arguments/inputs but do not terminate for the others?
Alexey.
On Tue, 2017-12-19 at 07:20 +0000, (IIIT) Siddharth Bhat wrote:
Is that really true, though? Usually when you have an infinite loop, you have progress of some sort. Infinite loops with no side effects can be removed from the program according to the C standard, for example. So, in general, we should allow programmers to express termination / progress, right? At that point, no computation ever "bottoms out". Shouldn't a hypothetical purely functional programming language better control this (by, e.g., forcing totality)? It seems like we lose much of the benefits of purity by muddying the waters with divergence. From an optimising compiler perspective, Haskell is in some weird lose-lose space: you lose out on traditional compiler techniques that work on strict code, but you also do not get the awesome stuff you could do with "pure" computation, because bottom lurks everywhere. What neat optimisations can be done on Haskell that can't be done in a traditional imperative language? I genuinely want to know. What are your thoughts on this? Cheers, Siddharth
On Tue 19 Dec, 2017, 08:09 Brandon Allbery wrote:
Define "natural".
You might want to look into the concept of Turing completeness. One could define a subset of Haskell in which bottoms cannot occur... but it turns out there are a lot of useful things you can't do in such a language. (In strict languages, these often are expressed as infinite loops of one kind or another. Note also that any dependency on external input is an infinite loop from the perspective of the language, since it can only be broken by the external entity providing the input.)
On Tue, Dec 19, 2017 at 1:47 AM, (IIIT) Siddharth Bhat wrote:
I've been thinking about the issue of purity and speculation lately, and from what little I have read, it looks like the presence of bottom hiding inside a lazy value is a huge issue.
How "natural" is it for bottoms to exist? If one were to change Haskell and declare that any Haskell value can be speculated upon, what ramifications does this have?
Is it totally broken? Is it "correct" but makes programming unpleasant? What's the catch?
Thanks, Siddharth

Pure functions can return "undefined" for some argument values. So, such a function is partially defined: its domain has "gaps". This can be coded in math, to avoid "undefined" (bottom), like

x = {-100..100, 105, 107..200}

It's impossible in Haskell, but IMHO it's possible in F*, due to dependent types and refinement types ;) IMHO this is the reason to have bottom: the possibility to return "undefined" with a simple, correct type in the signature (hidden bottom). If you have a way to code argument domains, there is no need to have bottom for pure functions to "signal" gaps - gaps happen at run time :) This is one of the reasons for bottom to exist, as far as I understand. Maybe there are other reasons :)

===
Best regards, Paul
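Paul's contrast can be sketched in Haskell (a minimal sketch with hypothetical names, using the gapped set x from the message as the domain): a partial function whose signature hides the bottom, next to one that makes the gap explicit in the type.

```haskell
-- A partial function: the type claims totality, but the gaps in the
-- domain are hidden bottoms that only surface at run time.
gapped :: Int -> Int
gapped n
  | n >= -100 && n <= 100             = n * 2
  | n == 105 || (n >= 107 && n <= 200) = n + 1
  | otherwise                          = error "outside the domain"

-- The same function with the gap made explicit in the type: callers
-- must handle Nothing, so no bottom is needed to "signal" the gap.
gappedSafe :: Int -> Maybe Int
gappedSafe n
  | n >= -100 && n <= 100             = Just (n * 2)
  | n == 105 || (n >= 107 && n <= 200) = Just (n + 1)
  | otherwise                          = Nothing

main :: IO ()
main = do
  print (gappedSafe 50)   -- Just 100
  print (gappedSafe 106)  -- Nothing: 106 is a gap, not a crash
```

The second version is what "coding the argument domain" buys you: the gap moves from a run-time error into the type.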
_______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.

In the strictest sense, the math will bottom in exactly the same scenarios that Haskell will, but a human is unlikely to make that mistake, because the result is nonsensical. If the set x you've provided is the domain for a given function g, then it doesn't make sense to reason about g(106): since 106 isn't in the domain, g(106) cannot exist in the codomain. Yet, if you're trying to graph the function on a plane, you have to deal with it somehow, because 106 will exist on the plane.
Getting to the original topic, it is impossible to avoid divergence in general, even in mathematics, which is in no way restrained by a compiler (although it may be limited by the operating environment ;). Languages (or maths) which eschew divergence are not very powerful.
Consider that arithmetic diverges, because division is partial. But we humans often learn to deal with bottoms implicitly, by eliminating them from consideration in the first place. That might be analogous to refactoring the program to remove all offending cases, and recompiling.
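A minimal Haskell sketch of "eliminating the offending case in the first place" (the function name is hypothetical): `head` bottoms on `[]`, but switching to a type with no empty case removes the bottom instead of handling it.

```haskell
import Data.List.NonEmpty (NonEmpty ((:|)))
import qualified Data.List.NonEmpty as NE

-- head [] throws (bottoms) at run time. NE.head is total, because
-- the offending case - the empty list - cannot be constructed at all.
firstElem :: NonEmpty Int -> Int
firstElem = NE.head

main :: IO ()
main = print (firstElem (1 :| [2, 3]))  -- 1
```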
-- Sent from my Android device with K-9 Mail. Please excuse my brevity.

So, I have two opinions to share on this:
Regarding the plane example, it is wrong to attempt to graph it on a plane,
precisely because the domain is not the plane.
I am confused by your assertion that it is impossible to avoid divergence
in mathematics: we do not define division as a *partial* function. Rather
we define it as a *total* function on a *restricted domain*.
So, what one would need is the ability to create these restricted domains,
which certain languages have as far as I am aware.
At that point, now that we track our domains and codomains correctly, all
should work out?
I would still like an answer about which interesting transformations are
enabled by having purity and laziness and are not encumbered by the
presence of bottoms.
Cheers,
Siddharth
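One concrete illustration of the tension in the question above, as a hedged sketch: under lazy evaluation a bottom in an unused argument is harmless, which is exactly what makes blind speculation unsound.

```haskell
-- Laziness means an argument is only evaluated on demand, so a
-- bottom that is never demanded never fires:
constOne :: a -> Int
constOne _ = 1

main :: IO ()
main = do
  print (constOne undefined)  -- 1: undefined is never forced
  -- A compiler that speculatively evaluated arguments would turn
  -- this terminating program into a diverging one. That is the
  -- sense in which lurking bottoms block otherwise-valid
  -- transformations on pure code.
  print (constOne (length (repeat ())))  -- still 1
```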
-- Sending this from my phone, please excuse any typos!

Programs interface with the real world if they are to be useful, and the
real world contains bottoms (including literal ones, ultimately: black
holes). Mathematics is our cognitive tool, not the real world, and can
reject bottoms at least up to a point.
--
brandon s allbery kf8nh
sine nomine associates
allbery.b@gmail.com  ballbery@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad
http://sinenomine.net

Dependently typed languages usually have a totality checker and a notion of
codata. Idris seems most likely to have exploited that for optimizations,
but I haven't seen anything about interesting transformations. It's strict,
of course, but presumably if you're total you could play with that.
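A rough Haskell analogue of the codata point (a sketch, not Idris syntax): a totality checker for codata accepts definitions that are productive, i.e. each step yields a constructor, even though the whole object never terminates.

```haskell
-- nats is an infinite object, but every finite observation of it
-- terminates, because each recursive step first produces a (:) cell
-- before recursing. A codata totality checker accepts definitions
-- like this for that reason: productivity, not termination.
nats :: [Integer]
nats = 0 : map (+ 1) nats

main :: IO ()
main = print (take 5 nats)  -- [0,1,2,3,4]
```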

On 19.12.2017 at 17:51, Siddharth Bhat wrote:
So, I have two opinions to share on this: Regarding the plane example, it is wrong to attempt to graph it on a plane, precisely because the domain is not the plane.
People graph partial functions all the time, e.g. the cotangent: https://upload.wikimedia.org/wikipedia/commons/b/bf/Cotangent.svg They simply do not graph the places where the function is undefined.
I am confused by your assertion that it is impossible to avoid divergence in mathematics: we do not define division as a *partial* function. Rather we define it as a *total* function on a *restricted domain*.
Which amounts to the same thing in practice. You still need to check that your denominator is non-zero in a division. Whether you do that by a type check or a runtime check amounts to roughly the same amount of work for the programmer.
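The two checks being compared can be sketched in Haskell (hypothetical names): a run-time check repeated at every division, versus a type-level boundary where the check happens once, when the `NonZero` value is built.

```haskell
-- Option 1: run-time check, repeated at every call site.
divChecked :: Integer -> Integer -> Maybe Integer
divChecked _ 0 = Nothing
divChecked n d = Just (n `div` d)

-- Option 2: check once at the type boundary. The only way to obtain
-- a NonZero is through nonZero, so divTotal needs no check and
-- cannot bottom.
newtype NonZero = NonZero Integer

nonZero :: Integer -> Maybe NonZero
nonZero 0 = Nothing
nonZero d = Just (NonZero d)

divTotal :: Integer -> NonZero -> Integer
divTotal n (NonZero d) = n `div` d

main :: IO ()
main = do
  print (divChecked 10 0)                 -- Nothing
  print (fmap (divTotal 10) (nonZero 3))  -- Just 3
```

Either way the check exists somewhere, which is the point being made: the type-level version just moves the same work to a different place.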
So, what one would need is the ability to create these restricted domains, which certain languages have as far as I am aware.
Yes, but type systems are usually more clunky than assertions. You can take apart assertions if they are connected by AND, OR, etc. Type systems usually do not have that, and those that do are no longer decidable, which is a no-go for most language designers.
At that point, now that we track our domains and codomains correctly, all should work out?
I would still like an answer to interesting transformations that are enabled by having purity and laziness and are not encumbered by the presence of bottoms.
If you have laziness, you do not know whether the computation will terminate before you try, and no amount of type checking will change that. Unless your type language is powerful enough to express the difference between a terminating and a nonterminating computation. Which means that now it's the compiler that needs to deal with potentially-bottom values, instead of the runtime. Which is a slightly different point on the trade-off spectrum, but still just a trade-off. Not much better than what we have right now, I fear.
Regards, Jo
participants (7)
- Alexey Muranov
- Baa
- Brandon Allbery
- Evan Laforge
- Joachim Durchholz
- Jonathan Paugh
- Siddharth Bhat