
On 25.10.2016 at 09:12, Rik Howard wrote:
whether it should have opaque types (and why), whether there should be subtypes or not, how the type system is supposed to deal with arithmetic that mixes almost-compatible integer types and floating-point types. That's just off the top of my head; I am pretty sure that there are other issues. It is hard to discuss merits or problems at this stage, since all of these issues tend to influence each other.
There are gaps in the design. This step is to show that at least those considerations haven't been precluded in some way by the mix of features.
The question is not whether these things are precluded; the question is how you want to tackle them. The paper doesn't even state design goals here.
One thing I have heard is that effects, subtypes and type system soundness do not mix well. Subtypes are too useful to ignore, and unsound type systems are not worth the effort, so I find it a bit surprising that the paper has nothing to say about the issue.
I'm not sure what you mean by effects (may I ask you to elaborate? Or side effects maybe?)
Yes.
but subtypes would appear to offer an intuitive analogy with set theory.
That's the least interesting part of subtypes, actually. The salient point of this and some other features is that they make it easier to reason about a given program's properties, at the expense of making programming harder. (One of the major decisions in designing a new programming language is where exactly to place that trade-off, and a great deal of ingenuity has gone into easing the burden on the programmer.)
It would mean extra look-ups in the deciding function to check inclusion, possibly using some sort of 'narrowable' type, and that would make showing soundness that much more involved. Are there other beyond-the-norm complications?
Lots. The basic concept of subtypes is simple, but establishing a definition of "subtype" that is both useful and sound is far from trivial. For example, mutable circles and mutable ellipses are not in a subtype relationship to each other if there is an updating "scale" operation with an x and y scaling factor (you cannot guarantee that a scaled circle stays circular). The design space for dealing with this is far from fully explored. Also, subtypes and binary operators do not really mix; google for "parallel type hierarchy". (The core of the problem is that if you make Byte a subtype of Word, declaring the (+) operator in Word as Word -> Word will preclude Byte from being a subtype, because you want a covariant signature in Byte but that violates the subtyping rules for functions. So you need parametric polymorphism, but then you cannot use the simple methods for subtyping anymore.)
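(To make both problems concrete, here is a small Scala sketch; Scala is used purely for illustration, and the names MutableEllipse, MutableCircle and Word are invented for the example.)

    // Hypothetical mutable shapes, illustrating the circle/ellipse problem.
    class MutableEllipse(var rx: Double, var ry: Double) {
      // An updating scale operation with independent x and y factors.
      def scale(fx: Double, fy: Double): Unit = { rx *= fx; ry *= fy }
    }

    // If MutableCircle is a subtype of MutableEllipse, it inherits scale,
    // but scale(2.0, 3.0) leaves rx != ry: the "circle" is no longer circular.
    // So mutable circles cannot soundly be subtypes of mutable ellipses
    // once such an operation exists.
    class MutableCircle(r: Double) extends MutableEllipse(r, r)

    // The binary-operator problem: suppose Byte were a subtype of Word.
    trait Word { def +(other: Word): Word }
    // Byte would like the covariant signature  def +(other: Byte): Byte,
    // but narrowing a parameter type is not a legal override (function
    // arguments must be contravariant), so the "natural" Byte interface
    // breaks the subtype relation. One escape is parametric (F-bounded)
    // polymorphism, e.g.  trait Num[T <: Num[T]] { def +(other: T): T },
    // at which point the simple subtyping story is gone.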
Are you aware of how "monadic IO" became the standard in Haskell? It was one of three competing approaches; AFAIK one turned out to be less useful, and the other simply wasn't ready in time (so it might still be interesting to investigate).
No, I'm not, it sounds fascinating. Thank you for subsequently providing references.
> For IO, ... variable parameters.
What's the advantage here? Given the obvious strong disadvantage that it forces callers into an idiom that uses updatable data structures, the advantage better be compelling.
The out-vars are the same as other variables in terms of updating: they have to be fresh on the way in and can't be modified after coming out -- I should make that more clear
Oh, you don't have in-place updates, you have just initialization? I missed that. The key point to mention is that you want to maintain referential transparency. BTW this still makes loops useless for putting values in variables, because you can't update variables in an iteration; programmers will still have to write recursive functions. BTW nobody who is familiar with functional languages would consider that a disadvantage.

Speaking of user groups: I am not sure what crowd you want to attract with your design. It's not necessary to put that into the paper, but one of the things that went "er, what?" in the back of my head was that I could not infer for whom this kind of language would be useful.
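(A concrete illustration of the loops-versus-recursion point, sketched in Scala rather than the paper's language: with initialization-only variables, an accumulating loop has to be rewritten as a recursive function that threads the accumulator through its arguments.)

    object Sums {
      // With in-place updates, a loop can accumulate into a variable:
      def sumLoop(xs: List[Int]): Int = {
        var acc = 0              // relies on updating acc in each iteration
        for (x <- xs) acc += x
        acc
      }

      // With single-assignment variables, the same computation becomes a
      // recursive function: the accumulator is passed along, never updated.
      def sumRec(xs: List[Int], acc: Int = 0): Int = xs match {
        case Nil       => acc
        case x :: rest => sumRec(rest, acc + x)
      }
    }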
-- or was that not what you meant? The difference (I don't know that it can be called an advantage) is that IO can be done pretty much wherever and whenever, but the insistence on a try-then-else for penultimate invocations ensures that such IO does not go unnoticed.
Sounds pretty much like the consequences of having the IO monad in Haskell. I think you should elaborate similarities and differences with how Haskell does IO; that's a well-known standard, and it is going to make the paper easier to read. The same goes for Clean and Mercury.
Right now I fail to see what's new and better in this.
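(For reference, the Haskell convention alluded to above is that effectful computations carry an IO type and have to be sequenced explicitly, so IO cannot happen unnoticed. The following is a hand-rolled Scala sketch of that idea; this IO type is invented for the example and is neither a library API nor the mechanism from the paper.)

    // A minimal IO type: a value of IO[A] describes an effect producing an A;
    // nothing happens until run() is called.
    final case class IO[A](run: () => A) {
      def map[B](f: A => B): IO[B]         = IO(() => f(run()))
      def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(run()).run())
    }

    object Console {
      def putLine(s: String): IO[Unit] = IO(() => println(s))
      def readLine: IO[String]         = IO(() => scala.io.StdIn.readLine())
    }

    object Example {
      // The effect shows up in the type: greet is IO[Unit], not Unit,
      // so a caller cannot quietly ignore the fact that IO is involved.
      val greet: IO[Unit] =
        for {
          _    <- Console.putLine("Name?")
          name <- Console.readLine
          _    <- Console.putLine("Hello, " + name)
        } yield ()

      def main(args: Array[String]): Unit = greet.run()
    }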
Some languages allow IO expressions without any further thought being paid to the matter; some provide explicit mechanisms for dealing with IO. The language in the note takes a mid-way approach, in some sense, that I'm not familiar with from elsewhere. Assuming that this approach isn't in a language that I should know by now, could the approach not count as new? It may be irrelevant on some level, I suppose.
It's hard to tell whether it is actually new, too many details are missing.
I hope that this goes some way towards being an adequate response. Once again, thank you for your invaluable feedback -- much appreciated!
You're welcome :-)

Regards,
Jo