Ord for partially ordered sets

What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered?

Specifically, I have a pull request for fgl [1] to add Ord instances for the graph types (based upon the Ord instances for Data.Map and Data.IntMap, which I believe are themselves partially ordered), and I'm torn as to the soundness of adding these instances. It might be useful in Haskell code (the example given is to use graphs as keys in a Map), but mathematically speaking it is not possible to compare two arbitrary graphs.

What are people's thoughts on this? What's more important: potential usefulness/practicality or mathematical correctness?

(Of course, the correct answer is to have a function of type a -> a -> Maybe Ordering :p)

[1]: https://github.com/haskell/fgl/pull/11

-- 
Ivan Lazar Miljenovic
Ivan.Miljenovic@gmail.com
http://IvanMiljenovic.wordpress.com
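A minimal sketch of that closing aside, using hypothetical names (PartialOrd and partialCompare are not a class in base); set inclusion is a natural example of an order that is genuinely partial:

    import qualified Data.Set as Set
    import Data.Set (Set, isSubsetOf)

    -- hypothetical class: comparison that is allowed to fail
    class PartialOrd a where
        partialCompare :: a -> a -> Maybe Ordering

    -- sets ordered by inclusion form a genuine partial order
    instance Ord a => PartialOrd (Set a) where
        partialCompare x y
            | x == y           = Just EQ
            | x `isSubsetOf` y = Just LT
            | y `isSubsetOf` x = Just GT
            | otherwise        = Nothing  -- incomparable

    -- e.g. partialCompare (Set.fromList [1,2]) (Set.fromList [2,3 :: Int])
    --      evaluates to Nothing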

On Fri, Apr 24, 2015 at 11:06:07PM +1000, Ivan Lazar Miljenovic wrote:
What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered?
I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and `g2` are not comparable according to the partial order?

On 24 April 2015 at 23:17, Tom Ellis wrote:
On Fri, Apr 24, 2015 at 11:06:07PM +1000, Ivan Lazar Miljenovic wrote:
What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered?
I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and `g2` are not comparable according to the partial order?
With the proposed patch, it's the result of <= on the underlying [Int]Maps. Does the definition of Ord on Data.Map make sense? e.g. what should be the result of (fromList [(1,'a'), (2,'b'), (3, 'c')]) <= (fromList [(1,'a'), (4,'d')])? What about (fromList [(1,'a'), (2,'b'), (3, 'c')]) <= (fromList [(1,'a'), (2,'e')])?

-- 
Ivan Lazar Miljenovic
Ivan.Miljenovic@gmail.com
http://IvanMiljenovic.wordpress.com
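For what it's worth, the containers instance does give definite answers here: as far as I can tell, Data.Map's Ord compares maps lexicographically by their ascending association lists. A small check:

    import qualified Data.Map as M

    m1, m2, m3 :: M.Map Int Char
    m1 = M.fromList [(1,'a'), (2,'b'), (3,'c')]
    m2 = M.fromList [(1,'a'), (4,'d')]
    m3 = M.fromList [(1,'a'), (2,'e')]

    -- comparison proceeds pairwise over toAscList:
    --   m1 <= m2  ==>  True  (decided at (2,'b') vs (4,'d'))
    --   m1 <= m3  ==>  True  (decided at (2,'b') vs (2,'e'))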

On Fri, Apr 24, 2015 at 11:27:46PM +1000, Ivan Lazar Miljenovic wrote:
On 24 April 2015 at 23:17, Tom Ellis wrote:
On Fri, Apr 24, 2015 at 11:06:07PM +1000, Ivan Lazar Miljenovic wrote:
What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered?
I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and `g2` are not comparable according to the partial order?
With the proposed patch, it's the result of <= on the underlying [Int]Maps.
Ah, so it's a case of adding a valid Ord instance that isn't a natural one for the particular datatype. If you really need something like that, for example to add your graphs to a Data.Set, then I would suggest a newtype might be appropriate.

Tom

On Fri, Apr 24, 2015 at 1:32 PM, Ertugrul Söylemez wrote:
I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and `g2` are not comparable according to the partial order?
False.
The operators aren't a problem for this reason. The real problem is: what does `compare` return?

On Fri, 24 Apr 2015, Ivan Lazar Miljenovic wrote:
Specifically, I have a pull request for fgl [1] to add Ord instances for the graph types (based upon the Ord instances for Data.Map and Data.IntMap, which I believe are themselves partially ordered), and I'm torn as to the soundness of adding these instances.
In an application we needed to do some combinatorics of graphs and thus needed Set Graph. Nonetheless, I think that graph0 < graph1 should be a type error. We can still have a set of Graphs using a newtype.

On 25 April 2015 at 00:01, Henning Thielemann wrote:
On Fri, 24 Apr 2015, Ivan Lazar Miljenovic wrote:
Specifically, I have a pull request for fgl [1] to add Ord instances for the graph types (based upon the Ord instances for Data.Map and Data.IntMap, which I believe are themselves partially ordered), and I'm torn as to the soundness of adding these instances.
In an application we needed to do some combinatorics of graphs and thus needed Set Graph.
Nonetheless, I think that graph0 < graph1 should be a type error. We can still have a set of Graphs using a newtype.
This could work; the possible problem is one of efficiency: if it's done directly on the graph datatypes, they can use the underlying (ordered) data structure. Going purely by the Graph API, there are no guarantees of ordering, and thus one would need to call sort, even though in practice it's redundant.

-- 
Ivan Lazar Miljenovic
Ivan.Miljenovic@gmail.com
http://IvanMiljenovic.wordpress.com
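A sketch of that newtype approach going through the public Graph API and sorting, as described above (the wrapper name OrdGr is illustrative, and the instance heads assume node and edge labels with Ord):

    import Data.List (sort)
    import Data.Graph.Inductive.Graph (Graph, labNodes, labEdges)

    -- wrap any fgl graph to give it an (unnatural but lawful) total order
    newtype OrdGr gr a b = OrdGr (gr a b)

    instance (Graph gr, Ord a, Ord b) => Eq (OrdGr gr a b) where
        g1 == g2 = compare g1 g2 == EQ

    instance (Graph gr, Ord a, Ord b) => Ord (OrdGr gr a b) where
        compare (OrdGr g1) (OrdGr g2) =
            compare (sort (labNodes g1), sort (labEdges g1))
                    (sort (labNodes g2), sort (labEdges g2))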

On 04/24/2015 03:06 PM, Ivan Lazar Miljenovic wrote:
What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered?
I'd say this is harmful, as functions like min and max (and others) rely on the totality of the ordering. Partial orderings are useful in themselves; I implemented my own library https://hackage.haskell.org/package/Agda-2.4.2/docs/Agda-Utils-PartialOrd.ht... mainly to use it for maintaining sets of incomparable elements: https://hackage.haskell.org/package/Agda-2.4.2/docs/Agda-Utils-Favorites.htm...
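The "sets of incomparable elements" idea in miniature — a hedged sketch of the technique, not the Agda-Utils API (insertMaximal and the Maybe Ordering comparator are illustrative):

    -- keep only the maximal, mutually incomparable elements seen so far
    insertMaximal :: (a -> a -> Maybe Ordering) -> a -> [a] -> [a]
    insertMaximal cmp x ys
        -- x is below (or equal to) an existing element: drop it
        | any (\y -> cmp x y `elem` [Just LT, Just EQ]) ys = ys
        -- otherwise add x and drop anything strictly below it
        | otherwise = x : filter (\y -> cmp y x /= Just LT) ys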
-- 
Andreas Abel  <><
Du bist der geliebte Mensch.
Department of Computer Science and Engineering
Chalmers and Gothenburg University, Sweden
andreas.abel@gu.se
http://www2.tcs.ifi.lmu.de/~abel/

I would be hesitant about adding an Ord instance normally, because there's no clear semantics for it. If we just pass it through to the underlying data structure, it might behave differently depending on how you implement the graph, which is something fgl should ideally abstract over.

Maybe you could provide them in a newtype yourself, in the library? You could call it something like GrKey to make it clear that it has an Ord instance for practical reasons rather than because graphs are meaningfully orderable. This just forces people who need the capability to be a bit more explicit about it.
On Fri, Apr 24, 2015 at 7:47 AM, Andreas Abel wrote:
I'd say this is harmful, as functions like min and max (and others) rely on the totality of the ordering. [...]

I see it as "morally wrong". It's like a Monad instance that doesn't obey the monad laws. The kind of ticking timebomb strong typing is supposed to protect us against. But that only works if we do our part and don't make non-sense instances.

Can your consumer get away with using a Hashable instance? (I.e., for use in unordered-containers). This would be morally correct -- the graph could presumably have a valid Hashable instance.

On Fri, Apr 24, 2015 at 6:06 AM, Ivan Lazar Miljenovic <ivan.miljenovic@gmail.com> wrote:
What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered?
[...]

On Fri, 24 Apr 2015, Ivan Lazar Miljenovic wrote:
What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered?

Of course these type-classes (I hope I am using the word correctly) should be standard:

1. Ord, which is the class of all totally ordered set-like things.
2. PoSet, which is the class of all partially ordered set-like things.
3. NonStrictPoSet, which is the class of all partially ordered set-like things, but without the requirement that a <= b and b <= a implies a Equal b.
4. Things like the above, but with the requirement of a Zero, with the requirement of a One, and the requirement of both a Zero and a One.

oo--JS.

3. NonStrictPoSet, which is the class of all partially ordered set-like things, but without the requirement that a <= b and b <= a implies a Equal b.
Those are preorders. An antisymmetric preorder is a non-strict poset.

Also it's difficult to capture all of those various order types in Haskell's class system. A type can have many orders and many underlying equivalence relations in the case of partial and total orders, and there are different ways to combine them. For example equality is a partial order, modular equivalence is a preorder, etc. Those denote bags and groups more than sequences or trees.

Perhaps it's time to add a type class-like system to Haskell, but for explicitly passed arguments:

    record Relation a where
        related :: a -> a -> Bool

        unrelated :: a -> a -> Bool
        unrelated x y = not (related x y)

    func1 :: Relation A -> A -> A -> A
    func1 _ x y = ... related x y ...

    func2 :: Relation A -> Relation A -> A -> A -> A
    func2 r1 r2 x y = ... r1.related x y ... r2.unrelated x y ...

In a lot of cases this is much more appropriate than a type class, and it would turn many things that are currently types into regular functions, thus making them a lot more composable:

    down :: Ord a -> Ord a
    down o = Ord { compare x y = o.compare y x }
    -- The remaining Ord functions are defaulted.

Perhaps all we need is to generalise default definitions to data types and add module-like dot syntax for records (mainly to preserve infix operators). Formally speaking there is also little that prevents us from having associated types in those records that can be used on the type level.

For actual record types (i.e. single-constructor types) we could even have specialisation and get a nice performance boost that way, if we ask for it:

    {-# SPECIALISE down someOrder :: Ord SomeType #-}

This would be extremely useful.
4. Things like the above, but with the requirement of a Zero, with the requirement of a One, and the requirement of both a Zero and a One.
Zero and one as in minBound and maxBound, or rather as in Monoid and a hypothetical Semiring? In the latter case I believe they don't really belong in an additional class, unless you have some ordering-related laws for the zeroes and ones. If not, you can always simply use an Ord+Semiring constraint. There may be some motivation to make Bounded a subclass of Ord, though.

Greets, Ertugrul
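Part of this — records as explicit, composable dictionaries — can already be approximated today with a plain record, minus the defaults and dot syntax; a minimal sketch (OrdD and cmpD are made-up names):

    import Data.List (sortBy)

    -- an explicit, first-class stand-in for an Ord dictionary
    newtype OrdD a = OrdD { cmpD :: a -> a -> Ordering }

    -- flipping an order is just a function on dictionaries
    down :: OrdD a -> OrdD a
    down o = OrdD { cmpD = \x y -> cmpD o y x }

    descending :: [Int] -> [Int]
    descending = sortBy (cmpD (down (OrdD compare)))
    -- e.g. descending [1,3,2] == [3,2,1]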

hrm, wouldn't your proposed extension be largely accomplished by using Record pun and Record WildCards?

eg

    {-# LANGUAGE RecordWildCards #-}
    {-# LANGUAGE NamedFieldPuns #-}

    module Foo where

    data A = A  -- placeholder element type

    data Relation a = Rel { related :: a -> a -> Bool, unrelated :: a -> a -> Bool }

    foo :: Relation A -> A -> A -> Bool
    foo Rel{..} x y = related x y

------

or am i overlooking something? I do realize this may not quite be what you're suggesting, and if so, could you help me understand better? :)
On Fri, Apr 24, 2015 at 4:26 PM, Ertugrul Söylemez wrote:
Perhaps it's time to add a type class-like system to Haskell, but for explicitly passed arguments: [...]

hrm, wouldn't your proposed extension be largely accomplished by using Record pun and Record WildCards?
A part of it would, but it wouldn't preserve operators. For example instead of `x r.<> y` you would have to write `(<>) r x y`. Also defaults are not available. Other class features are not accessible, most notably type-level features like associated types.

The idea is that a record would be completely equivalent to a class with the only difference being that you define values instead of instances, that there are no constraints on which values can exist and that those values must be passed explicitly to functions as regular arguments.

On 25/04/15 15:51, Ertugrul Söylemez wrote:
hrm, wouldn't your proposed extension be largely accomplished by using Record pun and Record WildCards?
A part of it would, but it wouldn't preserve operators. For example instead of `x r.<> y` you would have to write `(<>) r x y`.
Not at all.

    {-# LANGUAGE RecordWildCards #-}

    -- hiding Monoid as well avoids a clash with the Prelude class,
    -- which Prelude exports as of GHC 7.10
    import Prelude hiding (sum, Monoid)

    data Monoid a = Monoid { empty :: a, (<>) :: a -> a -> a }

    sum :: Num a => Monoid a
    sum = Monoid 0 (+)

    three :: Integer
    three = let Monoid{..} = sum in 1 <> 2
Other class features are not accessible, most notably type-level features like associated types.
Associated types become additional type variables of the record type. A class

    class C a where
        type T a

is essentially equivalent to

    class C a t | a -> t

But the functional dependency is not enforceable on the value level (isn't the whole point of this discussion not to restrict what "instances" can be defined?), so you end up with

    class C a t

a simple MPTC.
Also defaults are not available.
Now this is a good point.
The idea is that a record would be completely equivalent to a class with the only difference being that you define values instead of instances, that there are no constraints on which values can exist and that those values must be passed explicitly to functions as regular arguments.
Except we already have regular records (aka data types) which satisfy 90% of the requirements, and adding another language construct to satisfy those remaining 10% feels wrong to me. I'd rather improve the existing construct.

Roman

hrm, wouldn't your proposed extension be largely accomplished by using Record pun and Record WildCards?
A part of it would, but it wouldn't preserve operators. For example instead of `x r.<> y` you would have to write `(<>) r x y`.
Not at all.
    three :: Integer
    three = let Monoid{..} = sum in 1 <> 2
Puns become tedious and error-prone as soon as you need to refer to multiple records, at least when operators are involved. But it's not that important actually; I can live with the current record syntax. The most useful features would be defaults, a more suitable syntax for defining record types, and potentially the following:
Other class features are not accessible, most notably type-level features like associated types.
Associated types become additional type variables of the record type.
Indeed. However, when the type follows from other type arguments, it would often be convenient not to spell it out and instead bring an associated type constructor into scope. This is especially true when the AT refers to a type that isn't used very often.

    record Extractor a where
        type Elem a
        extract :: a -> Maybe (Elem a, a)

    extractTwo :: (e1 : Extractor a) -> (e2 : Extractor a)
               -> a -> Maybe (e1.Elem a, e2.Elem a, a)
    extractTwo e1 e2 xs0 = do
        (x1, xs1) <- e1.extract xs0
        (x2, xs2) <- e2.extract xs1
        return (x1, x2, xs2)
But the functional dependency is not enforceable on the value level (isn't the whole point of this discussion not to restrict what "instances" can be defined?), so you end up with class C a t, a simple MPTC.
I don't see a reason to enforce a dependency, since there is no equivalent to instance resolution. Regular unification should cover any ambiguities, and if it doesn't you need ScopedTypeVariables.
The idea is that a record would be completely equivalent to a class with the only difference being that you define values instead of instances, that there are no constraints on which values can exist and that those values must be passed explicitly to functions as regular arguments.
Except we already have regular records (aka data types) which satisfy 90% of the requirements, and adding another language construct to satisfy those remaining 10% feels wrong to me. I'd rather improve the existing construct.
That's actually what I'm proposing. The record syntax would simply be syntactic sugar for single-constructor data types that is more suitable for records, especially when defaults and other class-like features are involved. Most notably it would support layout. There is no reason why you shouldn't be able to use `data` to achieve the same thing, except with a clumsier syntax and the option to have multiple constructors.

Greets, Ertugrul

Isn't your associated type here more like a dependent record field/existential that we can kinda expose?

This does seem to veer into first class module territory. Especially wrt needing first class types in a fashion.

Have you had a chance to peruse the Andreas Rossberg 1ML paper on embedding first class modules into F-omega that has been circulating? Perhaps there are ideas there that could be adapted. Especially since Core is an augmented F-omega.
On Saturday, April 25, 2015, Ertugrul Söylemez wrote:
[...]

Isn't your associated type here more like a dependent record field/existential that we can kinda expose?
Not quite. There is still a clear distinction between type and value level. You cannot refer to an AT on the value level or to a member value on the type level.
This does seem to veer into first class module territory. Especially wrt needing first class types in a fashion.
I think formally there is little difference between a powerful record system and a first-class module system. However, even in a non-dependent language a first-class module could still expect a value argument. A record type couldn't do this without essentially making the language dependently typed along the way.
Have you had a chance to peruse the Andreas Rossberg 1ML paper on embedding first class modules into F-omega that has been circulating? Perhaps there are ideas there that could be adapted. Especially since Core is an augmented F-omega.
I haven't read it, sorry. My proposal should conform to the current core language, as it's mostly just a syntax transformation. The only new semantics would be defaults.

You misapprehend. I'm saying that the first class modules encoding in that paper is expressible using a subset of GHC Core.

There IS the subtle issue that type class dictionaries / type classes have different sharing and strictness semantics than normal userland records.

There's a space of designs that could provide what you're asking for, and I do agree that it's worth exploring. I guess the main question is whether or not it turns into its own research engineering project.

Likewise, to what extent aside from syntactic nicety do implicit parameters and the punning extensions not suffice?
On Saturday, April 25, 2015, Ertugrul Söylemez wrote:
[...]

Likewise, to what extent aside from syntactic nicety do implicit parameters and the punning extensions not suffice?
At the definition and instantiation sites I mostly miss defaults. At the application sites I would love to have specialisation for certain arguments. For example I would like to be able to tell GHC that I would like to have a version of my function `f` with a certain argument inlined. Note that I don't want to inline `f` itself. Rather I'd like to preapply certain arguments:

    f :: X -> Y -> Z

    {-# SPECIALISE f SomeX #-}
    {-# SPECIALISE f SomeOtherX #-}

This would generate two specialised versions of `f` with exactly the given arguments inlined. That way I can get a very efficient `f` without having to inline it at the application sites. And as long as `f` is INLINABLE I can put those pragmas pretty much everywhere. I believe this is exactly what happens for type class dictionaries.

This can (and probably should) be a separate feature though. For some of my applications I need to inline a huge chunk of code multiple times to compensate for the lack of this feature.
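The closest current approximation, as far as I know, is a hand-written top-level binding for the pre-applied function, leaning on the inliner rather than on such a pragma — a sketch with placeholder types (X, Y, Z and SomeX are stand-ins taken from the message above):

    data X = SomeX | SomeOtherX

    type Y = Int
    type Z = Int

    f :: X -> Y -> Z
    f SomeX      y = y + 1
    f SomeOtherX y = y * 2
    {-# INLINABLE f #-}

    -- a hand-specialised copy: GHC can inline f here and discard the
    -- dead branch, so callers of fSomeX get the pre-applied version
    fSomeX :: Y -> Z
    fSomeX = f SomeX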

Could that specialization be accomplished today using Ed's reflection pkg? I guess not quite, in terms of that pre-apply pattern.

This is interesting. And it's a good example of a larger problem of not enough support for composable specialization with good sharing across the use sites that doesn't require egregious inlining. At least for code that isn't type class driven.
On Saturday, April 25, 2015, Ertugrul Söylemez wrote:
[...]

Could that specialization be accomplished today using Ed's reflection pkg? I guess not quite, in terms of that pre-apply pattern.
I'm not familiar with that package and couldn't find it on Hackage by a quick search. But I believe that it can only be done with compiler support, although with enough hackery you can probably get an ugly version of it using TH.
This is interesting. And it's a good example of a larger problem of not enough support for composable specialization with good sharing across the use sites that doesn't require egregious inlining. At least for code that isn't type class driven.
Indeed. Specialisation is a really good way to get very fast code without making your executable size explode. I believe that support for more fine-grained specialisation should and will improve. I'm not sure how to make it more composable though.

Here yah go: https://hackage.haskell.org/package/reflection

It exploits how dictionary passing works in a pretty robust way that GHC is likely to at some point codify officially.
On Saturday, April 25, 2015, Ertugrul Söylemez wrote:
[...]

Here yah go: https://hackage.haskell.org/package/reflection It exploits how dictionary passing works in a pretty robust way that GHC is likely to at some point codify officially.
Oh, that one. Of course I'm familiar with it, but it's less an optimisation than a clever way to implement implicit configurations.

are you sure you've evaluated how it interacts with this sort of optimization? I think it actually gets you pretty far!
On Sat, Apr 25, 2015 at 4:06 PM, Ertugrul Söylemez wrote:
[...]

are you sure you've evaluated how it interacts with this sort of optimization? I think it actually gets you pretty far!
Most of the magic occurs in reification. The trouble is that it expects a polymorphic function of rank 2 with a nontrivial context (Reifies), a function that by construction cannot be specialised, unless you specialise the receiving function (reify) for every application case. The beauty of reflection is that it's free by virtue of sharing. Unfortunately sharing is the exact opposite of inlining.

According to a benchmark I've done a few months ago it behaves exactly as if the reflected value was just a shared argument with no inlining performed; an expected and reasonable result.

What I'm really after is a sort of controlled inlining. That's pretty much what instance-based specialisation currently does for dictionaries. Technically a dictionary is just another argument, so there is no fundamental reason why we shouldn't have a more general specialiser.
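For readers following along, a small usage sketch of the rank-2 entry point being discussed, based on the reflection package's documented API (reify :: a -> (forall s. Reifies s a => Proxy s -> r) -> r); withThreshold is a made-up example:

    {-# LANGUAGE RankNTypes, ScopedTypeVariables #-}

    import Data.Proxy (Proxy (..))
    import Data.Reflection (Reifies, reflect, reify)

    -- pass a configuration value through the type system; the
    -- reflected value is shared at its use sites, not inlined
    withThreshold :: Int -> [Int] -> [Int]
    withThreshold t xs =
        reify t (\(_ :: Proxy s) ->
            filter (>= reflect (Proxy :: Proxy s)) xs)

    -- e.g. withThreshold 2 [1,2,3] == [2,3]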

Ok cool. Sounds like we agree.

Luite and I talked about something similar a few months ago, I'll try to dig up my notes and maybe we can turn this into a ghc feature request or patch.
On Saturday, April 25, 2015, Ertugrul Söylemez wrote:
[...]

Ok cool. Sounds like we agree. Luite and I talked about something similar a few months ago, I'll try to dig up my notes and maybe we can turn this into a ghc feature request or patch
Great, I would definitely help with that!

On Fri, Apr 24, 2015 at 9:06 AM, Ivan Lazar Miljenovic wrote:
What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered?
Defining Ord instances for types which are not totally ordered is *wrong*. For example, due to the existence of NaN values, Double/Float are not totally ordered and therefore their Ord instances are buggy. In my logfloat package I have to explicitly add checks to work around the issues introduced by the buggy Ord Double instance.

This is why I introduced the PartialOrd class, and I'm not the first one to create such a class. We really ought to have an official PartialOrd class as part of base/Prelude. The only question is whether to use Maybe Ordering or a specially defined PartialOrdering type (the latter optimizing for space and pointer indirection; the former optimizing for reducing code duplication for manipulating the Ordering/PartialOrdering types).

-- 
Live well,
~wren
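A sketch of the PartialOrdering alternative, with a Double instance illustrating the NaN problem (the names here are illustrative, not logfloat's actual API):

    data PartialOrdering = PLT | PEQ | PGT | PIncomparable
        deriving (Eq, Show)

    class PartialOrd a where
        partialCompare :: a -> a -> PartialOrdering

    -- NaN is incomparable with everything, including itself
    instance PartialOrd Double where
        partialCompare x y
            | isNaN x || isNaN y = PIncomparable
            | x < y              = PLT
            | x > y              = PGT
            | otherwise          = PEQ

    -- e.g. partialCompare (0/0) 1 == PIncomparable, whereas the Ord
    -- instance's compare (0/0) 1 must still pick some Ordering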

While this is a bit off-topic, I'd like to add my 5 cents: often, adding instances for common type classes might be "bad" even when they're totally defined for all values; one example is a Monoid instance for HashMap.

So I'd say that if you are in doubt, it's better not to add the instance at all, since your users have no ability to remove it from their projects (or redefine it).
On 26 Apr 2015 at 04:10, "wren romano" wrote:
[...]
participants (13)

- Alexander Solla
- Andreas Abel
- Carter Schonwald
- Ertugrul Söylemez
- Henning Thielemann
- Ivan Lazar Miljenovic
- Jay Sulzberger
- Kostiantyn Rybnikov
- Mike Izbicki
- Roman Cheplyaka
- Tikhon Jelvis
- Tom Ellis
- wren romano