Base classes can be _ELIMINATED_ with interfaces

I was correct before, except that I conflated the word "extended" with "eliminated" in my mind: http://lambda-the-ultimate.org/node/1277#comment-51723

The most robust solution to Tim Sweeney's problem is to rethink what a "class" should be: http://www.haskell.org/haskellwiki/?title=Why_Haskell_matters&oldid=24286#Haskell_vs_OOP

In Haskell, subtyping is done with Module. A "type class" is a polymorphic (relative to data type) interface, and the polymorphism is strictly parameterized for the client/consumer of the interface, i.e. the data type is known to the function that inputs the interface AT COMPILE TIME: http://www.haskell.org/~pairwise/intro/section2.html#part3

If all the functions/methods of package Engine input exclusively interfaces (e.g. Engine.IActor), then client package GearsOfWar can create a new subtype GearsOfWar.Actor which also implements the interface Engine.IActor (and perhaps even more interfaces). A problem with virtual (runtime pointer) inheritance is that it hides the subclass from the compiler. Polymorphic interfaces are missing from C++, and one can only sort of hack an emulation with abstract classes, private delegates, and templates: http://lambda-the-ultimate.org/node/1277#comment-51806 In Haskell, parameterized polymorphic interfaces can be inherited, and this is knowable to the client function at compile time. (A minimal sketch of this idea follows the list below.)

Fundamental theorems tell us that any membership rule for a set cannot be an absolute reference point, thus we should strive for the typing and classing architecture which is the most granular (these are my summaries):

* Russell's Paradox: there is no rule for a set that does not cause it to contain itself, thus all sets are infinitely recursive.
* Liskov Substitution Principle: it is an undecidable problem whether subsets inherit.
* Linsky Referencing: it is undecidable what something is when it is described or perceived.
* Coase Theorem: there is no external reference point; any such barrier will fail.
* Gödel's Theorem: any formal theory in which all arithmetic truths can be proved is inconsistent.
* 1856 Thermo Law: the entire universe (a closed system, i.e. everything) trends to maximum disorder.
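To make the Engine.IActor / GearsOfWar.Actor idea concrete, here is a minimal hypothetical sketch (the names IActor, GearsActor, tick and describe are mine, purely for illustration, not taken from any of the linked code). The "interface" is a type class, the client supplies its own concrete data type, and the consuming function is parameterized over the interface, so the concrete type is resolved at compile time:

module Main where

-- The "engine" exposes only an interface (a type class), never a base class.
class IActor a where
  actorName :: a -> String
  tick      :: Double -> a -> a   -- advance the actor by a time step

-- A "client" package supplies its own concrete type and implements the interface.
data GearsActor = GearsActor { gearsName :: String, health :: Int }

instance IActor GearsActor where
  actorName = gearsName
  tick _dt actor = actor          -- trivial behaviour, just for illustration

-- The engine consumes the interface; the concrete type is known at compile time.
describe :: IActor a => a -> String
describe a = "actor: " ++ actorName a

main :: IO ()
main = putStrLn (describe (GearsActor "Marcus" 100))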

The "style" of OOP is irrelevant, and if one means by "style" the conflation of the interface with the data and/or use of virtual (runtime) base class inheritance and the style of that induces, then it is an architectural mistake: http://www.haskell.org/pipermail/haskell-cafe/2009-November/068433.html

Shelby Moore wrote:
...A "type class" is a polymorphic (relative to data type) interface, and the polymorphism is strictly parameterized for the client/consumer of the interface, i.e. the data type is known to the function that inputs the interface AT COMPILE TIME.
...A problem with virtual (runtime pointer) inheritance is that it hides the subclass from the compiler.
I emphasize that in Haskell, the consuming function knows the interface at compile time (or it can allow the compiler to infer it, if no type class restriction is specified).
...if one means by "style" the conflation of the interface with the data and/or the use of virtual (runtime) base class inheritance and the style that induces, then it is an architectural mistake...
One explanation of how "extends" (base class) does impact composability: http://www.haskell.org/pipermail/haskell-cafe/2009-October/068337.html

Instead, encapsulate ("inherit") the data type orthogonally to the "type class" (interface) in Haskell (a sketch of delegating through a type class instance follows the quoted example below): http://www.haskell.org/pipermail/haskell-cafe/2009-October/068328.html
The point of that whole rant is that extending data-bearing classes isn't necessarily a good idea, so before trying to find a way to do it with Haskell, it may be better to just encapsulate another data type, which is trivial:
data InnerThing = A | B | C
data OuterThing = Outer { innerThing :: InnerThing, otherField :: Int }
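For instance (Describable is a hypothetical class I am adding only to illustrate the point, and I add deriving Show to the quoted declaration), the wrapper can reuse behaviour by delegating to the encapsulated value through a type class instance, rather than by extending a data-bearing base class:

data InnerThing = A | B | C deriving Show

data OuterThing = Outer { innerThing :: InnerThing, otherField :: Int }

class Describable a where
  describe :: a -> String

instance Describable InnerThing where
  describe = show                  -- behaviour defined once, on the inner type

instance Describable OuterThing where
  -- the wrapper "inherits" the behaviour by delegating to the encapsulated value
  describe o = describe (innerThing o) ++ " with " ++ show (otherField o)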
Additionally note: Shelby Moore wrote:
In Haskell, subtyping is done with Module...
data is analogous to public class members. Use Module to make some implementation private: http://www.cs.auckland.ac.nz/references/haskell/haskell-intro-html/modules.h... Thus, use type classes (interfaces) when consuming the public Module interfaces in functions (i.e. only consume the exported Module interfaces in the sense that they implement a type class; never consume the Module name space directly). I realize some examples would make the above easier to visualize.
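For example, here is a minimal sketch of what I mean (the module, class and function names are hypothetical, invented just for this illustration): the Module hides the representation, and consuming functions are written against the type class, never against the module's internals:

module Counter
  ( Counter         -- exported abstractly: the Counter data constructor stays private
  , newCounter
  , Resettable(..)
  ) where

-- the "interface" consumed by client functions
class Resettable a where
  reset :: a -> a

-- private representation; outside this module a Counter can only be made via newCounter
newtype Counter = Counter Int

newCounter :: Counter
newCounter = Counter 0

instance Resettable Counter where
  reset _ = Counter 0

-- a consuming function is written against the class, not the Module name space
resetTwice :: Resettable a => a -> a
resetTwice = reset . reset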

Shelby Moore wrote:
...A "type class" is a polymorphic (relative to data type) interface, and the polymorphism is strictly parameterized for the client/consumer of the interface, i.e. the data type is known to the function that inputs the interface AT COMPILE TIME.
...A problem with virtual (runtime pointer) inheritance is that it hides the subclass from the compiler.
I emphasize that in Haskell, the consuming function knows the interface at compile time (or it can allow the compiler to infer it, if no type class restriction is specified).
Caveat follows. The fundamental theorems I mentioned ( http://www.haskell.org/pipermail/haskell-cafe/2009-November/068432.html ) cannot be voided by any programming language that is Turing complete. Thus, it is no surprise that dynamic run-time typing is achievable in Haskell[1]: basically a punt from the attempt at strictly typing exponential local order to run-time nondeterminism, due to the Liskov Substitution Principle (and Linsky Referencing); the theorems predict this punt _MUST_ happen in some cases, unless there is no state diagram.

Alas (good news!), local exponential order wins in the vast majority of common use cases in Haskell, because it is also possible to use static compile-time typing[1], which I assert is because the static typing architecture is granular and orthogonal.

[1] http://research.microsoft.com/en-us/um/people/simonpj/papers/papers.html#lan...
http://research.microsoft.com/en-us/um/people/simonpj/papers/hmap/gmap3.pdf
Ralf Laemmel and Simon Peyton Jones. "Scrap your boilerplate: a practical approach to generic programming", Proc ACM SIGPLAN Workshop on Types in Language Design and Implementation (TLDI 2003), New Orleans, pp 26-37, Jan 2003.
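As a tiny illustration of that run-time punt (this is only a hedged sketch using Data.Typeable's cast from the standard library, which is much simpler than, though in the spirit of, the Scrap Your Boilerplate machinery cited in [1]):

import Data.Typeable (Typeable, cast)

-- Try to view an arbitrary Typeable value as an Int; the check happens at run time.
asInt :: Typeable a => a -> Maybe Int
asInt = cast

main :: IO ()
main = do
  print (asInt (42 :: Int))     -- Just 42
  print (asInt "not an int")    -- Nothing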

Shelby Moore wrote:
In Haskel, subtyping is done with Module...
data is analogous to public class members. Use Module to make some implementation private:
http://www.cs.auckland.ac.nz/references/haskell/haskell-intro-html/modules.h...
Thus use type classes (interfaces) when consuming the public Module interfaces in functions (i.e. only consume the exported Module interfaces in the sense they implement a type class, never directly consume the Module name space).
A strong argument against hiding the internal functionality of an ADT (against using Module), and in favor of fully composable functional programming (i.e. Haskell without Module) for GUIs: http://lambda-the-ultimate.org/node/3668#comment-51993 (concise and sweet)

Shelby Moore
* 1856 Thermo Law: entire universe (a closed system, i.e. everything) trends to maximum disorder.
On the very, *very*, VERY long timescale. In the meantime, chaos creates clashes of matter, which cause local energy outbursts (i.e. galaxies), which pump their immediate surroundings, where natural selection in presence of energy influx leads to increasing complexity. To persist for a long, *long*, LONG time.

Shelby Moore writes:
* 1856 Thermo Law: entire universe (a closed system, i.e. everything) trends to maximum disorder.
Will Ness wrote:
On the very, *very*, VERY long timescale.
I love your ascii art :) Note I put it last in the list for a reason. Not to be combative, but your statement encapsulates a common fallacy (even though you have not stated an error). Although it is true that exponential local orders are randomly[1] created while (actually, as a necessary ingredient in[2]) the breaking down of global order on the universal trend towards maximum disorder, these are occurring simultaneously with infinite cases of exponential decay[1]. Haskell may end up being a good example of success (rising up out of the OOP "failures"): http://www.haskell.org/pipermail/haskell-cafe/2009-November/068481.html

The fallacy is rooted in the concept that time is an absolute reference point, but remember that Einstein ignored the complex of the Lorentz Equations.

[1] It doesn't seem random until you consider all the failures you didn't read about.

[2] I have a good theory about what knowledge is, and it is precisely the exponential deviation from the Bell Curve (yet any knowledge succumbs to the Theorems and exponentially decays eventually[3]). Just deviating is failure; deviating is success only if you also suck the Bell Curve towards you at an exponential rate: http://www.coolpage.com/commentary/economic/shelby/Bell%20Curve%20Economics.... (the Theory of Everything is near the end, and I have since refined it at my blog, but I have not published a coherent paper yet). I am just sharing, not professing, so flame if you want, and I won't be offended.

[3] Time is shared reality; that is why we can say "eventually" or long, *long*, LONG time. If the shared experience is stable, then the knowledge has a long exponential stability. But there is new knowledge competing all the time. Small things can grow faster; oak trees don't grow to the moon. There are infinite simultaneous realities going on right now out there (due to permutations of interactions) between billions of humans. Thus we can say, thus far, that Bill Gates was much more knowledgeable (if our shared reality is the mainstream one) than the proponents of pure FP, because he moved more shared reality exponentially. I am trying to bring a key piece of knowledge to the mix that might change that: http://www.haskell.org/pipermail/haskell-cafe/2009-November/068436.html http://www.haskell.org/pipermail/haskell-cafe/2009-November/068479.html
In the meantime, chaos creates clashes of matter, which cause local energy outbursts (i.e. galaxies), which pump their immediate surroundings, where natural selection in presence of energy influx leads to increasing complexity.
Agreed as I described above.
To persist for a long, *long*, LONG time.
It depends what you mean by persist. For example, the first fiat world reserve currency in the history of the world (i.e. the dollar) has persisted since 1971, but probably won't exist beyond 2020 at best: http://goldwetrust.up-with.com/economics-f4/what-is-money-t44-15.htm#2177 While one widely shared perception is peaking and rotting, another one is smoldering and ready to do what the lily does on the 30th day of its month-long maturity, covering the remaining 50% of the pond. And besides, there isn't just one reality at any time. Time is itself just an arbitrary perception. This is why the Bible's logarithmic time scale can jibe with carbon dating, ... (move it to my blog if you want to discuss further)... off topic for this list...