
On Wed, Jan 9, 2013 at 1:45 PM, Aleksandar Dimitrov <aleks.dimitrov@gmail.com> wrote:
On Sun, Jan 06, 2013 at 11:22:51PM -0800, Darren Grant wrote:
I've seen some cases where Haskell examples dive into existential types instead of using natural higher-order functions. For instance, section 2.2 of the "Existential type" wiki page [1] proposes existentials as the solution to a heterogeneous-list problem, where a ray tracer must evaluate a trace function for different types of objects. Is there a good reason for this, or is it just a case of prior language assumptions leading to unnatural code? Why could I not just make the list homogeneous by supplying a list of partially applied trace functions instead?
I have never written a ray tracer, so I don't know about the exact requirements, but depending on the code in question, partial application might not be possible.
Parametric types can only be instantiated with *one* specific type. A list [a] can only be, say, [Int] or [Bool], but not a mixture of Int and Bool. [1]
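For example, each of these lists is fine on its own, but the commented-out mixed list is rejected by the type checker:

ints :: [Int]
ints = [1, 2, 3]

bools :: [Bool]
bools = [True, False]

-- mixed = [1 :: Int, True]   -- type error: the list would have to be [Int] and [Bool] at once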
So whenever you'd like to have a collection of things that don't have the same type, but do have a certain operation defined on them, existential types come in handy. A partial application only makes sense when you have a function of higher arity, but `hits` in the example on the Wiki has only one argument, namely the list. At some point, a *collection* of Renderables needs to be passed to the function `hit`. This collection probably needs to be stored somewhere, presumably a list, which is then fed to `hits`.
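In rough outline, the existential approach amounts to something like the sketch below. It compiles, but the geometry is stubbed out and the names only loosely follow the wiki's example:

{-# LANGUAGE ExistentialQuantification #-}

data Ray = Ray
data Hit = Hit deriving Show

class Renderable a where
  hit :: a -> Ray -> [Hit]

data Sphere = Sphere
instance Renderable Sphere where
  hit _ _ = [Hit]          -- real intersection test elided

data Box = Box
instance Renderable Box where
  hit _ _ = []

-- The existential wrapper forgets the concrete shape type and keeps
-- only the fact that it is Renderable.
data AnyRenderable = forall a. Renderable a => AnyRenderable a

hits :: Ray -> [AnyRenderable] -> [Hit]
hits ray xs = concat [ hit a ray | AnyRenderable a <- xs ]

scene :: [AnyRenderable]
scene = [AnyRenderable Sphere, AnyRenderable Box]

main :: IO ()
main = print (hits Ray scene)

The point is that `scene` can mix spheres and boxes, because each element only promises that `hit` is available for it.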
Of course, you might set up your types differently, so that everything that needs to be passed to the function `hits` would also be of one single type. You could, for example, instead of storing the heterogeneous Renderables in a list, reduce the different shapes to a Renderable data type that contains the important fields. Then, whenever a geometric object gets created, a Renderable also gets created and stored in that list. But how *viable* that is, I don't know. My gut feeling says that it won't be viable at all.
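Sketched with made-up field names, that flattening might look like this; every shape is squashed into the same concrete record when it is created:

data Sphere = Sphere (Double, Double, Double) Double
data Box    = Box (Double, Double, Double) (Double, Double, Double)

-- One fixed record holding "the important fields" (whatever those turn out to be).
data Renderable = Renderable
  { centre :: (Double, Double, Double)
  , extent :: Double              -- e.g. a bounding-sphere radius
  }

fromSphere :: Sphere -> Renderable
fromSphere (Sphere c r) = Renderable c r

fromBox :: Box -> Renderable
fromBox (Box lo hi) = Renderable (mid lo hi) (dist lo hi / 2)
  where
    mid  (a,b,c) (d,e,f) = ((a+d)/2, (b+e)/2, (c+f)/2)
    dist (a,b,c) (d,e,f) = sqrt ((a-d)^2 + (b-e)^2 + (c-f)^2)

Whether one fixed set of fields can describe every kind of shape well enough is exactly the viability question.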
Consider the following: If I have a traceable data type like data Sphere, I can define it and a trace function as follows:

data Sphere = Sphere
  { spherepos    :: (Double, Double, Double)
  , sphereradius :: Double
  }

traceSphere :: Sphere -> Ray -> [Hit]

where Ray is self-explanatory and [Hit] is the resulting list of hits where the ray intersects the sphere. I can then define a list of traceable objects thus,

[Ray -> [Hit]]

which can also be thought of as,

type Trace = Ray -> [Hit]

[Trace]

Now I am able to define a homogeneous list [Trace] of partially applied functions like so,

[ traceSphere $ Sphere (0,0,0) 20
, traceSphere $ Sphere (10,20,30) 40
, traceBox    $ Box (0,0,0) (10,10,10)  -- where traceBox is also partially applied
]

You see what I mean? This is very natural in Haskell. I'm sure there must be counter-examples, but I haven't the comprehension for these. What do you think?
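Spelled out as a compilable sketch (the intersection math is stubbed out, and Box/traceBox are made up to mirror the sphere case):

data Ray = Ray
data Hit = Hit deriving Show

type Trace = Ray -> [Hit]

data Sphere = Sphere
  { spherepos    :: (Double, Double, Double)
  , sphereradius :: Double
  }

data Box = Box (Double, Double, Double) (Double, Double, Double)

traceSphere :: Sphere -> Trace
traceSphere _ _ = [Hit]           -- real ray/sphere intersection goes here

traceBox :: Box -> Trace
traceBox _ _ = []                 -- likewise

-- The scene is an ordinary homogeneous list of partially applied tracers.
scene :: [Trace]
scene =
  [ traceSphere (Sphere (0,0,0) 20)
  , traceSphere (Sphere (10,20,30) 40)
  , traceBox    (Box (0,0,0) (10,10,10))
  ]

-- Tracing a ray against the whole scene is just applying every tracer to it.
hits :: Ray -> [Trace] -> [Hit]
hits ray = concatMap ($ ray)

main :: IO ()
main = print (hits Ray scene)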
Sometimes I read the argument that existentials are required because of constraints that are unknown at compile time, but I don't have an intuition for this situation yet. For instance, I read that IO requires existentials. Still working on that one. :)
Where did you read that? The *definition* of the IO type does not use existentials [2]. But I'm not very familiar with the internals.
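For what it's worth, GHC's definition boils down to a plain state-passing newtype. Paraphrased below (and renamed IO' so the sketch compiles alongside the real IO):

{-# LANGUAGE MagicHash, UnboxedTuples #-}

import GHC.Exts

-- Paraphrase of the definition in GHC.Types: an IO action is a function
-- that threads the RealWorld state token and returns a result.
newtype IO' a = IO' (State# RealWorld -> (# State# RealWorld, a #))

The apparent magic is all in the primitives, not in any existential type.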
You're right! I'm not sure what I saw, but I would guess that I conflated some application of IO with the definition.

Cheers,
Darren