
Hello Cafe,

So I've been learning a bit about the GHC extensions from https://github.com/i-am-tom/haskell-exercises. But the lessons say nothing about how the extensions work behind the scenes.

For example, I've assumed that RankNTypes would require something similar to dynamic dispatch in imperative languages in order to work. But this is just an assumption.

So I was wondering where I can get a quick, succinct read (or a summary at least) about how these extensions are implemented, as well as their performance cost (or gain!).

Also, if someone knows other tutorials and lessons for these and other extensions, please feel free to share them :p

Thank you,
Michel :)

I would not expect type system extensions to have any effect on the performance of generated code.

Cheers,
Vanessa McHale

Hello,
Thanks for your answer, Vanessa.

So you wouldn't say that, for instance, some of these extensions might provide room for more extensive optimizations, seeing as they provide stronger guarantees (non-empty vectors and such)?
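
(For concreteness, a minimal sketch of the kind of guarantee meant by "non-empty vectors" here; the Emptiness/Vec/safeHead names below are invented for illustration and are not from the exercises:)

  {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

  -- A vector indexed by whether it is empty, so taking the head of an
  -- empty vector is rejected at compile time rather than checked at runtime.
  data Emptiness = Empty | NonEmpty

  data Vec (e :: Emptiness) a where
    Nil  :: Vec 'Empty a
    Cons :: a -> Vec e a -> Vec 'NonEmpty a

  -- Total: no pattern-match failure is possible, so no runtime check is needed.
  safeHead :: Vec 'NonEmpty a -> a
  safeHead (Cons x _) = x

  -- safeHead Nil is a type error, caught at compile time.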
Michel :)

On Fri, Apr 5, 2019 at 11:19 PM Michel Haber wrote:
> For example, I've assumed that RankNTypes would require something similar to dynamic dispatch in imperative languages in order to work. But this is just an assumption.
Sometimes, when you use a universally quantified type (forall a . ...), the only way to construct it is to use some kind of dynamic dispatch. For example, if you have a GADT

  data SomeShow where
    SomeShow :: forall a . Show a => a -> SomeShow

then, in the absence of extremely aggressive inlining (which GHC might be able to do, I'm not sure), the only way to operate on such an object is to pass a vtable full of dynamic function pointers for the "Show" typeclass along with the "a" value.

However, if you have a function like

  foo :: Bar c => (forall a . Bar a => a -> b) -> c -> b
  foo = id

then even though there's a rank-2 type here, I would reasonably expect GHC to be able to construct this function with no additional overhead whatsoever. After all, it's just the identity function.

There are a few things to think about that might help reveal why these sorts of abstractions can be free:

* A "=>" arrow is just like a "->" arrow except it's the compiler's job to pass in the argument to a "=>" function (sketched below). If the compiler could optimize a "->" function, it could probably optimize a similar "=>" function.
* Existential quantification *restricts* the behavior of a value, rather than expanding it.
* The semantics of the program are the same after you erase all the type information.
* Existential quantification doesn't stop the inliner from working.
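
(To make the "=>"-as-"->" point concrete, here is a rough hand-rolled sketch of dictionary passing. The names ShowDict, SomeShowD and describe are invented for illustration; GHC's actual Core representation differs in detail:)

  {-# LANGUAGE GADTs #-}

  -- A hand-written stand-in for the dictionary GHC passes for a Show constraint.
  data ShowDict a = ShowDict { showIt :: a -> String }

  -- A function of type `Show a => a -> String` is compiled roughly as an
  -- ordinary function taking the dictionary as an extra argument:
  showWithDict :: ShowDict a -> a -> String
  showWithDict dict x = showIt dict x

  -- An existential like SomeShow packs the value together with its dictionary,
  -- which is why consuming it looks like dynamic dispatch:
  data SomeShowD where
    SomeShowD :: ShowDict a -> a -> SomeShowD

  describe :: SomeShowD -> String
  describe (SomeShowD dict x) = showIt dict x

  -- Usage: describe (SomeShowD (ShowDict show) (42 :: Int))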
> So I was wondering where I can get a quick succinct read (or a summary at least) about how these extensions are implemented, as well as their performance cost (or gain!).
> Also if someone knows other tutorials and lessons for these, and other extensions, please feel free to share them :p
As a heuristic, I would hand-wavily claim that GHC typically tries its hardest to avoid type-level abstractions introducing value-level performance overhead, but it also doesn't try especially hard to use type-level abstractions to inform the optimizer. I think it's unlikely that turning on fancy type-system features will make your code go *faster*, but as long as you're careful it probably also won't go slower.

My guess is that this stuff changes so fast that the only practical place to write this stuff down is the GHC source code.

Cheers,
Will

On Mon, Apr 8, 2019 at 4:14 PM William Yager <will.yager@gmail.com> wrote:
> [...] As a heuristic, I would hand-wavily claim that GHC typically tries its hardest to avoid type-level abstractions introducing value-level performance overhead, but it also doesn't try especially hard to use type-level abstractions to inform the optimizer. I think it's unlikely that turning on fancy type-system features will make your code go *faster*, but as long as you're careful it probably also won't go slower. [...]
This is basically my view on all the extensions, too. In my POV, the primary goal of all those fancy typing extensions is to make the set of programs which can statically be proven "to not go wrong" larger, and has nothing to do with performance per se.

There are of course extensions which can be used to improve performance, like the relatively recent stuff which allows you to coerce different things in a safe way, but AFAIK you have to give GHC a helping hand here by stating your intentions.

All this typing magic is somehow the opposite of what a JIT does. An optimizing JIT basically says: "Hey, from what I've seen so far by executing your program, I can do the FOOBAR optimization. Of course this is wrong in the general case, but let's nevertheless use FOOBAR until I'm proven wrong by further executing your program. Then I can undo FOOBAR, anyway..."

In other words: a traditional optimizing compiler statically proves things, an optimizing JIT dynamically assumes things. What is better for performance very much depends on what you're doing... </HandWaving> ;-)
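
(The safe-coercion machinery meant here is presumably Data.Coerce / the Coercible class; a minimal sketch, with Age and agesToInts as made-up names:)

  import Data.Coerce (coerce)

  newtype Age = Age Int

  -- A plain map would rebuild the whole list; coerce reuses it as-is,
  -- because Age and Int share a runtime representation. The "helping
  -- hand" is that you have to ask for the coercion explicitly:
  agesToInts :: [Age] -> [Int]
  agesToInts = coerce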
participants (4)

- Michel Haber
- Sven Panne
- Vanessa McHale
- William Yager