
I have read that using overloaded functions is expensive because
overloading is implemented by passing dictionaries around. These
various sources seem to agree that this is necessary to allow separate
compilation. I don't understand why this is... Couldn't the functions
be specialized at link time?

David Feuer
David_Feuer@brown.edu
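
(For concreteness, the "dictionary translation" the question refers to
can be sketched directly in Haskell: a class constraint becomes an
ordinary extra argument carrying a record of the class's methods. The
EqDict type and the member function below are invented for the
illustration; actual compilers use their own internal representations.)

  -- A dictionary for the Eq class: just a record of its methods.
  data EqDict a = EqDict { eq :: a -> a -> Bool }

  -- The dictionary a compiler would build for the instance Eq Int.
  eqDictInt :: EqDict Int
  eqDictInt = EqDict { eq = (==) }

  -- Overloaded source code:  member :: Eq a => a -> [a] -> Bool
  -- After translation, the constraint is an ordinary argument:
  member :: EqDict a -> a -> [a] -> Bool
  member d x ys = any (eq d x) ys

  -- A call site supplies the dictionary explicitly:
  --   member eqDictInt 3 [1, 2, 3]   evaluates to True

The cost the question alludes to is visible here: every call to member
passes and consults a dictionary at run time, where a version
specialized to Int could simply call (==) on machine integers.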

Hi David,

| I have read that using overloaded functions is expensive because
| overloading is implemented by passing dictionaries around. These
| various sources seem to agree that this is necessary to allow separate
| compilation. I don't understand why this is... Couldn't the functions
| be specialized at link time?

Not in general, because there are some examples that would require
specialized versions of a single overloaded function at a countably
infinite set of types. The standard example would be something like
the following:

  f :: Eq a => [a] -> Bool   -- NB Polymorphic recursion!!!
  f xs = (xs == []) && f [xs]

Starting with a call f [1::Int], you'd need versions of f for each of
the types a in { Int, [Int], [[Int]], [[[Int]]], ... }.

Even in cases where only finitely many specializations are needed,
you're not really going to get the full benefit of generating all
those extra functions at link time unless you follow that with a
fairly aggressive optimization phase. And at that point, you're doing
whole-program analysis rather than separate compilation.

These points mean that it's hard to do a good job, not that it's
impossible. It's not too hard to imagine an analysis that would allow
most of the opportunities for (finite) specialization to be detected
statically, coupled with a backend that allowed dictionaries and
specialized code to be mixed. During development, you'd turn off the
analysis and post-link optimizer and use dictionaries throughout. For
the final product, you'd use the analysis to detect as many
opportunities as possible for removing dictionaries so that, ideally,
only very odd code like the definition of "f" above would still
require dictionary translation. Then you'd run the optimizer, let it
crunch away on the whole program (perhaps for a long time :-) and get
more efficient, mostly dictionary-free code out as a result.

You didn't mention what you've been reading, but for more background
you might want to look at Lennart Augustsson's FPCA 93 paper on the
implementation of type classes and at my PEPM 94 paper on
dictionary-free overloading using partial evaluation.

Hope this helps!

All the best,
Mark
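
(To see why one compiled function suffices where link-time
specialization would need infinitely many, here is a sketch of what a
dictionary translation of Mark's "f" might look like, reusing the
invented EqDict record from the earlier sketch; real compilers differ
in the details.)

  data EqDict a = EqDict { eq :: a -> a -> Bool }

  -- The instance "Eq a => Eq [a]" becomes a dictionary constructor:
  -- given equality on elements, it builds equality on lists.
  eqDictList :: EqDict a -> EqDict [a]
  eqDictList d = EqDict { eq = \xs ys ->
      length xs == length ys && and (zipWith (eq d) xs ys) }

  -- f :: Eq a => [a] -> Bool, after dictionary translation.
  -- The recursive call constructs the dictionary for the next type
  -- [a] at run time, so this one body serves Int, [Int], [[Int]], ...
  f :: EqDict a -> [a] -> Bool
  f d xs = eq (eqDictList d) xs [] && f (eqDictList d) [xs]

The run-time dictionary construction in the recursive call is exactly
what no finite set of precompiled specializations can replace. For the
finite cases, GHC today follows roughly the hybrid approach Mark
describes: the SPECIALIZE pragma and the optimizer produce
dictionary-free copies of a function at particular types, while code
like "f" falls back to passing dictionaries.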