
| >> Do any of you have insight into why GHC uses GMP as opposed to
| >> another library for arbitrary precision numbers?
| > ...
|
| Right - that's three reasons to use it. Some reasons *not* to use it
| are: it has an awkward license, it's big, it needs updating, and we run
| into problems when the Haskell program wants to use GMP itself via the
| FFI (it's possible by essentially renaming everything in our local copy
| of GMP so it doesn't conflict, but we haven't done that).

In fact, we have long wanted to replace GMP with another library for
exactly these reasons. It's a nice, well-specified, self-contained
project, which is just waiting for someone to step up and do it. Of
course, we'd only want to replace GMP if the alternative was also fast
and reliable.

If anyone is interested in tackling this, let us know!

Simon

On Tue, Jun 21, 2005 at 03:48:44PM +0100, Simon Peyton-Jones wrote:
> | >> Do any of you have insight into why GHC uses GMP as opposed to
> | >> another library for arbitrary precision numbers?
> | > ...
> |
> | Right - that's three reasons to use it. Some reasons *not* to use it
> | are: it has an awkward license, it's big, it needs updating, and we
> | run into problems when the Haskell program wants to use GMP itself
> | via the FFI (it's possible by essentially renaming everything in our
> | local copy of GMP so it doesn't conflict, but we haven't done that).
>
> In fact, we have long wanted to replace GMP with another library for
> exactly these reasons. It's a nice, well-specified, self-contained
> project, which is just waiting for someone to step up and do it. Of
> course, we'd only want to replace GMP if the alternative was also
> fast and reliable.
>
> If anyone is interested in tackling this, let us know!
I wonder if it would be feasible to implement arbitrary-precision
integers in pure Haskell. Unboxed values would probably be needed in
some places for speed, and it would be very motivating to improve GHC's
optimizer. There should be no reason manually unboxed Haskell code
should compile slower than C.

John

--
John Meacham - ⑆repetae.net⑆john⑈
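
A minimal sketch of the pure-Haskell bignum John describes might look
like the following. Everything here is hypothetical: Nat, addNat,
natToInteger and the base-2^32 limb layout are illustrative choices,
not code from GHC or jhc.

import Data.Bits (shiftR)
import Data.Word (Word32, Word64)

-- A natural number as a little-endian list of base-2^32 limbs.
newtype Nat = Nat [Word32]

-- Schoolbook addition: walk both limb lists, propagating a carry
-- that always fits in a Word64.
addNat :: Nat -> Nat -> Nat
addNat (Nat xs) (Nat ys) = Nat (go 0 xs ys)
  where
    go 0 as []     = as
    go 0 [] bs     = bs
    go c as []     = go c as [0]
    go c [] bs     = go c [0] bs
    go c (a:as) (b:bs) =
      let s = fromIntegral a + fromIntegral b + c :: Word64
      in  fromIntegral s : go (s `shiftR` 32) as bs

-- Reference semantics in terms of GHC's own Integer, for testing.
natToInteger :: Nat -> Integer
natToInteger (Nat ls) =
  foldr (\l acc -> fromIntegral l + acc * (2 ^ (32 :: Int))) 0 ls

A production version would replace the boxed [Word32] with unboxed
limbs, which is where John's point about the optimizer comes in: GHC
would have to be taught to compile such code as well as C compiles the
equivalent loop.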

John Meacham writes:
> I wonder if it would be feasible to implement arbitrary-precision
> integers in pure Haskell. Unboxed values would probably be needed in
> some places for speed, and it would be very motivating to improve
> GHC's optimizer. There should be no reason manually unboxed Haskell
> code should compile slower than C.
Of course, bignums (integers) can be quite nicely implemented in
Haskell; this is, by the way, a known student exercise. The
implementation is nice, elegant, etc., but the efficiency... The issue
is not only having unboxed chunks, but also the global policy of memory
allocation: putting number chunks in lazy lists involves a substantial
overhead. I am not sure how slow it would be, since our pedagogic games
didn't care about that at all, but we could try to benchmark it one
day...

Jerzy Karczmarczuk
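
Jerzy's "benchmark it one day" needs nothing beyond the standard
System.CPUTime. A crude harness, assuming the hypothetical Nat and
addNat sketched above are in scope:

import Data.Word (Word32)
import System.CPUTime (getCPUTime)

-- Force every limb of a result; sum touches each element.
forceNat :: Nat -> Word32
forceNat (Nat ls) = sum ls

-- Print the CPU seconds it takes to evaluate a value.
timeIt :: String -> a -> IO ()
timeIt label x = do
  t0 <- getCPUTime
  x `seq` return ()
  t1 <- getCPUTime                      -- getCPUTime is in picoseconds
  putStrLn (label ++ ": "
            ++ show (fromIntegral (t1 - t0) / 1e12 :: Double) ++ " s")

main :: IO ()
main = do
  let big = Nat (replicate 100000 maxBound)   -- roughly 3.2 million bits
  timeIt "list-based addNat" (forceNat (addNat big big))

Comparing that against the same addition on GHC's native Integer would
put a first number on the list overhead Jerzy mentions.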

On Wed, Jun 22, 2005 at 12:47:02AM +0200, karczma@info.unicaen.fr wrote:
> John Meacham writes:
>
> > I wonder if it would be feasible to implement arbitrary-precision
> > integers in pure Haskell. Unboxed values would probably be needed in
> > some places for speed, and it would be very motivating to improve
> > GHC's optimizer. There should be no reason manually unboxed Haskell
> > code should compile slower than C.
>
> Of course, bignums (integers) can be quite nicely implemented in
> Haskell; this is, by the way, a known student exercise. The
> implementation is nice, elegant, etc., but the efficiency... The issue
> is not only having unboxed chunks, but also the global policy of
> memory allocation: putting number chunks in lazy lists involves a
> substantial overhead. I am not sure how slow it would be, since our
> pedagogic games didn't care about that at all, but we could try to
> benchmark it one day...
I should mention I have an ulterior motive for encouraging this: jhc
currently has no bignum support (its Integer is the same size as the
native intmax_t). I'd like to support bignums by implementing them in
Haskell directly, and then improving jhc to the point where they run
fast enough. If the work can be shared with GHC, then so much the
better.

John

--
John Meacham - ⑆repetae.net⑆john⑈

I believe that native support for bignums is a rather interesting and
good idea for Haskell and for other functional languages. Developing
native support provides the opportunity to implement many
optimizations for a specific language (without, of course, sacrificing
the usefulness of the design for other languages). In addition, bignum
arithmetic has been well studied and the implementation can result in
fast and efficient bignum operators.
We are currently implementing bignums for a pure subset of Scheme
(which we call MT-Scheme) using different representations (e.g. lists
and arrays), with interesting results. We will, hopefully, present some
results at TFP 2005 demonstrating that impressive reductions in memory
allocation can be achieved with native support. As Simon P-J suggested,
we are looking into a tree representation to simplify
divide-and-conquer algorithms. This last line of work is just in its
preliminary stage. As they say, one step at a time.
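
The tree representation Marco alludes to might be sketched in Haskell
roughly as follows (Tree and value are hypothetical names, not code
from MT-Scheme): a node records where the number was split, so a
divide-and-conquer algorithm finds its two halves with a single pattern
match instead of a scan.

import Data.Word (Word32)

-- A bignum as a binary tree of base-2^32 limbs.  In "Node n lo hi",
-- lo holds the n least significant limbs, so the represented value
-- is: value lo + value hi * 2^(32*n).
data Tree = Leaf Word32
          | Node Int Tree Tree

-- Reference semantics in terms of GHC's Integer.
value :: Tree -> Integer
value (Leaf w)       = fromIntegral w
value (Node n lo hi) = value lo + value hi * 2 ^ (32 * n)

Keeping the trees balanced makes the recursive sub-products of a
Karatsuba-style multiplication line up directly with the node
structure.
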
Best wishes,
Marco
On 6/21/05, John Meacham wrote:
> I should mention I have an ulterior motive for encouraging this: jhc
> currently has no bignum support (its Integer is the same size as the
> native intmax_t). I'd like to support bignums by implementing them in
> Haskell directly, and then improving jhc to the point where they run
> fast enough. If the work can be shared with GHC, then so much the
> better.
>
> John
> --
> John Meacham - ⑆repetae.net⑆john⑈

This is a rather interesting question. Most "efficient" implementations
use mutable, array-based representations for bignums. The use of
mutable arrays appears justified because divide-and-conquer
multiplication and division algorithms (e.g. Karatsuba) can perform
these operations in place. The problem, however, is not that
straightforward: array-based implementations require copying bigits
into the array, while a list-based implementation simply points to the
rest of the bignum, avoiding any copying.
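
A tiny sketch of the sharing Marco is pointing at, using the
hypothetical little-endian limb list from earlier in the thread: the
moment the carry dies, the rest of the input list is returned
unchanged, so the result physically shares its tail with the input,
whereas an array version must copy every bigit regardless.

import Data.Bits (shiftR)
import Data.Word (Word32, Word64)

-- Add a small carry (assumed < 2^32) into a little-endian limb list.
addCarry :: Word64 -> [Word32] -> [Word32]
addCarry 0 ls = ls                     -- carry is dead: share the tail
addCarry c [] = [fromIntegral c]       -- carry spills into a new limb
addCarry c (l:ls) =
  let s = fromIntegral l + c :: Word64
  in  fromIntegral s : addCarry (s `shiftR` 32) ls

For example, addCarry 1 typically rebuilds only the first cons cell and
shares everything beyond the first limb that does not overflow.
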
If bignums are immutable structures (as in a true functional data
structure), then we should not be surprised if list-based
representations are as good as or better than array-based
implementations. In this case, arrays are forced to copy bigits to new
memory locations, while lists can continue to point to pieces of a
bignum. To avoid excessive memory allocation for pure bignums,
real-time garbage collection on bignum operations can be used.
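
To make the divide-and-conquer shape concrete, here is the Karatsuba
recurrence written over plain non-negative Integers. This is a
pedagogical sketch only; a real bignum library would run the same
recurrence over its limb lists or trees rather than over Integer.

-- Karatsuba: three half-size products instead of four.
-- bits is a bound on the bit length of both (non-negative) operands.
karatsuba :: Int -> Integer -> Integer -> Integer
karatsuba bits x y
  | bits <= 32 = x * y                  -- small enough: machine multiply
  | otherwise  =
      let h        = bits `div` 2
          (x1, x0) = x `divMod` (2 ^ h) -- x = x1 * 2^h + x0
          (y1, y0) = y `divMod` (2 ^ h)
          z2 = karatsuba (bits - h) x1 y1
          z0 = karatsuba h x0 y0
          z1 = karatsuba (bits - h + 1) (x1 + x0) (y1 + y0) - z2 - z0
      in  z2 * 2 ^ (2 * h) + z1 * 2 ^ h + z0

The z1 line is the whole trick: one multiplication is traded for a few
additions and subtractions, which is what brings the cost down from
O(n^2) to roughly O(n^1.585).
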
It is certainly a rich and interesting line of research. It also
appears to me that having native support for bignums is desirable; it
would make using bignum libraries seamless.
Best wishes,
Marco
On 6/21/05, karczma@info.unicaen.fr wrote:
> John Meacham writes:
>
> > I wonder if it would be feasible to implement arbitrary-precision
> > integers in pure Haskell. Unboxed values would probably be needed in
> > some places for speed, and it would be very motivating to improve
> > GHC's optimizer. There should be no reason manually unboxed Haskell
> > code should compile slower than C.
>
> Of course, bignums (integers) can be quite nicely implemented in
> Haskell; this is, by the way, a known student exercise. The
> implementation is nice, elegant, etc., but the efficiency... The issue
> is not only having unboxed chunks, but also the global policy of
> memory allocation: putting number chunks in lazy lists involves a
> substantial overhead. I am not sure how slow it would be, since our
> pedagogic games didn't care about that at all, but we could try to
> benchmark it one day...
>
> Jerzy Karczmarczuk
participants (4):
- John Meacham
- karczma@info.unicaen.fr
- Marco Morazan
- Simon Peyton-Jones