
Hi, I've been running Hugs for a couple of weeks now and I'm slowly starting to get the Haskell idea, and I certainly like it! But there are functions that I can't find, and that I assume others before me must have missed and then perhaps also implemented. Is there any standard library with functions like: binomial, isCatalan, nthCatalan, nextCatalan, isPrime, nthPrime, nextPrime, factorize, isFibonacci, nthFibonacci, nextFibonacci, and possibly/hopefully other essential functions as well? (And of course better naming...) /Bo
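(For concreteness, here is a minimal sketch of what a few of the requested functions might look like, using deliberately naive definitions; these are illustrative only, and not taken from any existing library:)

binomial :: Integer -> Integer -> Integer
binomial n k = product [n-k+1 .. n] `div` product [1 .. k]

-- trial division up to the square root; fine for small numbers
isPrime :: Integer -> Bool
isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. isqrt n]
  where isqrt m = floor (sqrt (fromInteger m :: Double))

nextPrime :: Integer -> Integer
nextPrime n = head (filter isPrime [n+1 ..])

-- the infinite Fibonacci sequence, defined corecursively
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

nthFibonacci :: Int -> Integer
nthFibonacci n = fibs !! n

isFibonacci :: Integer -> Bool
isFibonacci n = n == head (dropWhile (< n) fibs)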

The list found at http://haskell.org/libraries/#numerics might be a good starting point for finding what you need. I can recommend the "DoCon" library, which is pretty sophisticated. Another good choice might be the crypto library available at: http://www.haskell.org/crypto/ It also includes several number theory modules and is arguably somewhat simpler to use than DoCon is. Peter

Peter Simons wrote:
The list found at
http://haskell.org/libraries/#numerics
might be a good starting point for finding what you need. I can recommend the "DoCon" library, which is pretty sophisticated.
Yes, very sophisticated, and once I have the time to dig deep I'll look into it...
Another good choice might be the crypto library available at:
http://www.haskell.org/crypto/
It also includes several number theory modules and is arguably somewhat simpler to use than DoCon is.
Not much here. It seems like an easy-to-use standard library is missing?

G'day all.
Quoting Bo Herlin
But there are functions that I can't find, and that I assume others before me must have missed and then perhaps also implemented. Is there any standard library with functions like:
binomial isCatalan nthCatalan nextCatalan isPrime nthPrime nextPrime factorize isFibonacci nthFibonacci nextFibonacci
You might want to check out the archives of the mailing list, too. These sorts of problems occasionally get solved. For the record, here are a few of my attempts:

http://andrew.bromage.org/Fib.hs (fairly fast Fibonacci numbers)
http://andrew.bromage.org/WheelPrime.hs (fast factorisation)
http://andrew.bromage.org/Prime.hs (AKS primality testing)

Cheers, Andrew Bromage

You might want to check out the archives of the mailing list, too. These sorts of problems occasionally get solved. For the record, here are a few of my attempts:
http://andrew.bromage.org/Fib.hs (fairly fast Fibonacci numbers)
http://andrew.bromage.org/WheelPrime.hs (fast factorisation)
http://andrew.bromage.org/Prime.hs (AKS primality testing)
Great, why not put these together as a first attempt at a standard library? Or does the road to creating standard libraries start somewhere else? I actually don't know.

G'day all.
Quoting Bo Herlin
Great, why not put these together as a first attempt at a standard library?
As promised, here's the first attempt:

darcs get http://andrew.bromage.org/darcs/numbertheory/

Cheers, Andrew Bromage

On May 9, 2005, at 8:34 PM, ajb@spamcop.net wrote:
G'day all.
Quoting Bo Herlin
Great, why not put these together as a first attempt at a standard library?
As promised, here's the first attempt:
How about one that's actually H98? The types here aren't *that* fiddly... :-) -Jan-Willem Maessen

G'day all.
Quoting Jan-Willem Maessen
How about one that's actually H98? The types here aren't *that* fiddly... :-)
Well, part of what I was doing was experimenting with what a library like this should look like, even more than what it should do. For some reason, I kind of like writing this:

*Math.Prime> is Prime 42
False

instead of this:

*Math.Prime> isPrime 42
False

Possibly unnecessary, but it appeals to my inner generic programmer. If you want to refactor the library so that it's a glasgow-exts interface on top of an H98 core, by all means do so. That's what darcs is for. :-) Cheers, Andrew Bromage
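(Andrew's repository isn't reproduced in this thread, but the style he shows can be realized with a multi-parameter type class roughly as in the sketch below. The Property class name and the naive trial-division instance are illustrative assumptions, not his actual code:)

{-# LANGUAGE MultiParamTypeClasses #-}

-- A type whose only purpose is to name the property at the value level.
data Prime = Prime

-- One shared verb 'is' for all properties.
class Property p a where
    is :: p -> a -> Bool

-- Naive trial division, purely for illustration.
instance Property Prime Integer where
    is Prime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1]

-- *Main> is Prime (42 :: Integer)
-- False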

Well, part of what I was doing was experimenting with what a library like this should look like, even more than what it should do. For some reason, I kind of like writing this:
*Math.Prime> is Prime 42
False
instead of this:
*Math.Prime> isPrime 42
False
Great! I like this a LOT. I'm working on a framework for ranking and unranking things, where primes are just a tiny part:
data DCountable = Countable Integer | Uncountable deriving (Eq,Show)
class CRankable a b where
    rank   :: a -> b -> Maybe Integer
    unrank :: a -> Integer -> Maybe b
    count  :: a -> b -> DCountable
So using the instance for Prime on Integer looks like:

Cafe> unrank Prime 23 :: Maybe Integer
Just 89
Cafe> rank Prime (89::Integer)
Just 23

I also have instances for trees and lists and other things, with defined parameters like length, or not. Using my framework they are made quite simple. RankIL is for IntegerList with any length (the Bool is whether the integers start at 1 or not; it might look irrelevant here):

Cafe> map (unrank (RankIL True)) [0..10] :: [Maybe [Integer]]
[Just [1],Just [2],Just [1,1],Just [3],Just [1,2],Just [2,1],Just [1,1,1],Just [4],Just [1,3],Just [2,2],Just [3,1]]

RankIT is for IntegerTree:

Cafe> map (unrank RankIT) [0..3] :: [Maybe (Tree Integer)]
[Just Node {rootLabel = 1, subForest = []},Just Node {rootLabel = 1, subForest = [Node {rootLabel = 1, subForest = []}]}...

...and so on. It's not very pretty yet since I've only been a Haskeller for some weeks now, and I don't know about darcs (I run CVS, as I have the book), but maybe my framework could be included once a Haskell guru brushes it up a bit?
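(Bo's full framework isn't shown in the thread; a toy instance makes the contract of the class clearer. The Evens type below is invented for illustration and is not part of his code:)

{-# LANGUAGE MultiParamTypeClasses #-}

data DCountable = Countable Integer | Uncountable deriving (Eq,Show)

class CRankable a b where
    rank   :: a -> b -> Maybe Integer
    unrank :: a -> Integer -> Maybe b
    count  :: a -> b -> DCountable

-- A toy instance: the non-negative even integers in increasing order,
-- so 0 has rank 0, 2 has rank 1, 4 has rank 2, and so on.
data Evens = Evens

instance CRankable Evens Integer where
    rank Evens n
        | n >= 0 && even n = Just (n `div` 2)
        | otherwise        = Nothing
    unrank Evens i
        | i >= 0    = Just (2 * i)
        | otherwise = Nothing
    count Evens _ = Uncountable   -- the sequence is infinite

-- Cafe> rank Evens (10 :: Integer)       gives Just 5
-- Cafe> unrank Evens 5 :: Maybe Integer  gives Just 10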

G'day all.
Quoting Bo Herlin
Great! I like this a LOT.
Thanks!
Im working on a framework for ranking and unranking things where primes are just a tiny part:
data DCountable = Countable Integer | Uncountable deriving (Eq,Show)
class CRankable a b where
    rank   :: a -> b -> Maybe Integer
    unrank :: a -> Integer -> Maybe b
    count  :: a -> b -> DCountable
That's very nice stuff. I haven't implemented primes-as-sequences yet because I couldn't be bothered finding an efficient algorithm. Looks like you might have already done the leg work. :-)
I also have instances for trees and lists and other things, with defined parameters like length, or not. Using my framework they are made quite simple.
Very nice.
It's not very pretty yet since I've only been a Haskeller for some weeks now, and I don't know about darcs (I run CVS, as I have the book), but maybe my framework could be included once a Haskell guru brushes it up a bit?
I don't know if I count as a guru or not, but by all means. Want to discuss this on-list or off-list? Cheers, Andrew Bromage

Hi,

ajb@spamcop.net wrote:
G'day all.
Quoting Bo Herlin
Great! I like this a LOT. ...
I don't know if I count as a guru or not, but by all means. Want to discuss this on-list or off-list?
The code is kinda big already, 700 lines plus a util file, so I'll put it online at http://www.gcab.net/haskell-cafe-table so that we can discuss it on-list. /Bo

Hi, I have put some comments and examples in the code now, and will continue to do so as soon as I get time for it. Please look at the code, and tell me if it is worth making into a part of an official library. Or, even better, whether someone has already done this, and in a better way. You can find the code at http://www.gcab.net/haskell-cafe-table /Bo

On May 10, 2005, at 4:14 AM, Bo Herlin wrote:
Well, part of what I was doing was experimenting with what a library like this should look like, even more than what it should do. For some reason, I kind of like writing this:
*Math.Prime> is Prime 42
False
instead of this:
*Math.Prime> isPrime 42
False
Great! I like this a LOT.
Why not use a function? What's wrong with a function? There's no need to go leaping for a multiparameter type class with a functional dependency! Just use a function. [With apologies to John Cleese]

In all seriousness, I loathe this style. Let me tell you why:

* We are introducing types whose only purpose is to stand for a single function.
* Declaring a property used to take one line---a function declaration. Now it takes a minimum of three:
  - a bogus type declaration for Prime
  - an instance declaration
  - the actual declaration for "is"
* The use of multiparameter type classes leads to relatively more obscure error messages and clunky type signatures with bogus type parameters.
* It's not Haskell 98, and an acceptable (or even preferable) solution exists which is.
* There's no equivalent of lambda here---we can't write anonymous properties. I list this last, as some may consider it a disadvantage.

When using fancy type-system footwork, I suggest the following questions (which I ask myself, too):

* Can I replace one or more parameters to my methods with a suitably-typed call to "undefined"? This often means I'm being too clever for my own good. Perhaps that argument could be, e.g., a phantom parameter of another type.
* What would happen if I used a record instead of a class declaration? The result might look pretty similar on the page, but it might be H98. If the class has 0 or 1 methods, you don't even need a record---you can pass things directly.
* Could I use a newtype rather than a separate type parameter? Sometimes this is acceptable, sometimes it's better, and sometimes (in the case of the NumberTheory library, I suspect) it's not what you actually want.
* How much worse would it look if I just used H98? Sometimes it won't be possible at all. Sometimes it just requires a change of perspective to make it look very pretty. And often a bunch of fancy footwork at the type level leads to a solution that I didn't anticipate---and which fits nicely in H98.

I also have an informal mental hierarchy of complexity of types---though others may find this more controversial:

* Simple algebraic types
* H98 type classes
* Fancier instance heads (but still single-parameter, non-overlapping)
* Existential and local universal quantification
* GADTs (I'm still uncertain about this judgement, but they seem to be less troublesome than...)
* Multiparameter type classes

-Jan-Willem Maessen
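(For concreteness, here is a sketch of the alternative being advocated, on the assumption that a property is just a function and a bundle of operations just a record; the Sequence record and its field names are invented for illustration:)

-- A property is just a function:
isPrime :: Integer -> Bool
isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1]

-- A bundle of related operations is just a record; this is plain H98,
-- the record can be passed to generic code like any other value, and
-- lambdas give "anonymous properties" for free.
data Sequence a = Sequence
    { member :: a -> Bool
    , nth    :: Integer -> a
    }

primeSeq :: Sequence Integer
primeSeq = Sequence
    { member = isPrime
    , nth    = \i -> filter isPrime [2 ..] !! fromInteger i
    }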

Thank you for saying what I was too lazy to say myself. :) -- Lennart

Jan-Willem Maessen wrote:
On May 10, 2005, at 4:14 AM, Bo Herlin wrote:
Well, part of what I was doing was experimenting with what a library like this should look like, even more than what it should do. For some reason, I kind of like writing this:
*Math.Prime> is Prime 42
False
instead of this:
*Math.Prime> isPrime 42
False
Great! I like this a LOT.
Why not use a function?
What's wrong with a function?
There's no need to go leaping for a multiparameter type class with a functional dependency! Just use a function.
[With apologies to John Cleese]

G'day all.
Quoting Jan-Willem Maessen
Why not use a function?
What's wrong with a function?
There's no need to go leaping for a multiparameter type class with a functional dependency! Just use a function.
[With apologies to John Cleese]
A reasonable question, and one that deserves a better answer than "I prefer it". Some background: I've been reading a lot of category theory stuff lately. I just finished reading "Generative Programming". (Well, reading half of it, skimming the rest. Anyone who has read the book knows exactly what I mean.) I was probably unduly influenced by both while writing the library.
* We are introducing types whose only purpose is to stand for a single function.
You mean like the Eq typeclass? (Technically, Eq has two functions in it, but one can be trivially obtained from the other; indeed, (/=) could just as easily have been defined as an INLINEd function outside the typeclass, with practically no difference.)
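(To make the comparison concrete: (/=) could equally well be an ordinary INLINEd function outside the class, as in this sketch, which uses an invented name to avoid clashing with the real method:)

notEq :: Eq a => a -> a -> Bool
{-# INLINE notEq #-}
notEq x y = not (x == y)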
* Declaring a property used to take one line---a function declaration. Now it takes a minimum of three:
  - a bogus type declaration for Prime
  - an instance declaration
  - the actual declaration for "is"
On the other hand, under my scheme, passing a property to a function requires one modification (to the type signature) which isn't even mandatory (because type signatures usually aren't mandatory). Under the traditional scheme, you have to thread the property through code, passing it as a parameter. More on this later.
* The use of multiparameter type classes leads to relatively more obscure error messages and clunky type signatures with bogus type parameters.
A reasonable objection.
* It's not Haskell 98,
Another reasonable objection.
and an acceptable (or even preferable) solution exists which is.
If your entire purpose is to expose functionality, then you're right. Simply exposing a single function is acceptable. However, you don't learn anything that way. My purpose is a bit grander. I'm taking the opportunity to try to understand what the Haskell libraries of the future might look like, starting with a simple example. See below for details.
* There's no equivalent of lambda here---we can't write anonymous properties. I list this last, as some may consider it a disadvantage.
Eq has the same "disadvantage".
When using fancy type-system footwork, I suggest the following questions (which I ask myself, too):
* Can I replace one or more parameters to my methods with a suitably-typed call to "undefined"?
To which the answer, in this case, is: For this example, yes. But not always...
* What would happen if I used a record instead of a class declaration? The result might look pretty similar on the page, but it might be H98.
When producing libraries that are designed for generic programming, then more often than not, the "record" solution is at best H98 + higher ranked types, which isn't necessarily an improvement over H98 + MPTCs + fundeps.
If the class has 0 or 1 method, you don't even need a record---you can pass things directly.
Again, this applies to Eq. There is a reason why we have typeclasses in Haskell. Well, several reasons actually. There's the historical reason (it made ad hoc overloading less ad hoc, as one paper put it), and the collection of mini-reasons developed in retrospect. Taking Eq as the example:

1. "Equality testing" is a pattern that keeps turning up in real code. The category theory philosophy (as well as the "design pattern" philosophy) is that if a pattern keeps turning up, it's interesting in its own right. In particular, it makes sense to give it a name so you can talk about it independently of any specific objects that exhibit the pattern. "Eq" is not merely a function of type a -> a -> Bool. It's a concept with semantics. It must be an equivalence relation, and it also must mean semantic equality. Functions that respect semantic equality have the property that x == y implies f x == f y. (No, none of these restrictions are in the language report. But I'd be annoyed if someone defined a version of Eq that didn't have these properties.)

2. The generic programming philosophy is that generic algorithms should only require precisely those concepts that they need in order to do their job. So, for example, isPrefixOf :: (Eq a) => [a] -> [a] -> Bool doesn't need to know that a is Hashable. This goes double for generic algorithms which need to know about more than one concept (e.g. Eq and Show).

Now let's consider these two classes, one from the number theory library, and one inspired by Bo Herlin's code:

-- |Class for sequences where the position of an element can be
-- identified.
class RankableSequence seq a | seq -> a where
    -- |Find the rank of an element.
    rank :: seq -> a -> Maybe Integer

-- |Class for descending numeric sequences.
class DescendingSequence seq a | seq -> a where
    -- |Enumerate the sequence from the nth element down.
    enumDown :: seq -> Integer -> [a]

RankableSequence and DescendingSequence are related but essentially orthogonal properties. There are some types of sequence where it's very hard to work out where an element falls in the sequence. For example, finding out that 65537 is a prime is straightforward. Finding out that it's the 6543rd prime is difficult by comparison, and it only gets harder the larger the primes get. So the "rank" function only makes sense for some kinds of sequence. Similarly, there are some types of sequence where it's harder to produce the elements in descending order than in ascending order. (Primes are another example.)

I can envisage semi-generic functions which want these properties together. It makes sense to locate an element in the sequence, and then descend from it (see the sketch below). If I represented the properties as single functions, I'd have to pass them both explicitly. It gets worse if I have more than two interesting properties, or properties that depend on each other (e.g. Ord depends on Eq, so as soon as I discover that I need Ord, I don't have to pass Eq explicitly any more). Using the typeclass scheme, all you have to do is give the "sequence" a name and pass only that. The type system takes care of everything else.

I think part of the problem here is that we have different goals. You're thinking that you might need to write some code which tests whether numbers are prime some day, and you'd like to use a library such as mine, but it looks like overkill. And if that's all you want to do, you're right.
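(A minimal sketch of such a semi-generic function, given the two class declarations above; descendFrom is an invented name for illustration, not part of the library, and it needs MultiParamTypeClasses and FunctionalDependencies:)

-- Locate an element in the sequence, then enumerate downwards from it.
-- Returns Nothing when the element is not in the sequence.
descendFrom :: (RankableSequence seq a, DescendingSequence seq a)
            => seq -> a -> Maybe [a]
descendFrom s x = fmap (enumDown s) (rank s x)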
I'm looking at a broader picture (though not the broadest possible), at generic algorithms which can work on any type of sequence and for any type of property. The objections sound to me like someone saying: "What would you need a state monad transformer for? All you need to do is thread an extra piece of data through your code!" In some sense, they would be right. That really is all you _need_ to do. And it's often the right thing to do, if all you need to do is get a certain job done. But would you be without monad transformers now that you know their advantages? What are the advantages of doing it this way? I don't know. Maybe they are significant, maybe they are nonexistent. Either way, it's going to be interesting to find out. Cheers, Andrew Bromage

Wonderful post! I just want to throw a minor wrench in: ajb@spamcop.net wrote:
"Eq" is not merely a function of type a -> a -> Bool. It's a concept with semantics. It must be an equivalence relation, and it also must mean semantic equality. Functions that respect semantic equality have the property that x == y implies f x == f y.
(No, none of these restrictions are in the language report. But I'd be annoyed if someone defined a version of Eq that didn't have these properties.)
From what you argue above, and reading the IEEE 754 standard correctly, instance Eq Float and instance Eq Double should be *removed*! That is because +0. == -0. is true (according to IEEE 754), but it is legitimate to have functions f where f +0. /= f -0. [this mostly shows up for complex rather than real functions, but it is still in IEEE 754]. The easiest interpretation is that == as defined in IEEE 754 is not in fact a semantic equality predicate. Very messy. But having a good language (Haskell) that does not respect established standards (IEEE 754) when it claims to implement 'floating point' computations is a real drawback. Jacques
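(Haskell makes this easy to check; each of the following follows from IEEE 754 behaviour as exposed through Double and the RealFloat class in the Prelude:)

-- +0 and -0 compare equal, as IEEE 754 requires...
zeroesEqual :: Bool
zeroesEqual = (0.0 :: Double) == (-0.0)             -- True

-- ...yet perfectly ordinary functions can tell them apart:
distinguishable :: (Bool, Bool)
distinguishable = ( isNegativeZero (0.0 :: Double)  -- False
                  , isNegativeZero (-0.0 :: Double) -- True
                  )

-- ...and so can division: 1/+0 is +Infinity, but 1/-0 is -Infinity.
infinities :: (Double, Double)
infinities = (1 / 0.0, 1 / (-0.0))                  -- (Infinity,-Infinity)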

"Jacques Carette"
ajb@spamcop.net wrote:
"Eq" is not merely a function of type a -> a -> Bool. It's a concept with semantics. It must be an equivalence relation, and it also must mean semantic equality. Functions that respect semantic equality have the property that x == y implies f x == f y. (No, none of these restrictions are in the language report. But I'd be annoyed if someone defined a version of Eq that didn't have these properties.)
From what you argue above, and reading the IEEE 754 standard correctly, instance Eq Float and instance Eq Double should be *removed*!
For me this indicates that the above is false. It's almost true, but not strictly true. It's like the fact that "m >>= return" is the same as m. This almost holds, except for some monads when m is bottom. It doesn't mean that instance Monad IO should be removed, but that the real world is not as ideal as our models suggest. Removing instance Eq Double would force removing instances for Ord, Num and all other numeric classes, which is unacceptable. There would be no instances of Floating left. BTW, OCaml recently removed the pointer equality check from its implementation of structural equality, because it caused inconsistent results for floats which depended on optimization (a NaN was equal to itself or not depending on whether the compiler knew the type statically or called the general polymorphic equality function). -- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/

"Marcin 'Qrczak' Kowalczyk"
"Jacques Carette"
writes: ajb@spamcop.net wrote:
"Eq" is not merely a function of type a -> a -> Bool. It's a concept with semantics. It must be an equivalence relation, and it also must mean semantic equality. Functions that respect semantic equality have the property that x == y implies f x == f y. (No, none of these restrictions are in the language report. But I'd be annoyed if someone defined a version of Eq that didn't have these properties.)
From what you argue above, and reading the IEEE 754 standard correctly, instance Eq Float and instance Eq Double should be *removed*!
For me this indicates that the above is false. It's almost true but not strictly true.
It is deeper than that. It is the difference between extensional equality (which is what the property x == y implies f x == f y is about) versus intensional equality (forall p. p x == p y implies x == y). +0 and -0 are extensionally equal, but not intensionally equal. Similarly, 0.0 and sin(0.0) are extensionally equal, but not intensionally equal. A few programming languages (and a few logics) can handle this difference, but they turn out to be quite rare!
Removing instance Eq Double would force removing instances for Ord, Num and all other numeric classes, which is unnacceptable. There would be no instances of Floating left.
:-). Floating point computations are indeed THAT EVIL! They should NOT be considered to be in all those classes! Whoever put them there was not a numerical analyst...
BTW, OCaml recently removed pointer equality check from its implementation of structural equality, because this caused inconsistent results for floats which depended on optimization (a NaN was equal to itself or not depending on whether the compiler knew the type statically or called the general polymorphic equality function).
And there have been complaints about it on the caml-list. Rightfully so, in fact. NaN is *defined* to be an 'object' x such that not (x == x). In almost all logics or type theories, there are no such objects. It definitely breaks most arguments based on Domains, CPOs, etc. And I say 'objects' because IEEE 754 specifies that there are a whole lot of *different* NaNs, meaning that there are different hardware-level representations. Jacques PS: I was manager of the team that implemented a fully IEEE-754/854 compliant numeric sub-system in Maple [Dave Hare was the numerical analyst in charge of that sub-project]. We wrestled with the specification of that sub-system until William Kahan (the Grand Guru of floating point) agreed that it was 'right'. It was a very intense learning experience!
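(The NaN property Jacques describes is directly observable in Haskell; a minimal check:)

nan :: Double
nan = 0 / 0                       -- IEEE 754: 0/0 is a NaN

nanBreaksReflexivity :: Bool
nanBreaksReflexivity = nan == nan -- False, even though isNaN nan is True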

G'day all.
Quoting Jacques Carette
From what you argue above, and reading the IEEE 754 standard correctly, instance Eq Float and instance Eq Double should be *removed*!
You could also argue that a function which distinguishes between +0 and -0 doesn't respect "semantic equality" of Float and Double. Your task, should you choose to accept it, is to define what the semantics of equality actually are for IEEE 754 floats. :-) Cheers, Andrew Bromage

ajb@spamcop.net wrote:
You could also argue that a function which distinguishes between +0 and -0 doesn't respect "semantic equality" of Float and Double. Your task, should you choose to accept it, is to define what the semantics of equality actually are for IEEE 754 floats. :-)
Anyone who thinks that +0 = -0 has never wrestled with a branch cut (and lost...). Such people have the nasty habit of also thinking that ALL functions are continuous! You might think they were constructivists or something. Since +0 and -0 both exist as separate 'entities' (as computer bits), each of which is its own normal form, the real question to ask is: why would they be equal? What kind of 'equivalence relation' is == if functions are allowed to differentiate between those two representations? Of course, if you manage to deal with that, then you might explain to me what 1/(1/(-0)) = -0 really means in a denotational semantics universe which is compositional... [the above identity is true in IEEE floats]. Jacques

Jacques Carette writes:
Anyone who thinks that +0 = -0 has never wrestled with a branch cut (and lost...). Such people have the nasty habit of also thinking that ALL functions are continuous! You might think they were constructivists or something.
HmHmHm... Why do you think that constructivists are against +0 /= -0? Or that they think that all functions are continuous? Do you think that Per Martin-Löf would a priori reject -0 as a typed object? There are some physicists who have some bias towards constructivism, since for them the Mathematical Nature of the Nature is constructive; the sense of the word 'exist' is *not* so abstract... Yet they know that physical quantum amplitudes *must* have branch cuts because of unitarity... Yours, as always off-topically, Jerzy Karczmarczuk

karczma@info.unicaen.fr wrote:
Why do you think that constructivists are against +0 /= -0?
Because they can't prove it - either way.
Or that they think that all functions are continuous?
[See previous email] Because all constructible functions are provably continuous.
Do you think that Per Martin-Löf would a priori reject -0 as a typed object ?
I would most definitely not want to put words in someone else's mouth! However, if -0 exists, then it is likely that you'll be forced to have a `left' 1 and a `right' 1 (as well as every other number) too.
There are some physicists which have some bias towards constructivism, since for them the Mathematical Nature of the Nature is constructive, the sense of the word 'exist' is *not* so abstract... Yet, they know that physical quantum amplitudes *must* have branch cuts because of unitarity...
"branch cuts" require one to decide equality of reals - which constructivists reject, as that is a non-terminating process. Having fun being off-topic, Jacques

Jacques Carette wrote:
Anyone who thinks that +0 = -0 has never wrestled with a branch cut (and lost...). Such people have the nasty habit of also thinking that ALL functions are continuous! You might think they were constructivists or something.

Why would a constructivist think that all functions are continuous? It makes no sense.
-- Lennart

Lennart Augustsson writes:
Jacques Carette wrote:
Such people have the nasty habit of also thinking that ALL functions are continuous! You might think they were constructivists or something.

Why would a constructivist think that all functions are continuous? It makes no sense.
That would be a theorem of constructive mathematics! All *constructible* functions are continuous. See http://plato.stanford.edu/entries/mathematics-constructive/ for some of these ideas. For a lot more details, I recommend the book of Weihrauch, "Computable Analysis". Others have recommended the books of Bishop and of Beeson, but I have not read them (yet). Jacques

Jacques Carette's answer to Lennart Augustsson's comment of:
Such people have the nasty habit of also thinking that ALL functions are continuous! You might think they were constructivists or something.
Why would a constructivist think that all functions are continuous?
That would be a theorem of constructive mathematics! All *constructible* functions are continuous. See http://plato.stanford.edu/entries/mathematics-constructive/
J. referenced this in his answer to my own doubts, where I confessed that I knew physicists who sympathise with constructivists, yet who 'somehow' accept that quantum scattering amplitudes must have branch cuts... According to Jacques this cannot hold, since a True Constructivist rejects the equality of reals.

Well, I read once in the Drexel Mathematical Forum (was it Barr?) that Bishop himself didn't consider the claim that all functions are continuous to be a theorem. It is not provable. A kind of auto-trap? I am lost. In Recursive Constructive Math there is a theorem of Kreisel-Lacombe-Shoenfield-Tsejtin that total functions within separable metric spaces are continuous, but the assumptions are quite strong...

All this is very far from my everyday soup, and I very much respect True Mathematicians who split the hair infinitely (and also constructivists and intuitionists who refuse this splitting as a non-constructive, infinite process...). But since the notion of function and of reals differs somehow from the classical counterparts, as far as a poor physicist speaks about entities which belong to the REAL WORLD, the problem remains: even Master Yoda doesn't know what the structure of math relevant to reality should be. I mean the 'true' math. And of course, with the True meaning of the word 'true', provided we know what the word 'meaning' means. Thank you truly.

Jerzy K. (Kafka created at least two heroes named K., in The Trial and in The Castle. I feel like both of them together...)

Lennart Augustsson wrote (on Sat, 14 May 2005 at 16:01):
> Why would a constructivist think that all functions are continuous?
> It makes no sense.

How would you define a non-continuous function (on reals, say) without (something like) definition by undecidable cases?

Formal systems for constructive mathematics usually have models in which all functions |R -> |R are continuous. For a long time, constructive mathematics lacked an analogue of classical point-set topology. (Bishop et al dealt mainly with metric spaces.) Nowadays, this seems to have been (crudely speaking) "fixed". Basically, one starts with the structure of neighbourhoods (inclusion and covering), and analyses notions like point and continuous function in those terms. Some of the major contributions to the subject have been made by people working in Sweden, at least one in your own department.

Peter Hancock

"Jacques Carette"
Since +0 and -0 both exist as separate 'entities' (as computer bits), which are their own normal form, the real question to ask is, why would they be equal?
Because (1.0 - 1.0) == -(1.0 - 1.0) should be true. In general, arithmetic on integers which are representable in floating point is exact, including checking for equality. -- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/

On Thursday, 5 May 2005, at 06:20, ajb@spamcop.net wrote:
G'day all.
Quoting Bo Herlin
But there are functions that I can't find, and that I assume others before me must have missed and then perhaps also implemented. Is there any standard library with functions like:
binomial isCatalan nthCatalan nextCatalan isPrime nthPrime nextPrime factorize isFibonacci nthFibonacci nextFibonacci
You might want to check out the archives of the mailing list, too. These sorts of problems occasionally get solved. For the record, here are a few of my attempts:
http://andrew.bromage.org/Fib.hs (fairly fast Fibonacci numbers)
http://andrew.bromage.org/WheelPrime.hs (fast factorisation)
http://andrew.bromage.org/Prime.hs (AKS primality testing)
Cheers, Andrew Bromage
The Fibonacci numbers are really cool; for fib 100000 and fib 200000 I got the same performance as with William Lee Irwin's 'fastest fibonacci numbers in the west' from a couple of months ago.

The AKS primality testing is definitely not for Haskell, I'm afraid - or somebody would have to come up with a better way to model polynomials. Trying relatively small numbers:

Prelude Prime> isPrime (2^19-1)
True
(2.76 secs, 154748560 bytes)
Prelude Prime> isPrime (2^29-1)
<interactive>: out of memory (requested 277872640 bytes)

after a lot of swapping. I attempted to implement that algorithm, too, a while ago; your version is much better, but still not really useful, it seems.

Thanks for WheelPrime, it's a good idea which I didn't know before. I incorporated it into my Primality module; elementary factorization is now about twice as fast. However, if you want fast factorization, I recommend Pollard's rho-method - it's not guaranteed to succeed, but usually it does, and it's pretty fast:

Prelude Primality> factor (2^73-13)
[23,337,238815641,5102337469]
(42.75 secs, -2382486596 bytes)
Prelude Primality> pollFact (2^73-13)
[23,337,238815641,5102337469]
(1.64 secs, 186986044 bytes)

and easily implemented. But don't expect too much - factoring 2^137-1 took over 51 hours on my (old and slow) machine. For all who want it, the module is below.

One odd thing, though: it was about 20% faster when compiled with ghc-6.2.2 than with ghc-6.4. Does any guru know why? If anybody has better algorithms, I'd be happy to know about them.

Cheers, Daniel

-- | This module provides tests for primality of Integers, safe and unsafe.
-- Also some factorization methods and a process calculating all positive
-- primes are given.
module Primality
    ( -- * Primality tests
      -- ** Safe but slow
      isPrime,               -- :: Integer -> Bool
      -- ** Unsafe but much faster
      isProbablePrime,       -- :: Int -> Integer -> Bool
      isRatherProbablePrime, -- :: Integer -> Bool
      isVeryProbablePrime,   -- :: Integer -> Bool
      -- ** Tests for (strong) pseudoprimality
      isPseudo,              -- :: Integer -> Integer -> Bool
      isStrongPseudo,        -- :: Integer -> Integer -> Bool
      -- * Factorization methods
      -- ** Safe but slow
      factor,                -- :: Integer -> [Integer]
      -- ** Pollard's rho-method, faster but it may fail
      pollFact,              -- :: Integer -> [Integer]
      -- * List of primes
      primes                 -- :: [Integer]
    ) where

import Data.List (insert)

----------------------------------------------------------------------
--                        Exported functions                         --
----------------------------------------------------------------------

-- A) Unsafe but relatively fast

-- | This function checks if the second argument is a pseudoprime
-- for the first argument (or indeed a prime).
isPseudo :: Integer -> Integer -> Bool
isPseudo a n = powerWithModulus n a (abs n - 1) == 1

-- | This function checks if the second argument is a prime or a
-- strong pseudoprime for the first argument.
isStrongPseudo :: Integer -> Integer -> Bool
isStrongPseudo a n
    | n < 0     = isStrongPseudo a (-n)
    | n < 2     = False
    | n == 2    = True
    | even n    = False
    | a < 0     = isStrongPseudo (-a) n
    | a < 2     = error "isStrongPseudo: illegitimate base"
    | otherwise = fst (checkStrong a n ((n-1) `quot` 2))

-- | The list of all positive primes.
primes :: [Integer]
primes = 2:3:5:7:11:13:17:19:23:
         [n | n <- [29,31 ..],
              and [n `rem` p /= 0 | p <- takeWhile (squareLess n) primes]]

-- | @isProbablePrime k n@ tests whether @n@ is (if not prime) a strong
-- pseudoprime for the first @k@ primes.
isProbablePrime :: Int -> Integer -> Bool
isProbablePrime k n =
    and [isStrongPseudo p n | p <- take k (takeWhile (squareLess n) primes)]

-- | Testing strong pseudoprimality for the first 200 primes.
isRatherProbablePrime :: Integer -> Bool
isRatherProbablePrime = isProbablePrime 200

-- | Testing for the first 2000 primes.
isVeryProbablePrime :: Integer -> Bool
isVeryProbablePrime = isProbablePrime 2000

-- B) Safe and slow

-- | Safe primality test, based on trial division.
isPrime :: Integer -> Bool
isPrime n
    | n < 0     = isPrime (-n)
    | n < 2     = False
    | n < 4     = True
    | even n    = False
    | otherwise = head (factor n) == n

-- | Elementary factorization by trial division, tuned by wheel-technique.
factor :: Integer -> [Integer]
factor 0 = [0]
factor 1 = [1]
factor n
    | n < 0     = (-1) : factor' (-n) 0 smallPrimes
    | otherwise = factor' n 0 smallPrimes
  where
    factor' :: Integer -> Integer -> [Integer] -> [Integer]
    factor' 1 _ _  = []
    factor' n w [] = let w' = wheelModulus + w
                     in if w'^2 > n
                          then [n]
                          else factor' n w' $ map (w' +) wheelSettings
    factor' n w ps'@(p:ps) = case n `quotRem` p of
                               (q,0) -> p : factor' q w ps'
                               _     -> factor' n w ps

-- C) Pollard's rho-method

-- | Factorization by Pollard's rho-method after eliminating small
-- prime factors. A \'veryProbablePrime\' is treated as prime;
-- failure to factor a number known to be composite is signalled by
-- including a @1@ in the list of factors.
pollFact :: Integer -> [Integer]
pollFact n
    | n < 0     = (-1) : pollFact (-n)
    | n == 0    = [0]
    | n == 1    = []
    | even n    = 2 : pollFact (n `quot` 2)
    | otherwise = smallFacts n 0 smallPrimes

----------------------------------------------------------------------
--                         Helper functions                          --
----------------------------------------------------------------------

-- calculate a power with respect to a modulus; first some tests,
-- then forwarding to another helper
powerWithModulus :: Integer -> Integer -> Integer -> Integer
powerWithModulus mo n k
    | mo < 0    = powerWithModulus (-mo) n k
    | mo == 0   = n^k
    | k < 0     = error "powerWithModulus: negative exponent"
    | k == 0    = 1
    | k == 1    = n `mod` mo
    | mo == 1   = 0
    | odd k     = powerMod mo n n (k-1)
    | otherwise = powerMod mo 1 n k

-- calculate the power with an auxiliary value to account for
-- odd exponents on the way; we have @mo >= 2@ and @k >= 1@
powerMod :: Integer -> Integer -> Integer -> Integer -> Integer
powerMod mo aux val k
    | k == 1 = (aux * val) `mod` mo
    | odd k  = ((powerMod mo $! ((aux*val) `mod` mo)) $! (val^2 `mod` mo)) $! (k `quot` 2)
    | even k = (powerMod mo aux $! (val^2 `mod` mo)) $! (k `quot` 2)

-- If a prime @p@ divides @a^(2*e)-1@, @p@ divides exactly one of the
-- factors @a^e+1@ and @a^e-1@. We check the analogous property for @n@.
-- Say, @m = 2^u*v@ with odd @v@. We recursively check whether @n@
-- divides one of the factors @a^(2^j*v)+1@ for @0 <= j <= u@ or
-- the factor @a^v-1@. Success is propagated without further calculation;
-- failure also returns the value @a^(2^j*v) `rem` n@ to the caller.
checkStrong :: Integer -> Integer -> Integer -> (Bool,Integer)
checkStrong a n m
    | even m    = case checkStrong a n (m `quot` 2) of
                    (True,_)  -> (True,0)
                    (False,k) -> (b,r)
                      where r = (k^2) `rem` n
                            b = (r+1) `rem` n == 0
    | otherwise = ((k-1) `rem` n == 0 || (k+1) `rem` n == 0, k)
    where k = powerWithModulus n a m

-- condition used for various tests
squareLess :: Integer -> Integer -> Bool
squareLess n k = k^2 <= n

----------------------------------------------------------------------
--                          Wheel Factoring                          --
----------------------------------------------------------------------

-- wheel size is set so that factoring is relatively fast,
-- but finding smallPrimes and wheelSettings does not take too long
wheelSize :: Int
wheelSize = 7

verySmallPrimes :: [Integer]
verySmallPrimes = take wheelSize primes

wheelModulus :: Integer
wheelModulus = product verySmallPrimes

smallPrimes :: [Integer]
smallPrimes = takeWhile (<= wheelModulus) primes

wheelSettings :: [Integer]
wheelSettings = [f | f <- [1 .. wheelModulus-1],
                     null [p | p <- verySmallPrimes, f `rem` p == 0]]

----------------------------------------------------------------------
--                 Helpers for Pollard's rho-method                  --
----------------------------------------------------------------------

-- small factors by wheel-factoring, then pass to the rho-method
smallFacts :: Integer -> Integer -> [Integer] -> [Integer]
smallFacts 1 _ _ = []
smallFacts n w []
    | w'^2 > n    = [n]
    | w' > 5*10^6 = rhoFact n
    | otherwise   = smallFacts n w' $ map (w' +) wheelSettings
    where w' = wheelModulus + w
smallFacts n w ps'@(p:ps) = case n `quotRem` p of
                              (q,0) -> p : smallFacts q w ps'
                              _     -> smallFacts n w ps

-- Factorization by the rho-method; first we try the function
-- @\x -> x^2+1@ with starting value @x1 = 2@. If that fails,
-- we pass the number to the rho-method using @\x -> x^2+3@.
-- If two nontrivial factors are found, they are factored and
-- the lists of factors are merged.
rhoFact :: Integer -> [Integer]
rhoFact n
    | isVeryProbablePrime n = [n]
    | otherwise = case rho1 n of
                    [k]   -> rhoKFact 1 k
                    [a,b] -> merge (rhoFact a) (rhoFact b)

-- Factorization using @\x -> x^2+2*k+1@; if that fails, we try the
-- next k, until we give up when @k > 100@.
rhoKFact :: Integer -> Integer -> [Integer]
rhoKFact k n
    | isVeryProbablePrime n = [n]
    | k > 100   = [1,n]
    | otherwise = case rhoK k n of
                    [m]   -> rhoKFact (k+1) m
                    [a,b] -> merge (rhoFact a) (rhoFact b)

-- start the search for factors with @x1 = 2@ and @x2 = 5@
rho1 :: Integer -> [Integer]
rho1 m = find1 m 2 5

-- With @x = xk@ and @y = x2k@: if @m@ divides @x2k-xk@, we can't find
-- a nontrivial factor. If @m@ and @x2k-xk@ are coprime, we check the
-- next k; otherwise we have found two nontrivial factors.
find1 :: Integer -> Integer -> Integer -> [Integer]
find1 m x y
    | g == m    = [m]
    | g > 1     = [g, m `quot` g]
    | otherwise = find1 m (s1 x) (s1 (s1 y))
    where
      g = gcd m (y-x)
      s1 :: Integer -> Integer
      s1 x = (x^2+1) `rem` m

-- merge two sorted lists into one sorted list
merge :: [Integer] -> [Integer] -> [Integer]
merge = foldr insert

-- start the search for factors using @\x -> x^2+2*k+1@ with @x1 = 2@
rhoK :: Integer -> Integer -> [Integer]
rhoK k n = kFind s n 2 (4+s)
    where s = 2*k+1

-- analogous to find1
kFind :: Integer -> Integer -> Integer -> Integer -> [Integer]
kFind k m x y
    | g == m    = [m]
    | g > 1     = [g, m `quot` g]
    | otherwise = kFind k m (s x) (s (s y))
    where
      g = gcd m (y-x)
      s :: Integer -> Integer
      s x = (x^2+k) `rem` m

G'day all.
Quoting Daniel Fischer
The Fibonacci numbers are really cool; for fib 100000 and fib 200000 I got the same performance as with William Lee Irwin's 'fastest fibonacci numbers in the west' from a couple of months ago.
He reported that mine was a little slower, but it might have been for smaller n. By the way, I believe that my specific algorithm is a new invention. I certainly haven't seen it before, though there are no doubt algorithms that are very much like it out there.
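(Andrew's algorithm itself isn't reproduced in the thread. For a concrete point of comparison, here is the standard "fast doubling" method, which needs only O(log n) Integer multiplications; this is the textbook technique, not Andrew's new algorithm:)

-- Fast doubling: given (F(n), F(n+1)),
--   F(2n)   = F(n) * (2*F(n+1) - F(n))
--   F(2n+1) = F(n)^2 + F(n+1)^2
fib :: Integer -> Integer
fib n
  | n < 0     = error "fib: negative argument"
  | otherwise = fst (go n)
  where
    go :: Integer -> (Integer, Integer)
    go 0 = (0, 1)
    go k = let (a, b) = go (k `div` 2)
               c = a * (2 * b - a)
               d = a * a + b * b
           in if even k then (c, d) else (d, c + d)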
The AKS primality testing is definitely not for Haskell, I'm afraid - or somebody would have to come up with a better way to model polynomials.
Unsurprising, for two reasons:

1. AKS is mostly of theoretical interest.
2. I didn't try very hard to optimise it.

IMO, reason 1 dominates reason 2.
However, if you want fast factorization, I recommend Pollard's Rho-Method - it's not guaranteed to succeed, but usually it does and it's pretty fast:
A proper library would have both, and would let users either pick which algorithm to use, or use a combination of techniques depending on whatever criteria make sense to try. Oh, I've also got code for fast combinatorial calculations (e.g. factorial and binomial coefficients). I'll have a go at setting up a darcs repo this weekend. Cheers, Andrew Bromage
participants (10): ajb@spamcop.net, Bo Herlin, Daniel Fischer, hancock@spamcop.net, Jacques Carette, Jan-Willem Maessen, karczma@info.unicaen.fr, Lennart Augustsson, Marcin 'Qrczak' Kowalczyk, Peter Simons