
On Jul 13, 2014 5:35 PM, "John Meacham" wrote:
Infinite bit sets, I guess; I don't think it's that unreasonable for them to exist, were it not for that pesky bitSize.
No, that use *is* unreasonable. Infinite bitsets are just optimized, unboxed, expandable Boolean vectors, and it makes more sense to have one type that fills with False and another that fills with True than to pretend a bitset is "signed". IntN and WordN are special for two reasons: 1. their sizes are especially fast, and 2. they are numbers, and bitwise operations on numbers can do some pretty cool things fast. The specific thing I was looking at just now was a Java implementation of isSquare that maaartinus wrote on StackOverflow: it uses a masked shift to index into a (logical) bit vector by the six low-order bits of a (logical) integer, to check whether those bits could be the low-order bits of a perfect square. When I went to do that in Haskell, I ran into all sorts of unpleasant limitations in Data.Bits and some very odd types in Data.Bits.Extras.
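For what it's worth, the trick can be sketched in Haskell roughly like this. This is an illustrative reconstruction, not maaartinus's exact code: squareMask, mayBeSquare, and isSquare are names I made up, the mask is indexed with testBit rather than Java's signed-shift trick, and the Double-based sqrt confirmation is only reliable for inputs small enough to be exact in a Double (below 2^52 or so).

```haskell
import Data.Bits (bit, testBit, (.&.), (.|.))
import Data.Word (Word64)

-- Bit i is set iff i can be the six low-order bits of a perfect
-- square, i.e. iff i is a square mod 64.
squareMask :: Word64
squareMask = foldr (\i m -> m .|. bit (fromIntegral ((i * i) `mod` 64))) 0
                   [0 .. 63 :: Word64]

-- Cheap rejection test: index the mask by the low six bits of n.
-- Most non-squares are rejected here without any arithmetic.
mayBeSquare :: Word64 -> Bool
mayBeSquare n = testBit squareMask (fromIntegral (n .&. 63))

-- Full test: cheap filter first, then confirm with an integer
-- square root (via Double sqrt; exact only for small inputs).
isSquare :: Word64 -> Bool
isSquare n = mayBeSquare n && r * r == n
  where r = floor (sqrt (fromIntegral n :: Double))
```

The point of the mask is that only 12 of the 64 residues mod 64 are squares, so mayBeSquare filters out over 80% of inputs with one AND, one shift-free table lookup, and no multiplication.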
FiniteBits and that deprecation are GHC-specific. Though it would make sense to port them to jhc, it's fairly annoying for portable code to rely on ad-hoc changes like this.
Looks like some work went into removing the Num superclass in GHC's base. Hmm... I think type class aliases are needed to actually make it backwards compatible, though. Since bit is a primitive, you can get zero from the somewhat awkward 'clearBit (bit 0) 0', one from 'bit 0', and -1 from the complement of that zero, so the defaults that were dropped can be added back in using those.
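Concretely, the recovered constants might look like the sketch below. The primed names are just illustrative; as I understand it, later versions of base added a zeroBits class method with essentially this default, so this is only needed for compatibility with a Bits class that has no Num superclass.

```haskell
import Data.Bits (Bits, bit, clearBit, complement)

-- Zero, without a Num constraint: set the lowest bit, then clear it.
zeroBits' :: Bits a => a
zeroBits' = clearBit (bit 0) 0

-- One: just the lowest bit.
one' :: Bits a => a
one' = bit 0

-- Minus one (all bits set): the complement of zero.
minusOne' :: Bits a => a
minusOne' = complement zeroBits'
```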
John

--
John Meacham - http://notanumber.net/