16 bit floating point data in Haskell?

Hi guys,

Do we have anything like half-precision floats in Haskell? Maybe in some non-standard libraries? Or do I have to use the FFI plus the OpenEXR library to achieve this?

Cheers,
Oleksandr.

What about the built-in Float type?

Prelude Foreign.Storable> sizeOf (undefined :: Float)
4
Prelude Foreign.Storable> sizeOf (undefined :: Double)
8

Or maybe you mean something that can be used with FFI calls to C, in which case there is Foreign.C.Types (CFloat). Both are instances of the Floating, RealFloat, RealFrac, etc. classes, so they should operate largely the same as (modulo precision) a Double.

-Ross
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

He meant 16-bit floats, which have sizeOf 2. On GPUs this is common and implemented in hardware (at least on the older GPUs). On DSPs you commonly had 24-bit floats too.

But these days I guess 32-bit is the minimum one would want to use? Most of the time I just use Double anyway :)

Oh sorry, I misread the original question. I take it all back!

-Ross

Hi,

Yes, I mean "sizeOf 2". It's useful not only on GPUs but also in "normal" software. Think of huge data sets in computer graphics (particle clouds, volumetric data, images, etc.). Some data (normals, density, temperature and so on) can easily be represented as 16-bit floats, making files 200 GB instead of 300 GB. A good benefit.

Cheers,
Oleksandr.

From the OpenEXR technical introduction:

    half numbers have 1 sign bit, 5 exponent bits, and 10 mantissa bits. The interpretation of the sign, exponent and mantissa is analogous to IEEE-754 floating-point numbers. half supports normalized and denormalized numbers, infinities and NaNs (Not A Number). The range of representable numbers is roughly 6.0E-8 to 6.5E4; numbers smaller than 6.1E-5 are denormalized.

Single-precision floats are already dangerously short for many computations. (Oh, the dear old B6700 with 39 bits of precision in single-precision floats...) Half-precision floats actually have less than half the precision of singles (11 significand bits instead of 24, counting the implicit leading bit).

It's probably best to think of binary16 as a form of compression for Float: write code that reads half-precision values from a binary stream as single-precision, and conversely code that accepts single-precision values and writes them to a binary stream in half-precision form.
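That read-as-Float approach can be sketched in plain Haskell. The decoder below is my own illustration, not code from the thread; the field widths follow the OpenEXR description quoted above:

```haskell
import Data.Bits (shiftR, testBit, (.&.))
import Data.Word (Word16)

-- Decode an IEEE-754 binary16 bit pattern into a Float.
-- Layout: 1 sign bit, 5 exponent bits, 10 mantissa bits.
halfToFloat :: Word16 -> Float
halfToFloat h = case e of
    0  -> sign * fromIntegral m * 2 ^^ (-24 :: Int)  -- zero / denormalized
    31 -> if m == 0 then sign * (1 / 0) else 0 / 0   -- infinity / NaN
    _  -> sign * (1 + fromIntegral m / 1024) * 2 ^^ (e - 15)
  where
    sign = if testBit h 15 then -1 else 1 :: Float
    e    = fromIntegral ((h `shiftR` 10) .&. 0x1F) :: Int
    m    = fromIntegral (h .&. 0x3FF) :: Int
```

For example, halfToFloat 0x3C00 gives 1.0 and halfToFloat 0xC000 gives -2.0. Encoding is the inverse, plus a rounding decision for the 13 mantissa bits that get dropped.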

I agree with the above. I hadn't realized how dangerously short single precision is for many computations. So, as he says, for computing you do want to convert half precision to single precision, if not double precision.

If you want to save storage space, then some sort of compression scheme might be better on secondary storage. As for the video card, some sort of fast decompression scheme would be necessary for the half-precision numbers coming in. You are probably in the realm of DSP.

--
Regards, Casey

'Half' values, as they are called, are supported on GPUs. Half-precision floating point is a core feature in OpenGL 3.0. As said above, they are merely a data storage format, which should be translated to floats or doubles before any computation.

Cheers,
Thu

I think a native 16-bit float type would require compiler revisions, as opposed to something that can be done within the present type classes. This is similar to how Java would benefit from an unsigned byte primitive type for processing images, etc., whereas Haskell already has Word8.

--
Regards, Casey

While we're on the subject... I understand that certain FPUs support 80-bit precision. Is there any way to get at that? Or is this going to require FFI too?

If you only want to save storage, you may define

newtype Float16 = Float16 Int16

and write Num, Fractional and Floating instances that convert the operands to Float, perform the operations on Float, and put the results back into the Int16. With some fusion you might save conversions, but you would also get different results due to the higher intermediate precision.
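A minimal sketch of that idea (my own illustration, not tested production code): I use Word16 rather than Int16 for easier bit manipulation, and the conversions below handle only zero and normal numbers; a real version also needs denormals, infinities, NaN, and proper rounding:

```haskell
import Data.Bits (bit, shiftL, shiftR, testBit, (.&.), (.|.))
import Data.Word (Word16)

newtype Float16 = Float16 Word16
  deriving (Eq, Show)

-- Widen binary16 bits to Float (normal numbers and zero only).
toFloat :: Float16 -> Float
toFloat (Float16 h) = sign * (1 + m) * 2 ^^ e
  where
    sign = if testBit h 15 then -1 else 1
    e    = fromIntegral ((h `shiftR` 10) .&. 0x1F) - 15 :: Int
    m    = fromIntegral (h .&. 0x3FF) / 1024

-- Narrow a Float to binary16 bits, truncating the extra mantissa bits.
fromFloat :: Float -> Float16
fromFloat f
  | f == 0    = Float16 0
  | otherwise = Float16 (sbit .|. (eb `shiftL` 10) .|. m)
  where
    sbit = if f < 0 then bit 15 else 0 :: Word16
    a    = abs f
    eb   = fromIntegral (exponent a + 14)            -- biased exponent
    m    = truncate ((significand a * 2 - 1) * 1024) -- top 10 mantissa bits

-- All arithmetic is carried out in Float and narrowed back afterwards,
-- exactly as suggested above.
instance Num Float16 where
  a + b       = fromFloat (toFloat a + toFloat b)
  a - b       = fromFloat (toFloat a - toFloat b)
  a * b       = fromFloat (toFloat a * toFloat b)
  negate      = fromFloat . negate . toFloat
  abs         = fromFloat . abs . toFloat
  signum      = fromFloat . signum . toFloat
  fromInteger = fromFloat . fromInteger
```

With this, toFloat (fromFloat 1.5 + fromFloat 0.25) gives 1.75; values that don't fit in 11 bits of significand come back rounded (here: truncated) toward zero.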
participants (8)
- Andrew Coppin
- Casey Hawthorne
- Henning Thielemann
- minh thu
- Olex P
- Peter Verswyvelen
- Richard O'Keefe
- Ross Mellgren