
And so, inspired by the marketing literature, I just spent £££ on a very expensive new GPU that supports CUDA. The only problem is... I can't seem to get any software to use it. Does anybody know how to make this stuff actually work? (Also... Haskell on the GPU. It's been talked about for years, but will it ever actually happen?)

Hello Andrew, Thursday, February 5, 2009, 11:10:42 AM, you wrote:
Does anybody know how to make this stuff actually work?
Nvidia has a CUDA site where you can download the SDK. AFAIR, Dr. Dobb's Journal has an online series of articles which describes how to program it step by step
(Also... Haskell on the GPU. It's been talked about for years, but will it ever actually happen?)
A GPU is just a set of SIMD-like instructions, so the reason you will never see Haskell on the GPU is the same reason you will never see it implemented via SIMD instructions :D -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

(Also... Haskell on the GPU. It's been talked about for years, but will it ever actually happen?)
A GPU is just a set of SIMD-like instructions, so the reason you will never see Haskell on the GPU is the same reason you will never see it implemented via SIMD instructions :D
Because SIMD/GPU deals only with numbers, not pointers, you will not see much _symbolic_ computation being offloaded to these arithmetic units. But there are still great opportunities to improve Haskell's speed at numerics using them. And some symbolic problems can be encoded using integers.

There are at least two current (but incomplete) projects in this area: Sean Lee at UNSW has targeted Data Parallel Haskell for an Nvidia GPGPU, and Joel Svensson at Chalmers is developing a Haskell-embedded language for GPU programming called Obsidian.

Regards, Malcolm
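To make the numeric case concrete, here is a minimal sketch (plain Haskell, not GPU code) of the *shape* of computation that offloads well: SAXPY, an elementwise fused multiply-add over flat arrays of floats, with no pointer chasing. The function name and list representation are illustrative only; a GPU backend would generate device code for a workload like this rather than run it on the CPU.

```haskell
-- saxpy: y' = a*x + y, elementwise over flat numeric arrays.
-- Purely numeric, no pointers -- the kind of workload Malcolm
-- suggests a GPU could accelerate. Lists are used here only for
-- illustration; real GPU code would use unboxed, contiguous arrays.
saxpy :: Float -> [Float] -> [Float] -> [Float]
saxpy a xs ys = zipWith (\x y -> a * x + y) xs ys

main :: IO ()
main = print (saxpy 2 [1, 2, 3] [10, 20, 30])  -- [12.0,24.0,36.0]
```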

Hello Malcolm, Friday, February 6, 2009, 11:49:56 AM, you wrote:
A GPU is just a set of SIMD-like instructions, so the reason you will never see Haskell on the GPU is the same reason you will never see it implemented via SIMD instructions :D
Because SIMD/GPU deals only with numbers, not pointers, you will not see much _symbolic_ computation being offloaded to these arithmetic units. But there are still great opportunities to improve Haskell's speed at numerics using them. And some symbolic problems can be encoded using integers.
Have you learned GPU asm? The *only* type of problem it can run effectively is massively parallel computation. You can run anything on it, but it will be much slower than on a CPU
There are at least two current (but incomplete) projects in this area: Sean Lee at UNSW has targeted Data Parallel Haskell for an Nvidia GPGPU, and Joel Svensson at Chalmers is developing a Haskell-embedded language for GPU programming called Obsidian.
The key word here is *parallel*, i.e. SIMD computations -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Malcolm Wallace:
(Also... Haskell on the GPU. It's been talked about for years, but will it ever actually happen?)
A GPU is just a set of SIMD-like instructions, so the reason you will never see Haskell on the GPU is the same reason you will never see it implemented via SIMD instructions :D
Because SIMD/GPU deals only with numbers, not pointers, you will not see much _symbolic_ computation being offloaded to these arithmetic units. But there are still great opportunities to improve Haskell's speed at numerics using them. And some symbolic problems can be encoded using integers.
There are at least two current (but incomplete) projects in this area: Sean Lee at UNSW has targeted Data Parallel Haskell for an Nvidia GPGPU, and Joel Svensson at Chalmers is developing a Haskell-embedded language for GPU programming called Obsidian.
We have a paper about the UNSW project now. It is rather high-level, but has some performance figures from preliminary benchmarks: http://www.cse.unsw.edu.au/~chak/papers/LCGK09.html

BTW, this is currently independent of Data Parallel Haskell. It is a flat data-parallel array language embedded in Haskell. The language is restricted in a manner that lets us generate GPU code (CUDA, to be precise) from it. In the longer run, we want to turn this into a backend of Data Parallel Haskell, but that will require quite a bit more work.

Manuel
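For readers unfamiliar with the embedded-language approach Manuel describes, here is a toy sketch of the general technique: array programs are built as a Haskell data structure (a deep embedding) which a backend can then process -- interpreted here for simplicity, but a restricted language like this is exactly what lets a backend emit CUDA instead. All names below are hypothetical illustrations, not the UNSW project's actual API.

```haskell
-- A toy deep embedding of a flat data-parallel array language.
-- Programs are ASTs, so a backend can inspect and compile them;
-- evalArr/evalScalar form a reference interpreter, standing in
-- for a code generator that would emit CUDA for the same AST.
data Arr
  = Lit [Float]         -- array literal
  | ZipMul Arr Arr      -- elementwise multiplication
  deriving Show

newtype Scalar = Fold Arr  -- reduce an array to a scalar by summation

evalArr :: Arr -> [Float]
evalArr (Lit xs)     = xs
evalArr (ZipMul a b) = zipWith (*) (evalArr a) (evalArr b)

evalScalar :: Scalar -> Float
evalScalar (Fold a) = sum (evalArr a)

-- Dot product expressed in the embedded language.
dotp :: [Float] -> [Float] -> Scalar
dotp xs ys = Fold (ZipMul (Lit xs) (Lit ys))

main :: IO ()
main = print (evalScalar (dotp [1, 2, 3] [4, 5, 6]))  -- 32.0
```

The restriction to flat, first-order array operations is what makes code generation tractable: every construct in the AST has an obvious massively parallel realisation, which general Haskell (with its closures and pointer-heavy heap) does not.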

Bulat Ziganshin wrote:
Hello Andrew,
Thursday, February 5, 2009, 11:10:42 AM, you wrote:
Does anybody know how to make this stuff actually work?
Nvidia has a CUDA site where you can download the SDK. AFAIR, Dr. Dobb's Journal has an online series of articles which describes how to program it step by step
OK. It's just that I downloaded a CUDA-enabled program, and it's not using CUDA, and I can't figure out why. :-(
(Also... Haskell on the GPU. It's been talked about for years, but will it ever actually happen?)
A GPU is just a set of SIMD-like instructions, so the reason you will never see Haskell on the GPU is the same reason you will never see it implemented via SIMD instructions :D
Heh, fair enough.

On 05.02.2009 at 09:10, Andrew Coppin wrote:
And so, inspired by the marketing literature, I just spent £££ on a very expensive new GPU that supports CUDA. The only problem is... I can't seem to get any software to use it.
Does anybody know how to make this stuff actually work?
(Also... Haskell on the GPU. It's been talked about for years, but will it ever actually happen?)
Have a look at Obsidian, and ask Mr Svensson if there is anything working.
participants (5)
-
Adrian Neumann
-
Andrew Coppin
-
Bulat Ziganshin
-
Malcolm Wallace
-
Manuel M T Chakravarty