E@H on a GPU

Steven Winfield

Modern graphics cards contain GPUs with programmable vertex and fragment processors which can effectively be used as a set of parallel CPUs (See www.gpgpu.org for papers on the subject).

Could E@H make use of this parallelism? Or even use the GPU as just a single, separate processor?

High-level programming languages, such as Cg developed by NVIDIA (www.nvidia.com), are now available for porting regular CPU code to GPU vertex and fragment programs. These programs can be invoked through OpenGL calls, which are supported on a wide range of operating systems.
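
To make that concrete, here is a rough, untested sketch of the general pattern (not anything E@H actually does): the input array is uploaded as a texture, a small Cg fragment program computes one output element per pixel of a screen-sized quad, and glReadPixels copies the results back to main memory. A real implementation would also need floating-point textures and render-to-texture (pbuffers) for precision; those details are left out here.

/* Sketch only: squares every element of an array on the GPU via a Cg
 * fragment program.  Precision is limited to the 8-bit framebuffer here;
 * real GPGPU code would use float textures / pbuffers instead. */
#include <stdio.h>
#include <GL/glut.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>

#define N 256   /* data treated as an N x N grid, one element per pixel */

/* The Cg fragment program: one invocation per pixel = one output element. */
static const char *kernel_src =
    "float4 main(float2 coord : TEXCOORD0,           \n"
    "            uniform sampler2D data) : COLOR     \n"
    "{                                               \n"
    "    float x = tex2D(data, coord).r;             \n"
    "    return float4(x * x, 0, 0, 1);              \n"
    "}                                               \n";

int main(int argc, char **argv)
{
    static float input[N * N], output[N * N * 4];
    GLuint tex;
    CGcontext ctx;
    CGprogram prog;
    CGparameter data_param;
    int i;

    /* A GL context is needed before any GL or Cg-GL calls. */
    glutInit(&argc, argv);
    glutInitWindowSize(N, N);
    glutCreateWindow("gpgpu sketch");

    for (i = 0; i < N * N; ++i)
        input[i] = (float)i / (N * N);

    /* Upload the input array as a texture. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, N, N, 0,
                 GL_LUMINANCE, GL_FLOAT, input);

    /* Compile the Cg program for the ARB fragment-program profile. */
    ctx  = cgCreateContext();
    prog = cgCreateProgram(ctx, CG_SOURCE, kernel_src,
                           CG_PROFILE_ARBFP1, "main", NULL);
    cgGLLoadProgram(prog);
    cgGLEnableProfile(CG_PROFILE_ARBFP1);
    cgGLBindProgram(prog);
    data_param = cgGetNamedParameter(prog, "data");
    cgGLSetTextureParameter(data_param, tex);
    cgGLEnableTextureParameter(data_param);

    /* Draw one quad covering N x N pixels: every fragment runs the kernel. */
    glViewport(0, 0, N, N);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glFinish();

    /* Copy the results back to main memory -- the slow step. */
    glReadPixels(0, 0, N, N, GL_RGBA, GL_FLOAT, output);

    printf("element 1: got %f, expected %f\n", output[4], input[1] * input[1]);
    return 0;
}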

I realise that not all computational algorithms can be recast in a form suitable for the GPU's streaming capabilities, but surely any increase in processing power would be a bonus.

Steve

Doris and Jens

The SETI@home project was working on this feature together with NVIDIA, but progress didn't look good.

They got the FFT to run on the NVIDIA chip, but not very fast: only about 1/4 the speed of the CPU. The problem seems to be the slow speed of moving data from GPU memory back to main memory.

So to use the GPU efficiently they would need to do ALL the analysis (not just the FFT) on the GPU, but this will require better high-level language support.
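
Something like the following is what "all on the GPU" would look like (a purely hypothetical sketch, not real SETI@home or E@H code): every analysis stage becomes one fragment-program pass, each pass's output is copied back into the input texture so it never leaves the card, and the slow GPU-to-CPU readback is paid only once, for the small final result. The names stage_program[] and draw_full_screen_quad() are placeholders.

/* Hypothetical multi-pass sketch.  stage_program[] and
 * draw_full_screen_quad() are placeholders, not real project code. */
#include <GL/gl.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* placeholder: draws one w x h screen-aligned textured quad */
extern void draw_full_screen_quad(int w, int h);

void run_analysis_on_gpu(CGprogram stage_program[], int num_stages,
                         GLuint data_tex, int w, int h,
                         float *final_result, int result_pixels)
{
    int s;

    for (s = 0; s < num_stages; ++s) {
        /* each stage (FFT pass, power spectrum, thresholding, ...) is
         * one fragment program run over the whole grid */
        cgGLBindProgram(stage_program[s]);
        draw_full_screen_quad(w, h);

        /* feed the output into the next stage without leaving the card:
         * copy the framebuffer back into the input texture */
        glBindTexture(GL_TEXTURE_2D, data_tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
    }

    /* pay the expensive GPU-to-CPU readback only once, at the end,
     * and only for the small final result */
    glReadPixels(0, 0, result_pixels, 1, GL_RGBA, GL_FLOAT, final_result);
}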

Possibly the 'Brook' project (a GPU language being developed at Stanford) will help here in the future.

Greetings from Bremen/Germany

Jens Seidler (TheBigJens)

