CUDA, Stream Computing and Ct

Bernd Machenschalk
Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4267
Credit: 244931393
RAC: 16426

RE: I understand that even

Message 82447 in response to message 82446

Quote:
I understand that even for Folding@Home, the workunits crunched by the GPU beta clients are different from those for the other platforms. But they did manage to do visualization and GPU processing at the same time now, so that you can still use your PC's video capabilities while crunching, which should improve acceptance.


That's quite amazing. I've been told that this is impossible.

Actually, running a second application (and its workunits) within the same project is quite possible on BOINC, though I don't know how many projects actually do this (I could imagine Leiden Classical does). Erik Korpela is visiting the AEI this week; he told us that SETI@home will run Astropulse as a second application some time soon. We're currently looking into implementing it, and it might become an option for Einstein@home too. That way we could actually run a "stream computing" search in parallel.

BM

BM

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 686138752
RAC: 558524

RE: RE: I understand that

Message 82448 in response to message 82447

Quote:
Quote:
I understand that even for Folding@Home, the workunits crunched by the GPU beta clients are different from those for the other platforms. But they did manage to do visualization and GPU processing at the same time now, so that you can still use your PC's video capabilities while crunching, which should improve acceptance.

That's quite amazing. I've been told that this is impossible.

At least for the ATI variant. It seems to be a recent change, though, after Folding@Home's GPU client switched from a DirectX-driven API to the "CAL" abstraction layer:

http://folding.stanford.edu/English/FAQ-ATI2#ntoc23

CU
Bikeman

Bernd Machenschalk
Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4267
Credit: 244931393
RAC: 16426

RE: RE: RE: I

Message 82449 in response to message 82448

Quote:
Quote:
Quote:
I understand that even for Folding@Home, the workunits crunched by the GPU beta clients are different from those for the other platforms. But they did manage to do visualization and GPU processing at the same time now, so that you can still use your PC's video capabilities while crunching, which should improve acceptance.

That's quite amazing. I've been told that this is impossible.

At least for the ATI variant. It seems to be a recent change, though, after Folding@Home's GPU client switched from a DirectX-driven API to the "CAL" abstraction layer:

http://folding.stanford.edu/English/FAQ-ATI2#ntoc23

CU
Bikeman


I see. The information I got apparently was bound to CUDA / NVidia.

BM

BM

Bikeman (Heinz-Bernd Eggenstein)
Bikeman (Heinz-...
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 686138752
RAC: 558524

RE: RE: RE: RE: I

Message 82450 in response to message 82449

Quote:
Quote:
Quote:
Quote:
I understand that even for Folding@Home, the workunits crunched by the GPU beta clients are different from those for the other platforms. But they did manage to do visualization and GPU processing at the same time now, so that you can still use your PC's video capabilities while crunching, which should improve acceptance.

That's quite amazing. I've been told that this is impossible.

At least for the ATI variant. It seems to be a recent change, though, after Folding@Home's GPU client switched from a DirectX-driven API to the "CAL" abstraction layer:

http://folding.stanford.edu/English/FAQ-ATI2#ntoc23

CU
Bikeman


I see. The information I got apparently was bound to CUDA / NVidia.

BM

Apparently it works for NVidia as well; see this FAQ entry, http://folding.stanford.edu/English/FAQ-ATI2#ntoc10, which refers explicitly to both ATI and NVidia visualizations.

CU
Bikeman

|MatMan|
|MatMan|
Joined: 22 Jan 05
Posts: 24
Credit: 249005261
RAC: 0

RE: There is no standard

Message 82451 in response to message 82444

Quote:
There is no standard for GPU computing (yet). Picking one particular model: how many Einstein@home participants do have an NVidia Quadro card that they want to actually use for crunching?


You don't need the expensive Quadro cards. Any NVidia card based on a G80, G92, G94 or G200 supports CUDA (in slightly different versions, though). Even the onboard graphics chips (GeForce 8200 IGPs) should support CUDA, but I'm not sure about that (it wouldn't make much sense anyway, I think). So there are about 70-80 million CUDA-enabled GPUs out there (according to NVidia presentations).

Quote:
As far as I understand the Folding@home application is based on Brook or some similar higher level language, the Einstein@home application is (currently) not. Our "Fstat engine" could be thought of as an FFT for narrow frequency bands. It's actually possible to use standard FFT implementations to calculate it, but in the current framework this would be rather inefficient.


You don't have to use an FFT. But you do have to recode and optimize your application for CUDA: e.g. you should keep a certain number of stream processors busy at a time (parallelism), avoid too much branching, and so on.
CUDA is not a high-level language with a fixed set of functions, but rather a C interface with some GPU-specific synchronisation routines. I'm not saying it's easy to program, but it should be possible to run any algorithm on a GPU. The question is whether it makes sense (can the algorithm be parallelized enough, and is it computation-bound rather than I/O-bound?).
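To make the programming model described above concrete, here is a minimal hedged sketch of a CUDA kernel: essentially C, launched across many threads so the stream processors stay busy, with a single bounds check as the only branch, and a GPU-specific synchronisation call. The kernel name and sizes are illustrative only, not from any Einstein@home or Folding@Home code:

```cuda
#include <cuda_runtime.h>

// Each thread handles one array element -- massive parallelism with
// minimal branching, which is what keeps the stream processors busy.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                 // only branch: guard against excess threads
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;     // 1M elements (illustrative size)
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    // Launch enough blocks to cover all n elements.
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);

    cudaDeviceSynchronize();   // GPU-specific synchronisation routine

    cudaFree(d_data);
    return 0;
}
```

Whether this pays off depends, as noted, on the algorithm: the launch only helps if there are enough independent elements to fill the GPU, and if the arithmetic per element outweighs the cost of moving data to and from the card.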

Stranger7777
Stranger7777
Joined: 17 Mar 05
Posts: 436
Credit: 417522221
RAC: 33778

Apparently, I'm now running

Apparently, I'm now running Einstein on BOINC on a quad-core CPU and Folding on the GPU of a GeForce 7500 with CUDA-enabled drivers (there are a lot of GPUs supporting CUDA, even from the 7200 up). And I can watch movies and play 3D games (though Folding works slower) and run Einstein (again, Folding runs slower, because it needs one CPU core to feed the GPU with data, and it works faster when BOINC is paused).

The difference between Einstein and Folding is in the computation model. Folding chose the SMP model as its basic platform, which makes it easier to scale work between CPUs, or kernels on GPUs, or even different machines in a cluster. Besides, it relies on standard SMP libraries, which are common on *nix OSes. They are not common on Windows and are new for GPUs, but it works. I can even see a 3D model of what I'm working on. The only confusing factor is that the GPU core is still working on beta workunits, produced only to test the core and to compare results between GPU WUs and SMP WUs.

I think we should not break any computation models now, at least until S5R4 ends. We can parallelize our work between the cores of a CPU, and that's enough for now. BOINC is the more stable platform, as I see it. We should watch what happens with Folding (will it be useful?) and only then think about a new programming model.
