Is there a GPU version of the app in the works?

Gerry Rough
Joined: 1 Mar 05
Posts: 102
Credit: 1847066
RAC: 0

RE: RE: The announcement

Message 87120 in response to message 87119

Quote:
Quote:
The announcement of the GPU change in BOINC says SaH uses the GPU, but as far as I know the SaH app is still in beta test, so really there is only one project capable of using the GPU.

The Seti app became official today.

A few days ago Lattice mentioned on their boards: "Yes, we will be releasing a GPU-accelerated version of GARLI in the near future!" IIRC, I've heard rumblings at Rosetta as well, but those plans are much further down the road.


j2satx
Joined: 22 Jan 05
Posts: 46
Credit: 1650297
RAC: 0

RE: RE: I think I read

Message 87121 in response to message 87117

Quote:
Quote:
I think I read somewhere that either BOINC or E@h will automatically check for a GPU, and then go to work, meaning that crunchers who join a project will not have to do anything different if one or more of their projects are doing GPU. Is this true?

If you have the latest Nvidia drivers installed and the correct version of BOINC, it will detect the GPU. I have only run two tasks for GPU Grid, and since I cannot get into the web site (and the two e-mails I have sent to the two addresses I could find have so far gone unanswered), I do not know yet whether those two tasks were successful.

The announcement of the GPU change in BOINC says SaH uses the GPU, but as far as I know the SaH app is still in beta test, so really there is only one project capable of using the GPU.

Those who subscribe to the BOINC Dev list can see the questions I have asked about what is really going on with the GPU tasks. Again, since I cannot get into GPU Grid I cannot ask my questions there ... and I am tired of the flak on SaH, so I have not bothered to subject myself to the SaH forums again ...

But the two tasks that ran "successfully" (as far as I know) took about 10 hours on my new i7 (4 cores with HT, giving 8 virtual CPUs) at 0.90 CPU use with one GPU ...

which gave rise to my question: if I added another GPU card (the motherboard I have can take up to 4 or 5 at various bus speeds), will that make things faster, and will it show as 1.8 CPUs and 2 GPUs, or 0.90 CPU with 2 GPUs?

Anyway, it is a new technology and there are still more questions than answers. I will say that it would be a relatively cheap way to add processing capability to a box, but the projects that can take advantage of the capability are limited, which means you will have to ask yourself whether you really want to do more for those projects or not.

One more note: this is going to "break" the resource share model that is implicit in BOINC's design, in that the computing capability of the machine no longer consists of a uniform pool of resources to apply to tasks. My expectation, though, is that the BOINC developers will ignore the implications and do nothing to address the issues raised by this change.

My understanding is that it would be 2*0.9 CPU and 2 GPU. I believe each GPU requires a core, whether or not it is fully utilized.

j2satx
Joined: 22 Jan 05
Posts: 46
Credit: 1650297
RAC: 0

RE: RE: I think I read

Message 87122 in response to message 87117

Quote:
Quote:
I think I read somewhere that either BOINC or E@h will automatically check for a GPU, and then go to work, meaning that crunchers who join a project will not have to do anything different if one or more of their projects are doing GPU. Is this true?

If you have the latest Nvidia drivers installed and the correct version of BOINC, it will detect the GPU. I have only run two tasks for GPU Grid, and since I cannot get into the web site (and the two e-mails I have sent to the two addresses I could find have so far gone unanswered), I do not know yet whether those two tasks were successful.

The announcement of the GPU change in BOINC says SaH uses the GPU, but as far as I know the SaH app is still in beta test, so really there is only one project capable of using the GPU.

Those who subscribe to the BOINC Dev list can see the questions I have asked about what is really going on with the GPU tasks. Again, since I cannot get into GPU Grid I cannot ask my questions there ... and I am tired of the flak on SaH, so I have not bothered to subject myself to the SaH forums again ...

But the two tasks that ran "successfully" (as far as I know) took about 10 hours on my new i7 (4 cores with HT, giving 8 virtual CPUs) at 0.90 CPU use with one GPU ...

which gave rise to my question: if I added another GPU card (the motherboard I have can take up to 4 or 5 at various bus speeds), will that make things faster, and will it show as 1.8 CPUs and 2 GPUs, or 0.90 CPU with 2 GPUs?

Anyway, it is a new technology and there are still more questions than answers. I will say that it would be a relatively cheap way to add processing capability to a box, but the projects that can take advantage of the capability are limited, which means you will have to ask yourself whether you really want to do more for those projects or not.

One more note: this is going to "break" the resource share model that is implicit in BOINC's design, in that the computing capability of the machine no longer consists of a uniform pool of resources to apply to tasks. My expectation, though, is that the BOINC developers will ignore the implications and do nothing to address the issues raised by this change.

It is not enough to have the latest Nvidia drivers; they must specifically be CUDA-capable drivers.
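
Roughly, the kind of check the client has to make looks like the sketch below. This is only an illustration using the public CUDA runtime API, not BOINC's actual detection code: if the driver has no CUDA support, no device is reported and no GPU work can be fetched.

/* Sketch: detecting a CUDA-capable GPU and driver, roughly what a
 * BOINC-like client must establish before it can fetch GPU work.
 * Illustration only; not BOINC's actual detection code. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int driver_ver = 0, runtime_ver = 0, count = 0;

    /* A driver version of 0 means the installed driver has no CUDA support. */
    if (cudaDriverGetVersion(&driver_ver) != cudaSuccess || driver_ver == 0) {
        printf("No CUDA-capable driver found.\n");
        return 1;
    }
    cudaRuntimeGetVersion(&runtime_ver);
    printf("CUDA driver %d, runtime %d\n", driver_ver, runtime_ver);

    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("Driver present, but no usable CUDA device.\n");
        return 1;
    }

    for (int i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, compute capability %d.%d, %zu MB, %d multiprocessors\n",
               i, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024 * 1024),
               prop.multiProcessorCount);
    }
    return 0;
}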

Gerry Rough
Joined: 1 Mar 05
Posts: 102
Credit: 1847066
RAC: 0

RE: My understanding is

Message 87123 in response to message 87121

Quote:

My understanding is that it would be 2*0.9 CPU and 2 GPU. I believe each GPU requires a core, whether or not it is fully utilized.

Not sure what you mean here. Do you mean that on my quad core host, three of my cores will belong to other projects, and use 100% of those CPUs, and that one core will be used by the GPU at 90% usage?


Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2980344047
RAC: 767449

RE: RE: My understanding

Message 87124 in response to message 87123

Quote:
Quote:

My understanding is that it would be 2*0.9 CPU and 2 GPU. I believe each GPU requires a core, whether or not it is fully utilized.

Not sure what you mean here. Do you mean that on my quad core host, three of my cores will belong to other projects, and use 100% of those CPUs, and that one core will be used by the GPU at 90% usage?


Yes, based on the *ahem* project and the lively debate in the forums there, that would seem to be the case.

The only nuance would seem to be the proportion of the CPU required to keep the GPU fed and fully occupied. At GPUGRID, it seems to be fairly high - like the ~90% suggested here. At SETI, the proportion seems to be much lower - ~5%. One would perhaps hope that as GPU programming matures, the percentage CPU requirement would decrease (SETI may have got a head start because of the direct involvement of NVidia staff).

The next question becomes: what to do with the remaining ~10% of a CPU (not a big deal), or ~95% of a CPU (rather more significant). As things stand, BOINC doesn't handle this well: it runs '3+1' - 3 CPU apps, and one GPU+CPU feeder. The more efficient the GPU programming, the less efficient the BOINC/CPU usage.

There have been some reports of successful running of '4+1' mode on a quad - 4 CPU-intensive apps, and a fifth task running on the GPU plus whatever it can scavenge from the other CPU cores. Apparently this leads to low GPU efficiency (the GPU app spends a lot of time waiting to attract the attention of a CPU to feed it more data), but careful manipulation of the CPU process thread priority can get things running well. Definitely geek territory, until BOINC can be fine-tuned to manipulate the threads to best advantage.
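
To put rough numbers on it, the bookkeeping is something like the sketch below. This is only the arithmetic of the '3+1' case, not BOINC's actual scheduler code, and cpu_fraction is just an illustrative parameter.

/* Sketch of the '3+1' arithmetic: how many pure CPU tasks fit alongside
 * the CPU slices reserved for feeding the GPU(s). Illustration only. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    int    cpus = 4;            /* cores on the host                */
    int    gpus = 1;            /* CUDA devices in use              */
    double cpu_fraction = 0.90; /* CPU share claimed per GPU feeder */

    /* Whole cores left over for ordinary CPU tasks. */
    int cpu_tasks = (int)floor(cpus - gpus * cpu_fraction);
    printf("%d CPU tasks + %d GPU task(s), %.2f CPU reserved per GPU\n",
           cpu_tasks, gpus, cpu_fraction);

    /* 4 cores, 1 GPU at 0.90 CPU -> 3+1, leaving ~0.10 of a core idle.
     * With a ~0.05 feeder (as reported for SETI) it is still 3+1,
     * but now ~0.95 of a core sits idle unless you force 4+1.       */
    return 0;
}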

mikey
Joined: 22 Jan 05
Posts: 12779
Credit: 1866388624
RAC: 1785655

RE: RE: RE: My

Message 87125 in response to message 87124

Quote:
Quote:
Quote:

My understanding is that it would be 2*0.9 CPU and 2 GPU. I believe each GPU requires a core, whether or not it is fully utilized.

Not sure what you mean here. Do you mean that on my quad core host, three of my cores will belong to other projects, and use 100% of those CPUs, and that one core will be used by the GPU at 90% usage?

Yes, based on the *ahem* project and the lively debate in the forums there, that would seem to be the case.

The only nuance would seem to be the proportion of the CPU required to keep the GPU fed and fully occupied. At GPUGRID, it seems to be fairly high - like the ~90% suggested here. At SETI, the proportion seems to be much lower - ~5%. One would perhaps hope that as GPU programming matures, the percentage CPU requirement would decrease (SETI may have got a head start because of the direct involvement of NVidia staff).

The next question becomes: what to do with the remaining ~10% of a CPU (not a big deal), or ~95% of a CPU (rather more significant). As things stand, BOINC doesn't handle this well: it runs '3+1' - 3 CPU apps, and one GPU+CPU feeder. The more efficient the GPU programming, the less efficient the BOINC/CPU usage.

There have been some reports of successful running of '4+1' mode on a quad - 4 CPU-intensive apps, and a fifth task running on the GPU plus whatever it can scavenge from the other CPU cores. Apparently this leads to low GPU efficiency (the GPU app spends a lot of time waiting to attract the attention of a CPU to feed it more data), but careful manipulation of the CPU process thread priority can get things running well. Definitely geek territory, until BOINC can be fine-tuned to manipulate the threads to best advantage.

BUT in some cases the GPU is MUCH faster than the CPU feeding it, i.e. if only the CPU were used, processing would actually be slower. Being able to use BOTH the GPU AND the CPU would be an ideal world; if they can do that efficiently it would be sweet.

koschi
Joined: 17 Mar 05
Posts: 86
Credit: 1710060300
RAC: 604509

RE: There have been some

Message 87126 in response to message 87124

Quote:

There have been some reports of successful running of '4+1' mode on a quad - 4 CPU intensive apps, and a fifth task running on the GPU plus whatever it can scavenge from the other CPU cores. Apparently this leads to low GPU efficiency (the GPU app spends a lot of time waiting to attract the attention of a CPU to feed it more data), but careful manipulation of the CPU process thread priority can get thing running well. Definitely geek territory, until BOINC can be fine-tuned to manipulate the threads to best advantage.

Under Linux the 4+1 or 2+1 config runs without problems and is the default in GPUGRID. Under Windows 3+1 or 1+1 is the default. If one forces 4+1 under Windows, GPU performance decreases a lot, because it is not fed fast enough. This is a Windows scheduling problem, as far as I understand. Under Linux there is no such issue, hence the 4+1 default. The only problem under Linux is that the acemd process (which feeds the GPU) is started with a nice value of 0, which really affects desktop performance. If one sets it back to 19, the desktop isn't choppy and the GPU loses only very little performance.
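
For anyone wanting to do the same, the renice can be done from a shell (something like renice -n 19 -p <pid of acemd>) or programmatically. A minimal sketch of the system call follows; finding the PID is left out, and the value 19 is the one mentioned above:

/* Sketch: lowering the priority (nice 19) of the process that feeds the
 * GPU, e.g. acemd. The PID must be found separately (ps, pidof, ...);
 * this only shows the setpriority() call itself. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);

    /* A non-root user may only raise the nice value of its own processes. */
    if (setpriority(PRIO_PROCESS, pid, 19) != 0) {
        fprintf(stderr, "setpriority failed: %s\n", strerror(errno));
        return 1;
    }
    printf("pid %d reniced to 19\n", (int)pid);
    return 0;
}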

I'm really looking forward to the Einstein CUDA app. If any beta testers are needed, I would love to volunteer :)

Rakarin
Joined: 10 Dec 05
Posts: 9
Credit: 163914
RAC: 0

RE: I'm really looking

Message 87127 in response to message 87126

Quote:
I'm really looking forward to the Einstein CUDA app. If any beta testers are needed, I would love to volunteer :)

Same here.

I unexpectedly started receiving SETI CUDA units earlier this week. I have also been running the Folding @ Home GPU client.

The two play well together. I don't know the percentage of task sharing on the GPU, but the SETI units usually run in 5-8 minutes, 15-18 minutes, or 25-5 minutes. I don't have a baseline for SETI without F@H running, but that seems pretty fast. The F@H is slower, but it doesn't seem to drop to 50% of normal speed. I would guess it runs at about 75% of normal speed, which is still good. (I don't have SETI units continuously, so Folding doesn't have timeout issues.) Both SETI and F@H take about 3-5% of the processor, so when SETI CUDA runs, two cores are pegged full and the third runs about 10% of distributed computing apps alongside everything else.

Now, one thing I've noticed (and I don't have a lot of consistent data to back this up): when I do occasional checks with CPU-Z, the GPU core temperature drops a few degrees when both SETI CUDA and F@H are running.

In any case, I should probably look into improving the cooling for the video card...

Paul D. Buck
Joined: 17 Jan 05
Posts: 754
Credit: 5385205
RAC: 0

To add to my post(s) below.

To add to my post(s) below. I have been using 6.5.0 (Beta) and the overhead on the main CPU is about 0.03, and that CPU is still running other tasks. So, on an 8-core machine I am running 9 tasks. When I moved another GPU-capable card into that machine I was running 10 tasks.

The two GPU cards were radically different in capabilities, so they were not linked. SLI-linked cards appear, from comments posted, to be seen as one GPU. What is not clear is whether the full capabilities of the second card are used or not. In other words, we need to test more.

As to the comments about cross project credit and resource share models I will only note that the proposals were not acted on for more than one reason. But the primary reason was that the BOINC team would not have incorporated the code even if the concept had been proven. Worse, to prove the concept the projects would have had to be involved.

The larger point is that NIH (not invented here) is alive and well in the BOINC world, and is probably the key determinant of why the participant population is frozen (those leaving balanced by gains). I was doing a little research and found I was x of 1.5 million ... well, 2-3 years ago we had about that many participants.

BOINC will never become prime time until the developers begin to take seriously the need to listen to the participant community. Heck, the application still does not remember the look and feel of the window(s) from the last time I opened them. How lame is that?

Oh, and I have been doing tasks for GPU Grid, and though many failed with initialization errors, those that started running have been running to completion. Of the 11 tasks I ran for SaH, all seem to be validating (still waiting on 4-5 to validate) and each took 9 minutes of wall-clock time. On my other systems I think the run time using the optimized applications is about 30 minutes for the same tasks (slower computers), but I do not have a good head-to-head comparison. Again, anecdotal evidence is that the GPU version is 2-3 times faster than the CPU version.

I just like the idea that I can add processing capability to existing systems with relatively inexpensive GPU cards and, better still, can incrementally add processing speed by replacing older cards without having to junk the whole computer.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2980344047
RAC: 767449

RE: Under Linux the 4+1 or

Message 87129 in response to message 87126

Quote:
Under Linux the 4+1 or 2+1 config runs without problems and is the default in GPUGRID. Under Windows 3+1 or 1+1 is the default. If one forces 4+1 under Windows, GPU performance decreases a lot, because it is not fed fast enough. This is a Windows scheduling problem, as far as I understand. Under Linux there is no such issue, hence the 4+1 default. The only problem under Linux is that the acemd process (which feeds the GPU) is started with a nice value of 0, which really affects desktop performance. If one sets it back to 19, the desktop isn't choppy and the GPU loses only very little performance.


You could perhaps try setting it to some intermediate value, and get the best of both worlds!

Raistmer has posted code at SETI Beta - it's only one system call plus an error handler - which on Windows achieves just that: the "below normal" priority is enough to keep the GPU fed, but not intrusive to foreground applications. I assume Linux has a similar system call which could be added to the #if block.
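
Something along these lines, presumably. This is only a sketch: I have not seen Raistmer's actual code, so the exact priority class, the Linux branch and the nice value of 10 are guesses for illustration.

/* Sketch: a GPU feeder app dropping its own priority to "below normal"
 * (Windows) or to an intermediate nice value (Linux). Guessed values,
 * not Raistmer's actual code. */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
static void lower_priority(void)
{
    if (!SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS))
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
}
#else
#include <errno.h>
#include <string.h>
#include <sys/resource.h>
static void lower_priority(void)
{
    /* An intermediate value rather than 19, as suggested above. */
    if (setpriority(PRIO_PROCESS, 0, 10) != 0)
        fprintf(stderr, "setpriority failed: %s\n", strerror(errno));
}
#endif

int main(void)
{
    lower_priority();
    /* ... feed the GPU ... */
    return 0;
}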
