Binary Radio Pulsar Search (Perseus Arm Survey) "BRP5"

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4265
Credit: 244921893
RAC: 16834

RE: Basically I think E@H

Quote:
Basically I think E@H doesn't give much credit for tasks.

Well, according to BOINCStats and BOINC Combined, Einstein@Home grants >10x as much credit "per CPU hour" as most other projects. This comparison may be severely flawed for GPUs (what is a CPU second for a GPU App?), but these charts also include GPUGrid, which doesn't have a CPU App at all. So these can hardly be about CPU Apps only.

BM


S@NL - John van Gorsel
Joined: 19 Feb 05
Posts: 5
Credit: 33762692
RAC: 913

RE: If we're sticking to

Quote:

If we're sticking to nice round figures, may I propose 4,000? That would be 456 credits per hour on this host - slightly high, but not exaggeratedly so.

Seems like a fair amount. For my GTX580 this means 1370 cr/hr running under Linux, whereas the same card yields 1500 cr/hr running SETI (x41g) under Win7.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752648905
RAC: 1480315

RE: RE: If we're sticking

Quote:
Quote:
If we're sticking to nice round figures, may I propose 4,000? That would be 456 credits per hour on this host - slightly high, but not exaggeratedly so.

That is, I think, a good starting point for further discussion.

BRP5 tasks are, and will remain, much longer than BRP4 ones, and the probability of an error during computation becomes much higher too. Many users with older GPUs will be forced to give up on BRP5 tasks.
In my opinion, these longer and more difficult tasks should be rewarded with some bonus points.


I've repeated my 'standard candle' calculations for my GTX 470 'Fermi':

[pre]Project    Host      Credit/hour   Range
SETI       4292666   1077          1557 to 592
Einstein   1226365   1479          1735 to 1174
GPUGrid    43404     4136          4567 to 4045[/pre]

For anyone checking my figures, I have doubled the raw values for SETI and Einstein because I run two tasks at once for those projects. The GPUGrid values are for 'short run' tasks, and only for tasks which earned the 'within 24 hours' completion bonus. GPUGrid pays even higher rates for 'long run' tasks, but those, I think, should be excluded as outliers.
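
For anyone wanting to reproduce the arithmetic, here is a minimal sketch of the credit-per-hour calculation with the running-two-at-once correction (the task figures in it are invented placeholders, not data from my hosts):

[pre]# Credit per GPU hour, corrected for running N tasks concurrently.
# Placeholder figures only - substitute your own task data.
def credit_per_hour(credit_per_task, elapsed_hours, tasks_at_once=1):
    # With N tasks sharing the card, each task's elapsed time covers
    # only 1/N of the GPU, so the raw rate is multiplied by N.
    return credit_per_task / elapsed_hours * tasks_at_once

# e.g. a 500-credit BRP4 task, 0.68 h elapsed, two at once:
print(credit_per_hour(500, 0.68, tasks_at_once=2))  # ~1470 cr/hr
[/pre]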

This host has only completed BRP4 GPU tasks so far at Einstein, so no BRP5 comparison is available.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752648905
RAC: 1480315

RE: This comparison may be

Quote:

This comparison may be severely flawed for GPUs (what is a CPU second for a GPU App?), but these charts also include GPUGrid, which doesn't have a CPU App at all. So these can hardly be about CPU Apps only.

BM


Since I have my GPUGrid spreadsheet open in front of me, I can report that at GPUGrid, my GTX 470 is awarded in 'credit per CPU hour':

[pre]Median 24,607
Mean 45,006
Max 154,572
Min 23,995[/pre]
The reason for the huge difference between median and mean is that different types of task require different levels of CPU support - CPU time as a percentage of runtime ranges from below 3% to nearly 17%.

Similarly, that host is averaging over 7,000 'credits per CPU hour' for BRP4 work on the GPU. It's not a meaningful statistic.
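
To illustrate why not (with invented figures, not my actual tasks): the same GPU task can look wildly different in 'credit per CPU hour' depending solely on how much CPU support the app happens to need.

[pre]# Why 'credit per CPU hour' misleads for GPU apps (invented figures).
credit    = 9000.0   # credit per task
runtime_h = 4.0      # GPU elapsed hours, identical in both cases

for cpu_fraction in (0.03, 0.17):        # 3% vs 17% CPU support
    cpu_hours = runtime_h * cpu_fraction
    print(f"{cpu_fraction:.0%}: {credit / cpu_hours:,.0f} cr per CPU hour")
# -> 3%: 75,000 cr per CPU hour; 17%: 13,235 cr per CPU hour
[/pre]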

Mumak
Joined: 26 Feb 13
Posts: 325
Credit: 3291781195
RAC: 1523917

RE: Well, according to

Quote:

Well, according to BOINCStats and BOINC Combined, Einstein@Home grants >10x as much credit "per CPU hour" as most other projects. This comparison may be severely flawed for GPUs (what is a CPU second for a GPU App?), but these charts also include GPUGrid, which doesn't have a CPU App at all. So these can hardly be about CPU Apps only.
BM

I was considering GPU tasks only. For CPU tasks, yes, E@H gives higher credit than WCG CPU tasks, for example.
But BRP5 is GPU-only, so only GPU tasks matter here. Making a rough comparison with one of my hosts over a longer period: the credit earned running only BRP4 tasks is about 1/10th of what the same host earns when I let it run GPUGrid, MW@H or WCG HCC tasks.

In either case the comparison needs to be based on a given GPU. For example, there is a HUGE difference between running DP tasks (like MW@H) on a GPU with high DP performance (an HD 79xx, for example) and running them on a low-end card.
Another important point, I believe, is the optimization of the BRP tasks. If it were possible to port them to CUDA 4+, they would perform much better (as was the experience at GPUGrid, which offered both CUDA3 and CUDA4 apps; later GPUs performed much better running the CUDA4 tasks).

Quote:

The GPUGrid values are for 'short run' tasks, and only for tasks which earned the 'within 24 hours' completion bonus.

GPUGrid short tasks do not earn a bonus for completion within 24h. Only the long ones do, and those also carry a bonus for running so long (to cover a potential crash/loss).


Richard Haselgrove
Joined: 10 Dec 05
Posts: 2139
Credit: 2752648905
RAC: 1480315

RE: GPUGrid short tasks do

Quote:
GPUGrid short tasks do not earn a bonus for completion within 24h. Only the long ones do, and those also carry a bonus for running so long (to cover a potential crash/loss).


Oh, yes they do. The two tasks I left out because they over-ran the 24-hour mark - 6877153 and 6882959 - were paid about 3,380 credits per (GPU) hour.

Sunny129
Joined: 5 Dec 05
Posts: 162
Credit: 160342159
RAC: 0

RE: While my first batch of

Quote:

While my first batch of three BRP5 WUs on each of two GTX660 hosts spent most of their run time together, I've subsequently had substantial run time combining one BRP4 with two BRP5, or two BRP4 with one BRP5. While my use of throttling and my use of a single GW pure CPU job complicate direct comparison with other people, an initial observation is that a single BRP4 job runs much faster when sharing the GPU with two BRP5 than when sharing with two more BRP4.

Also, the BRP5 jobs run much slower when sharing the GPU with BRP4 work.

These effects will considerably complicate calculations of "fair" credit and of most productive workload, as I imagine a large fraction of the BRP5 production capacity will be on hosts running multiple jobs per GPU, and with both BRP4 and BRP5 enabled.


Interesting... I'm not seeing this behavior at all on my main host for this project. I have 2 x GTX 580s running 4 BRP tasks each (both at PCIe x8 2.0 bandwidth), fed by an AMD 6-core 1090T CPU, Windows 7 x64. Regardless of whether either card is running all BRP4, all BRP5, or some combination thereof, the BRP4 tasks continue to take approximately the same amount of time to crunch as they did before the BRP5 tasks started to show up. Likewise, my BRP5 task run times are remarkably consistent and do not vary with the type or combination of BRP tasks currently running on that particular GPU. I'm sure, though, that the different behaviors exhibited by our hosts are simply due to the remaining hardware differences between our platforms. At any rate, I'm sure these side effects will be more pronounced for some folks than for others...

Sid
Joined: 17 Oct 10
Posts: 160
Credit: 920862000
RAC: 285217

RE: interesting...i'm not

Quote:

Interesting... I'm not seeing this behavior at all on my main host for this project. I have 2 x GTX 580s running 4 BRP tasks each (both at PCIe x8 2.0 bandwidth), fed by an AMD 6-core 1090T CPU, Windows 7 x64. Regardless of whether either card is running all BRP4, all BRP5, or some combination thereof, the BRP4 tasks continue to take approximately the same amount of time to crunch as they did before the BRP5 tasks started to show up. Likewise, my BRP5 task run times are remarkably consistent and do not vary with the type or combination of BRP tasks currently running on that particular GPU. I'm sure, though, that the different behaviors exhibited by our hosts are simply due to the remaining hardware differences between our platforms. At any rate, I'm sure these side effects will be more pronounced for some folks than for others...

Same here. BRP4 tasks take the same amount of time to crunch regardless of whether BRP4 and BRP5 tasks are mixed.

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023194931
RAC: 1828359

On the stare decisis notion

On the stare decisis notion of being consistent with previous decisions, I offer that on the GTX660/Sandy Bridge/Windows 7 host for which I have the cleanest comparison data, triple simultaneous BRP4 running pure takes an average elapsed time of 0.914 hours, while triple simultaneous BRP5 takes about 8.427 hours. If one thinks the 500 credits awarded for BRP4 are correct, and one thinks my host is reasonably representative (neither high nor low end, somewhat typical save perhaps for my odd choice of running only a single pure CPU BOINC job), this suggests a 4610 credit award for BRP5. Of course, if one thinks BRP4 was already high or low, that would move things around.
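
Spelled out, that is just the BRP4 award scaled by the elapsed-time ratio; a minimal sketch of the arithmetic:

[pre]# Credit scaled from BRP4 by the elapsed-time ratio (my GTX660 host, triple running).
brp4_credit = 500.0
brp4_hours  = 0.914    # average elapsed time, three BRP4 at once
brp5_hours  = 8.427    # average elapsed time, three BRP5 at once
print(brp4_credit * brp5_hours / brp4_hours)   # ~4610 credits for BRP5
[/pre]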

Bernd, I suggest that project comparisons of credit per CPU hour are deeply flawed in cases where GPUs are major contributors, as shared GPU elapsed time, which is the key resource, correlates poorly with CPU time consumption in the support application. It is not even well correlated for the same project on the same host when comparing differing numbers of simultaneous GPU jobs. Use of this measure would make projects which succeed in off-loading nearly the entire application to the GPU look over-generous, whereas that should be the goal, to the extent feasible.

Stephan Goll
Joined: 13 Dec 05
Posts: 25
Credit: 27834196
RAC: 0

PA0050_00441_261_1 -

[pre]Task:            PA0050_00441_261_1 (165131666)
Sent:            24 May 2013 8:41:29 UTC
Reported:        26 May 2013 15:10:28 UTC
Status:          Completed, waiting for validation
Run time (s):    60,899.09
CPU time (s):    5,478.15
Claimed credit:  35.01
Granted credit:  pending
Application:     Binary Radio Pulsar Search (Perseus Arm Survey) v1.33 (BRP4cuda32nv270)[/pre]

nVidia GT 320, Intel C2D @ 3 GHz, Debian Linux, nearly 17 hours. Looks a bit long, but it's okay. The other CUDA WUs finish in 58xx seconds (around a tenth of the time), so the granted credit should be somewhere in the 5k range.
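
Roughly, scaling the 500-credit BRP4 grant mentioned earlier in the thread by the run-time ratio on this card gives the same ballpark (a rough sketch, not an official formula):

[pre]# Rough scaling from BRP4 run time on this GT 320.
brp5_seconds = 60899.09   # the BRP5 task above
brp4_seconds = 5800.0     # typical BRP4 CUDA WU on the same card
print(500 * brp5_seconds / brp4_seconds)   # ~5250, i.e. "somewhere in the 5k range"
[/pre]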
Stephan
