Binary Radio Pulsar Search (Perseus Arm Survey) "BRP5"

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 112

RE: On my slightly

Quote:
On my slightly overclocked Radeon 7850@1050Mhz runtime of BRP5 compared to BRP4 is factor 10 longer.


Hmmm, I took all the single runs from here, added them up, divided by their count (15), and got an average BRP4 runtime of 1,743.4 seconds.
Then I did the same thing with the single runs from here: added them up, divided by their count (8), and the average BRP5 runtime so far is 13,288.5 seconds.

(13,288.5 / 1,743.4) = 7.6221751, which isn't exactly 10.

Now, before you say I didn't count it correctly, I then added up everything from the first pages.
BRP4 has a total of 46,154.83 seconds, which divided by 20 is an average of 2,307.7415
BRP5 has a total of 177,677.92 seconds, which divided by 10 is an average of 17,767.792

(17,767.792 / 2,307.7415) = 7.6992124 which is also not exactly 10.

Not sure what you do in daily life, but would you tell everyone you're working 10 times longer while in reality only working 7.6 times longer? Would you expect to be paid 10 times more for doing only 7.6 times the work (before taxes)?

Now, the fun thing: 7.622 times 500 is 3,811 credit.
7.699 times 500 is 3,849 credit.

You're being paid the same as you were before. Slightly more, even.
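
If anyone wants to redo this with their own task lists, here is a rough Python sketch of the same arithmetic. The totals and the 500 BRP4 credit are the numbers above; everything else (names, layout) is just illustrative.

[pre]# A rough sketch (Python) of the averaging and scaling above.
# The totals and the 500-credit BRP4 baseline are the numbers quoted in this post;
# the variable names and structure are just for illustration.

brp4_total_seconds = 46_154.83   # sum of the BRP4 runtimes from the first page
brp4_count = 20
brp5_total_seconds = 177_677.92  # sum of the BRP5 runtimes from the first page
brp5_count = 10

brp4_avg = brp4_total_seconds / brp4_count   # ~2,307.74 s
brp5_avg = brp5_total_seconds / brp5_count   # ~17,767.79 s
ratio = brp5_avg / brp4_avg                  # ~7.70, not 10

scaled_credit = ratio * 500                  # ~3,850: what runtime scaling of the
                                             # 500 BRP4 credit would pay per BRP5
print(f"ratio: {ratio:.3f}, runtime-scaled BRP5 credit: {scaled_credit:.0f}")[/pre]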

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7219624931
RAC: 975722

Since I initially reported

Since I initially reported that mixed BRP4/BRP5 loads on my GTX660 hosts showed increased BRP5 elapsed times and decreased BRP4 elapsed times compared to "pure" loads (in all cases with three simultaneous jobs on the GPU), some others have reported not seeing any such effect. As my initial report was non-quantitative and based on poorly controlled observations, I'll supplant it with this carefully controlled set of observations from a single GTX660 host. All observations were conducted with no throttling active and a single BOINC CPU job (GW Einstein work), on a quad-core Sandy Bridge host running Windows 7 64-bit. I continue to see the effect I described.

[pre]BRP4  BRP5  watts  hours/CPU  hours/BRP4  hours/BRP5  GPU_load    RAC  RAC/watt
   3     0  177.4      3.900       0.914           -       96%  40938     230.7
   2     1  179.2      3.922       0.784      10.926       96%  40929     228.4
   1     2  176.7      3.946       0.657       9.183       95%  40694     230.3
   0     3  174.5      3.979           -       7.689       93%  38973     223.3[/pre]

The first two columns are the job counts, the third is the measured power draw in watts, and the fourth is wall clock elapsed time for the pure CPU GW tasks. The next two are the heart of the matter: wall clock elapsed time for the BRP tasks. GPU load is that reported by GPU-Z as an average over several hours.

On this host, under these conditions, the elapsed times are very highly repeatable under a single condition of BRP4/BRP5 mix (well under 1% variation). So the range of BRP4 times from .657 to .914 hours is many sigma of that natural variation, as is the BRP5 range from 7.689 to 10.926.

For the RAC columns I've estimated RAC by direct computation from completion times and credits, using the initially implemented 4000 BRP5 credit. For my particular host, 4000 is slightly below the "break-even" credit of 4210 for the pure load cases.
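
Roughly, that kind of direct computation looks like the sketch below (Python). It covers only the GPU tasks; the CPU GW task contributes its own share on top, which I don't model here, so these figures land somewhat below the RAC column in the table.

[pre]# Rough steady-state daily GPU credit, assuming the listed numbers of BRP4/BRP5 tasks
# run simultaneously around the clock. Per-task credits (500 / 4000) are the values
# discussed in this thread; the CPU GW task's contribution is deliberately left out,
# so these totals sit a bit below the RAC column above.

def gpu_daily_credit(n_brp4, hours_brp4, n_brp5, hours_brp5,
                     credit_brp4=500.0, credit_brp5=4000.0):
    daily = 0.0
    if n_brp4:
        daily += n_brp4 * (24.0 / hours_brp4) * credit_brp4
    if n_brp5:
        daily += n_brp5 * (24.0 / hours_brp5) * credit_brp5
    return daily

print(gpu_daily_credit(3, 0.914, 0, 0))      # pure 3x BRP4: ~39,387 credits/day
print(gpu_daily_credit(0, 0, 3, 7.689))      # pure 3x BRP5: ~37,456 credits/day
print(gpu_daily_credit(2, 0.784, 1, 10.926)) # 2+1 mix:      ~39,399 credits/day[/pre]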

I speculate that running more than two simultaneous jobs on the GTX660 GPU may be material to observing this effect. My suspicion is that when one of the three jobs "needs to take a break" (most likely waiting for data provided over the motherboard bus), the choice of which of the two other jobs runs next, combined with how long that job can run before itself needing a break, is the crux of the issue. Even if the choice of which job runs next is equally likely to be the BRP4 or the BRP5 job, the BRP5 job will still be shortchanged in the fraction of time it spends running if the time it can run before taking a break is shorter. That this is the case is hinted at by observations by several of us that, under comparable conditions, the CPU time consumed by the CPU support task for a BRP5 job is somewhat higher than for a BRP4 job.

Lastly, the slightly lower GPU loading for the pure 3x BRP5 case may hint that on my system a 4x configuration might be slightly more productive for pure BRP5 loading. I intend to test this soon, but not to implement it until I am free of BRP4 work in the mix.

Regarding the "fair pay" discussion, the extension of BRP5 elapsed time in mix-load cases compared to pure load cases would tend to elevate the calculated fair-pay BRP5 credit. In my own case, my failure to control my initial 3x BRP5 to beginning and ending purity had me overstating the breakeven credit for my own system as 4610. On these better controlled results it appears instead to be 4210--which is not far at all from the 4000 initially adopted. A completely uncontrolled estimate which used the mixed loads as they happened to occur would likely be considerably higher yet.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250388525
RAC: 34719

RE: Don't know about seti,

Quote:
Don't know about seti, creditnew is bizarre and erratic to say the least. Probably one of the reasons for the mass exit from the project.

Hm. I thought that by now more than half of the BOINC projects do use CreditNew. Am I mistaken?

BM


Sparrow
Joined: 4 Jul 11
Posts: 29
Credit: 10701417
RAC: 0

My GPU needs about 9 times as

My GPU needs about 9 times as long for BRP5 compared to BRP4 (9900 sec vs 1100 sec). Credits are 8 times as high. Seems okay.

Comparing credits between different projects is nonsense anyway.

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 112

RE: Hm. I thought that by

Quote:

Hm. I thought that by now more than half of the BOINC projects do use CreditNew. Am I mistaken?

BM


As far as I know, only SETI and World Community Grid. See this thread where we touched on this not too long ago.

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2955253223
RAC: 717411

RE: RE: Don't know about

Quote:
Quote:
Don't know about seti, creditnew is bizarre and erratic to say the least. Probably one of the reasons for the mass exit from the project.

Hm. I thought that by now more than half of the BOINC projects do use CreditNew. Am I mistaken?

BM


As I said when you asked some of us a similar question by email last month, "I think we ought to be careful about the distinction between projects which use the full-blown CreditNew package, and projects which use the runtime estimation and other server-side components that were introduced around that time, but use one of the alternative credit mechanisms - as you do with Albert."

We all also have to be aware that there are a large number of BOINC projects out there: over 80 active, according to BoincStats. Most readers here will have in-depth knowledge of maybe a dozen of those? (I over-generalise, horribly.) There could easily be 40 projects hiding out there that use CreditNew without any of us knowing it. But I think the answer you'll receive is that a much lower proportion of the big, high-profile, well-established projects has switched to CreditNew, just as Einstein has been reluctant to make the leap into the unknown while it remains unknown. The 'early adopters' are more likely to be the smaller, newer projects which possibly weren't around early enough to have started with a different credit system.

Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1714373961
RAC: 0

CreditNew seems like a nice

CreditNew seems like a nice endeavor. It should be made mandatory on all projects once the bugs are ironed out.
It sickens me when projects try to attract crunchers by outbidding each other on credits instead of on the merit of the work the project does.

ritterm
Joined: 18 Jun 08
Posts: 23
Credit: 46657826
RAC: 0

RE: My GPU needs about 9

Quote:
My GPU needs about 9 times as long for BRP5 compared to BRP4 (9900 sec vs 1100 sec). Credits are 8 times as high...


My host with a lowly GTX260 takes about 10 times as long for Perseus v. Arecibo and gets 8 times the credit.

Eric_Kaiser
Joined: 7 Oct 08
Posts: 16
Credit: 25699305
RAC: 0

@Ageless: Thank you very much

@Ageless:
Thank you very much for your math & time. So we all know it exactly now...

Let me make this clear: I'm not complaining about the credits given for crunching WUs.
Credits are not my motivation for spending computing power and energy on distributed computing.

I've set up BOINC and GIMPS to run my computer as close to 100% as possible.
Besides, Einstein is not the only project I'm supporting.
I'm running 12 WUs in parallel on the CPU and at least 2 WUs on the GPU (from different projects), so there might be interference.
I couldn't care less whether it is 10 times, 7.xxx times, or whatever times more or less.
And hey, there are many interesting projects on BOINC, so I'm not reliant solely on Einstein...

Kind Regards

Eric

Jord
Joined: 26 Jan 05
Posts: 2952
Credit: 5893653
RAC: 112

RE: Let me point out clear:

Quote:
Let me make this clear: I'm not complaining about the credits given for crunching WUs.
Credits are not my motivation for spending computing power and energy on distributed computing.


I wasn't doing the math to tell you about the credits; I was doing it because a lot of people in this thread are stating that their times are ten times longer for BRP5s than they were for BRP4s. And it just ain't true. So yours was just an example.

Because for some it may now all of a sudden be about credits, I then did the math showing that the BRP5 payment is just about the same as what they'd always gotten for BRP4 without ever complaining about it.

Sorry for taking yours as an example without stating clearly why I did so. By the time I thought of editing my post, my hour was up. :-(
