S5GCESSE2 vs BRP3SSE credit

Boomman
Joined: 30 Mar 10
Posts: 10
Credit: 36067264
RAC: 0
Topic 195730

Hello.
Please look at the following tasks:
http://einsteinathome.org/task/225297498
http://einsteinathome.org/task/225297313

The first one has a CPU time of 20,839.67 s and granted credit of 251.25.
The second has a CPU time of 68,329.72 s and granted credit of 500.00.

Since the run time in the second case is about three times longer, shouldn't the granted credit be around 700?
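
(Purely to illustrate the arithmetic behind the question, here is a quick check assuming credit were strictly proportional to CPU time; the figures are the ones quoted above.)

    # Rough proportionality check (illustration only, using the numbers quoted above)
    gw_cpu_s, gw_credit = 20_839.67, 251.25    # S5GCESSE2 task
    brp_cpu_s, brp_credit = 68_329.72, 500.00  # BRP3SSE task

    ratio = brp_cpu_s / gw_cpu_s               # roughly 3.3x longer run time
    proportional = gw_credit * ratio           # roughly 820 credits if strictly proportional
    print(f"runtime ratio {ratio:.2f}, proportional credit {proportional:.0f}, granted {brp_credit}")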

Michael Karlinsky
Joined: 22 Jan 05
Posts: 888
Credit: 23502182
RAC: 0

S5GCESSE2 vs BRP3SSE credit

Quote:

Hello.
Please look at the following tasks:
http://einsteinathome.org/task/225297498
http://einsteinathome.org/task/225297313

The first one has a CPU time of 20,839.67 s and granted credit of 251.25.
The second has a CPU time of 68,329.72 s and granted credit of 500.00.

Since the run time in the second case is about three times longer, shouldn't the granted credit be around 700?

Hi Boomman,

But the BRP runtime on a *G*PU is less than 6,000 s...
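
(To make the implied comparison concrete: credit earned per hour of run time, using the figures quoted above and the ~6,000 s GPU figure. Purely illustrative, not an official rate.)

    # Credit per hour of run time (illustration only)
    gw_credit_per_h = 251.25 / (20_839.67 / 3600)   # GW task on CPU:  ~43 credits/hour
    brp_cpu_per_h   = 500.00 / (68_329.72 / 3600)   # BRP task on CPU: ~26 credits/hour
    brp_gpu_per_h   = 500.00 / (6_000.0   / 3600)   # BRP task on GPU: ~300 credits/hour
    print(gw_credit_per_h, brp_cpu_per_h, brp_gpu_per_h)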

Michael

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5874
Credit: 118380308788
RAC: 25580889

RE: RE: 1st result / 2nd result

Quote:
Quote:

1st result
2nd result

The first one has a CPU time of 20,839.67 s and granted credit of 251.25.
The second has a CPU time of 68,329.72 s and granted credit of 500.00.

Since the run time in the second case is about three times longer, shouldn't the granted credit be around 700?

But the BRP runtime on a *G*PU is less than 6,000 s...


And this is relevant because ...??

Are you saying that if I upgraded some old computing device that was taking a week to crunch a task for 500 credits to a 2021-era ultra computing device (whatever that might be by then) that could do the exact same job in 3 seconds, I shouldn't expect to get 500 credits every 3 seconds?

My personal opinion is that the 'value' of the same job should be constant, irrespective of what device was actually used (and the time it took) to crunch it.

But this is irrelevant because the question was about different jobs (GC vs BRP) where it's harder to strike the proper credit relationship. My personal opinion is that the credit relationship should be determined using as many different platforms as possible to determine the full extent of the variability and then calculating some sort of (possibly weighted) average. The project has a fairly limited set of platforms to experiment with and probably can't afford the time to experiment very much even with what they have, let alone experiment more widely. It's not surprising that the ratio they settled on (500:251) is not a perfect fit for all participants.
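
(To make the "possibly weighted average" idea concrete, here is a minimal sketch. The platform names, run-time ratios and weights are invented for illustration; they are not project data.)

    # Hypothetical per-platform run-time ratios (BRP time / GW time) and weights,
    # e.g. how common each platform is among active hosts. All numbers invented.
    platforms = {
        "intel_quad":  {"ratio": 3.1, "weight": 0.45},
        "amd_quad":    {"ratio": 2.3, "weight": 0.35},
        "older_hosts": {"ratio": 2.8, "weight": 0.20},
    }

    avg = sum(p["ratio"] * p["weight"] for p in platforms.values()) \
          / sum(p["weight"] for p in platforms.values())
    print(f"weighted run-time ratio ~ {avg:.2f} : 1")   # a candidate credit ratio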

I also find, with my range of hosts, that the ratio should probably be closer to 3:1 than it currently is, but not all the way (perhaps around 2.5:1). In a way, it's a bit of a payback for the days when ABP credit was a little on the high side compared to GW credit. In any case, it's all about to change once again when the new S6 run gets going, very shortly now.

PS: For the benefit of anybody reading this thread, I've made the OP's URLs clickable, although a single link to the full tasks list would probably be a better way to see a more extensive comparison of tasks.

Cheers,
Gary.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4332
Credit: 251666565
RAC: 35776

The problem is that while the

The problem is that while the GW application (S5GCESSE2) runs as fast on AMD as on Intel, the BRP application is noticeably slower on AMD than on Intel CPUs.

I hope that even before the start of the next GW run we'll have the new default BOINC credit scheme installed on Einstein@home, which should automatically distribute credit between applications and application versions depending on their efficiency.
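
(A toy sketch of the idea Bernd describes, not the actual BOINC CreditNew code: each application version gets a normalisation factor so that, on average, equivalent work earns comparable credit regardless of which application or version crunched it. All names and numbers below are illustrative.)

    # Toy efficiency-normalised credit (NOT the real CreditNew implementation).
    COBBLESTONES_PER_GFLOPS_DAY = 200.0       # classic BOINC credit definition

    def normalised_credit(elapsed_s, peak_gflops, version_scale):
        """Raw claim ~ device peak throughput x run time, then scaled per app version.
        version_scale is a running factor correcting for how efficient this
        particular application version is, so different apps stay comparable."""
        raw = peak_gflops * elapsed_s / 86400.0 * COBBLESTONES_PER_GFLOPS_DAY
        return raw * version_scale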

BM

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2980784003
RAC: 755879

RE: The problem is that

Quote:

The problem is that while the GW application (S5GCESSE2) runs as fast on AMD as on Intel, the BRP application is noticeably slower on AMD than on Intel CPUs.

I hope that even before the start of the next GW run we'll have the new default BOINC credit scheme installed on Einstein@home, which should automatically distribute credit between applications and application versions depending on their efficiency.

BM


Bernd, I do rather hope that that was an April Fool's Day joke.

Do you know whether any proper analysis has been done on David's 'CreditNew' schema yet, either statistically on whether the overall project/user/host averages remain constant for equivalent workloads, or psychologically on whether the new method fills the credit role of sustaining volunteer interest?

The only large-scale deployment of 'CreditNew' that I'm aware of is that at SETI, where it has been live on the main project for approaching 10 months, and at Beta for very slightly longer. But SETI certainly doesn't have the spare staff time to perform such an evaluation.

Prior to deployment, SETI had a pretty well-understood and deterministic method of allocating credit by flopcounting. It had its flaws, but at least one knew what a task was 'worth' - and for any one host, credit was near-enough proportional to runtime to discourage cherry-picking.
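
(For readers unfamiliar with it, the flop-counting approach reduces, in simplified form, to the fixed "cobblestone" rate of 200 credits per day of work on a 1 GFLOPS machine; the task size below is just an example.)

    # Simplified flop-counting credit (pre-CreditNew style, illustration only):
    # the application counts the floating-point operations it actually performed
    # and credit is granted at a fixed rate per operation.
    def flopcount_credit(counted_fpops):
        return counted_fpops / (1e9 * 86400.0) * 200.0   # 200 credits per GFLOPS-day

    print(flopcount_credit(3.0e13))   # a task of 3e13 flops -> ~69 credits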

Under CreditNew, the averages seem to be much as before, but what has been lost is the deterministic relationship between a task and its payback. To a casual glance, individual task credits seem to be near-random. They may settle down over time on fast hosts which have completed thousands, or tens of thousands, of tasks - but that will take time at Einstein, and will never happen for the majority of users. This may affect the psychological evaluation... (read: we see posts on the message boards when a user is annoyed by receiving an unusually low allocation for an individual task; there are some, but fewer, happy posts from winners when the luck of the draw goes the other way.)

There are also problems, which have never been addressed, with the automatic server-side run-time adjustments to the individual app_version DCF. Once dialled in, they work well - but they aren't calculated until the tenth workunit of each new type has been validated. So for every new run at a project like Einstein, which changes applications regularly, runtime estimates will go haywire for a while, then suffer a nasty, sharp, "DCF squared" transition at the tenth validation. No smoothing.
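
(A small sketch of the effect described above; the threshold of ten validations is taken from the post, but the mechanics and numbers are simplified for illustration and are not the actual server code.)

    # Illustration of the abrupt runtime-estimate correction described above.
    # Until enough results of a new app version have been validated, the server
    # keeps using the old correction factor; then it snaps to the measured value.
    def estimated_runtime(nominal_s, measured_dcf, n_validated, threshold=10):
        dcf = 1.0 if n_validated < threshold else measured_dcf
        return nominal_s * dcf

    for n in (1, 5, 9, 10, 11):
        print(n, estimated_runtime(20_000, 3.4, n))   # jumps at the tenth validation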

I have some off-line data from my own research - I'd be happy to explore the subject further. But this thread probably isn't the place.
